WorldWideScience

Sample records for model selection based

  1. Selection of probability based weighting models for Boolean retrieval system

    Energy Technology Data Exchange (ETDEWEB)

    Ebinuma, Y. (Japan Atomic Energy Research Inst., Tokai, Ibaraki. Tokai Research Establishment)

    1981-09-01

    Automatic weighting models based on probability theory were studied to determine whether they can be applied to Boolean search logics that include the logical sum. The INIS database was searched with one particular search formula. Among sixteen models, three with good ranking performance were selected. These three models were then applied to nine search formulas in the same database. Two of them showed slightly better average ranking performance, while the third, the simplest one, also appears practical.

  2. Agent-Based vs. Equation-Based Epidemiological Models: A Model Selection Case Study

    Energy Technology Data Exchange (ETDEWEB)

    Sukumar, Sreenivas R [ORNL]; Nutaro, James J [ORNL]

    2012-01-01

    This paper is motivated by the need to design model validation strategies for epidemiological disease-spread models. We consider both agent-based and equation-based models of pandemic disease spread and study the nuances and complexities one has to consider from the perspective of model validation. For this purpose, we instantiate an equation-based model and an agent-based model of the 1918 Spanish flu, and we leverage data published in the literature for our case study. We present our observations from the perspective of each implementation and discuss the application of model-selection criteria to compare the risk of choosing one modeling paradigm over another. We conclude with a discussion of our experience and document future ideas for a model validation framework.

  3. Numerical Model based Reliability Estimation of Selective Laser Melting Process

    DEFF Research Database (Denmark)

    Mohanty, Sankhya; Hattel, Jesper Henri

    2014-01-01

    Selective laser melting is developing into a standard manufacturing technology with applications in various sectors. However, the process is still far from being at par with conventional processes such as welding and casting, the primary reason being the unreliability of the process. While … of the selective laser melting process. A validated 3D finite-volume alternating-direction-implicit numerical technique is used to model the selective laser melting process, and is calibrated against results from single-track formation experiments. Correlation coefficients are determined for process input parameters such as laser power, speed, beam profile, etc. Subsequently, uncertainties in the processing parameters are utilized to predict a range for the various outputs, using a Monte Carlo method based uncertainty analysis methodology, and the reliability of the process is established…

  4. A model-based approach to selection of tag SNPs

    Directory of Open Access Journals (Sweden)

    Sun Fengzhu

    2006-06-01

    Abstract. Background: Single Nucleotide Polymorphisms (SNPs) are the most common type of polymorphism found in the human genome. Effective genetic association studies require the identification of sets of tag SNPs that capture as much haplotype information as possible. Tag SNP selection is analogous to the problem of data compression in information theory. According to Shannon's framework, the optimal tag set maximizes the entropy of the tag SNPs subject to constraints on the number of SNPs. This approach requires an appropriate probabilistic model. Compared to simple measures of Linkage Disequilibrium (LD), a good model of haplotype sequences can more accurately account for LD structure. It also provides machinery for predicting tagged SNPs and thereby for assessing the performance of tag sets through their ability to predict larger SNP sets. Results: Here, we compute the description code-lengths of SNP data for an array of models and we develop tag SNP selection methods based on these models and the strategy of entropy maximization. Using data sets from the HapMap and ENCODE projects, we show that the hidden Markov model introduced by Li and Stephens outperforms the other models in several aspects: description code-length of SNP data, information content of tag sets, and prediction of tagged SNPs. This is the first use of this model in the context of tag SNP selection. Conclusion: Our study provides strong evidence that the tag sets selected by our best method, based on the Li and Stephens model, outperform those chosen by several existing methods. The results also suggest that information content evaluated with a good model is more sensitive for assessing the quality of a tagging set than the correct prediction rate of tagged SNPs. In addition, we show that haplotype phase uncertainty has an almost negligible impact on the ability of good tag sets to predict tagged SNPs. This justifies the selection of tag SNPs on the basis of haplotype…
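
    The entropy-maximization strategy described above can be illustrated with a small greedy heuristic: at each step, add the SNP that most increases the joint entropy of the selected set. This is a minimal sketch on a toy 0/1 haplotype matrix, not the Li and Stephens HMM-based method of the paper; the greedy rule and all data are assumptions for illustration.

```python
import numpy as np

def joint_entropy(H, cols):
    """Empirical joint entropy (bits) of haplotype matrix H restricted to cols."""
    if not cols:
        return 0.0
    _, counts = np.unique(H[:, cols], axis=0, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

def greedy_tag_selection(H, k):
    """Greedily pick k tag SNPs that maximize the joint entropy of the tag set."""
    chosen = []
    for _ in range(k):
        remaining = [j for j in range(H.shape[1]) if j not in chosen]
        best = max(remaining, key=lambda j: joint_entropy(H, chosen + [j]))
        chosen.append(best)
    return chosen

# Toy data: 8 haplotypes x 6 biallelic SNPs (0/1 alleles)
rng = np.random.default_rng(0)
H = rng.integers(0, 2, size=(8, 6))
print(greedy_tag_selection(H, k=3))
```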

  5. Auditory-model based robust feature selection for speech recognition.

    Science.gov (United States)

    Koniaris, Christos; Kuropatwinski, Marcin; Kleijn, W Bastiaan

    2010-02-01

    It is shown that robust dimension-reduction of a feature set for speech recognition can be based on a model of the human auditory system. Whereas conventional methods optimize classification performance, the proposed method exploits knowledge implicit in the auditory periphery, inheriting its robustness. Features are selected to maximize the similarity of the Euclidean geometry of the feature domain and the perceptual domain. Recognition experiments using mel-frequency cepstral coefficients (MFCCs) confirm the effectiveness of the approach, which does not require labeled training data. For noisy data the method outperforms commonly used discriminant-analysis based dimension-reduction methods that rely on labeling. The results indicate that selecting MFCCs in their natural order results in subsets with good performance.

  6. Bayesian Model Selection with Network Based Diffusion Analysis.

    Science.gov (United States)

    Whalen, Andrew; Hoppitt, William J E

    2016-01-01

    A number of recent studies have used Network Based Diffusion Analysis (NBDA) to detect the role of social transmission in the spread of a novel behavior through a population. In this paper we present a unified framework for performing NBDA in a Bayesian setting, and demonstrate how the Watanabe-Akaike information criterion (WAIC) can be used for model selection. We present a specific example of applying this method to Time to Acquisition Diffusion Analysis (TADA). To examine the robustness of this technique, we performed a large-scale simulation study and found that NBDA using WAIC could recover the correct model of social transmission under a wide range of cases, including under the presence of random effects, individual-level variables, and alternative models of social transmission. This work suggests that NBDA is an effective and widely applicable tool for uncovering whether social transmission underpins the spread of a novel behavior, and may still provide accurate results even when key model assumptions are relaxed.
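
    WAIC itself is straightforward to compute once a fitted model provides pointwise log-likelihoods over posterior draws. The sketch below shows the generic calculation (log pointwise predictive density minus an effective parameter count), not the authors' NBDA code; the two log-likelihood matrices are simulated stand-ins.

```python
import numpy as np

def waic(log_lik):
    """WAIC from an (n_draws, n_obs) matrix of pointwise log-likelihoods.

    lppd   = sum_i log mean_s p(y_i | theta_s)
    p_waic = sum_i var_s log p(y_i | theta_s)   (effective parameter count)
    WAIC   = -2 * (lppd - p_waic); lower is better.
    """
    # On real data, compute the log of the mean with scipy.special.logsumexp
    # for numerical stability; plain exp/mean suffices for this toy example.
    lppd = np.sum(np.log(np.mean(np.exp(log_lik), axis=0)))
    p_waic = np.sum(np.var(log_lik, axis=0, ddof=1))
    return -2.0 * (lppd - p_waic)

# Hypothetical posterior draws for a "social" and an "asocial" model
rng = np.random.default_rng(1)
ll_social = rng.normal(-1.0, 0.1, size=(1000, 50))
ll_asocial = rng.normal(-1.2, 0.1, size=(1000, 50))
print("WAIC social:", waic(ll_social), " asocial:", waic(ll_asocial))
```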

  7. A Reliability Based Model for Wind Turbine Selection

    Directory of Open Access Journals (Sweden)

    A.K. Rajeevan

    2013-06-01

    A wind turbine generator's output at a specific site depends on many factors, particularly the cut-in, rated, and cut-out wind speed parameters. Hence power output varies from turbine to turbine. The objective of this paper is to develop a mathematical relationship between reliability and wind power generation. The analytical computation of monthly wind power is obtained from a Weibull statistical model using the cubic mean cube root of wind speed. The reliability calculation is based on failure probability analysis. Many different types of wind turbines are commercially available in the market. From a reliability point of view, to obtain optimum reliability in power generation, it is desirable to select the wind turbine generator best suited to a site. The mathematical relationship developed in this paper can be used for site-matching turbine selection from a reliability point of view.
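
    The cubic mean cube root enters because turbine power scales with the cube of wind speed: for a Weibull model with scale c and shape k, E[v³] = c³·Γ(1 + 3/k), and the effective speed is its cube root. A minimal sketch, with an assumed simplified cubic power curve and illustrative parameter values:

```python
from math import gamma

def cubic_mean_cube_root(c, k):
    """(E[v^3])^(1/3) for Weibull wind speed with scale c (m/s) and shape k."""
    return (c**3 * gamma(1.0 + 3.0 / k)) ** (1.0 / 3.0)

def turbine_output(v, v_ci, v_r, v_co, p_rated):
    """Simplified power curve: cubic rise between cut-in and rated speed."""
    if v < v_ci or v > v_co:
        return 0.0          # below cut-in or above cut-out: no generation
    if v >= v_r:
        return p_rated      # between rated and cut-out: rated power
    return p_rated * (v**3 - v_ci**3) / (v_r**3 - v_ci**3)

v_eff = cubic_mean_cube_root(c=7.5, k=2.0)   # assumed monthly Weibull fit
print(turbine_output(v_eff, v_ci=3.0, v_r=12.0, v_co=25.0, p_rated=2.0e6))
```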

  8. Bayesian Model Selection With Network Based Diffusion Analysis

    Directory of Open Access Journals (Sweden)

    Andrew eWhalen

    2016-04-01

    A number of recent studies have used Network Based Diffusion Analysis (NBDA) to detect the role of social transmission in the spread of a novel behavior through a population. In this paper we present a unified framework for performing NBDA in a Bayesian setting, and demonstrate how the Watanabe-Akaike information criterion (WAIC) can be used for model selection. We present a specific example of applying this method to Time to Acquisition Diffusion Analysis (TADA). To examine the robustness of this technique, we performed a large-scale simulation study and found that NBDA using WAIC could recover the correct model of social transmission under a wide range of cases, including under the presence of random effects, individual-level variables, and alternative models of social transmission. This work suggests that NBDA is an effective and widely applicable tool for uncovering whether social transmission underpins the spread of a novel behavior, and may still provide accurate results even when key model assumptions are relaxed.

  9. Sensor Optimization Selection Model Based on Testability Constraint

    Institute of Scientific and Technical Information of China (English)

    YANG Shuming; QIU Jing; LIU Guanjun

    2012-01-01

    Sensor selection and optimization is an important part of design for testability. The traditional sensor optimization selection model neither takes into account the testability requirements of prognostics and health management, especially fault prognostics, nor considers the impact of actual sensor attributes on fault detectability. To address these problems, a novel sensor optimization selection model is proposed. Firstly, a universal architecture for sensor selection and optimization is provided. Secondly, a new testability index named the fault predictable rate is defined to describe the fault prognostics requirements for testability. Thirdly, a sensor selection and optimization model for prognostics and health management is constructed, which takes sensor cost as the objective function and the defined testability indexes as constraint conditions. Due to the NP-hard property of the model, a genetic algorithm is designed to obtain the optimal solution. At last, a case study is presented to demonstrate the sensor selection approach for a stable tracking servo platform. The application results and comparison analysis show that the proposed model and algorithm are effective and feasible. This approach can be used to select sensors for prognostics and health management of any system.

  10. NVC Based Model for Selecting Effective Requirement Elicitation Technique

    Directory of Open Access Journals (Sweden)

    Md. Rizwan Beg

    2012-10-01

    The Requirement Engineering process starts from the gathering of requirements, i.e., requirements elicitation. Requirements elicitation (RE) is the base building block for a software project and has a very high impact on the subsequent design and build phases as well. Failure to accurately capture system requirements is a major factor in the failure of most software projects. Due to the criticality and impact of this phase, it is very important to perform requirements elicitation in no less than a perfect manner. One of the most difficult jobs for the elicitor is to select the appropriate technique for eliciting the requirements. Interviewing and interacting with stakeholders during elicitation is a communication-intensive activity involving verbal and non-verbal communication (NVC). The elicitor should give emphasis to non-verbal communication along with verbal communication so that requirements are recorded more efficiently and effectively. In this paper we propose a model in which stakeholders are classified by observing non-verbal communication, and this classification is used as a base for elicitation technique selection. We also propose an efficient plan for requirements elicitation which intends to overcome the constraints faced by the elicitor.

  11. Novel web service selection model based on discrete group search.

    Science.gov (United States)

    Zhai, Jie; Shao, Zhiqing; Guo, Yi; Zhang, Haiteng

    2014-01-01

    In our earlier work, we presented a novel formal method for the semiautomatic verification of specifications and for describing web service composition components by using abstract concepts. After verification, the instantiations of components were selected to satisfy the complex service performance constraints. However, selecting an optimal instantiation, which comprises different candidate services for each generic service, from a large number of instantiations is difficult. Therefore, we present a new evolutionary approach on the basis of the discrete group search service (D-GSS) model. With regard to obtaining the optimal multiconstraint instantiation of the complex component, the D-GSS model has competitive performance compared with other service selection models in terms of accuracy, efficiency, and ability to solve high-dimensional service composition component problems. We propose the cost function and the discrete group search optimizer (D-GSO) algorithm and study the convergence of the D-GSS model through verification and test cases.

  12. Rank-based model selection for multiple ions quantum tomography

    Science.gov (United States)

    Guţă, Mădălin; Kypraios, Theodore; Dryden, Ian

    2012-10-01

    The statistical analysis of measurement data has become a key component of many quantum engineering experiments. As standard full state tomography becomes unfeasible for large dimensional quantum systems, one needs to exploit prior information and the ‘sparsity’ properties of the experimental state in order to reduce the dimensionality of the estimation problem. In this paper we propose model selection as a general principle for finding the simplest, or most parsimonious, explanation of the data, by fitting different models and choosing the estimator with the best trade-off between likelihood fit and model complexity. We apply two well-established model selection methods—the Akaike information criterion (AIC) and the Bayesian information criterion (BIC)—to models consisting of states of fixed rank and datasets such as are currently produced in multiple ions experiments. We test the performance of AIC and BIC on randomly chosen low rank states of four ions, and study the dependence of the selected rank on the number of measurement repetitions for one ion states. We then apply the methods to real data from a four ions experiment aimed at creating a Smolin state of rank 4. By applying the two methods together with the Pearson χ² test we conclude that the data can be suitably described with a model whose rank is between 7 and 9. Additionally we find that the mean square error of the maximum likelihood estimator for pure states is close to that of the optimal over all possible measurements.
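
    The AIC/BIC trade-off in the abstract reduces to penalizing the maximized log-likelihood by a parameter count: AIC = 2k − 2 ln L and BIC = k ln n − 2 ln L. A minimal sketch, using the standard count k(r) = 2dr − r² − 1 of free real parameters of a rank-r density matrix on a d-dimensional space; the log-likelihood values below are invented for illustration.

```python
import numpy as np

def aic(loglik, k):
    return 2 * k - 2 * loglik

def bic(loglik, k, n_obs):
    return k * np.log(n_obs) - 2 * loglik

# Four ions: d = 2**4 = 16; a rank-r density matrix has k = 2*d*r - r*r - 1
# free real parameters. Log-likelihoods are hypothetical placeholder values.
d, n_obs = 16, 10_000
logliks = {4: -12450.0, 7: -12380.0, 9: -12371.0, 16: -12362.0}
for r, ll in logliks.items():
    k = 2 * d * r - r * r - 1
    print(f"rank {r:2d}: AIC = {aic(ll, k):9.1f}   BIC = {bic(ll, k, n_obs):9.1f}")
```

    Whichever rank minimizes the chosen criterion is selected; BIC's ln n penalty typically favors lower ranks than AIC for large datasets.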

  13. Model Selection Framework for Graph-based data

    CERN Document Server

    Caceres, Rajmonda S; Schmidt, Matthew C; Miller, Benjamin A; Campbell, William M

    2016-01-01

    Graphs are powerful abstractions for capturing complex relationships in diverse application settings. An active area of research focuses on theoretical models that define the generative mechanism of a graph. Yet given the complexity and inherent noise in real datasets, it is still very challenging to identify the best model for a given observed graph. We discuss a framework for graph model selection that leverages a long list of graph topological properties and a random forest classifier to learn and classify different graph instances. We fully characterize the discriminative power of our approach as we sweep through the parameter space of two generative models, the Erdős-Rényi and the stochastic block model. We show that our approach gets very close to known theoretical bounds and we provide insight on which topological features play a critical discriminating role.

  14. Multilevel selection in a resource-based model

    Science.gov (United States)

    Ferreira, Fernando Fagundes; Campos, Paulo R. A.

    2013-07-01

    In the present work we investigate the emergence of cooperation in a multilevel selection model that assumes limiting resources. Following the work by R. J. Requejo and J. Camacho [Phys. Rev. Lett. 108, 038701 (2012)], the interaction among individuals is initially ruled by a prisoner's dilemma (PD) game. The payoff matrix may change, influenced by resource availability, and hence may also evolve to a non-PD game. Furthermore, one assumes that the population is divided into groups, whose local dynamics is driven by the payoff matrix, whereas intergroup competition results from the nonuniformity of the growth rate of groups. We study the probability that a single cooperator can invade and establish itself in a population initially dominated by defectors. Cooperation is strongly favored when group sizes are small. We observe the existence of a critical group size beyond which cooperation becomes counterselected. Although the critical size depends on the parameters of the model, a saturation value for the critical group size is reached. The results are consistent with the view that the evolutionary history of life repeatedly involved transitions from smaller selective units to larger selective units.

  15. POSSIBILISTIC SHARPE RATIO BASED NOVICE PORTFOLIO SELECTION MODELS

    Directory of Open Access Journals (Sweden)

    Rupak Bhattacharyya

    2013-02-01

    This paper uses the concept of possibilistic risk aversion to propose a new approach to portfolio selection in a fuzzy environment. Using possibility theory, the possibilistic mean, variance, standard deviation, and risk premium of a fuzzy number are established. The possibilistic Sharpe ratio is defined as the ratio of the possibilistic risk premium to the possibilistic standard deviation of a portfolio. The Sharpe ratio is a measure of the performance of the portfolio relative to the risk taken; the higher the Sharpe ratio, the better the portfolio's performance and the greater the reward for the risk taken. New fuzzy portfolio selection models that consider the possibilistic Sharpe ratio, return, and skewness of the portfolio are introduced. The feasibility and effectiveness of the proposed method are illustrated by a numerical example extracted from the Bombay Stock Exchange (BSE), India, which is solved by a multiple objective genetic algorithm (MOGA).
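
    For triangular fuzzy returns, the possibilistic moments have closed forms (Carlsson-Fullér): for A = (a, α, β) with center a and left/right spreads α, β, the possibilistic mean is a + (β − α)/6 and the possibilistic variance is (α + β)²/24. Below is a minimal sketch of the ratio on an assumed three-asset portfolio; the fuzzy returns, weights, and risk-free rate are invented, and the portfolio aggregation assumes nonnegative weights:

```python
# Triangular fuzzy return (a, alpha, beta): center a, left/right spreads.
def possibilistic_mean(a, alpha, beta):
    return a + (beta - alpha) / 6.0             # Carlsson-Fuller mean

def possibilistic_std(alpha, beta):
    return ((alpha + beta) ** 2 / 24.0) ** 0.5  # sqrt of Carlsson-Fuller variance

def possibilistic_sharpe(weights, assets, risk_free=0.04):
    """(portfolio possibilistic mean - r_f) / portfolio possibilistic std.

    For nonnegative weights, a weighted sum of triangular fuzzy numbers is
    again triangular, so the portfolio parameters combine linearly.
    """
    a     = sum(w * x[0] for w, x in zip(weights, assets))
    alpha = sum(w * x[1] for w, x in zip(weights, assets))
    beta  = sum(w * x[2] for w, x in zip(weights, assets))
    return (possibilistic_mean(a, alpha, beta) - risk_free) / possibilistic_std(alpha, beta)

# Three hypothetical assets as (a, alpha, beta) fuzzy annual returns
assets = [(0.12, 0.05, 0.07), (0.09, 0.03, 0.04), (0.15, 0.08, 0.10)]
print(possibilistic_sharpe([0.4, 0.4, 0.2], assets))
```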

  16. Neural Network Model Based Cluster Head Selection for Power Control

    Directory of Open Access Journals (Sweden)

    Krishan Kumar

    2011-01-01

    Mobile ad-hoc networks face the challenge of limited power for prolonging network lifetime, because power is a valuable resource in such networks. The status of power consumption should be continuously monitored after network deployment. In this paper, we propose coverage-aware neural-network-based power control routing with the objective of maximizing the network lifetime. Cluster head selection is proposed using adaptive learning in neural networks, followed by coverage analysis. The simulation results show that the proposed scheme can be used in a wide range of mobile ad-hoc network applications.

  17. Physics-based statistical learning approach to mesoscopic model selection

    Science.gov (United States)

    Taverniers, Søren; Haut, Terry S.; Barros, Kipton; Alexander, Francis J.; Lookman, Turab

    2015-11-01

    In materials science and many other research areas, models are frequently inferred without considering their generalization to unseen data. We apply statistical learning using cross-validation to obtain an optimally predictive coarse-grained description of a two-dimensional kinetic nearest-neighbor Ising model with Glauber dynamics (GD) based on the stochastic Ginzburg-Landau equation (sGLE). The latter is learned from GD "training" data using a log-likelihood analysis, and its predictive ability for various model complexities is tested on GD "test" data independent of the training data. Using two different error metrics, we perform a detailed analysis of the error between magnetization time trajectories simulated using the learned sGLE coarse-grained description and those obtained using the GD model. We show that both for equilibrium and out-of-equilibrium GD training trajectories, the standard phenomenological description using a quartic free energy does not always yield the most predictive coarse-grained model. Moreover, increasing the amount of training data can shift the optimal model complexity to higher values. Our results are promising in that they pave the way for the use of statistical learning as a general tool for materials modeling and discovery.

  18. Patch-based generative shape model and MDL model selection for statistical analysis of archipelagos

    DEFF Research Database (Denmark)

    Ganz, Melanie; Nielsen, Mads; Brandt, Sami

    2010-01-01

    We propose a statistical generative shape model for archipelago-like structures. These kinds of structures occur, for instance, in medical images, where our intention is to model the appearance and shapes of calcifications in X-ray radiographs. The generative model is constructed by (1) learning a patch-based dictionary for possible shapes, (2) building up a time-homogeneous Markov model to capture the neighbourhood correlations between the patches, and (3) automatic selection of the model complexity by the minimum description length principle. The generative shape model is proposed as a probability distribution of a binary image where the model is intended to facilitate sequential simulation. Our results show that a relatively simple model is able to generate structures visually similar to calcifications. Furthermore, we used the shape model as a shape prior in the statistical segmentation…

  19. Performance Measurement Model for the Supplier Selection Based on AHP

    Directory of Open Access Journals (Sweden)

    Fabio De Felice

    2015-10-01

    The performance of the supplier is a crucial factor for the success or failure of any company. Rational and effective decision making in terms of the supplier selection process can help the organization to optimize cost and quality functions. The nature of supplier selection processes is generally complex, especially when the company has a large variety of products and vendors. Over the years, several solutions and methods have emerged for addressing the supplier selection problem (SSP). Experience and studies have shown that there is no best way for evaluating and selecting a specific supplier process, but that it varies from one organization to another. The aim of this research is to demonstrate how a multiple attribute decision making approach can be effectively applied for the supplier selection process.
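
    In AHP, the decision maker fills a pairwise comparison matrix on Saaty's 1-9 scale; criterion weights are then the principal eigenvector, and a consistency ratio below about 0.1 indicates acceptable judgments. A minimal sketch with an invented three-criterion comparison (quality vs. cost vs. delivery), not the paper's actual hierarchy:

```python
import numpy as np

RI = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24, 7: 1.32}  # Saaty's random index

def ahp_weights(A):
    """Priority weights from the principal eigenvector of a comparison matrix."""
    eigvals, eigvecs = np.linalg.eig(A)
    i = np.argmax(eigvals.real)
    w = np.abs(eigvecs[:, i].real)
    w /= w.sum()
    n = A.shape[0]
    ci = (eigvals[i].real - n) / (n - 1)          # consistency index
    cr = ci / RI[n] if RI.get(n, 0) > 0 else 0.0  # consistency ratio, < 0.1 acceptable
    return w, cr

# Hypothetical judgments: quality is 3x cost, 5x delivery; cost is 2x delivery
A = np.array([[1.0,   3.0,   5.0],
              [1/3.0, 1.0,   2.0],
              [1/5.0, 1/2.0, 1.0]])
w, cr = ahp_weights(A)
print("criterion weights:", w.round(3), "  CR:", round(cr, 3))
```

    Each supplier is then scored criterion by criterion in the same way, and the final ranking blends the scores with the criterion weights.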

  20. Estimation and Model Selection for Model-Based Clustering with the Conditional Classification Likelihood

    CERN Document Server

    Baudry, Jean-Patrick

    2012-01-01

    The Integrated Completed Likelihood (ICL) criterion was proposed by Biernacki et al. (2000) in the model-based clustering framework to select a relevant number of classes, and has been used by statisticians in various application areas. A theoretical study of this criterion is proposed here. A contrast related to the clustering objective is introduced: the conditional classification likelihood. This yields an estimator and a class of model selection criteria. The properties of these new procedures are studied, and ICL is proved to be an approximation of one of these criteria. We contrast these results with the currently prevailing view that ICL is not consistent. Moreover, these results give insights into the class notion underlying ICL and inform a reflection on the notion of a class in clustering. General results on penalized minimum contrast criteria and on mixture models are derived, which are interesting in their own right.

  1. A Hybrid Grey Based KOHONEN Model and Biogeography-Based Optimization for Project Portfolio Selection

    Directory of Open Access Journals (Sweden)

    Farshad Faezy Razi

    2014-01-01

    Selecting the best option is a core subject of operations research in decision-making theory. Selection is a process that scrutinizes several quantitative and qualitative, and most often conflicting, factors. One of the most fundamental management issues in the multicriteria selection literature is the multicriteria construction of a project portfolio. In such a decision-making situation, the manager seeks the best combination of existing projects to build a portfolio. In the present paper, the Kohonen algorithm was first employed to build up portfolios of the projects. Next, each portfolio was evaluated using grey relational analysis (GRA), and the schedule risk of the projects was predicted using the Mamdani fuzzy inference method. Finally, the multiobjective biogeography-based optimization algorithm was utilized to draw the risk-rank Pareto analysis. A case study is used to show the efficiency of the proposed model.

  2. Empirical Likelihood Based Variable Selection for Varying Coefficient Partially Linear Models with Censored Data

    Institute of Scientific and Technical Information of China (English)

    Peixin ZHAO

    2013-01-01

    In this paper, we consider variable selection for the parametric components of varying coefficient partially linear models with censored data. By constructing a penalized auxiliary vector ingeniously, we propose an empirical likelihood based variable selection procedure, and show that it is consistent and satisfies the sparsity property. The simulation studies show that the proposed variable selection method is workable.

  3. POSSIBILISTIC SHARPE RATIO BASED NOVICE PORTFOLIO SELECTION MODELS

    OpenAIRE

    Rupak Bhattacharyya

    2013-01-01

    This paper uses the concept of possibilistic risk aversion to propose a new approach for portfolio selection in fuzzy environment. Using possibility theory, the possibilistic mean, variance, standard deviation and risk premium of a fuzzy number are established. Possibilistic Sharpe ratio is defined as the ratio of possibilistic risk premium and possibilistic standard deviation of a portfolio. The Sharpe ratio is a measure of the performance of the portfolio compared to the risk...

  4. Selection bias in species distribution models: An econometric approach on forest trees based on structural modeling

    Science.gov (United States)

    Martin-StPaul, N. K.; Ay, J. S.; Guillemot, J.; Doyen, L.; Leadley, P.

    2014-12-01

    Species distribution models (SDMs) are widely used to study and predict the outcome of global changes on species. In human-dominated ecosystems the presence of a given species is the result of both its ecological suitability and the human footprint on nature, such as land use choices. Land use choices may thus be responsible for a selection bias in the presence/absence data used in SDM calibration. We present a structural modelling approach (i.e. based on structural equation modelling) that accounts for this selection bias. The new structural species distribution model (SSDM) estimates simultaneously land use choices and species responses to bioclimatic variables. A land use equation based on an econometric model of landowner choices was joined to an equation of species response to bioclimatic variables. The SSDM allows the residuals of both equations to be dependent, taking into account the possibility of shared omitted variables and measurement errors. We provide a general description of the statistical theory and a set of applications on forest trees over France using databases of climate and forest inventory at different spatial resolutions (from 2 km to 8 km). We also compared the outputs of the SSDM with outputs of a classical SDM (i.e. Biomod ensemble modelling) in terms of bioclimatic response curves and potential distributions under current climate and climate change scenarios. The shapes of the bioclimatic response curves and the modelled species distribution maps differed markedly between the SSDM and classical SDMs, with contrasting patterns according to species and spatial resolutions. The magnitude and direction of these differences depended on the correlations between the errors from both equations and were highest at higher spatial resolutions. A first conclusion is that the use of classical SDMs can potentially lead to strong mis-estimation of the modelled current and future probability of presence. Beyond this selection bias, the SSDM we propose represents…

  5. Parameter Selection and Performance Analysis of Mobile Terminal Models Based on Unity3D

    Institute of Scientific and Technical Information of China (English)

    KONG Li-feng; ZHAO Hai-ying; XU Guang-mei

    2014-01-01

    Mobile platforms are now widely seen as a promising multimedia service with a favorable user base and market prospects. To study the influence of mobile terminal models on the quality of scene roaming, a parameter-setting platform for mobile terminal models is established in this paper to examine parameter selection and performance indices on different mobile platforms. The test platform is built on the model optimality principle, analyzing the performance curves of mobile terminals in different scene models and then deducing the external parameters for model establishment. Simulation results prove that the established test platform is able to analyze the parameter and performance matching list of a mobile terminal model.

  6. A genetic algorithm based global search strategy for population pharmacokinetic/pharmacodynamic model selection.

    Science.gov (United States)

    Sale, Mark; Sherer, Eric A

    2015-01-01

    The current algorithm for selecting a population pharmacokinetic/pharmacodynamic model is based on the well-established forward addition/backward elimination method. A central strength of this approach is the opportunity for a modeller to continuously examine the data and postulate new hypotheses to explain observed biases. This algorithm has served the modelling community well, but the model selection process has essentially remained unchanged for the last 30 years. During this time, more robust approaches to model selection have been made feasible by new technology and dramatic increases in computation speed. We review these methods, with emphasis on genetic algorithm approaches and discuss the role these methods may play in population pharmacokinetic/pharmacodynamic model selection.
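
    A genetic algorithm treats each candidate model as a bit string (here, which covariates enter the model) and evolves a population toward better fitness. The sketch below is a generic stand-in that scores chromosomes with the BIC of an ordinary least-squares fit rather than a NONMEM-style population PK/PD run; the operators, sizes, and data are all illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(42)

def bic_of_subset(X, y, mask):
    """Fitness: BIC of an OLS fit using only the covariates selected by mask."""
    Xs = np.hstack([np.ones((len(y), 1)), X[:, mask.astype(bool)]])  # + intercept
    beta, *_ = np.linalg.lstsq(Xs, y, rcond=None)
    rss = np.sum((y - Xs @ beta) ** 2)
    n, k = len(y), Xs.shape[1]
    return n * np.log(rss / n) + k * np.log(n)

def ga_select(X, y, pop=30, gens=40, p_mut=0.05):
    """Tiny GA: tournament selection, uniform crossover, bit-flip mutation."""
    n_feat = X.shape[1]
    P = rng.integers(0, 2, size=(pop, n_feat))
    for _ in range(gens):
        fit = np.array([bic_of_subset(X, y, m) for m in P])
        def tourney():                       # lower BIC wins the tournament
            i, j = rng.integers(0, pop, 2)
            return P[i] if fit[i] < fit[j] else P[j]
        children = []
        for _ in range(pop):
            child = np.where(rng.integers(0, 2, n_feat).astype(bool),
                             tourney(), tourney())      # uniform crossover
            child ^= rng.random(n_feat) < p_mut         # bit-flip mutation
            children.append(child)
        P = np.array(children, dtype=int)
    fit = np.array([bic_of_subset(X, y, m) for m in P])
    return P[np.argmin(fit)]

X = rng.normal(size=(200, 8))
y = X[:, 0] - 2 * X[:, 3] + rng.normal(size=200)
print(ga_select(X, y))   # should tend to keep covariates 0 and 3
```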

  7. Location-based Mobile Relay Selection and Impact of Inaccurate Path Loss Model Parameters

    DEFF Research Database (Denmark)

    Nielsen, Jimmy Jessen; Madsen, Tatiana Kozlova; Schwefel, Hans-Peter

    2010-01-01

    In this paper we propose a relay selection scheme which uses collected location information together with a path loss model, and we analyze the performance impact of mobility and different error causes on this scheme. Performance is evaluated in terms of bit error rate… in these situations. As the location-based scheme relies on a path loss model to estimate link qualities and select relays, its sensitivity with respect to inaccurate estimates of the unknown path loss model parameters is investigated. The parameter ranges that result in useful performance were found…
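
    The idea can be sketched with the standard log-distance path loss model, PL(d) = PL0 + 10·n·log10(d/d0): from collected positions, estimate the loss of each candidate source-relay and relay-destination link and pick the relay whose worse link is best. The reference loss PL0, exponent n, the min-max criterion, and all coordinates below are illustrative assumptions; errors in PL0 or n are exactly the sensitivity the paper studies.

```python
import math

def pathloss_db(d, pl0=40.0, n=3.5, d0=1.0):
    """Log-distance path loss PL(d) = PL0 + 10*n*log10(d/d0), in dB."""
    return pl0 + 10.0 * n * math.log10(max(d, d0) / d0)

def best_relay(src, dst, relays):
    """Choose the relay minimizing the worse of its two estimated link losses."""
    def dist(p, q):
        return math.hypot(p[0] - q[0], p[1] - q[1])
    def worst_link(r):
        return max(pathloss_db(dist(src, r)), pathloss_db(dist(r, dst)))
    return min(relays, key=worst_link)

# Collected node positions in metres (toy values)
relays = [(20.0, 5.0), (50.0, 40.0), (80.0, 10.0)]
print(best_relay((0.0, 0.0), (100.0, 0.0), relays))
```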

  8. Selection of Sinopec Lubricating Oil Producing Bases by Using the AHP Model

    Institute of Scientific and Technical Information of China (English)

    Song Yunchang; Song Zhaozheng; Zheng Chengguo; Jiang Qingzhe; Xu Chunming

    2007-01-01

    The factors affecting the development of Sinopec lubricating oil were analyzed in this paper, and an analytic hierarchy process (AHP) model for selecting lubricating-oil producing bases was developed. By using this model, nine lubricating oil producing companies under Sinopec were comprehensively evaluated. The evaluation showed that the Maoming Lubricating Oil Company (Guangdong province), Jingmen Lubricating Oil Company (Hubei province), and Changcheng Lube Oil Company (Beijing) are the top three choices and should be developed preferentially as Sinopec's future lubricating oil producing bases. The conclusions provide decision makers with a theoretical basis for selecting lubricating oil producing bases.

  9. Complex Behavior in a Selective Aging Neuron Model Based on Small World Networks

    Institute of Scientific and Technical Information of China (English)

    LUO Min-Jie; ZHANG Gui-Qing; LIU Qiu-Yu; CHEN Tian-Lun

    2008-01-01

    Complex behavior in a selective aging simple neuron model based on small-world networks is investigated. The basic elements of the model are endowed with the main features of a neuron function. The structure of the selective aging neuron model is discussed. We also give some properties of the new network and find that the neuron model displays power-law behavior. If the brain network is a small-world-like network, the mean avalanche size is almost the same unless the aging parameter is large enough.

  10. QSAR modeling for quinoxaline derivatives using genetic algorithm and simulated annealing based feature selection.

    Science.gov (United States)

    Ghosh, P; Bagchi, M C

    2009-01-01

    With a view to the rational design of selective quinoxaline derivatives, 2D- and 3D-QSAR models have been developed for the prediction of anti-tubercular activities. Successful implementation of a predictive QSAR model largely depends on the selection of a preferred set of molecular descriptors that can signify the chemico-biological interaction. Genetic algorithm (GA) and simulated annealing (SA) are applied as variable selection methods for model development. 2D-QSAR modeling using GA- or SA-based partial least squares (GA-PLS and SA-PLS) methods identified some important topological and electrostatic descriptors as important factors for tubercular activity. Kohonen networks and counter-propagation artificial neural networks (CP-ANN) with GA- and SA-based feature selection have also been applied to such QSAR modeling of quinoxaline compounds. Out of a variable pool of 380 molecular descriptors, predictive QSAR models are developed for the training set and validated on the test set compounds, and a comparative study of the relative effectiveness of linear and non-linear approaches has been conducted. Further analysis using the 3D-QSAR technique identifies two models, obtained by the GA-PLS and SA-PLS methods, for anti-tubercular activity prediction. The influences of the steric and electrostatic field effects shown by the contribution plots are discussed. The results indicate that SA is a very effective variable selection approach for such 3D-QSAR modeling.
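
    Simulated annealing searches descriptor subsets by flipping one descriptor at a time and accepting worse subsets with a probability that shrinks as the temperature cools, which lets it escape local optima. A minimal sketch with an OLS/cross-validated-RMSE objective standing in for the paper's PLS and neural network models; the data, cooling schedule, and parameters are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(7)

def cv_rmse(X, y, mask, folds=5):
    """Objective: cross-validated RMSE of an OLS model on the selected descriptors."""
    idx = np.arange(len(y))
    errs = []
    for f in np.array_split(idx, folds):
        train = np.setdiff1d(idx, f)
        Xt = np.hstack([np.ones((len(train), 1)), X[np.ix_(train, mask)]])
        Xv = np.hstack([np.ones((len(f), 1)), X[np.ix_(f, mask)]])
        beta, *_ = np.linalg.lstsq(Xt, y[train], rcond=None)
        errs.append(np.mean((y[f] - Xv @ beta) ** 2))
    return float(np.sqrt(np.mean(errs)))

def sa_select(X, y, steps=500, t0=1.0, cooling=0.995):
    mask = rng.random(X.shape[1]) < 0.5        # random initial descriptor subset
    cost, temp = cv_rmse(X, y, mask), t0
    for _ in range(steps):
        cand = mask.copy()
        cand[rng.integers(X.shape[1])] ^= True # flip one descriptor in or out
        c = cv_rmse(X, y, cand)
        if c < cost or rng.random() < np.exp((cost - c) / temp):  # Metropolis rule
            mask, cost = cand, c
        temp *= cooling                        # cool the temperature
    return mask, cost

X = rng.normal(size=(120, 15))
y = 2 * X[:, 1] - X[:, 4] + rng.normal(size=120)
mask, cost = sa_select(X, y)
print("selected descriptors:", np.flatnonzero(mask), " CV-RMSE:", round(cost, 3))
```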

  11. Selecting representative climate models for climate change impact studies : An advanced envelope-based selection approach

    NARCIS (Netherlands)

    Lutz, Arthur F.; ter Maat, Herbert W.; Biemans, Hester; Shrestha, Arun B.; Wester, Philippus; Immerzeel, Walter W.

    2016-01-01

    Climate change impact studies depend on projections of future climate provided by climate models. The number of climate models is large and increasing, yet limitations in computational capacity make it necessary to compromise the number of climate models that can be included in a climate change impact study.

  12. Selecting representative climate models for climate change impact studies: an advanced envelope-based selection approach

    NARCIS (Netherlands)

    Lutz, Arthur F.; ter Maat, Herbert W.; Biemans, Hester; Shrestha, Arun B.; Wester, Philippus; Immerzeel, Walter W.

    2016-01-01

    Climate change impact studies depend on projections of future climate provided by climate models. The number of climate models is large and increasing, yet limitations in computational capacity make it necessary to compromise the number of climate models that can be included in a climate change impact study.

  13. Selecting representative climate models for climate change impact studies : An advanced envelope-based selection approach

    NARCIS (Netherlands)

    Lutz, Arthur F.; ter Maat, Herbert W.; Biemans, Hester; Shrestha, Arun B.; Wester, Philippus; Immerzeel, Walter W.

    2016-01-01

    Climate change impact studies depend on projections of future climate provided by climate models. The number of climate models is large and increasing, yet limitations in computational capacity make it necessary to compromise the number of climate models that can be included in a climate change impact study.

  14. Selecting representative climate models for climate change impact studies: an advanced envelope-based selection approach

    NARCIS (Netherlands)

    Lutz, Arthur F.; ter Maat, Herbert W.; Biemans, Hester; Shrestha, Arun B.; Wester, Philippus; Immerzeel, Walter W.

    2016-01-01

    Climate change impact studies depend on projections of future climate provided by climate models. The number of climate models is large and increasing, yet limitations in computational capacity make it necessary to compromise the number of climate models that can be included in a climate change impact study.

  15. A model selection method for nonlinear system identification based FMRI effective connectivity analysis.

    Science.gov (United States)

    Li, Xingfeng; Coyle, Damien; Maguire, Liam; McGinnity, Thomas M; Benali, Habib

    2011-07-01

    In this paper a model selection algorithm for a nonlinear system identification method is proposed to study functional magnetic resonance imaging (fMRI) effective connectivity. Unlike most other methods, this method does not need a pre-defined structure/model for effective connectivity analysis. Instead, it relies on selecting significant nonlinear or linear covariates for the differential equations that describe the mapping relationship between brain output (fMRI response) and input (experiment design). These covariates, as well as their coefficients, are estimated based on a least angle regression (LARS) method. In the implementation of the LARS method, the corrected Akaike information criterion (AICc) and the leave-one-out (LOO) cross-validation method were employed and compared for model selection. Simulation comparisons between the dynamic causal model (DCM), the nonlinear identification method, and the model selection method were conducted for modelling single-input single-output (SISO) and multiple-input multiple-output (MIMO) systems. Results show that the LARS model selection method is faster than DCM and achieves a compact and economic nonlinear model simultaneously. To verify the efficacy of the proposed approach, an analysis of the dorsal and ventral visual pathway networks was carried out based on three real datasets. The results show that LARS can be used for model selection in fMRI effective connectivity studies with phase-encoded, standard block, and random block designs. It is also shown that the LOO cross-validation method for nonlinear model selection yields a smaller residual sum of squares than the AICc algorithm for this study.
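
    The LARS-with-criterion idea is easy to try with off-the-shelf tools: scikit-learn ships LARS-based lasso estimators that pick the regularization either by an information criterion or by cross-validation. The sketch below uses plain AIC/BIC (scikit-learn does not provide the small-sample AICc the paper uses) and ordinary k-fold cross-validation set to leave-one-out; the synthetic regression data are stand-ins for fMRI covariates:

```python
import numpy as np
from sklearn.linear_model import LassoLarsCV, LassoLarsIC

rng = np.random.default_rng(3)
n, p = 80, 20
X = rng.normal(size=(n, p))
y = 1.5 * X[:, 2] - 2.0 * X[:, 7] + rng.normal(scale=0.5, size=n)

# Information-criterion route (criterion="aic" or "bic")
m_aic = LassoLarsIC(criterion="aic").fit(X, y)

# Leave-one-out route: n folds of size one
m_loo = LassoLarsCV(cv=n).fit(X, y)

print("AIC-selected covariates:", np.flatnonzero(m_aic.coef_))
print("LOO-selected covariates:", np.flatnonzero(m_loo.coef_))
```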

  16. Hybrid Modeling of Flotation Height in Air Flotation Oven Based on Selective Bagging Ensemble Method

    Directory of Open Access Journals (Sweden)

    Shuai Hou

    2013-01-01

    Accurate prediction of the flotation height is necessary for precise control of the air flotation oven process, thereby avoiding scratches and improving product quality. In this paper, a hybrid flotation height prediction model is developed. Firstly, a simplified mechanism model is introduced to capture the main dynamic behavior of the process. Thereafter, to compensate for the modeling errors between the actual system and the mechanism model, an error compensation model based on the proposed selective bagging ensemble method is introduced to boost prediction accuracy. In the framework of the selective bagging ensemble method, negative correlation learning and a genetic algorithm are imposed on the bagging ensemble to promote cooperation between base learners. As a result, a subset of base learners can be selected from the original bagging ensemble to compose a selective bagging ensemble that outperforms the original one in prediction accuracy with a compact ensemble size. Simulation results indicate that the proposed hybrid model predicts flotation height better than the other algorithms considered.

  17. Supplier selection based on a neural network model using genetic algorithm.

    Science.gov (United States)

    Golmohammadi, Davood; Creese, Robert C; Valian, Haleh; Kolassa, John

    2009-09-01

    In this paper, a decision-making model was developed to select suppliers using neural networks (NNs). The model uses historical supplier performance data for the selection of suppliers. Inputs and outputs were designed in a unique manner for training purposes. The managers' judgments about suppliers were simulated by using a pairwise comparison matrix for output estimation in the NN. To obtain the benefit of a search technique for the model structure and training, a genetic algorithm (GA) was applied for the initial weights and architecture of the network. The suppliers' database information (input) can be updated over time to revise the suppliers' score estimation based on their performance. The case study illustrates how the model can be applied to supplier selection.

  18. Selection Methodology of Energy Consumption Model Based on Data Envelopment Analysis

    Directory of Open Access Journals (Sweden)

    Nakhodov Vladimir

    2016-12-01

    Energy efficiency monitoring methods in industry are based on statistical modeling of energy consumption. In the present paper, the widely used energy efficiency monitoring method of “Monitoring and Targeting systems” has been considered, highlighting one of its most important issues: selection of a proper mathematical model of energy consumption. The paper gives a list of different models that can be applied in the corresponding systems. A number of criteria that assess particular characteristics of a mathematical model are presented. The traditional criteria of model adequacy, together with “additional” criteria that allow the model characteristics to be estimated more precisely, are proposed for choosing the mathematical model of energy consumption in Monitoring and Targeting systems. In order to compare different models by several criteria simultaneously, an approach based on Data Envelopment Analysis is proposed. Such an approach enables more accurate and reliable energy efficiency monitoring.

  19. On dynamic selection of households for direct marketing based on Markov chain models with memory

    NARCIS (Netherlands)

    Otter, Pieter W.

    2007-01-01

    A simple, dynamic selection procedure is proposed, based on conditional expected profits using Markov chain models with memory. The method is easy to apply: only frequencies and mean values have to be calculated or estimated. The method is empirically illustrated using a data set from a charitable organization.
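
    The mechanics can be sketched with a small chain whose states encode the last two responses (the "memory"), mailing a household only when its conditional expected profit is positive. The transition frequencies, gift value, mailing cost, and horizon below are all invented for illustration:

```python
import numpy as np

# State = (responded at t, responded at t-1), folded into 4 first-order states.
# Row/column order: 00, 01, 10, 11. Entries are assumed estimated frequencies;
# from state (r_t, r_{t-1}) the chain can only move to (r_{t+1}, r_t).
P = np.array([[0.85, 0.00, 0.15, 0.00],
              [0.70, 0.00, 0.30, 0.00],
              [0.00, 0.60, 0.00, 0.40],
              [0.00, 0.35, 0.00, 0.65]])
responded = np.array([0.0, 0.0, 1.0, 1.0])  # states whose latest response is 1

def expected_profit(state, gift=25.0, cost=1.5, horizon=3):
    """Conditional expected profit of mailing this household each period."""
    dist, profit = np.eye(4)[state], 0.0
    for _ in range(horizon):
        dist = dist @ P                       # propagate the response history
        profit += gift * (dist @ responded) - cost
    return profit

for s, label in enumerate(["00", "01", "10", "11"]):
    ep = expected_profit(s)
    print(f"history {label}: E[profit] = {ep:6.2f} -> {'mail' if ep > 0 else 'skip'}")
```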

  20. Sensor selection of helicopter transmission systems based on physical model and sensitivity analysis

    Institute of Scientific and Technical Information of China (English)

    Lyu Kehong; Tan Xiaodong; Liu Guanjun; Zhao Chenxu

    2014-01-01

    In helicopter transmission systems, it is important to monitor and track tooth damage evolution using numerous sensors and detection methods. This paper develops a novel approach for sensor selection based on a physical model and sensitivity analysis. Firstly, a physical model of tooth damage and mesh stiffness is built. Secondly, some effective condition indicators (CIs) are presented, and the optimal CI set is selected by comparing their test statistics according to the Mann-Kendall test. Afterwards, the selected CIs are used to generate a health indicator (HI) through the Sen's slope estimator. Then, the sensors are selected according to their monotonic relevance and sensitivity to the damage levels. Finally, the proposed method is verified with simulation and experimental data. The results show that the approach can provide a guide for health monitoring of helicopter transmission systems, and it is effective in reducing test cost and improving the system's reliability.
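
    Both statistics named above are short to implement: the Mann-Kendall test counts increasing versus decreasing pairs in a series, and the Sen's slope is the median of all pairwise slopes. A minimal sketch on a synthetic condition indicator (the drifting series below is invented; the tie-correction term is omitted for brevity):

```python
import numpy as np
from itertools import combinations
from math import sqrt

def mann_kendall_S(x):
    """Mann-Kendall statistic: number of increasing minus decreasing pairs."""
    return sum(np.sign(x[j] - x[i]) for i, j in combinations(range(len(x)), 2))

def mk_z(x):
    """Normal-approximation test statistic (no tie correction)."""
    n, S = len(x), mann_kendall_S(x)
    var = n * (n - 1) * (2 * n + 5) / 18.0
    return (S - np.sign(S)) / sqrt(var)

def sens_slope(x):
    """Sen's slope estimator: median of all pairwise slopes."""
    return float(np.median([(x[j] - x[i]) / (j - i)
                            for i, j in combinations(range(len(x)), 2)]))

# Hypothetical condition indicator drifting upward as tooth damage grows
rng = np.random.default_rng(5)
ci = 0.02 * np.arange(100) + rng.normal(scale=0.3, size=100)
print("Z =", round(mk_z(ci), 2), "  Sen slope =", round(sens_slope(ci), 4))
```

    A |Z| well above 1.96 flags a significant monotonic trend, and the slope's sign and magnitude then feed the health indicator.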

  1. [Location selection for Shenyang urban parks based on GIS and multi-objective location allocation model].

    Science.gov (United States)

    Zhou, Yuan; Shi, Tie-Mao; Hu, Yuan-Man; Gao, Chang; Liu, Miao; Song, Lin-Qi

    2011-12-01

    Based on geographic information system (GIS) technology and a multi-objective location-allocation (LA) model, and considering four relatively independent objective factors (population density level, air pollution level, urban heat island effect level, and urban land use pattern), an optimized location selection for the urban parks within the Third Ring of Shenyang was conducted, and the selection results were compared with the spatial distribution of existing parks, aiming to evaluate the rationality of the spatial distribution of urban green spaces. In the location selection of urban green spaces in the study area, air pollution was the most important factor, and, compared with any single objective factor, the weighted analysis of multi-objective factors provided a better-optimized spatial location selection for new urban green spaces. Combining GIS technology with the LA model offers a new approach to the spatial optimization of urban green spaces.

  2. Sensor selection of helicopter transmission systems based on physical model and sensitivity analysis

    Directory of Open Access Journals (Sweden)

    Lyu Kehong

    2014-06-01

    In helicopter transmission systems, it is important to monitor and track tooth damage evolution using numerous sensors and detection methods. This paper develops a novel approach for sensor selection based on a physical model and sensitivity analysis. Firstly, a physical model of tooth damage and mesh stiffness is built. Secondly, some effective condition indicators (CIs) are presented, and the optimal CI set is selected by comparing their test statistics according to the Mann–Kendall test. Afterwards, the selected CIs are used to generate a health indicator (HI) through the Sen's slope estimator. Then, the sensors are selected according to their monotonic relevance and sensitivity to the damage levels. Finally, the proposed method is verified with simulation and experimental data. The results show that the approach can provide a guide for health monitoring of helicopter transmission systems, and it is effective in reducing test cost and improving the system's reliability.

  3. Improved social force model based on exit selection for microscopic pedestrian simulation in subway station

    Institute of Scientific and Technical Information of China (English)

    郑勋; 李海鹰; 孟令云; 许心越; 陈旭

    2015-01-01

    An improved social force model based on exit selection is proposed to simulate pedestrians' microscopic behaviors in a subway station. The modification lies in considering three factors: spatial distance, occupant density, and exit width. In addition, the problem of pedestrians switching exits too frequently is solved as follows: pedestrians do not change to other exits within the affected area of an exit, a probability of keeping the previously chosen exit is applied, and the exit selection function is invoked only after several simulation steps. Pedestrians in a subway station have some special characteristics, such as explicit destinations and differing familiarity with the station. Finally, Beijing Zoo Subway Station is taken as an example, and the feasibility of the model results is verified by comparing actual data with simulation data. The simulation results show that the improved model can depict the microscopic behaviors of pedestrians in a subway station.

  4. CC-PSM: A Preference-Aware Selection Model for Cloud Service Based on Consumer Community

    Directory of Open Access Journals (Sweden)

    Yan Wang

    2015-01-01

    In order to give full consideration to the consumer's personal preference in cloud service selection strategies and to improve the credibility of service prediction, a preference-aware cloud service selection model based on consumer community (CC-PSM) is presented in this work. The objective of CC-PSM is to select a service meeting a target consumer's demands and preference. Firstly, the correlations between cloud consumers are mined from a bipartite network for service selection to compute the preference similarity between them. Secondly, an improved hierarchical clustering algorithm is designed to discover consumer communities with similar preferences so as to form trusted groups for service recommendation. In the clustering process, a quantization function called community degree is given to evaluate the quality of a community structure. Thirdly, a prediction model based on the consumer community is built to predict a consumer's evaluation of an unknown service. The experimental results show that CC-PSM can effectively partition consumers based on their preferences and is effective in service selection applications.

  5. A MCDM-based model for vendor selection: a case study in the particleboard industry

    Institute of Scientific and Technical Information of China (English)

    Reza Zanjirani Farahani; Moslem Fadaei

    2012-01-01

    We investigated the procurement of raw materials for particleboard to minimize costs and develop an efficient optimization model for the product mix. In a multiple-vendor market, vendors must be evaluated based on specified criteria. Assuming sourcing from the highest-scoring vendors, annual purchase quantities are then planned. To meet procurement needs, we first propose a model to describe the problem. Then, an appropriate multi-criteria decision making (MCDM) technique is selected to solve it. We ran the model using commercial software such as LINGO® and then compared the model results to a real case involving one of the largest particleboard manufacturers in the region. The model run on real data yielded a procurement program that is more efficient and lower in cost than the program currently in use. Use of this procurement modelling approach would yield considerable financial returns.

  6. Trust-Enhanced Cloud Service Selection Model Based on QoS Analysis.

    Science.gov (United States)

    Pan, Yuchen; Ding, Shuai; Fan, Wenjuan; Li, Jing; Yang, Shanlin

    2015-01-01

    Cloud computing technology plays a very important role in many areas, such as the construction and development of smart cities. Meanwhile, numerous cloud services appear on cloud-based platforms. How to select trustworthy cloud services therefore remains a significant problem on such platforms, and it has been extensively investigated owing to the ever-growing needs of users. However, the trust relationships in social networks have not been taken into account in existing methods of cloud service selection and recommendation. In this paper, we propose a cloud service selection model based on trust-enhanced similarity. Firstly, the direct, indirect, and hybrid trust degrees are measured based on the interaction frequencies among users. Secondly, we estimate the overall similarity by combining the experience usability measured by Jaccard's coefficient and the numerical distance computed by the Pearson correlation coefficient. Then, by using the trust degree to modify the basic similarity, we obtain a trust-enhanced similarity. Finally, we utilize the trust-enhanced similarity to find similar trusted neighbors and predict the missing QoS values as the basis of cloud service selection and recommendation. The experimental results show that our approach is able to obtain optimal results by adjusting parameters and exhibits high effectiveness. The cloud service rankings produced by our model also have better QoS properties than those of other methods in the comparison experiments.
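
    The prediction step is a collaborative-filtering pattern: blend a usage-overlap similarity (Jaccard) with a rating similarity (Pearson), scale by the trust degree, and use the result to weight neighbors' observed QoS values. A minimal sketch with an invented QoS matrix, trust degrees, and blending weight, not the paper's exact formulas:

```python
import numpy as np

def pearson(u, v, mask):
    """Pearson correlation over the services both users have invoked."""
    if mask.sum() < 2:
        return 0.0
    a, b = u[mask], v[mask]
    sa, sb = a.std(), b.std()
    if sa == 0 or sb == 0:
        return 0.0
    return float(np.mean((a - a.mean()) * (b - b.mean())) / (sa * sb))

def jaccard(mu, mv):
    union = np.sum(mu | mv)
    return np.sum(mu & mv) / union if union else 0.0

def trust_enhanced_similarity(u, v, trust_uv, lam=0.5):
    """Blend usage overlap and rating similarity, then scale by trust."""
    mu, mv = ~np.isnan(u), ~np.isnan(v)
    base = lam * jaccard(mu, mv) + (1 - lam) * pearson(u, v, mu & mv)
    return base * trust_uv

# QoS matrix: rows = users, cols = services, NaN = never invoked (toy data)
Q = np.array([[0.9, 0.7, np.nan, 0.8],
              [0.8, 0.6, 0.9,    np.nan],
              [0.2, np.nan, 0.4, 0.3]])
trust = {1: 0.9, 2: 0.3}   # target user 0's trust degrees in users 1 and 2
sims = {v: trust_enhanced_similarity(Q[0], Q[v], trust[v]) for v in (1, 2)}

# Predict user 0's missing QoS for service 2 from positively similar neighbors
nbrs = [(v, s) for v, s in sims.items() if s > 0 and not np.isnan(Q[v, 2])]
pred = sum(s * Q[v, 2] for v, s in nbrs) / sum(s for v, s in nbrs)
print("predicted QoS:", pred)
```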

  7. Efficient feature selection and multiclass classification with integrated instance and model based learning.

    Science.gov (United States)

    Liu, Zhenqiu; Bensmail, Halima; Tan, Ming

    2012-01-01

    Multiclass classification and feature (variable) selection are commonly encountered in many biological and medical applications. However, extending binary classification approaches to multiclass problems is not trivial. Instance-based methods such as the K nearest neighbor (KNN) classifier naturally extend to multiclass problems and usually perform well with unbalanced data, but suffer from the curse of dimensionality; their performance is degraded when applied to high dimensional data. On the other hand, model-based methods such as logistic regression require the decomposition of the multiclass problem into several binary problems with one-vs.-one or one-vs.-rest schemes. Even though they can be applied to high dimensional data with L1- or Lp-penalized methods, such approaches can only select independent features, and the features selected for different binary problems are usually different. They also produce unbalanced classification problems with the one-vs.-rest scheme even if the original multiclass problem is balanced. By combining instance-based and model-based learning, we propose an efficient learning method with integrated KNN and constrained logistic regression (KNNLog) for simultaneous multiclass classification and feature selection. Our proposed method simultaneously minimizes the intra-class distance and maximizes the inter-class distance with fewer estimated parameters. It is very efficient for problems with small sample size and unbalanced classes, a case common in many real applications. In addition, our model-based feature selection method can identify highly correlated features simultaneously, avoiding the multiplicity problem due to multiple tests. The proposed method is evaluated with simulated and real data, including one unbalanced microRNA dataset for leukemia and one multiclass metagenomic dataset from the Human Microbiome Project (HMP). It performs well in limited computational experiments.

  8. Optimizing selection of bridge raft based on fuzzy matter-element model and combination weighting

    Institute of Scientific and Technical Information of China (English)

    Li Feng; Shao Fei; Wang Jianping; Li Zhigang

    2012-01-01

    A floating bridge constructed from portable steel bridging and civilian ships is a necessary option for supporting maneuvers across the Yangtze River. A bridge raft scheme is a comprehensive matter, involving a variety of technical factors and uncertainties, and its optimization concerns construction time, the quantity of equipment, and manpower. Based on the calculation results for bridge rafts, an evaluation system is established, consisting of indexes for the spacing between interior bays, raft length, number of trusses, operation difficulty, and maximal bending stress. A fuzzy matter-element model for the optimizing selection of bridge rafts was built by combining quantitative analysis with qualitative analysis. The method of combination weighting was used to calculate the index weights in order to reduce subjective randomness. The ranking of schemes and the optimization result were finally obtained based on the Euclid approach degree. The application result shows that the method is simple and practical.

  9. A GMM-Based Test for Normal Disturbances of the Heckman Sample Selection Model

    Directory of Open Access Journals (Sweden)

    Michael Pfaffermayr

    2014-10-01

    The Heckman sample selection model relies on the assumption of normal and homoskedastic disturbances. However, before considering more general, alternative semiparametric models that do not need the normality assumption, it seems useful to test this assumption. Following Meijer and Wansbeek (2007), the present contribution derives a GMM-based pseudo-score LM test of whether the third and fourth moments of the disturbances of the outcome equation of the Heckman model conform to those implied by the truncated normal distribution. The test is easy to calculate, and in Monte Carlo simulations it shows good performance for sample sizes of 1000 or larger.

  10. SVM Based Descriptor Selection and Classification of Neurodegenerative Disease Drugs for Pharmacological Modeling.

    Science.gov (United States)

    Shahid, Mohammad; Shahzad Cheema, Muhammad; Klenner, Alexander; Younesi, Erfan; Hofmann-Apitius, Martin

    2013-03-01

    Systems pharmacological modeling of drug mode of action for the next generation of multitarget drugs may open new routes for drug design and discovery. Computational methods are widely used in this context, amongst which support vector machines (SVM) have proven successful in addressing the challenge of classifying drugs with similar features. We applied one such SVM-based approach, namely SVM-based recursive feature elimination (SVM-RFE), to predict the pharmacological properties of drugs widely used against complex neurodegenerative disorders (NDD) and to build an in-silico computational model for the binary classification of NDD drugs versus other drugs. Applying the SVM-RFE model to a set of drugs successfully separated NDD drugs from non-NDD drugs, with an overall accuracy of ∼80% under 10-fold cross-validation using the 40 top-ranked molecular descriptors selected out of 314. Moreover, the SVM-RFE method outperformed linear discriminant analysis (LDA) based feature selection and classification. The model reduced the multidimensional descriptor space of drugs dramatically and predicted NDD drugs with high accuracy while avoiding overfitting. Based on these results, NDD-specific focused libraries of drug-like compounds can be designed and existing NDD-specific drugs can be characterized by a well-characterized set of molecular descriptors.
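
    The SVM-RFE workflow described above can be approximated with standard tooling. The sketch below assumes scikit-learn and synthetic descriptor data standing in for the 314 molecular descriptors; it is an illustration of the technique, not the authors' exact pipeline.

    ```python
    # Sketch of SVM-based recursive feature elimination (SVM-RFE) for binary
    # drug classification; descriptor data are synthetic stand-ins.
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.feature_selection import RFE
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    # Stand-in for 314 molecular descriptors of NDD vs. non-NDD drugs.
    X, y = make_classification(n_samples=300, n_features=314, n_informative=40,
                               random_state=0)

    # RFE wrapped around a linear SVM, keeping the 40 top-ranked descriptors.
    svm = SVC(kernel="linear", C=1.0)
    selector = RFE(estimator=svm, n_features_to_select=40, step=10)
    model = make_pipeline(StandardScaler(), selector, SVC(kernel="linear"))

    scores = cross_val_score(model, X, y, cv=10)  # 10-fold cross-validation
    print("mean CV accuracy: %.3f" % scores.mean())
    ```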

  11. Log-Linear Model Based Behavior Selection Method for Artificial Fish Swarm Algorithm

    Directory of Open Access Journals (Sweden)

    Zhehuang Huang

    2015-01-01

    Full Text Available Artificial fish swarm algorithm (AFSA) is a population-based optimization technique inspired by the social behavior of fish. In the past several years, AFSA has been successfully applied in many research and application areas. The behavior of the fish has a crucial impact on the performance of AFSA, such as its global exploration ability and convergence speed, so how to construct and select behaviors is an important task. To address these problems, an improved artificial fish swarm algorithm based on a log-linear model is proposed and implemented in this paper. There are three main contributions. Firstly, we propose a new behavior selection algorithm based on a log-linear model, which enhances the decision-making ability of behavior selection. Secondly, an adaptive movement behavior based on adaptive weights is presented, which can adjust dynamically according to the diversity of the fish. Finally, some new behaviors are defined and introduced into the artificial fish swarm algorithm for the first time to improve its global optimization capability. Experiments on high-dimensional function optimization show that the improved algorithm has more powerful global exploration ability and reasonable convergence speed compared with the standard artificial fish swarm algorithm.

  12. Log-linear model based behavior selection method for artificial fish swarm algorithm.

    Science.gov (United States)

    Huang, Zhehuang; Chen, Yidong

    2015-01-01

    Artificial fish swarm algorithm (AFSA) is a population-based optimization technique inspired by the social behavior of fish. In the past several years, AFSA has been successfully applied in many research and application areas. The behavior of the fish has a crucial impact on the performance of AFSA, such as its global exploration ability and convergence speed, so how to construct and select behaviors is an important task. To address these problems, an improved artificial fish swarm algorithm based on a log-linear model is proposed and implemented in this paper. There are three main contributions. Firstly, we propose a new behavior selection algorithm based on a log-linear model, which enhances the decision-making ability of behavior selection. Secondly, an adaptive movement behavior based on adaptive weights is presented, which can adjust dynamically according to the diversity of the fish. Finally, some new behaviors are defined and introduced into the artificial fish swarm algorithm for the first time to improve its global optimization capability. Experiments on high-dimensional function optimization show that the improved algorithm has more powerful global exploration ability and reasonable convergence speed compared with the standard artificial fish swarm algorithm.
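
    The core log-linear selection step can be illustrated compactly: each candidate behavior is scored by a weighted feature vector and chosen with probability proportional to the exponential of its score (a softmax). The features and weights below are invented for illustration; the paper defines its own.

    ```python
    # Sketch: log-linear (softmax) selection over candidate fish behaviors.
    # Features and weights are illustrative; the paper defines its own.
    import numpy as np

    rng = np.random.default_rng(0)

    def log_linear_select(features, weights):
        """Pick a behavior index with P(b) proportional to exp(w . f_b)."""
        scores = features @ weights
        probs = np.exp(scores - scores.max())      # stabilized softmax
        probs /= probs.sum()
        return rng.choice(len(probs), p=probs), probs

    # Rows: candidate behaviors (prey, swarm, follow); columns: features such
    # as expected fitness gain and local crowding (hypothetical values).
    features = np.array([[0.8, 0.1],
                         [0.5, 0.6],
                         [0.9, 0.4]])
    weights = np.array([1.5, -0.7])                # log-linear weights

    behavior, probs = log_linear_select(features, weights)
    print("selection probabilities:", np.round(probs, 3), "chosen:", behavior)
    ```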

  13. A QFD-based decision making model for computer-aided design software selection

    Directory of Open Access Journals (Sweden)

    Kanika Prasad

    2016-03-01

    Full Text Available With the progress in technology and innovation in product development, the contribution of computer-aided design (CAD) software to the design and manufacture of parts/products is growing significantly. Selection of an appropriate CAD software package is not a trifling task, as it involves analyzing the appositeness of the available software packages to the unique requirements of the organization. The existence of a large number of CAD software vendors, discordance among different hardware and software systems, and a dearth of technical knowledge and experience among decision makers further complicate the selection procedure. Moreover, there are very few published research papers related to CAD software selection, and the majority of them have either employed criteria weights computed from subjective judgements of the end users or failed to incorporate the voice of customers in the decision making process. Quality function deployment (QFD) is a well-known technique for determining the relative importance of customer-defined criteria for the selection of any product or service. Therefore, this paper deals with the design and development of a QFD-based decision making model in Visual BASIC 6.0 for the selection of CAD software for manufacturing organizations. In order to demonstrate the applicability and potentiality of the developed model in the form of a software prototype, two illustrative examples are also provided.
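
    The central QFD computation is small enough to sketch: customer-requirement importances are propagated through the house-of-quality relationship matrix to yield criteria weights, which then score the candidate packages. All numbers below are illustrative, not taken from the paper's examples.

    ```python
    # Sketch: deriving technical-criteria weights in QFD by propagating
    # customer-requirement importances through the relationship matrix.
    import numpy as np

    # Importance of 3 customer requirements (e.g., ease of use, compatibility,
    # cost) on a 1-5 scale; values are illustrative.
    cust_importance = np.array([5, 3, 4])

    # House-of-quality relationship matrix: rows = customer requirements,
    # columns = 4 technical criteria, entries on the usual 0/1/3/9 scale.
    R = np.array([[9, 3, 1, 0],
                  [1, 9, 3, 3],
                  [0, 3, 9, 1]])

    raw = cust_importance @ R          # absolute criterion weights
    weights = raw / raw.sum()          # normalized relative importance
    print("criteria weights:", np.round(weights, 3))

    # The weights then score each candidate CAD package against the criteria.
    packages = {"CAD-A": np.array([4, 3, 5, 2]),
                "CAD-B": np.array([3, 5, 2, 4])}
    for name, ratings in packages.items():
        print(name, "score: %.2f" % (weights @ ratings))
    ```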

  14. Test selection and optimization for PHM based on failure evolution mechanism model

    Institute of Scientific and Technical Information of China (English)

    Jing Qiu; Xiaodong Tan; Guanjun Liu; Kehong LÜ

    2013-01-01

    Test selection and optimization (TSO) can improve the fault diagnosis, prognosis and health-state evaluation abilities of prognostics and health management (PHM) systems. Traditionally, TSO has focused mainly on fault detection and isolation, which cannot provide an effective guide for design for testability (DFT) to improve the PHM performance level. To solve this problem, a TSO model for PHM systems is proposed. Firstly, by integrating the characteristics of fault severity and propagation time, and analyzing test timing and sensitivity, a testability model based on a failure evolution mechanism model (FEMM) for PHM systems is built; this model describes the fault evolution-test dependency using a fault-symptom parameter matrix and a symptom parameter-test matrix. Secondly, a novel method of inherent testability analysis for PHM systems is developed based on this information. With the analysis completed, a TSO model whose objective is to maximize fault trackability and minimize test cost is formulated from the inherent testability analysis results, and an adaptive simulated annealing genetic algorithm (ASAGA) is introduced to solve the TSO problem. Finally, a case of a centrifugal pump system is used to verify the feasibility and effectiveness of the proposed models and methods. The results show that the proposed technology is important for PHM systems to select and optimize test sets in order to improve their performance level.

  15. Development of a model selection method based on the reliability of a soft sensor model

    Directory of Open Access Journals (Sweden)

    Takeshi Okada

    2012-04-01

    Full Text Available Soft sensors are widely used to realize highly efficient operation in chemical processes because not every important variable, such as product quality, is measured online. By using soft sensors, such a difficult-to-measure variable y can be estimated from other process variables that are measured online. In order to estimate values of y without degradation of a soft sensor model, a time difference (TD) model was proposed previously. Though a TD model has high predictive ability, it does not function well when process conditions arise that have never been observed. To cope with this problem, a soft sensor model can be updated with the newest data, but updating a model costs plant operators time and effort. We therefore developed an online monitoring system to judge whether a TD model can predict values of y accurately or an updated model should be used instead, both to reduce maintenance cost and to improve the predictive accuracy of soft sensors. The monitoring system is based on a support vector machine or on the standard deviation of y-values estimated from various time-difference intervals. We confirmed that the proposed system functioned successfully through the analysis of real industrial data from a distillation process.
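
    A minimal sketch of the time-difference idea, under the common formulation in which a model is fitted to differences so that ŷ(t) = y(t−i) + f(x(t) − x(t−i)); the linear model and synthetic drifting data are assumptions for illustration.

    ```python
    # Sketch: time-difference (TD) soft sensor. A linear model is fitted to
    # differences dx -> dy, and y(t) is predicted as y(t-i) + f(x(t) - x(t-i)).
    import numpy as np
    from sklearn.linear_model import LinearRegression

    rng = np.random.default_rng(1)
    T, i = 500, 10                               # samples, TD interval
    X = rng.normal(size=(T, 4)).cumsum(axis=0)   # drifting process variables
    y = X @ np.array([0.5, -0.2, 0.1, 0.3]) + 2.0 + rng.normal(0, 0.05, T)

    dX, dy = X[i:] - X[:-i], y[i:] - y[:-i]      # time differences
    td_model = LinearRegression().fit(dX[:400], dy[:400])

    # Predict recent y from the lagged measurement plus the predicted change;
    # slow drift and bias in y cancel out in the differences.
    y_hat = y[400:-i] + td_model.predict(X[400 + i:] - X[400:-i])
    print("RMSE: %.4f" % np.sqrt(np.mean((y_hat - y[400 + i:]) ** 2)))
    ```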

  16. A mixture model-based strategy for selecting sets of genes in multiclass response microarray experiments.

    Science.gov (United States)

    Broët, Philippe; Lewin, Alex; Richardson, Sylvia; Dalmasso, Cyril; Magdelenat, Henri

    2004-11-01

    Multiclass response (MCR) experiments are those in which there are more than two classes to be compared. In these experiments, though the null hypothesis is simple, there are typically many patterns of gene expression change across the different classes, which leads to complex alternatives. In this paper, we propose a new strategy for selecting genes in MCR that is based on a flexible mixture model for the marginal distribution of a modified F-statistic. Using this model, false positive and negative discovery rates can be estimated and combined to produce a rule for selecting a subset of genes. Moreover, the proposed method allows calculation of these rates for any predefined subset of genes. We illustrate the performance of our approach using simulated datasets and a real breast cancer microarray dataset. In the latter study, we investigate predefined subsets of genes and point out interesting differences between three distinct biological pathways. http://www.bgx.org.uk/software.html

  17. Fractional vegetation cover estimation based on an improved selective endmember spectral mixture model.

    Directory of Open Access Journals (Sweden)

    Ying Li

    Full Text Available Vegetation is an important part of an ecosystem, and estimation of fractional vegetation cover is of great significance for monitoring vegetation growth in a region. With Landsat TM images and HJ-1B images as the data source, an improved selective endmember linear spectral mixture model (SELSMM) was put forward in this research to estimate the fractional vegetation cover in the Huangfuchuan watershed in China. We compared the result with the vegetation coverage estimated with the linear spectral mixture model (LSMM) and conducted accuracy tests on the two results with field survey data to study the effectiveness of the different models in estimation of vegetation coverage. Results indicated that: (1) the RMSE of the estimation result of SELSMM based on TM images is the lowest, at 0.044. The RMSEs of the estimation results of LSMM based on TM images, SELSMM based on HJ-1B images and LSMM based on HJ-1B images are respectively 0.052, 0.077 and 0.082, all higher than that of SELSMM based on TM images; (2) the R2 values of SELSMM based on TM images, LSMM based on TM images, SELSMM based on HJ-1B images and LSMM based on HJ-1B images are respectively 0.668, 0.531, 0.342 and 0.336. Among these models, SELSMM based on TM images has the highest estimation accuracy and also the highest correlation with measured vegetation coverage. Of the two methods tested, SELSMM is superior to LSMM in estimation of vegetation coverage, and it is also better at unmixing mixed pixels of TM images than pixels of HJ-1B images. So, SELSMM based on TM images is comparatively accurate and reliable for regional fractional vegetation cover estimation.

  18. Fractional vegetation cover estimation based on an improved selective endmember spectral mixture model.

    Science.gov (United States)

    Li, Ying; Wang, Hong; Li, Xiao Bing

    2015-01-01

    Vegetation is an important part of an ecosystem, and estimation of fractional vegetation cover is of great significance for monitoring vegetation growth in a region. With Landsat TM images and HJ-1B images as the data source, an improved selective endmember linear spectral mixture model (SELSMM) was put forward in this research to estimate the fractional vegetation cover in the Huangfuchuan watershed in China. We compared the result with the vegetation coverage estimated with the linear spectral mixture model (LSMM) and conducted accuracy tests on the two results with field survey data to study the effectiveness of the different models in estimation of vegetation coverage. Results indicated that: (1) the RMSE of the estimation result of SELSMM based on TM images is the lowest, at 0.044. The RMSEs of the estimation results of LSMM based on TM images, SELSMM based on HJ-1B images and LSMM based on HJ-1B images are respectively 0.052, 0.077 and 0.082, all higher than that of SELSMM based on TM images; (2) the R2 values of SELSMM based on TM images, LSMM based on TM images, SELSMM based on HJ-1B images and LSMM based on HJ-1B images are respectively 0.668, 0.531, 0.342 and 0.336. Among these models, SELSMM based on TM images has the highest estimation accuracy and also the highest correlation with measured vegetation coverage. Of the two methods tested, SELSMM is superior to LSMM in estimation of vegetation coverage, and it is also better at unmixing mixed pixels of TM images than pixels of HJ-1B images. So, SELSMM based on TM images is comparatively accurate and reliable for regional fractional vegetation cover estimation.
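
    The generic linear unmixing step underlying both LSMM and SELSMM can be sketched as a constrained least-squares problem per pixel (non-negative fractions summing to one). The endmember spectra below are invented, and the selective-endmember refinement itself is not reproduced.

    ```python
    # Sketch: linear spectral unmixing of one pixel into endmember fractions
    # (vegetation, soil, shade) via non-negative least squares with a
    # sum-to-one constraint enforced through an appended weighted row.
    import numpy as np
    from scipy.optimize import nnls

    # Columns: endmember spectra over 6 bands (illustrative values).
    E = np.array([[0.05, 0.30, 0.20],
                  [0.08, 0.32, 0.19],
                  [0.04, 0.35, 0.18],
                  [0.45, 0.38, 0.15],
                  [0.50, 0.40, 0.12],
                  [0.25, 0.42, 0.10]])
    pixel = 0.6 * E[:, 0] + 0.3 * E[:, 1] + 0.1 * E[:, 2]

    # Append a heavily weighted sum-to-one row to softly enforce the constraint.
    w = 1e3
    A = np.vstack([E, w * np.ones(3)])
    b = np.append(pixel, w)
    fractions, _ = nnls(A, b)
    print("estimated fractions:", np.round(fractions, 3))  # ~[0.6, 0.3, 0.1]
    ```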

  19. Customer Order Decoupling Point Selection Model in Mass Customization Based on MAS

    Institute of Scientific and Technical Information of China (English)

    XU Xuanguo; LI Xiangyang

    2006-01-01

    Mass customization (MC) relates to the ability to provide individually designed products or services to customers with high process flexibility or integration. The literature on mass customization has focused on the mechanisms of MC, but little on customer order decoupling point selection. The aim of this paper is to present a model for customer order decoupling point selection based on domain knowledge interactions between enterprises and customers in mass customization. Based on an analysis of other researchers' achievements, and combining the demand problems of customers and enterprises, a group decision model for customer order decoupling point selection is constructed based on quality function deployment and a multi-agent system. By treating the decision makers of independent functional departments as independent decision agents, a decision agent set is added as a third dimension to the house of quality, forming a cubic quality function deployment. The decision-making consists of two procedures: the first is to build a planar house of quality in each functional department to express its opinions; the other is to evaluate and aggregate these sub-decisions through a new planar quality function deployment. Thus, departmental decision-making can make good use of domain knowledge through ontology, and the overall decision-making stays simple by avoiding an excess of customer requirements.

  20. Research on Methods for Discovering and Selecting Cloud Infrastructure Services Based on Feature Modeling

    Directory of Open Access Journals (Sweden)

    Huamin Zhu

    2016-01-01

    Full Text Available Nowadays more and more cloud infrastructure service providers are offering large numbers of service instances which are combinations of diversified resources, such as computing, storage, and network. However, for cloud infrastructure services, the lack of a description standard and the inadequate study of systematic discovery and selection methods have made it difficult for users to discover and choose services. First, considering the highly configurable properties of a cloud infrastructure service, the feature model method is used to describe such a service. Second, based on this description, a systematic discovery and selection method for cloud infrastructure services is proposed. The automatic analysis techniques of the feature model are introduced to verify the model’s validity and to perform the matching of the service and demand models. Finally, we determine the critical decision metrics and their corresponding measurement methods for cloud infrastructure services, where subjective and objective weighting results are combined to determine the weights of the decision metrics. The best matching instances from various providers are then ranked by their comprehensive evaluations. Experimental results show that the proposed methods can effectively improve the accuracy and efficiency of cloud infrastructure service discovery and selection.

  1. Defining new criteria for selection of cell-based intestinal models using publicly available databases

    Directory of Open Access Journals (Sweden)

    Christensen Jon

    2012-06-01

    Full Text Available Abstract Background The criteria for choosing relevant cell lines among a vast panel of available intestinal-derived lines exhibiting a wide range of functional properties are still ill-defined. The objective of this study was, therefore, to establish objective criteria for choosing relevant cell lines to assess their appropriateness as tumor models as well as for drug absorption studies. Results We made use of publicly available expression signatures and cell-based functional assays to delineate differences between various intestinal colon carcinoma cell lines and normal intestinal epithelium. We compared a panel of intestinal cell lines with patient-derived normal and tumor epithelium and classified them according to traits relating to oncogenic pathway activity, epithelial-mesenchymal transition (EMT) and stemness, migratory properties, proliferative activity, transporter expression profiles and chemosensitivity. For example, SW480 represents an EMT-high, migratory phenotype and scored highest in terms of signatures associated with worse overall survival and higher risk of recurrence based on patient-derived databases. On the other hand, differentiated HT29 and T84 cells showed gene expression patterns closest to tumor bulk derived cells. Regarding drug absorption, we confirmed that differentiated Caco-2 cells are the model of choice for active uptake studies in the small intestine. Regarding chemosensitivity, we were unable to confirm a recently proposed association of chemo-resistance with EMT traits. However, a novel signature was identified through mining of NCI60 GI50 values that allowed us to rank the panel of intestinal cell lines according to their drug responsiveness to commonly used chemotherapeutics. Conclusions This study presents a straightforward strategy to exploit publicly available gene expression data to guide the choice of cell-based models. While this approach does not overcome the major limitations of such models

  2. Prediction of selected Indian stock using a partitioning–interpolation based ARIMA–GARCH model

    Directory of Open Access Journals (Sweden)

    C. Narendra Babu

    2015-07-01

    Full Text Available Accurate long-term prediction of time series data (TSD) is a very useful research challenge in diversified fields. As financial TSD are highly volatile, multi-step prediction of financial TSD is a major research problem in TSD mining. The two challenges encountered are maintaining high prediction accuracy and preserving the data trend across the forecast horizon. Traditional linear models such as the autoregressive integrated moving average (ARIMA) and generalized autoregressive conditional heteroscedastic (GARCH) models preserve the data trend to some extent, at the cost of prediction accuracy. Non-linear models like ANN maintain prediction accuracy by sacrificing the data trend. In this paper, a linear hybrid model which maintains prediction accuracy while preserving the data trend is proposed, together with a quantitative reasoning analysis justifying its accuracy. Moving-average (MA) filter based pre-processing and a partitioning and interpolation (PI) technique are incorporated in the proposed model. Some existing models and the proposed model are applied to selected NSE India stock market data. Performance results show that, for multi-step-ahead prediction, the proposed model outperforms the others in terms of both prediction accuracy and preservation of the data trend.
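
    A minimal sketch of MA-filter pre-processing followed by ARIMA fitting and multi-step forecasting, assuming statsmodels; the series is synthetic and the partitioning-interpolation step of the paper is not reproduced.

    ```python
    # Sketch: moving-average pre-filtering followed by ARIMA fitting and
    # multi-step forecasting on a synthetic price series.
    import numpy as np
    import pandas as pd
    from statsmodels.tsa.arima.model import ARIMA

    rng = np.random.default_rng(0)
    price = pd.Series(100 + np.cumsum(rng.normal(0.1, 1.0, 300)))

    smoothed = price.rolling(window=5, center=True).mean().dropna()  # MA filter

    horizon = 20
    train = smoothed.iloc[:-horizon]
    model = ARIMA(train, order=(1, 1, 1)).fit()
    forecast = model.forecast(steps=horizon)          # multi-step prediction

    actual = smoothed.iloc[-horizon:].to_numpy()
    mape = np.mean(np.abs((actual - forecast.to_numpy()) / actual)) * 100
    print("20-step MAPE: %.2f%%" % mape)
    ```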

  3. Regression Test-Selection Technique Using Component Model Based Modification: Code to Test Traceability

    Directory of Open Access Journals (Sweden)

    Ahmad A. Saifan

    2016-04-01

    Full Text Available Regression testing is a safeguarding procedure to validate and verify adapted software and guarantee that no errors have emerged. However, regression testing is very costly when testers need to re-execute all the test cases against the modified software. This paper proposes a new approach in the regression test selection domain. The approach is based on meta-models (test models and structured models) to decrease the number of test cases to be used in the regression testing process. The approach has been evaluated using three Java applications. To measure its effectiveness, we compared the results with those of the retest-all approach. The results show that our approach reduces the size of the test suite without a negative impact on the effectiveness of fault detection.

  4. Neural-Based Pattern Matching for Selection of Biophysical Model Meteorological Forcings

    Science.gov (United States)

    Coleman, A. M.; Wigmosta, M. S.; Li, H.; Venteris, E. R.; Skaggs, R. J.

    2011-12-01

    matching method using neural-network based Self-Organizing Maps (SOM) and GIS-based spatial modeling. This method pattern matches long-term mean monthly meteorology at an individual site to a series of CLIGEN stations within a user-defined proximal distance. The time-series data signatures of the selected stations are competed against one another using a SOM-generated similarity metric to determine the closest pattern match to the spatially distributed PRISM meteorology at the site of interest. This method overcomes issues with topographic dispersion of meteorology stations and existence of microclimates where the nearest meteorology station may not be the most representative.

  5. Peer selecting model based on FCM for wireless distributed P2P files sharing systems

    Institute of Scientific and Technical Information of China (English)

    LI Xi; JI Hong; ZHENG Rui-ming

    2010-01-01

    In order to improve the performance of wireless distributed peer-to-peer (P2P) file sharing systems, a general system architecture and a novel peer selecting model based on fuzzy cognitive maps (FCM) are proposed in this paper. The new model provides an effective approach to choosing an optimal peer from several resource-discovery results for the best file transfer. Compared with the traditional min-hops scheme, which uses hop count as the only selection criterion, the proposed model uses FCM to investigate the complex relationships among the various relevant factors in wireless environments and gives an overall evaluation score for each candidate. It also has strong scalability, being independent of any specific P2P resource-discovery protocol. Furthermore, a complete implementation is explained in concrete modules. The simulation results show that the proposed model is effective and feasible compared with the min-hops scheme, with the successful transfer rate increased by at least 20% and transfer time improved by as much as 34%.
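
    How an FCM can score a candidate peer is easy to sketch: concepts such as signal strength and hop count influence one another through a signed weight matrix, and the activation vector is iterated until it stabilizes, with one concept read off as the peer's score. The concepts and weights below are illustrative assumptions.

    ```python
    # Sketch: scoring a candidate peer with a fuzzy cognitive map (FCM).
    # Concept activations are iterated through a signed influence matrix;
    # weights and concepts are illustrative.
    import numpy as np

    def sigmoid(x, lam=2.0):
        return 1.0 / (1.0 + np.exp(-lam * x))

    # Concepts: 0 signal strength, 1 hop count, 2 battery level, 3 node load,
    # 4 overall peer quality. W[i, j] = influence of concept i on concept j.
    W = np.zeros((5, 5))
    W[0, 4], W[1, 4], W[2, 4], W[3, 4] = 0.7, -0.5, 0.4, -0.6
    W[3, 2] = -0.2          # heavy load drains battery

    def fcm_score(initial, W, iters=30):
        state = initial.copy()
        for _ in range(iters):
            state = sigmoid(state @ W + state)   # iterate activations
        return state[4]                          # 'peer quality' concept

    candidate_a = np.array([0.9, 0.2, 0.8, 0.3, 0.5])
    candidate_b = np.array([0.4, 0.8, 0.5, 0.7, 0.5])
    print("peer A: %.3f  peer B: %.3f" % (fcm_score(candidate_a, W),
                                          fcm_score(candidate_b, W)))
    ```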

  6. A Quality Function Deployment-Based Model for Cutting Fluid Selection

    Directory of Open Access Journals (Sweden)

    Kanika Prasad

    2016-01-01

    Full Text Available Cutting fluid is applied for numerous reasons while machining a workpiece, like increasing tool life, minimizing workpiece thermal deformation, enhancing surface finish, flushing away chips from cutting surface, and so on. Hence, choosing a proper cutting fluid for a specific machining application becomes important for enhanced efficiency and effectiveness of a manufacturing process. Cutting fluid selection is a complex procedure as the decision depends on many complicated interactions, including work material’s machinability, rigorousness of operation, cutting tool material, metallurgical, chemical, and human compatibility, reliability and stability of fluid, and cost. In this paper, a decision making model is developed based on quality function deployment technique with a view to respond to the complex character of cutting fluid selection problem and facilitate judicious selection of cutting fluid from a comprehensive list of available alternatives. In the first example, HD-CUTSOL is recognized as the most suitable cutting fluid for drilling holes in titanium alloy with tungsten carbide tool and in the second example, for performing honing operation on stainless steel alloy with cubic boron nitride tool, CF5 emerges out as the best honing fluid. Implementation of this model would result in cost reduction through decreased manpower requirement, enhanced workforce efficiency, and efficient information exploitation.

  7. Markov Chain Model-Based Optimal Cluster Heads Selection for Wireless Sensor Networks

    Science.gov (United States)

    Ahmed, Gulnaz; Zou, Jianhua; Zhao, Xi; Sadiq Fareed, Mian Muhammad

    2017-01-01

    The longer network lifetime of Wireless Sensor Networks (WSNs) is a goal which is directly related to energy consumption. This energy consumption issue becomes more challenging when the energy load is not properly distributed in the sensing area. The hierarchical clustering architecture is the best choice for this kind of issue. In this paper, we introduce a novel clustering protocol called Markov chain model-based optimal cluster heads (MOCHs) selection for WSNs. In our proposed model, we introduce a simple strategy for selecting the optimal number of cluster heads to overcome the problem of uneven energy distribution in the network. The attractiveness of our model is that the BS controls the number of cluster heads while the cluster heads control the cluster members in each cluster, in such a restricted manner that a uniform and even load is ensured in each cluster. We perform an extensive range of simulations using five quality measures, namely: the lifetime of the network, the stable and unstable regions in the lifetime of the network, the throughput of the network, the number of cluster heads in the network, and the transmission time of the network. We compare MOCHs against Sleep-awake Energy Efficient Distributed (SEED) clustering, Artificial Bee Colony (ABC), Zone Based Routing (ZBR), and Centralized Energy Efficient Clustering (CEEC) using these quality metrics and find that the lifetime of the proposed model is almost 1095, 2630, 3599, and 2045 rounds (time steps) greater than SEED, ABC, ZBR, and CEEC, respectively. The obtained results demonstrate that MOCHs is better than SEED, ABC, ZBR, and CEEC in terms of energy efficiency and network throughput. PMID:28241492

  8. Performance of criteria for selecting evolutionary models in phylogenetics: a comprehensive study based on simulated datasets

    OpenAIRE

    Luo Arong; Qiao Huijie; Zhang Yanzhou; Shi Weifeng; Ho Simon YW; Xu Weijun; Zhang Aibing; Zhu Chaodong

    2010-01-01

    Abstract Background Explicit evolutionary models are required in maximum-likelihood and Bayesian inference, the two methods that are overwhelmingly used in phylogenetic studies of DNA sequence data. Appropriate selection of nucleotide substitution models is important because the use of incorrect models can mislead phylogenetic inference. To better understand the performance of different model-selection criteria, we used 33,600 simulated data sets to analyse the accuracy, precision, dissimilar...

  9. A Natural Language Processing-based Model to Automate MRI Brain Protocol Selection and Prioritization.

    Science.gov (United States)

    Brown, Andrew D; Marotta, Thomas R

    2017-02-01

    Incorrect imaging protocol selection can contribute to increased healthcare cost and waste. To help healthcare providers improve the quality and safety of medical imaging services, we developed and evaluated three natural language processing (NLP) models to determine whether NLP techniques could be employed to aid in clinical decision support for protocoling and prioritization of magnetic resonance imaging (MRI) brain examinations. To test the feasibility of using an NLP model to support clinical decision making for MRI brain examinations, we designed three different medical imaging prediction tasks, each with a unique outcome: selecting an examination protocol, evaluating the need for contrast administration, and determining priority. We created three models for each prediction task, each using a different classification algorithm (random forest, support vector machine, or k-nearest neighbor), to predict outcomes based on the narrative clinical indications and demographic data associated with 13,982 MRI brain examinations performed from January 1, 2013 to June 30, 2015. Test datasets were used to calculate the accuracy, sensitivity and specificity, predictive values, and the area under the curve. Our optimal results show an accuracy of 82.9%, 83.0%, and 88.2% for the protocol selection, contrast administration, and prioritization tasks, respectively, demonstrating that predictive algorithms can be used to aid in clinical decision support for examination protocoling. NLP models developed from the narrative clinical information provided by referring clinicians and demographic data are feasible methods to predict the protocol and priority of MRI brain examinations. Copyright © 2017 The Association of University Radiologists. Published by Elsevier Inc. All rights reserved.
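
    A toy version of the protocol-selection task can be assembled from standard components: TF-IDF features over the clinical indication text feeding a random forest. The sketch assumes scikit-learn, and the six-example corpus is invented.

    ```python
    # Sketch: predicting an MRI brain protocol from free-text clinical
    # indications with TF-IDF features and a random forest classifier.
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.pipeline import make_pipeline

    indications = [
        "new onset seizures, rule out mass lesion",
        "chronic headache, no red flags",
        "known glioma, follow-up with contrast",
        "acute stroke symptoms, left-sided weakness",
        "pituitary adenoma surveillance",
        "head trauma, assess for hemorrhage",
    ]
    protocols = ["tumor", "routine", "tumor", "stroke", "pituitary", "trauma"]

    model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                          RandomForestClassifier(n_estimators=200,
                                                 random_state=0))
    model.fit(indications, protocols)

    print(model.predict(["follow-up of known brain tumor with contrast"]))
    ```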

  10. Designing Organizational Effectiveness Model of Selected Iraq’s Sporting Federations Based on Competing Values Framework

    Directory of Open Access Journals (Sweden)

    Hossein Eydi

    2013-01-01

    Full Text Available The aim of the present study was to design an effectiveness model of selected Iraqi sport federations based on the competing values framework. The statistical population of the study included 221 subjects: chairmen, expert staff, national adolescent athletes, and national referees. 180 subjects (81.4 percent) answered the standard questionnaire of Eydi et al. (2011), scored on a five-point Likert scale. The content and face validity of this tool was confirmed by 12 academic professors, and its reliability was validated by Cronbach's alpha (r = 0.97). Results of a structural equation model (SEM) based on the path analysis method showed that the factors of expert human resources (0.88), organizational interaction (0.88), productivity (0.87), employees' cohesion (0.84), planning (0.84), organizational stability (0.81), flexibility (0.78), and organizational resources (0.74) had the greatest effects on organizational effectiveness. Also, findings of factor analysis showed that the patterns of internal procedures and rational goals were the main patterns of the competing values framework and determinants of organizational effectiveness in Iraq's selected sport federations. Moreover, the federations of football, track and field, weightlifting, and basketball had the highest means of organizational effectiveness, respectively. Hence, Iraqi sport federations mainly focused on organizational control and internal attention as indices of OE.

  11. Model Selection for Geostatistical Models

    Energy Technology Data Exchange (ETDEWEB)

    Hoeting, Jennifer A.; Davis, Richard A.; Merton, Andrew A.; Thompson, Sandra E.

    2006-02-01

    We consider the problem of model selection for geospatial data. Spatial correlation is typically ignored in the selection of explanatory variables and this can influence model selection results. For example, the inclusion or exclusion of particular explanatory variables may not be apparent when spatial correlation is ignored. To address this problem, we consider the Akaike Information Criterion (AIC) as applied to a geostatistical model. We offer a heuristic derivation of the AIC in this context and provide simulation results that show that using AIC for a geostatistical model is superior to the often used approach of ignoring spatial correlation in the selection of explanatory variables. These ideas are further demonstrated via a model for lizard abundance. We also employ the principle of minimum description length (MDL) to variable selection for the geostatistical model. The effect of sampling design on the selection of explanatory covariates is also explored.
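
    The criterion itself is easy to demonstrate: AIC = 2k − 2 ln L̂ computed for competing covariate sets under a GLS model that accounts for spatial correlation. The sketch below assumes statsmodels and SciPy, treats the exponential covariance as known (in practice it is estimated), and uses synthetic coordinates and data.

    ```python
    # Sketch: comparing explanatory-variable sets for spatially correlated
    # data by AIC under a GLS model with an exponential covariance.
    import numpy as np
    import statsmodels.api as sm
    from scipy.spatial.distance import cdist

    rng = np.random.default_rng(0)
    n = 120
    coords = rng.uniform(0, 10, size=(n, 2))
    Sigma = np.exp(-cdist(coords, coords) / 2.0)     # exponential covariance
    L = np.linalg.cholesky(Sigma)

    x1, x2 = rng.normal(size=n), rng.normal(size=n)
    y = 1.0 + 2.0 * x1 + L @ rng.normal(size=n)      # x2 is irrelevant

    for name, X in {"x1 only": np.column_stack([x1]),
                    "x1 + x2": np.column_stack([x1, x2])}.items():
        X = sm.add_constant(X)
        fit = sm.GLS(y, X, sigma=Sigma).fit()        # accounts for correlation
        print("%s: AIC = %.1f" % (name, fit.aic))    # lower AIC preferred
    ```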

  12. Performance of criteria for selecting evolutionary models in phylogenetics: a comprehensive study based on simulated datasets

    Directory of Open Access Journals (Sweden)

    Luo Arong

    2010-08-01

    Full Text Available Abstract Background Explicit evolutionary models are required in maximum-likelihood and Bayesian inference, the two methods that are overwhelmingly used in phylogenetic studies of DNA sequence data. Appropriate selection of nucleotide substitution models is important because the use of incorrect models can mislead phylogenetic inference. To better understand the performance of different model-selection criteria, we used 33,600 simulated data sets to analyse the accuracy, precision, dissimilarity, and biases of the hierarchical likelihood-ratio test, Akaike information criterion, Bayesian information criterion, and decision theory. Results We demonstrate that the Bayesian information criterion and decision theory are the most appropriate model-selection criteria because of their high accuracy and precision. Our results also indicate that in some situations different models are selected by different criteria for the same dataset. Such dissimilarity was the highest between the hierarchical likelihood-ratio test and Akaike information criterion, and lowest between the Bayesian information criterion and decision theory. The hierarchical likelihood-ratio test performed poorly when the true model included a proportion of invariable sites, while the Bayesian information criterion and decision theory generally exhibited similar performance to each other. Conclusions Our results indicate that the Bayesian information criterion and decision theory should be preferred for model selection. Together with model-adequacy tests, accurate model selection will serve to improve the reliability of phylogenetic inference and related analyses.

  13. Support System Model for Value based Group Decision on Roof System Selection

    Directory of Open Access Journals (Sweden)

    Christiono Utomo

    2011-02-01

    Full Text Available A group decision support system is required for value-based decisions because there are different concerns caused by differing preferences, experiences, and backgrounds. Its purpose is to enable each decision-maker to evaluate and rank the solution alternatives before engaging in negotiation with the other decision-makers. Stakeholders in multi-criteria decision making problems usually evaluate the alternative solutions from different perspectives, making it possible to have a dominant solution among the alternatives. Each stakeholder needs to identify the goals that can be optimized and those that can be compromised in order to reach an agreement with other stakeholders. This paper presents a group decision model involving three decision-makers for the selection of a suitable system for a building’s roof. The objective of the research is to find an agreement-options model and coalition algorithms for multi-person decisions with two main value preferences, namely function and cost. The methodology combines value analysis using the Function Analysis System Technique (FAST); life cycle cost analysis; group decision analysis based on the Analytical Hierarchy Process (AHP) over satisfying options; and a game theory-based agent system to develop agreement options and coalition formation for the support system. The support system bridges a theoretical gap between automated design in the construction domain and automated negotiation in the information technology domain by providing a structured methodology which can lead to a systematic support system and automated negotiation. It will contribute to the value management body of knowledge as an advanced method for the creativity and analysis phases, since the practice of this knowledge is teamwork based. In the case of roof system selection, it reveals the start of the first negotiation round. Some of the solutions are not an option because no individual stakeholder or coalition of stakeholders desires to select them. The result indicates
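
    The AHP step used for weighting can be sketched briefly: criterion weights are the principal eigenvector of a pairwise comparison matrix, checked by Saaty's consistency ratio. The comparison values below are invented.

    ```python
    # Sketch: AHP priority weights from a pairwise comparison matrix via the
    # principal eigenvector, plus Saaty's consistency ratio.
    import numpy as np

    # Pairwise comparisons of three criteria (function, cost, durability)
    # on Saaty's 1-9 scale; A[i, j] = importance of i relative to j.
    A = np.array([[1.0, 3.0, 5.0],
                  [1/3, 1.0, 2.0],
                  [1/5, 1/2, 1.0]])

    eigvals, eigvecs = np.linalg.eig(A)
    k = np.argmax(eigvals.real)
    weights = np.abs(eigvecs[:, k].real)
    weights /= weights.sum()

    n = A.shape[0]
    ci = (eigvals.real[k] - n) / (n - 1)      # consistency index
    cr = ci / 0.58                            # random index RI = 0.58 for n = 3
    print("weights:", np.round(weights, 3), "CR = %.3f" % cr)  # CR < 0.1 is OK
    ```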

  14. An Empirical Study of Wrappers for Feature Subset Selection based on a Parallel Genetic Algorithm: The Multi-Wrapper Model

    KAUST Repository

    Soufan, Othman

    2012-09-01

    Feature selection is the first task of any learning approach applied in major fields such as biomedicine, bioinformatics, robotics, natural language processing and social networking. In the feature subset selection problem, a search methodology with a proper criterion seeks to find the best subset of features describing the data (relevance) and achieving better performance (optimality). Wrapper approaches are feature selection methods which are wrapped around a classification algorithm and use a performance measure to select the best subset of features. We analyze the proper design of the objective function for the wrapper approach and highlight an objective based on several classification algorithms. We compare the wrapper approaches to different feature selection methods based on distance and information based criteria. Significant improvement in performance, computational time, and selection of minimally sized feature subsets is achieved by combining different objectives for the wrapper model. In addition, considering various classification methods in the feature selection process could lead to a global solution of desirable characteristics.
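
    A tiny genetic-algorithm wrapper conveys the approach: a bit-mask population encodes feature subsets, and cross-validated classifier accuracy serves as the fitness function. This is a generic serial illustration assuming scikit-learn, not the parallel multi-wrapper of the thesis.

    ```python
    # Sketch: GA wrapper for feature subset selection with cross-validated
    # KNN accuracy as fitness; all GA settings are illustrative.
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.model_selection import cross_val_score
    from sklearn.neighbors import KNeighborsClassifier

    rng = np.random.default_rng(0)
    X, y = make_classification(n_samples=200, n_features=30, n_informative=6,
                               random_state=0)

    def fitness(mask):
        if not mask.any():
            return 0.0
        clf = KNeighborsClassifier()
        return cross_val_score(clf, X[:, mask], y, cv=3).mean()

    pop = rng.random((20, 30)) < 0.3                     # random initial subsets
    for gen in range(15):
        scores = np.array([fitness(ind) for ind in pop])
        parents = pop[np.argsort(scores)[-10:]]          # truncation selection
        cut = rng.integers(1, 29, size=10)
        kids = np.array([np.concatenate([parents[i][:c],
                                         parents[(i + 1) % 10][c:]])
                         for i, c in enumerate(cut)])    # one-point crossover
        kids ^= rng.random(kids.shape) < 0.02            # bit-flip mutation
        pop = np.vstack([parents, kids])

    best = pop[np.argmax([fitness(ind) for ind in pop])]
    print("selected features:", np.flatnonzero(best),
          "fitness: %.3f" % fitness(best))
    ```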

  15. GIS based site and structure selection model for groundwater recharge: a hydrogeomorphic approach.

    Science.gov (United States)

    Vijay, Ritesh; Sohony, R A

    2009-10-01

    The groundwater in India is facing a critical situation due to overexploitation, reduction in recharge potential caused by changes in land use and land cover, and improper planning and management. A groundwater development plan needs a large volume of multidisciplinary data from various sources. A geographic information system (GIS) based hydrogeomorphic approach can provide an appropriate platform for spatial analysis of diverse data sets for decision making in groundwater recharge. The paper presents the development of a GIS-based model that provides more accuracy in the identification and suitability analysis of zones and locates suitable sites, with suggested structures, for artificial recharge. Satellite images were used to prepare the geomorphological and land use maps. For site selection, layers such as slope, surface infiltration, and order of drainage were generated and integrated in GIS using weighted index overlay analysis and Boolean logic. Similarly, for the identification of suitable structures, a complex matrix was programmed based on local climatic, topographic, hydrogeologic and land use conditions, as per the artificial recharge manual of the Central Ground Water Board, India. The GIS-based algorithm is implemented in a user-friendly way using arc macro language on the Arc/Info platform.

  16. Optimal Scheme Selection of Agricultural Production Structure Adjustment - Based on DEA Model; Punjab (Pakistan)

    Institute of Scientific and Technical Information of China (English)

    Zeeshan Ahmad; Meng Jun; Muhammad Abdullah; Mazhar Nadeem Ishaq; Majid Lateef; Imran Khan

    2015-01-01

    This paper used the modern evaluation method of DEA (Data Envelopment Analysis) to assess the comparative efficiency of multiple candidate schemes for agricultural production structure adjustment and, on that basis, to choose the optimal scheme. Based on the results of the DEA model, we analyzed the scale advantages of each candidate scheme, probed the underlying reasons why some schemes were not DEA-efficient, and clarified how these candidate plans could be improved. Finally, another method was proposed to rank the schemes and select the optimal one. The research is important for guiding practice when the adjustment of the agricultural production industrial structure is carried out.
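
    The input-oriented CCR formulation of DEA reduces to one linear program per scheme (decision-making unit), which can be sketched with SciPy's linprog; the input/output data below are invented.

    ```python
    # Sketch: CCR (input-oriented, multiplier form) DEA efficiency scores via
    # linear programming; each row is one candidate scheme (DMU).
    import numpy as np
    from scipy.optimize import linprog

    # Two inputs (e.g., land, labor) and two outputs (e.g., grain, income).
    X = np.array([[20., 300.], [30., 200.], [40., 100.], [20., 200.], [10., 400.]])
    Y = np.array([[20., 16.], [30., 10.], [40., 22.], [30., 12.], [20., 10.]])
    n, m, s = X.shape[0], X.shape[1], Y.shape[1]

    for o in range(n):
        # Decision variables z = [u (output weights), v (input weights)].
        c = np.concatenate([-Y[o], np.zeros(m)])          # maximize u'y_o
        A_eq = np.concatenate([np.zeros(s), X[o]])[None]  # v'x_o = 1
        A_ub = np.hstack([Y, -X])                         # u'y_j - v'x_j <= 0
        res = linprog(c, A_ub=A_ub, b_ub=np.zeros(n),
                      A_eq=A_eq, b_eq=[1.0], bounds=(0, None))
        print("DMU %d efficiency: %.3f" % (o, -res.fun))  # 1.0 = DEA-efficient
    ```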

  17. Query Large Scale Microarray Compendium Datasets Using a Model-Based Bayesian Approach with Variable Selection

    Science.gov (United States)

    Hu, Ming; Qin, Zhaohui S.

    2009-01-01

    In microarray gene expression data analysis, it is often of interest to identify genes that share similar expression profiles with a particular gene such as a key regulatory protein. Multiple studies have been conducted using various correlation measures to identify co-expressed genes. While working well for small datasets, the heterogeneity introduced from increased sample size inevitably reduces the sensitivity and specificity of these approaches. This is because most co-expression relationships do not extend to all experimental conditions. With the rapid increase in the size of microarray datasets, identifying functionally related genes from large and diverse microarray gene expression datasets is a key challenge. We develop a model-based gene expression query algorithm built under the Bayesian model selection framework. It is capable of detecting co-expression profiles under a subset of samples/experimental conditions. In addition, it allows linearly transformed expression patterns to be recognized and is robust against sporadic outliers in the data. Both features are critically important for increasing the power of identifying co-expressed genes in large scale gene expression datasets. Our simulation studies suggest that this method outperforms existing correlation coefficients or mutual information-based query tools. When we apply this new method to the Escherichia coli microarray compendium data, it identifies a majority of known regulons as well as novel potential target genes of numerous key transcription factors. PMID:19214232

  18. A Consistent Fuzzy Preference Relations Based ANP Model for R&D Project Selection

    Directory of Open Access Journals (Sweden)

    Chia-Hua Cheng

    2017-08-01

    Full Text Available In today’s rapidly changing economy, technology companies have to make decisions on research and development (R&D) project investment on a routine basis, and such decisions have a direct impact on the company’s profitability, sustainability and future growth. Companies seeking profitable opportunities for investment and project selection must consider many factors, such as resource limitations and differences in assessment, with consideration of both qualitative and quantitative criteria. Often, differences in perception by the various stakeholders hinder the attainment of a consensus of opinion and coordination efforts. Thus, in this study, a hybrid model is developed for the consideration of the complex criteria, taking into account the different opinions of the various stakeholders, who often come from different departments within the company and have different opinions about which direction to take. The decision-making trial and evaluation laboratory (DEMATEL) approach is used to convert the cause and effect relations among the criteria into a visual network structure. A consistent fuzzy preference relations based analytic network process (CFPR-ANP) method is developed to calculate the preference weights of the criteria based on the derived network structure. The CFPR-ANP is an improvement over the original analytic network process (ANP) method in that it reduces the problem of inconsistency as well as the number of pairwise comparisons. The combined complex proportional assessment (COPRAS-G) method is applied with fuzzy grey relations to resolve conflicts arising from differences in information and opinions provided by the different stakeholders about the selection of the most suitable R&D projects. This novel combination approach is then used to assist an international brand-name company to prioritize projects and make project decisions that will maximize returns and ensure sustainability for the company.
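
    The DEMATEL step mentioned above has a compact closed form: a normalized direct-influence matrix D yields the total-relation matrix T = D(I − D)^(-1), from which each criterion's prominence and cause/effect role are read off. The influence scores below are illustrative.

    ```python
    # Sketch: DEMATEL total-relation matrix and cause/effect indicators.
    # Direct-influence scores (0-4 scale) among four criteria are invented.
    import numpy as np

    A = np.array([[0, 3, 2, 1],
                  [1, 0, 3, 2],
                  [2, 1, 0, 3],
                  [1, 2, 1, 0]], dtype=float)

    D = A / A.sum(axis=1).max()                   # normalize direct influence
    T = D @ np.linalg.inv(np.eye(4) - D)          # total relation: D(I-D)^-1

    r, c = T.sum(axis=1), T.sum(axis=0)           # dispatched vs. received
    for i in range(4):
        role = "cause" if r[i] - c[i] > 0 else "effect"
        print("criterion %d: prominence %.2f, relation %+.2f (%s)"
              % (i, r[i] + c[i], r[i] - c[i], role))
    ```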

  19. Accounting for selection bias in species distribution models: An econometric approach on forested trees based on structural modeling

    Science.gov (United States)

    Ay, Jean-Sauveur; Guillemot, Joannès; Martin-StPaul, Nicolas K.; Doyen, Luc; Leadley, Paul

    2015-04-01

    Species distribution models (SDMs) are widely used to study and predict the outcome of global change on species. In human-dominated ecosystems, the presence of a given species is the result of both its ecological suitability and the human footprint on nature, such as land use choices. Land use choices may thus be responsible for a selection bias in the presence/absence data used in SDM calibration. We present a structural modelling approach (i.e. based on structural equation modelling) that accounts for this selection bias. The new structural species distribution model (SSDM) estimates simultaneously land use choices and species responses to bioclimatic variables. A land use equation based on an econometric model of landowner choices was joined to an equation of species response to bioclimatic variables. The SSDM allows the residuals of both equations to be dependent, taking into account the possibility of shared omitted variables and measurement errors. We provide a general description of the statistical theory and a set of applications to forested trees over France, using databases of climate and forest inventory at different spatial resolutions (from 2 km to 8 km). We also compared the output of the SSDM with the outputs of a classical SDM in terms of bioclimatic response curves and potential distribution under the current climate. Depending on the species and the spatial resolution of the calibration dataset, the shapes of the bioclimatic response curves and the modelled species distribution maps differed markedly between the SSDM and classical SDMs. The magnitude and directions of these differences depended on the correlations between the errors from both equations and were highest for higher spatial resolutions. A first conclusion is that the use of classical SDMs can potentially lead to strong mis-estimation of the actual and future probability of presence modelled. Beyond this selection bias, the SSDM we propose represents a crucial step to account for economic constraints on tree

  20. Recruiter Selection Model

    Science.gov (United States)

    2006-05-01

    interests include feature selection, statistical learning, multivariate statistics, market research, and classification. He may be contacted at... current youth market, and reducing barriers to Army enlistment. Part of the Army Recruiting Initiatives was the creation of a recruiter selection model, developed by the Operations Research Center of Excellence, Systems Engineering Department, United States Military Academy, West Point.

  1. Medical Waste Disposal Method Selection Based on a Hierarchical Decision Model with Intuitionistic Fuzzy Relations

    Directory of Open Access Journals (Sweden)

    Wuyong Qian

    2016-09-01

    Full Text Available Although medical waste usually accounts for a small fraction of urban municipal waste, its proper disposal has been a challenging issue as it often contains infectious, radioactive, or hazardous waste. This article proposes a two-level hierarchical multicriteria decision model to address medical waste disposal method selection (MWDMS), where disposal methods are assessed against different criteria as intuitionistic fuzzy preference relations and criteria weights are furnished as real values. This paper first introduces new operations for a special class of intuitionistic fuzzy values, whose membership and non-membership information is cross ratio based ]0, 1[-values. New score and accuracy functions are defined in order to develop a comparison approach for ]0, 1[-valued intuitionistic fuzzy numbers. A weighted geometric operator is then put forward to aggregate a collection of ]0, 1[-valued intuitionistic fuzzy values. Similar to Saaty’s 1–9 scale, this paper proposes a cross-ratio-based bipolar 0.1–0.9 scale to characterize pairwise comparison results. Subsequently, a two-level hierarchical structure is formulated to handle multicriteria decision problems with intuitionistic preference relations. Finally, the proposed decision framework is applied to MWDMS to illustrate its feasibility and effectiveness.

  2. Medical Waste Disposal Method Selection Based on a Hierarchical Decision Model with Intuitionistic Fuzzy Relations.

    Science.gov (United States)

    Qian, Wuyong; Wang, Zhou-Jing; Li, Kevin W

    2016-09-09

    Although medical waste usually accounts for a small fraction of urban municipal waste, its proper disposal has been a challenging issue as it often contains infectious, radioactive, or hazardous waste. This article proposes a two-level hierarchical multicriteria decision model to address medical waste disposal method selection (MWDMS), where disposal methods are assessed against different criteria as intuitionistic fuzzy preference relations and criteria weights are furnished as real values. This paper first introduces new operations for a special class of intuitionistic fuzzy values, whose membership and non-membership information is cross ratio based ]0, 1[-values. New score and accuracy functions are defined in order to develop a comparison approach for ]0, 1[-valued intuitionistic fuzzy numbers. A weighted geometric operator is then put forward to aggregate a collection of ]0, 1[-valued intuitionistic fuzzy values. Similar to Saaty's 1-9 scale, this paper proposes a cross-ratio-based bipolar 0.1-0.9 scale to characterize pairwise comparison results. Subsequently, a two-level hierarchical structure is formulated to handle multicriteria decision problems with intuitionistic preference relations. Finally, the proposed decision framework is applied to MWDMS to illustrate its feasibility and effectiveness.

  3. Bayesian model selection: Evidence estimation based on DREAM simulation and bridge sampling

    Science.gov (United States)

    Volpi, Elena; Schoups, Gerrit; Firmani, Giovanni; Vrugt, Jasper A.

    2017-04-01

    Bayesian inference has found widespread application in Earth and Environmental Systems Modeling, providing an effective tool for prediction, data assimilation, parameter estimation, uncertainty analysis and hypothesis testing. Under multiple competing hypotheses, the Bayesian approach also provides an attractive alternative to traditional information criteria (e.g. AIC, BIC) for model selection. The key variable for Bayesian model selection is the evidence (or marginal likelihood), the normalizing constant in the denominator of Bayes theorem; while it is fundamental for model selection, the evidence is not required for Bayesian inference. It is computed for each hypothesis (model) by averaging the likelihood function over the prior parameter distribution, rather than maximizing it as information criteria do; the larger a model's evidence, the more support it receives among a collection of hypotheses, as the simulated values assign relatively high probability density to the observed data. Hence, the evidence naturally acts as an Occam's razor, preferring simpler and more constrained models over the over-fitted ones selected by information criteria that incorporate only the likelihood maximum. Since it is not particularly easy to estimate the evidence in practice, Bayesian model selection via the marginal likelihood has not yet found mainstream use. We illustrate here the properties of a new estimator of the Bayesian model evidence, which provides robust and unbiased estimates of the marginal likelihood; the method is coined Gaussian Mixture Importance Sampling (GMIS). GMIS uses multidimensional numerical integration of the posterior parameter distribution via bridge sampling (a generalization of importance sampling) of a mixture distribution fitted to samples of the posterior distribution derived from the DREAM algorithm (Vrugt et al., 2008; 2009). Some illustrative examples are presented to show the robustness and superiority of the GMIS estimator with
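
    The importance-sampling core of evidence estimation can be sketched in one dimension: fit a Gaussian mixture to posterior samples and average prior times likelihood over draws from that proposal. This illustrates the principle only (not the bridge-sampling machinery of GMIS), and the conjugate example, chosen so the exact evidence is known, is an assumption.

    ```python
    # Sketch: marginal-likelihood (evidence) estimation by importance
    # sampling with a Gaussian-mixture proposal fitted to posterior samples.
    import numpy as np
    from scipy import stats
    from sklearn.mixture import GaussianMixture

    rng = np.random.default_rng(0)
    data = rng.normal(0.5, 1.0, size=50)   # y_i ~ N(theta, 1), theta ~ N(0, 1)

    def log_prior(t):
        return stats.norm.logpdf(t, 0, 1)

    def log_like(t):
        return stats.norm.logpdf(data[:, None], t, 1).sum(axis=0)

    # Conjugate posterior is known here; its samples stand in for DREAM output.
    n = len(data)
    post_mean, post_var = data.sum() / (n + 1), 1.0 / (n + 1)
    theta_post = rng.normal(post_mean, np.sqrt(post_var), size=4000)

    gm = GaussianMixture(n_components=2, random_state=0).fit(
        theta_post.reshape(-1, 1))
    theta_q = gm.sample(20000)[0].ravel()            # proposal draws
    log_q = gm.score_samples(theta_q.reshape(-1, 1))

    log_w = log_prior(theta_q) + log_like(theta_q) - log_q
    log_Z = np.log(np.mean(np.exp(log_w - log_w.max()))) + log_w.max()

    # Exact evidence: y ~ N(0, I + 11') under this conjugate model.
    exact = stats.multivariate_normal.logpdf(data, np.zeros(n),
                                             np.eye(n) + np.ones((n, n)))
    print("IS log-evidence: %.3f   exact: %.3f" % (log_Z, exact))
    ```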

  4. Agent-based modelling of nest site selection by bees and ants

    OpenAIRE

    Bolis, Reem

    2016-01-01

    In this study we develop an agent-based time discrete model to show how honeybees and ants are able to choose the best site available under different circumstances. We focus on the agent-based model introduced by Christian List, Christian Elsholtz and Thomas Seeley in their paper: Independence and interdependence in collective decision making: an agent-based model of nest-site choice by honeybee swarms. Philosophical Transactions of the Royal Society of London B: Biological Sciences, 364(1518...

  5. A Collective Case Study of Secondary Students' Model-Based Inquiry on Natural Selection through Programming in an Agent-Based Modeling Environment

    Science.gov (United States)

    Xiang, Lin

    This is a collective case study seeking to develop detailed descriptions of how programming an agent-based simulation influences a group of 8th grade students' model-based inquiry (MBI) by examining the students' agent-based programmable modeling (ABPM) processes and learning outcomes. The context of the present study was a biology unit on natural selection implemented in a charter school of a major California city during the spring semester of 2009. Eight 8th grade students, two boys and six girls, participated in this study. All of them were of low socioeconomic status (SES). English was a second language for all of them, but they had been identified as fluent English speakers at least a year before the study. None of them had learned either natural selection or programming before the study. The study spanned 7 weeks and comprised two phases. In phase one, the subject students learned natural selection in the science classroom and learned programming in NetLogo, an ABPM tool, in a computer lab; in phase two, they were asked to program a simulation of adaptation based on the natural selection model in NetLogo. Both qualitative and quantitative data were collected in this study. The data resources included (1) pre- and post-test questionnaires, (2) student in-class worksheets, (3) programming planning sheets, (4) code-conception matching sheets, (5) student NetLogo projects, (6) videotaped programming processes, (7) final interviews, and (8) the investigator's field notes. Both qualitative and quantitative approaches were applied to analyze the gathered data. The findings suggested that students made progress in understanding adaptation phenomena and natural selection at the end of the ABPM-supported MBI learning, but the progress was limited. These students still held some misconceptions in their conceptual models, such as the idea that animals need to "learn" to adapt into the environment. Besides, their models of natural selection appeared to be

  6. Dopamine selectively remediates 'model-based' reward learning: a computational approach.

    Science.gov (United States)

    Sharp, Madeleine E; Foerde, Karin; Daw, Nathaniel D; Shohamy, Daphna

    2016-02-01

    Patients with loss of dopamine due to Parkinson's disease are impaired at learning from reward. However, it remains unknown precisely which aspect of learning is impaired. In particular, learning from reward, or reinforcement learning, can be driven by two distinct computational processes. One involves habitual stamping-in of stimulus-response associations, hypothesized to arise computationally from 'model-free' learning. The other, 'model-based' learning, involves learning a model of the world that is believed to support goal-directed behaviour. Much work has pointed to a role for dopamine in model-free learning. But recent work suggests model-based learning may also involve dopamine modulation, raising the possibility that model-based learning may contribute to the learning impairment in Parkinson's disease. To directly test this, we used a two-step reward-learning task which dissociates model-free versus model-based learning. We evaluated learning in patients with Parkinson's disease tested ON versus OFF their dopamine replacement medication and in healthy controls. Surprisingly, we found no effect of disease or medication on model-free learning. Instead, we found that patients tested OFF medication showed a marked impairment in model-based learning, and that this impairment was remediated by dopaminergic medication. Moreover, model-based learning was positively correlated with a separate measure of working memory performance, raising the possibility of common neural substrates. Our results suggest that some learning deficits in Parkinson's disease may be related to an inability to pursue reward based on complete representations of the environment.

  7. Hybrid model based on Genetic Algorithms and SVM applied to variable selection within fruit juice classification.

    Science.gov (United States)

    Fernandez-Lozano, C; Canto, C; Gestal, M; Andrade-Garda, J M; Rabuñal, J R; Dorado, J; Pazos, A

    2013-01-01

    Given the background of the use of neural networks in problems of apple juice classification, this paper aims at applying a more recently developed machine learning method: the Support Vector Machine (SVM). A hybrid model that combines genetic algorithms and support vector machines is therefore proposed, in such a way that, by using the SVM as the fitness function of the Genetic Algorithm (GA), the most representative variables for a specific classification problem can be selected.
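
    A hedged sketch of this wrapper idea follows, with cross-validated SVM accuracy acting as the GA fitness; the dataset, population size, and genetic operators are illustrative assumptions, not the paper's configuration.

    ```python
    # GA wrapper for variable selection with SVM accuracy as fitness
    # (illustrative setup only).
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.model_selection import cross_val_score
    from sklearn.svm import SVC

    rng = np.random.default_rng(1)
    X, y = make_classification(n_samples=200, n_features=20, n_informative=5,
                               random_state=1)

    def fitness(mask):
        # cross-validated SVM accuracy acts as the GA fitness function
        return cross_val_score(SVC(), X[:, mask], y, cv=3).mean() if mask.any() else 0.0

    pop = rng.random((20, X.shape[1])) < 0.5              # bitmask chromosomes
    for generation in range(15):
        scores = np.array([fitness(m) for m in pop])
        parents = pop[np.argsort(scores)[-10:]]           # truncation selection
        children = parents[rng.integers(10, size=10)].copy()
        flips = rng.random(children.shape) < 0.05         # bit-flip mutation
        children[flips] = ~children[flips]
        pop = np.vstack([parents, children])

    best = pop[np.argmax([fitness(m) for m in pop])]
    print("selected variables:", np.flatnonzero(best))
    ```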

  8. The Effect of Educational Intervention on Selection of Delivery Method Based on Health Belief Model.

    Directory of Open Access Journals (Sweden)

    Mohammad Hossein Baghianimoghadam

    2014-07-01

    Discussion: In this study, an educational intervention based on the health belief model increased the awareness of pregnant women; however, it was not effective in changing their performance. Because many factors other than knowledge are involved in the choice of delivery method, it is proposed that, to enhance the efficiency of this model, it be used simultaneously with other approaches that can act effectively on those other factors.

  9. A Heckman Selection-t Model

    KAUST Repository

    Marchenko, Yulia V.

    2012-03-01

    Sample selection arises often in practice as a result of the partial observability of the outcome of interest in a study. In the presence of sample selection, the observed data do not represent a random sample from the population, even after controlling for explanatory variables. That is, data are missing not at random. Thus, standard analysis using only complete cases will lead to biased results. Heckman introduced a sample selection model to analyze such data and proposed a full maximum likelihood estimation method under the assumption of normality. The method was criticized in the literature because of its sensitivity to the normality assumption. In practice, data, such as income or expenditure data, often violate the normality assumption because of heavier tails. We first establish a new link between sample selection models and recently studied families of extended skew-elliptical distributions. Then, this allows us to introduce a selection-t (SLt) model, which models the error distribution using a Student's t distribution. We study its properties and investigate the finite-sample performance of the maximum likelihood estimators for this model. We compare the performance of the SLt model to the conventional Heckman selection-normal (SLN) model and apply it to analyze ambulatory expenditures. Unlike the SLN model, our analysis using the SLt model provides statistical evidence for the existence of sample selection bias in these data. We also investigate the performance of the test for sample selection bias based on the SLt model and compare it with the performances of several tests used with the SLN model. Our findings indicate that the latter tests can be misleading in the presence of heavy-tailed data. © 2012 American Statistical Association.
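
    The failure mode the SLt model addresses can be illustrated by simulation. The snippet below is a sketch under assumed parameters, not the paper's estimator: correlated heavy-tailed errors plus a selection rule make the complete-case regression slope biased.

    ```python
    # Simulate a selection model with correlated Student's t errors and show
    # the bias of complete-case regression (illustrative parameters only).
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(2)
    n, rho, df = 5000, 0.7, 3
    x = rng.normal(size=n)
    g = rng.multivariate_normal([0.0, 0.0], [[1.0, rho], [rho, 1.0]], size=n)
    e = stats.t.ppf(stats.norm.cdf(g), df)        # correlated heavy-tailed errors
    selected = (0.5 + x + e[:, 0]) > 0            # selection equation
    y = 1.0 + 2.0 * x + e[:, 1]                   # outcome equation, true slope 2
    slope = np.polyfit(x[selected], y[selected], 1)[0]
    print(f"observed share = {selected.mean():.2f}, "
          f"complete-case slope = {slope:.3f} (true value 2)")
    ```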

  10. TU-CD-BRA-05: Atlas Selection for Multi-Atlas-Based Image Segmentation Using Surrogate Modeling

    Energy Technology Data Exchange (ETDEWEB)

    Zhao, T; Ruan, D [UCLA School of Medicine, Los Angeles, CA (United States)

    2015-06-15

    Purpose: The growing size and heterogeneity of training atlases necessitate sophisticated schemes to identify only the most relevant atlases for a specific multi-atlas-based image segmentation problem. This study aims to develop a model to infer the inaccessible oracle geometric relevance metric from surrogate image similarity metrics and, based on such a model, to provide guidance for atlas selection in multi-atlas-based image segmentation. Methods: We relate the oracle geometric relevance metric in label space to the surrogate metric in image space by a monotonically non-decreasing function with additive random perturbations. Subsequently, a surrogate's ability to prognosticate the oracle order for atlas subset selection is quantified probabilistically. Finally, important insights and guidance are provided for the design of the fusion set size, balancing the competing demands to include the most relevant atlases and to exclude the most irrelevant ones. A systematic solution is derived based on an optimization framework. Model verification and performance assessment are performed on clinical prostate MR images. Results: The proposed surrogate model was exemplified by a linear map with normally distributed perturbation and verified with several commonly used surrogates, including MSD, NCC and (N)MI. The derived behaviors of different surrogates in atlas selection and their corresponding performance in the ultimate label estimate were validated. The performance of NCC and (N)MI was similarly superior to MSD, with a 10% higher atlas selection probability and a segmentation performance increase in DSC by 0.10, with first and third quartiles of (0.83, 0.89) compared to (0.81, 0.89). The derived optimal fusion set size, valued at 7/8/8/7 for MSD/NCC/MI/NMI, agreed well with the appropriate range [4, 9] from empirical observation. Conclusion: This work has developed an efficacious probabilistic model to characterize the image-based surrogate metric on atlas selection
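
    The surrogate model lends itself to a quick numerical check. In the toy sketch below, all sizes and noise levels are assumptions: the surrogate is a linear map of the oracle relevance plus Gaussian noise, and the expected overlap between surrogate-ranked and oracle-ranked top-k atlas sets is estimated by simulation.

    ```python
    # Toy check of the surrogate model: linear map of oracle relevance plus
    # noise; estimate top-k overlap by simulation (assumed sizes/noise).
    import numpy as np

    rng = np.random.default_rng(3)
    n_atlases, k, sigma, trials = 40, 7, 0.3, 2000
    overlap = 0.0
    for _ in range(trials):
        oracle = rng.random(n_atlases)                 # true geometric relevance
        surrogate = 2.0 * oracle + rng.normal(0.0, sigma, n_atlases)
        top_oracle = set(np.argsort(oracle)[-k:])
        top_surrogate = set(np.argsort(surrogate)[-k:])
        overlap += len(top_oracle & top_surrogate) / k
    print(f"expected overlap of the top-{k} sets: {overlap / trials:.2f}")
    ```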

  11. A Data Mining-Based Response Model for Target Selection in Direct Marketing

    Directory of Open Access Journals (Sweden)

    Eniafe Festus Ayetiran

    2012-02-01

    Identifying customers who are more likely to respond to new product offers is an important issue in direct marketing. In direct marketing, data mining has been used extensively to identify potential customers for a new product (target selection). Using historical purchase data, a predictive response model was developed with data mining techniques to predict the probability that a customer of Ebedi Microfinance bank will respond to a promotion or an offer. The data were stored in a data warehouse to serve as a management decision support system. The response model was built from customers' historic purchase and demographic data. A Bayesian algorithm, specifically the Naïve Bayes algorithm, was employed in constructing the classifier system, and both filter and wrapper feature selection techniques were employed in determining inputs to the model. The results obtained show that Ebedi Microfinance bank can plan effective marketing of their products and services by obtaining a guiding report on the status of their customers, which will go a long way in assisting management in saving the significant amount of money that could otherwise have been spent on wasteful promotional campaigns.
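
    The pipeline described, filter feature selection feeding a Naïve Bayes classifier, can be sketched with scikit-learn; the synthetic data below merely stand in for the bank's historical purchase and demographic records.

    ```python
    # Response-model sketch: filter feature selection + Naive Bayes,
    # ranking customers by predicted response probability.
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.feature_selection import SelectKBest, mutual_info_classif
    from sklearn.model_selection import train_test_split
    from sklearn.naive_bayes import GaussianNB
    from sklearn.pipeline import make_pipeline

    X, y = make_classification(n_samples=1000, n_features=15, n_informative=4,
                               weights=[0.9, 0.1], random_state=4)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=4)
    model = make_pipeline(SelectKBest(mutual_info_classif, k=5), GaussianNB())
    model.fit(X_tr, y_tr)
    p_respond = model.predict_proba(X_te)[:, 1]   # rank customers for targeting
    print("most promising targets:", np.argsort(p_respond)[-5:])
    ```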

  12. A review of selected topics in physics based modeling for tunnel field-effect transistors

    Science.gov (United States)

    Esseni, David; Pala, Marco; Palestri, Pierpaolo; Alper, Cem; Rollo, Tommaso

    2017-08-01

    The research field on tunnel-FETs (TFETs) has been rapidly developing in the last ten years, driven by the quest for a new electronic switch operating at a supply voltage well below 1 V and thus delivering substantial improvements in the energy efficiency of integrated circuits. This paper reviews several aspects related to physics based modeling in TFETs, and shows how the description of these transistors implies a remarkable innovation and poses new challenges compared to conventional MOSFETs. A hierarchy of numerical models exist for TFETs covering a wide range of predictive capabilities and computational complexities. We start by reviewing seminal contributions on direct and indirect band-to-band tunneling (BTBT) modeling in semiconductors, from which most TCAD models have been actually derived. Then we move to the features and limitations of TCAD models themselves and to the discussion of what we define non-self-consistent quantum models, where BTBT is computed with rigorous quantum-mechanical models starting from frozen potential profiles and closed-boundary Schrödinger equation problems. We will then address models that solve the open-boundary Schrödinger equation problem, based either on the non-equilibrium Green’s function (NEGF) formalism or on the quantum-transmitting-boundary formalism, and show how the computational burden of these models may vary in a wide range depending on the Hamiltonian employed in the calculations. A specific section is devoted to TFETs based on 2D crystals and van der Waals hetero-structures. The main goal of this paper is to provide the reader with an introduction to the most important physics based models for TFETs, and with a possible guidance to the wide and rapidly developing literature in this exciting research field.
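
    As a numerical taste of the seminal BTBT models such reviews start from, the snippet below evaluates a Kane-type local generation rate G = A·F^P·exp(−B/F), with P = 2 for direct tunnelling; the prefactors A and B are placeholder values, not calibrated TFET parameters.

    ```python
    # Kane-type local BTBT generation rate, G = A * F**P * exp(-B / F),
    # with placeholder (uncalibrated) A and B; P = 2 for direct tunnelling.
    import numpy as np

    A, B, P = 4.0e14, 1.9e7, 2.0          # illustrative magnitudes
    F = np.logspace(5, 7, 5)              # electric field, V/cm
    G = A * F**P * np.exp(-B / F)
    for f, g in zip(F, G):
        print(f"F = {f:.1e} V/cm  ->  G = {g:.3e} (arbitrary units)")
    ```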

  13. Agricultural Production Structure Adjustment Scheme Evaluation and Selection Based on DEA Model for Punjab (Pakistan)

    Institute of Scientific and Technical Information of China (English)

    Zeeshan Ahmad; Meng Jun

    2015-01-01

    DEA is a nonparametric method used in operations research and economics for the evaluation of the production frontier. Its distinct intrinsic advantage is that it copes well with assessment problems involving multiple inputs and, in particular, multiple outputs. This paper used the DεC2R model of DEA to assess the comparative efficiency of multiple schemes of agricultural industrial structure, and in the end the most favorable, or "optimal", scheme was chosen. In addition, using functional insights from the DEA model, non-optimal or less optimal schemes were also improved to some extent. Assessment and selection of optimal schemes of agricultural industrial structure using the DEA model gave greater and better insight into the agricultural industrial structure, and this was the first such research in Pakistan.
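
    The standard input-oriented CCR envelopment program underlying such DEA assessments can be solved with a generic LP solver. The sketch below uses scipy and made-up input/output data; it is the plain CCR model, not the paper's DεC2R variant.

    ```python
    # Input-oriented CCR envelopment model: min theta s.t. X@lam <= theta*x0,
    # Y@lam >= y0, lam >= 0. Columns of X, Y are schemes (DMUs); data invented.
    import numpy as np
    from scipy.optimize import linprog

    X = np.array([[2.0, 3.0, 4.0, 5.0],   # input rows
                  [3.0, 1.0, 2.0, 4.0]])
    Y = np.array([[1.0, 2.0, 3.0, 3.0]])  # output rows

    def ccr_efficiency(j0):
        m, n = X.shape
        s = Y.shape[0]
        c = np.r_[1.0, np.zeros(n)]                      # minimise theta
        # inputs:  X @ lam - theta * x0 <= 0 ; outputs: -Y @ lam <= -y0
        A_ub = np.vstack([np.c_[-X[:, j0], X], np.c_[np.zeros(s), -Y]])
        b_ub = np.r_[np.zeros(m), -Y[:, j0]]
        bounds = [(None, None)] + [(0.0, None)] * n
        return linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds).fun

    for j in range(X.shape[1]):
        print(f"scheme {j}: efficiency = {ccr_efficiency(j):.3f}")
    ```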

  14. A Model-Selection-Based Self-Splitting Gaussian Mixture Learning with Application to Speaker Identification

    Directory of Open Access Journals (Sweden)

    Shih-Sian Cheng

    2004-12-01

    We propose a self-splitting Gaussian mixture learning (SGML) algorithm for Gaussian mixture modelling. The SGML algorithm is deterministic and is able to find an appropriate number of components of the Gaussian mixture model (GMM) based on a self-splitting validity measure, Bayesian information criterion (BIC). It starts with a single component in the feature space and splits adaptively during the learning process until the most appropriate number of components is found. The SGML algorithm also performs well in learning the GMM with a given component number. In our experiments on clustering of a synthetic data set and the text-independent speaker identification task, we have observed the ability of the SGML for model-based clustering and automatically determining the model complexity of the speaker GMMs for speaker identification.
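
    The BIC-based stopping rule can be illustrated compactly. The sketch below is a simplified stand-in for SGML: instead of splitting adaptively, it scans candidate component counts with scikit-learn and keeps the model with the lowest BIC.

    ```python
    # BIC-based selection of the GMM component count (a scan, not SGML's
    # adaptive splitting); data are synthetic with three true clusters.
    import numpy as np
    from sklearn.mixture import GaussianMixture

    rng = np.random.default_rng(5)
    X = np.vstack([rng.normal(-3.0, 1.0, (200, 2)),
                   rng.normal(0.0, 1.0, (200, 2)),
                   rng.normal(4.0, 1.0, (200, 2))])
    fits = [GaussianMixture(k, random_state=0).fit(X) for k in range(1, 8)]
    bics = [m.bic(X) for m in fits]
    print("BIC-selected number of components:", int(np.argmin(bics)) + 1)
    ```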

  15. Introduction. Modelling natural action selection.

    Science.gov (United States)

    Prescott, Tony J; Bryson, Joanna J; Seth, Anil K

    2007-09-29

    Action selection is the task of resolving conflicts between competing behavioural alternatives. This theme issue is dedicated to advancing our understanding of the behavioural patterns and neural substrates supporting action selection in animals, including humans. The scope of problems investigated includes: (i) whether biological action selection is optimal (and, if so, what is optimized), (ii) the neural substrates for action selection in the vertebrate brain, (iii) the role of perceptual selection in decision-making, and (iv) the interaction of group and individual action selection. A second aim of this issue is to advance methodological practice with respect to modelling natural action selection. A wide variety of computational modelling techniques are therefore employed ranging from formal mathematical approaches through to computational neuroscience, connectionism and agent-based modelling. The research described has broad implications for both natural and artificial sciences. One example, highlighted here, is its application to medical science where models of the neural substrates for action selection are contributing to the understanding of brain disorders such as Parkinson's disease, schizophrenia and attention deficit/hyperactivity disorder.

  16. A Modified Feature Selection and Artificial Neural Network-Based Day-Ahead Load Forecasting Model for a Smart Grid

    Directory of Open Access Journals (Sweden)

    Ashfaq Ahmad

    2015-12-01

    In the operation of a smart grid (SG), day-ahead load forecasting (DLF) is an important task. The SG can enhance the management of its conventional and renewable resources with a more accurate DLF model. However, DLF model development is highly challenging due to the non-linear characteristics of load time series in SGs. In the literature, DLF models do exist; however, these models trade off between execution time and forecast accuracy. The newly-proposed DLF model is able to accurately predict the load of the next day with a fair enough execution time. Our proposed model consists of three modules: the data preparation module, the feature selection module, and the forecast module. The first module makes the historical load curve compatible with the feature selection module. The second module removes redundant and irrelevant features from the input data. The third module, which consists of an artificial neural network (ANN), predicts future load on the basis of the selected features. Moreover, the forecast module uses a sigmoid function for activation and a multi-variate auto-regressive model for weight updating during the training process. Simulations are conducted in MATLAB to validate the performance of our newly-proposed DLF model in terms of accuracy and execution time. Results show that our proposed modified feature selection and modified ANN (m(FS + ANN))-based model for SGs is able to capture the non-linearity(ies) in the historical load curve with 97.11% accuracy. Moreover, this accuracy is achieved at the cost of a fair enough execution time, i.e., we have decreased the average execution time of the existing FS + ANN-based model by 38.50%.
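
    A rough analogue of the FS + ANN pipeline (not the authors' MATLAB implementation) follows: mutual-information feature selection over lagged load values, then an MLP forecaster; the sine-plus-noise series is a stand-in for a real SG load curve.

    ```python
    # Feature selection + ANN day-ahead forecasting sketch on a synthetic
    # daily-periodic load series (illustrative stand-in for SG data).
    import numpy as np
    from sklearn.feature_selection import SelectKBest, mutual_info_regression
    from sklearn.neural_network import MLPRegressor
    from sklearn.pipeline import make_pipeline

    rng = np.random.default_rng(6)
    t = np.arange(2000)
    load = 100.0 + 20.0 * np.sin(2 * np.pi * t / 24) + rng.normal(0.0, 2.0, t.size)
    X = np.column_stack([np.roll(load, lag) for lag in range(1, 49)])[48:]
    y = load[48:]
    model = make_pipeline(SelectKBest(mutual_info_regression, k=6),
                          MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000,
                                       random_state=0))
    model.fit(X[:-24], y[:-24])                    # hold out the final day
    print("day-ahead MAE:", np.abs(model.predict(X[-24:]) - y[-24:]).mean())
    ```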

  17. A quantitative quasispecies theory-based model of virus escape mutation under immune selection.

    Science.gov (United States)

    Woo, Hyung-June; Reifman, Jaques

    2012-08-07

    Viral infections involve a complex interplay of the immune response and escape mutation of the virus quasispecies inside a single host. Although fundamental aspects of such a balance of mutation and selection pressure have been established by the quasispecies theory decades ago, its implications have largely remained qualitative. Here, we present a quantitative approach to model the virus evolution under cytotoxic T-lymphocyte immune response. The virus quasispecies dynamics are explicitly represented by mutations in the combined sequence space of a set of epitopes within the viral genome. We stochastically simulated the growth of a viral population originating from a single wild-type founder virus and its recognition and clearance by the immune response, as well as the expansion of its genetic diversity. Applied to the immune escape of a simian immunodeficiency virus epitope, model predictions were quantitatively comparable to the experimental data. Within the model parameter space, we found two qualitatively different regimes of infectious disease pathogenesis, each representing alternative fates of the immune response: It can clear the infection in finite time or eventually be overwhelmed by viral growth and escape mutation. The latter regime exhibits the characteristic disease progression pattern of human immunodeficiency virus, while the former is bounded by maximum mutation rates that can be suppressed by the immune response. Our results demonstrate that, by explicitly representing epitope mutations and thus providing a genotype-phenotype map, the quasispecies theory can form the basis of a detailed sequence-specific model of real-world viral pathogens evolving under immune selection.

  18. Location Selection of Chinese Modern Railway Logistics Center Based on DEA-Bi-level Programming Model

    Directory of Open Access Journals (Sweden)

    Fenling Feng

    2013-06-01

    Properly planning the modern railway logistics center is a necessary step for railway logistics operation, as it can effectively improve the railway freight service through a seamless connection between internal and external logistics nodes. The study, working at the medium level and drawing on the existing railway freight stations within a railway logistics node city, focuses on the site selection of a modern railway logistics center so as to realize an organic combination between the newly built railway logistics center and existing resources. Considering the special features of the modern railway logistics center, the study makes a pre-selection of the existing freight stations with the DEA assessment model to obtain the alternative plan, and further builds a bi-level programming model that minimizes gross construction costs and total client expenses. Finally, an example shows that a hybrid optimization algorithm combining GA, TA and SA can solve the bi-level programming, which is an NP-hard problem, and obtain the railway logistics center number and distribution. The result proves that our method has profound practical significance for the development of China's railway logistics.

  19. Bayesian methods for quantitative trait loci mapping based on model selection: approximate analysis using the Bayesian information criterion.

    Science.gov (United States)

    Ball, R D

    2001-11-01

    We describe an approximate method for the analysis of quantitative trait loci (QTL) based on model selection from multiple regression models with trait values regressed on marker genotypes, using a modification of the easily calculated Bayesian information criterion to estimate the posterior probability of models with various subsets of markers as variables. The BIC-delta criterion, with the parameter delta increasing the penalty for additional variables in a model, is further modified to incorporate prior information, and missing values are handled by multiple imputation. Marginal probabilities for model sizes are calculated, and the posterior probability of nonzero model size is interpreted as the posterior probability of existence of a QTL linked to one or more markers. The method is demonstrated on analysis of associations between wood density and markers on two linkage groups in Pinus radiata. Selection bias, which is the bias that results from using the same data to both select the variables in a model and estimate the coefficients, is shown to be a problem for commonly used non-Bayesian methods for QTL mapping, which do not average over alternative possible models that are consistent with the data.
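
    The mechanics can be sketched numerically: enumerate marker subsets, penalize each extra variable by delta additional log(n) units on top of BIC (an assumed form of the BIC-delta penalty), and convert the criteria into approximate posterior model probabilities. The markers, effect size, and delta below are invented for illustration.

    ```python
    # Approximate Bayesian model selection over marker subsets using a
    # BIC-delta-style criterion (illustrative data and penalty form).
    import itertools
    import numpy as np

    rng = np.random.default_rng(7)
    n, n_markers, delta = 100, 5, 2.0
    X = rng.integers(0, 2, (n, n_markers)).astype(float)
    y = 1.5 * X[:, 2] + rng.normal(0.0, 1.0, n)      # one true QTL at marker 2

    def bic_delta(subset):
        Z = np.c_[np.ones(n), X[:, list(subset)]]
        rss = np.sum((y - Z @ np.linalg.lstsq(Z, y, rcond=None)[0]) ** 2)
        k = len(subset)
        return n * np.log(rss / n) + (1 + k) * np.log(n) + delta * k * np.log(n)

    subsets = [s for r in range(n_markers + 1)
               for s in itertools.combinations(range(n_markers), r)]
    scores = np.array([bic_delta(s) for s in subsets])
    post = np.exp(-(scores - scores.min()) / 2.0)
    post /= post.sum()                                # approximate P(model | data)
    nonzero = np.array([len(s) > 0 for s in subsets])
    print(f"P(at least one linked marker) ~ {post[nonzero].sum():.3f}")
    ```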

  20. Variable Selection and Updating In Model-Based Discriminant Analysis for High Dimensional Data with Food Authenticity Applications.

    Science.gov (United States)

    Murphy, Thomas Brendan; Dean, Nema; Raftery, Adrian E

    2010-03-01

    Food authenticity studies are concerned with determining if food samples have been correctly labelled or not. Discriminant analysis methods are an integral part of the methodology for food authentication. Motivated by food authenticity applications, a model-based discriminant analysis method that includes variable selection is presented. The discriminant analysis model is fitted in a semi-supervised manner using both labeled and unlabeled data. The method is shown to give excellent classification performance on several high-dimensional multiclass food authenticity datasets with more variables than observations. The variables selected by the proposed method provide information about which variables are meaningful for classification purposes. A headlong search strategy for variable selection is shown to be efficient in terms of computation and achieves excellent classification performance. In applications to several food authenticity datasets, our proposed method outperformed default implementations of Random Forests, AdaBoost, transductive SVMs and Bayesian Multinomial Regression by substantial margins.

  1. Selective source blocking for Gamma Knife radiosurgery of trigeminal neuralgia based on analytical dose modelling

    Science.gov (United States)

    Li, Kaile; Ma, Lijun

    2004-08-01

    We have developed an automatic critical region shielding (ACRS) algorithm for Gamma Knife radiosurgery of trigeminal neuralgia. The algorithm selectively blocks 201 Gamma Knife sources to minimize the dose to the brainstem while irradiating the root entry area of the trigeminal nerve with 70-90 Gy. An independent dose model was developed to implement the algorithm. The accuracy of the dose model was tested and validated via comparison with the Leksell GammaPlan (LGP) calculations. Agreements of 3% or 3 mm in isodose distributions were found for both single-shot and multiple-shot treatment plans. After the optimized blocking patterns are obtained via the independent dose model, they are imported into the LGP for final dose calculations and treatment planning analyses. We found that the use of a moderate number of source plugs (30-50 plugs) significantly lowered (~40%) the dose to the brainstem for trigeminal neuralgia treatments. Considering the small effort involved in using these plugs, we recommend source blocking for all trigeminal neuralgia treatments with Gamma Knife radiosurgery.

  2. Selective source blocking for Gamma Knife radiosurgery of trigeminal neuralgia based on analytical dose modelling

    Energy Technology Data Exchange (ETDEWEB)

    Li Kaile; Ma Lijun [Department of Radiation Oncology, University of Maryland School of Medicine, Baltimore, MD 21210 (United States)

    2004-08-07

    We have developed an automatic critical region shielding (ACRS) algorithm for Gamma Knife radiosurgery of trigeminal neuralgia. The algorithm selectively blocks 201 Gamma Knife sources to minimize the dose to the brainstem while irradiating the root entry area of the trigeminal nerve with 70-90 Gy. An independent dose model was developed to implement the algorithm. The accuracy of the dose model was tested and validated via comparison with the Leksell GammaPlan (LGP) calculations. Agreements of 3% or 3 mm in isodose distributions were found for both single-shot and multiple-shot treatment plans. After the optimized blocking patterns are obtained via the independent dose model, they are imported into the LGP for final dose calculations and treatment planning analyses. We found that the use of a moderate number of source plugs (30-50 plugs) significantly lowered (~40%) the dose to the brainstem for trigeminal neuralgia treatments. Considering the small effort involved in using these plugs, we recommend source blocking for all trigeminal neuralgia treatments with Gamma Knife radiosurgery.

  3. Toward understanding the selective anticancer capacity of cold atmospheric plasma--a model based on aquaporins (Review).

    Science.gov (United States)

    Yan, Dayun; Talbot, Annie; Nourmohammadi, Niki; Sherman, Jonathan H; Cheng, Xiaoqian; Keidar, Michael

    2015-01-01

    Selectively treating tumor cells is the ongoing challenge of modern cancer therapy. Recently, cold atmospheric plasma (CAP), a near room-temperature ionized gas, has been demonstrated to exhibit selective anticancer behavior. However, the mechanism governing such selectivity is still largely unknown. In this review, the authors first summarize the progress that has been made applying CAP as a selective tool for cancer treatment. Then, the key role of aquaporins in the H2O2 transmembrane diffusion is discussed. Finally, a novel model, based on the expression of aquaporins, is proposed to explain why cancer cells respond to CAP treatment with a greater rise in reactive oxygen species than homologous normal cells. Cancer cells tend to express more aquaporins on their cytoplasmic membranes, which may cause the H2O2 uptake speed in cancer cells to be faster than in normal cells. As a result, CAP treatment kills cancer cells more easily than normal cells. Our preliminary observations indicated that glioblastoma cells consumed H2O2 much faster than did astrocytes in either the CAP-treated or H2O2-rich media, which supported the selective model based on aquaporins.

  4. The Effectiveness of the Discriminant Analysis Models. Study Based on the Selected Polish Companies Quoted on the Warsaw Stock Exchange

    Directory of Open Access Journals (Sweden)

    Grzegorz Gołębiowski

    2008-07-01

    The article presents the results of research on the effectiveness of discriminant models, using the example of selected Polish joint-stock companies that declared bankruptcy. Apart from general results describing the effectiveness of discriminant models based on the authors' own research, a comparison of the obtained findings with other economists' research results was made. Moreover, an analysis was carried out of how an enterprise's relation to the overall economic situation impacts the effectiveness of the considered models in forecasting bankruptcy risk.

  5. Handling equipment Selection in open pit mines by using an integrated model based on group decision making

    Directory of Open Access Journals (Sweden)

    Abdolreza Yazdani-Chamzini

    2012-10-01

    The process of handling equipment selection is one of the most important and basic parts of project planning, particularly for mining projects, because it accounts for a high share of the total project cost. Different criteria impact the handling equipment selection, and these criteria often conflict with each other. Therefore, the process of handling equipment selection is a complex, multi-criteria decision making problem. There are a variety of methods for selecting the most appropriate equipment from a set of alternatives. Likewise, given the sophisticated structure of the problem, imprecise data, lack of information, and inherent uncertainty, the use of fuzzy sets can be helpful. In this study a new integrated model based on the fuzzy analytic hierarchy process (FAHP) and the fuzzy technique for order preference by similarity to ideal solution (FTOPSIS) is proposed, which uses group decision making to reduce individual errors. FAHP is utilized to calculate the weights of the evaluation criteria in the process of handling equipment selection, and these weights are then inserted into the FTOPSIS computations to select the most appropriate handling system from a pool of alternatives. The results of this study demonstrate the potential application and effectiveness of the proposed model, which can be applied to different types of sophisticated problems in real settings.
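
    For orientation, a crisp TOPSIS ranking is sketched below with assumed alternatives, weights, and criterion directions; the paper's fuzzy extensions (FAHP-derived weights, fuzzy numbers) are deliberately omitted.

    ```python
    # Crisp TOPSIS ranking sketch (assumed data; fuzzy machinery omitted).
    import numpy as np

    M = np.array([[7.0, 5.0, 8.0],        # alternatives x criteria scores
                  [6.0, 8.0, 6.0],
                  [8.0, 6.0, 5.0]])
    w = np.array([0.5, 0.3, 0.2])         # weights, e.g. as FAHP would supply
    benefit = np.array([True, True, False])   # the third criterion is a cost

    V = w * M / np.linalg.norm(M, axis=0)     # weighted normalised matrix
    ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
    anti = np.where(benefit, V.min(axis=0), V.max(axis=0))
    d_plus = np.linalg.norm(V - ideal, axis=1)
    d_minus = np.linalg.norm(V - anti, axis=1)
    closeness = d_minus / (d_plus + d_minus)
    print("ranking, best first:", np.argsort(closeness)[::-1])
    ```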

  6. An intelligent modeling method based on genetic algorithm for partner selection in virtual organizations

    Directory of Open Access Journals (Sweden)

    Pacuraru Raluca

    2011-04-01

    The goal of a Virtual Organization is to find the most appropriate partners in terms of expertise, cost, responsiveness, and environment. In this study we propose a model and a solution approach to a partner selection problem considering three main evaluation criteria: cost, time and risk. This multiobjective problem is solved by an improved genetic algorithm (GA) that includes meiosis-specific characteristics and step-size adaptation for the mutation operator. The algorithm performs strong exploration initially and exploitation in later generations. It has a high global search ability and a fast convergence rate, and it also avoids premature convergence. Numerical investigations confirm that the incorporation of the proposed enhancements is successful.

  7. MicroRNA-Based Therapy in Animal Models of Selected Gastrointestinal Cancers

    OpenAIRE

    2016-01-01

    Gastrointestinal cancer accounts for 20 of the most frequent cancer diseases worldwide and there is a constant urge to bring new therapeutics with new mechanisms of action into clinical practice. A wealth of in vitro and in vivo evidence indicates that exogenous changes in pathologically imbalanced microRNAs (miRNAs) are capable of transforming the cancer cell phenotype. This review analyzed preclinical miRNA-based therapy attempts in animal models of gastric, pancreatic, gallbladder, and colo...

  8. Partial imputation to improve predictive modelling in insurance risk classification using a hybrid positive selection algorithm and correlation-based feature selection

    CSIR Research Space (South Africa)

    Duma, M

    2013-09-01

    We propose a hybrid missing data imputation technique using positive selection and correlation-based feature selection for insurance data. The hybrid is used to help supervised learning methods improve their classification accuracy and resilience...

  9. [Role of neoadjuvant radiotherapy for rectal cancer: Is MRI-based selection a future model?].

    Science.gov (United States)

    Kulu, Y; Hackert, T; Debus, J; Weber, M-A; Büchler, M W; Ulrich, A

    2016-07-01

    Following the introduction of total mesorectal excision (TME) in the curative treatment of rectal cancer, the role of neoadjuvant therapy has evolved. Through improved surgical technique, the local recurrence rate could be reduced to below 8 % by TME surgery alone. Even though local control was further improved by additional preoperative irradiation, this did not lead to a general survival benefit. Guidelines advocate that all patients in UICC stage II and III should be pretreated; however, the stage-based indications for neoadjuvant therapy have limitations. This is mainly attributable to the facts that patients with T3 tumors comprise a very heterogeneous prognostic group and that preoperative lymph node diagnostics lack accuracy. In contrast, in recent years the circumferential resection margin (CRM) has become an important prognostic parameter. Patients with tumors that are very close to or infiltrate the pelvic fascia (positive CRM) have a higher rate of local recurrence and poorer survival. With high-resolution pelvic magnetic resonance imaging (MRI) examination of patients with rectal cancer, the preoperative CRM can be determined with high sensitivity and specificity. Improved T staging and better prediction of the resection margins by pelvic MRI potentially facilitate the selection of patients for study-based treatment strategies omitting neoadjuvant radiotherapy.

  10. Automatic feature selection for model-based reinforcement learning in factored MDPs

    NARCIS (Netherlands)

    Kroon, M.; Whiteson, S.; Wani, M.A.; Kantardzic, M.; Palade, V.; Kurgan, L.; Qi, A.

    2009-01-01

    Feature selection is an important challenge in machine learning. Unfortunately, most methods for automating feature selection are designed for supervised learning tasks and are thus either inapplicable or impractical for reinforcement learning. This paper presents a new approach to feature selection

  11. MicroRNA-based Therapy in Animal Models of Selected Gastrointestinal Cancers

    Directory of Open Access Journals (Sweden)

    Jana Merhautova

    2016-09-01

    Gastrointestinal cancer accounts for 20 of the most frequent cancer diseases worldwide and there is a constant urge to bring new therapeutics with new mechanisms of action into clinical practice. A wealth of in vitro and in vivo evidence indicates that exogenous changes in pathologically imbalanced microRNAs (miRNAs) are capable of transforming the cancer cell phenotype. This review analyzed preclinical miRNA-based therapy attempts in animal models of gastric, pancreatic, gallbladder, and colorectal cancer. From more than 400 original articles, 26 were found to assess the effect of miRNA mimics, precursors, expression vectors, or inhibitors administered locally or systemically, an approach with relatively high translational potential. We have focused on mapping available information on the animal models used (animal strain, cell line, xenograft method), pharmacological aspects (oligonucleotide chemistry, delivery system, posology, route of administration), and toxicology assessments. We also summarize findings in the fields of pharmacokinetics and toxicity of miRNA-based therapy.

  12. Statin Selection in Qatar Based on Multi-indication Pharmacotherapeutic Multi-criteria Scoring Model, and Clinician Preference.

    Science.gov (United States)

    Al-Badriyeh, Daoud; Fahey, Michael; Alabbadi, Ibrahim; Al-Khal, Abdullatif; Zaidan, Manal

    2015-12-01

    Statin selection for the largest hospital formulary in Qatar is not systematic, not comparative, and does not consider the multi-indication nature of statins. There are no reports in the literature of multi-indication-based comparative scoring models of statins, or of statin selection criteria weights based primarily on local clinicians' preferences and experiences. This study sought to comparatively evaluate statins for first-line therapy in Qatar and to quantify the economic impact of doing so. An evidence-based, multi-indication, multi-criteria pharmacotherapeutic model was developed for the scoring of statins from the perspective of the main health care provider in Qatar. The literature and an expert panel informed the selection criteria of statins. Relative weighting of the selection criteria was based on the input of the relevant local clinician population. Statins were comparatively scored based on literature evidence, with those exceeding a defined scoring threshold being recommended for use. With a 95% CI and a 5% margin of error, the scoring model was successfully developed. The selection criteria comprised 28 subcriteria under the following main criteria: clinical efficacy, best published evidence and experience, adverse effects, drug interactions, dosing time, and fixed-dose combination availability. Outcome measures for the multiple indications were related to effects on LDL cholesterol, HDL cholesterol, triglycerides, total cholesterol, and C-reactive protein. Atorvastatin, pravastatin, and rosuvastatin exceeded the defined pharmacotherapeutic thresholds. Atorvastatin and pravastatin were recommended for first-line use and rosuvastatin as a nonformulary alternative. It was estimated that this would produce a 17.6% cost savings in statin expenditure. Sensitivity analyses confirmed the robustness of the evaluation's outcomes against input uncertainties. Incorporating a comparative evaluation of statins in Qatari practices based on a locally developed, transparent, multi
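
    The shape of such a scoring model is simple to illustrate. In the toy sketch below, the criteria weights, normalised scores, and the selection threshold are all invented for illustration, not the study's calibrated values.

    ```python
    # Toy multi-criteria weighted scoring model with a selection threshold
    # (all numbers invented for illustration).
    import numpy as np

    weights = np.array([0.35, 0.25, 0.20, 0.10, 0.05, 0.05])  # per criterion
    scores = np.array([[0.9, 0.8, 0.7, 0.6, 1.0, 0.5],        # drug A
                       [0.7, 0.9, 0.8, 0.8, 0.0, 1.0],        # drug B
                       [0.5, 0.4, 0.6, 0.9, 1.0, 0.0]])       # drug C
    totals = scores @ weights
    for name, total in zip("ABC", totals):
        verdict = "recommend" if total >= 0.6 else "exclude"
        print(f"drug {name}: score = {total:.2f} -> {verdict}")
    ```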

  13. A Cercla-Based Decision Model to Support Remedy Selection for an Uncertain Volume of Contaminants at a DOE Facility

    Energy Technology Data Exchange (ETDEWEB)

    Christine E. Kerschus

    1999-03-31

    The Paducah Gaseous Diffusion Plant (PGDP) operated by the Department of Energy is challenged with selecting the appropriate remediation technology to cleanup contaminants at Waste Area Group (WAG) 6. This research utilizes value-focused thinking and multiattribute preference theory concepts to produce a decision analysis model designed to aid the decision makers in their selection process. The model is based on CERCLA's five primary balancing criteria, tailored specifically to WAG 6 and the contaminants of concern, utilizes expert opinion and the best available engineering, cost, and performance data, and accounts for uncertainty in contaminant volume. The model ranks 23 remediation technologies (trains) in their ability to achieve the CERCLA criteria at various contaminant volumes. A sensitivity analysis is performed to examine the effects of changes in expert opinion and uncertainty in volume. Further analysis reveals how volume uncertainty is expected to affect technology cost, time and ability to meet the CERCLA criteria. The model provides the decision makers with a CERCLA-based decision analysis methodology that is objective, traceable, and robust to support the WAG 6 Feasibility Study. In addition, the model can be adjusted to address other DOE contaminated sites.

  14. Spatiotemporal Context Awareness for Urban Traffic Modeling and Prediction: Sparse Representation Based Variable Selection.

    Directory of Open Access Journals (Sweden)

    Su Yang

    Spatial-temporal correlations among the data play an important role in traffic flow prediction. Correspondingly, traffic modeling and prediction based on big data analytics emerges due to the city-scale interactions among traffic flows. A new methodology based on sparse representation is proposed to reveal the spatial-temporal dependencies among traffic flows so as to simplify the correlations among traffic data for the prediction task at a given sensor. Three important findings are observed in the experiments: (1) Only traffic flows immediately prior to the present time affect the formation of current traffic flows, which implies the possibility of reducing the traditional high-order predictors to a 1st-order model. (2) The spatial context relevant to a given prediction task is more complex than what is assumed to exist locally and can spread out to the whole city. (3) The spatial context varies with the target sensor undergoing prediction and enlarges with the increment of the time lag for prediction. Because the scope of human mobility is subject to travel time, identifying the varying spatial context against time lag is crucial for prediction. Since sparse representation can capture the varying spatial context to adapt to the prediction task, it outperforms the traditional methods, whose inputs are confined to data from a fixed number of nearby sensors. As the spatial-temporal context for any prediction task is fully detected from the traffic data in an automated manner, with no additional information regarding network topology needed, the method has good scalability and is applicable to large-scale networks.
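
    One simple way to realize this kind of sparse selection of spatio-temporal predictors is an L1-penalised regression; the sketch below (synthetic data, assumed lag-1 structure) lets a Lasso pick out the few sensors that actually drive a target sensor, echoing finding (1).

    ```python
    # Lasso-based selection of spatio-temporal predictors for one target
    # sensor (synthetic flows; lag-1 dependence assumed for illustration).
    import numpy as np
    from sklearn.linear_model import Lasso

    rng = np.random.default_rng(8)
    T, n_sensors = 500, 30
    flows = rng.normal(size=(T, n_sensors))
    # the target sensor depends only on sensors 3 and 17 at lag 1
    target = (0.8 * flows[:-1, 3] + 0.5 * flows[:-1, 17]
              + rng.normal(0.0, 0.1, T - 1))
    X = flows[:-1]                          # 1st-order model: lag-1 predictors only
    lasso = Lasso(alpha=0.05).fit(X, target)
    print("selected sensors:", np.flatnonzero(lasso.coef_))
    ```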

  15. GIS-Based Wind Farm Site Selection Model Offshore Abu Dhabi Emirate, UAE

    Science.gov (United States)

    Saleous, N.; Issa, S.; Mazrouei, J. Al

    2016-06-01

    The United Arab Emirates (UAE) government has declared the increased use of alternative energy a strategic goal and has invested in identifying and developing various sources of such energy. This study aimed at assessing the viability of establishing wind farms offshore the Emirate of Abu Dhabi, UAE and to identify favourable sites for such farms using Geographic Information Systems (GIS) procedures and algorithms. Based on previous studies and on local requirements, a set of suitability criteria was developed including ocean currents, reserved areas, seabed topography, and wind speed. GIS layers were created and a weighted overlay GIS model based on the above mentioned criteria was built to identify suitable sites for hosting a new offshore wind energy farm. Results showed that most of Abu Dhabi offshore areas were unsuitable, largely due to the presence of restricted zones (marine protected areas, oil extraction platforms and oil pipelines in particular). However, some suitable sites could be identified, especially around Delma Island and North of Jabal Barakah in the Western Region. The environmental impact of potential wind farm locations and associated cables on the marine ecology was examined to ensure minimal disturbance to marine life. Further research is needed to specify wind mills characteristics that suit the study area especially with the presence of heavy traffic due to many oil production and shipping activities in the Arabian Gulf most of the year.

  16. Optimal Site Selection of Electric Vehicle Charging Stations Based on a Cloud Model and the PROMETHEE Method

    Directory of Open Access Journals (Sweden)

    Yunna Wu

    2016-03-01

    The task of site selection for electric vehicle charging stations (EVCS) is hugely important from the perspective of harmonious and sustainable development. However, flaws and inadequacies in the currently used multi-criteria decision making methods can result in inaccurate and irrational decision results. First of all, the uncertainty of the information cannot be described integrally in the evaluation of EVCS site selection. Secondly, rigorous consideration of the mutual influence between the various criteria is lacking, which shows mainly in two aspects: one is ignoring the correlation, and the other is unreasonable measurement. Last but not least, the ranking method adopted in previous studies is not very appropriate for evaluating the EVCS site selection problem. As a result of the above analysis, a Preference Ranking Organization Method for Enrichment Evaluations (PROMETHEE)-based decision system combined with the cloud model is proposed in this paper for EVCS site selection. Firstly, the use of the PROMETHEE method can bolster the confidence and visibility of decision makers. Secondly, the cloud model is recommended to describe the fuzziness and randomness of linguistic terms integrally and accurately. Finally, the Analytical Network Process (ANP) method is adopted to measure the correlation of the indicators, with a greatly simplified calculation of the parameters and steps required.

  17. Complexity regularized hydrological model selection

    NARCIS (Netherlands)

    Pande, S.; Arkesteijn, L.; Bastidas, L.A.

    2014-01-01

    This paper uses a recently proposed measure of hydrological model complexity in a model selection exercise. It demonstrates that a robust hydrological model is selected by penalizing model complexity while maximizing a model performance measure. This especially holds when limited data is available.

  18. Individual Influence on Model Selection

    Science.gov (United States)

    Sterba, Sonya K.; Pek, Jolynn

    2012-01-01

    Researchers in psychology are increasingly using model selection strategies to decide among competing models, rather than evaluating the fit of a given model in isolation. However, such interest in model selection outpaces an awareness that one or a few cases can have disproportionate impact on the model ranking. Though case influence on the fit…

  19. Launch vehicle selection model

    Science.gov (United States)

    Montoya, Alex J.

    1990-01-01

    Over the next 50 years, humans will be heading for the Moon and Mars to build scientific bases to gain further knowledge about the universe and to develop rewarding space activities. These large scale projects will last many years and will require large amounts of mass to be delivered to Low Earth Orbit (LEO). It will take a great deal of planning to complete these missions in an efficient manner. The planning of a future Heavy Lift Launch Vehicle (HLLV) will significantly impact the overall multi-year launching cost for the vehicle fleet, depending upon when the HLLV will be ready for use. It is desirable to develop a model in which many trade studies can be performed. In one sample multi-year space program analysis, the total launch vehicle cost of implementing the program was reduced from 50 percent to 25 percent. This indicates how critical it is to reduce space logistics costs. A linear programming model has been developed to answer such questions. The model is now in its second phase of development, and this paper will address the capabilities of the model and its intended uses. The main emphasis over the past year was to make the model user friendly and to incorporate additional realistic constraints that are difficult to represent mathematically. We have developed a methodology in which the user has to be knowledgeable about the mission model and the requirements of the payloads. We have found a representation that will cut down the solution space of the problem by inserting some preliminary tests to eliminate some infeasible vehicle solutions. The paper will address the handling of these additional constraints and the methodology for incorporating new costing information utilizing learning curve theory. The paper will review several test cases that will explore the preferred vehicle characteristics and the preferred period of construction, i.e., within the next decade, or in the first decade of the next century. Finally, the paper will explore the interaction

  1. Peer selection and influence effects on adolescent alcohol use: a stochastic actor-based model

    OpenAIRE

    2012-01-01

    Background: Early adolescent alcohol use is a major public health challenge. Without clear guidance on the causal pathways between peers and alcohol use, adolescent alcohol interventions may be incomplete. The objective of this study is to disentangle selection and influence effects associated with the dynamic interplay of adolescent friendships and alcohol use. Methods: The study analyzes data from Add Health, a longitudinal survey of seventh through eleventh grade U.S. students enrol...

  2. Location Selection of Chinese Modern Railway Logistics Center Based on DEA-Bi-level Programming Model

    OpenAIRE

    Fenling Feng; Feiran Li; Qingya Zhang

    2013-01-01

    Properly planning the modern railway logistics center is a necessary step for railway logistics operation, as it can effectively improve the railway freight service through a seamless connection between internal and external logistics nodes. The study, working at the medium level and drawing on the existing railway freight stations within a railway logistics node city, focuses on the site selection of a modern railway logistics center so as to realize an organic combination between the newly built railway lo...

  3. Reducing uncertainty in the selection of bi-variate distributions of flood peaks and volumes using copulas and hydrological process-based model selection

    Science.gov (United States)

    Szolgay, Jan; Gaál, Ladislav; Bacigál, Tomáš; Kohnová, Silvia; Blöschl, Günter

    2016-04-01

    and snow-melt floods. First, empirical copulas for the individual processes were compared at each site separately in order to assess whether peak-volume relationships differ in terms of the respective flood processes. Next, the similarity of the empirical distributions was tested process-wise from a regional perspective. In the last step, the goodness-of-fit of frequently used copula types was examined both for process-based data samples (the current approach, based on a wider database of flood events) and for annual maximum floods (the traditional approach, which makes use of a limited number of events). It was concluded that, in order to reduce the uncertainty in model selection and parameter estimation, it is necessary to treat flood processes separately and analyze all available independent floods. Given that more than one statistically suitable copula model usually exists in practice, an uncertainty analysis of the design values in engineering studies resulting from the model selection is necessary. It was shown that reducing uncertainty in the choice of model can be attempted through a deeper hydrological analysis of the dependence structure and the model's suitability in specific hydrological environments, or through a more specific distinction of the typical flood generation mechanisms.
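
    A small worked step in this spirit: estimate Kendall's tau between flood peaks and volumes and invert it to a Gumbel copula parameter through theta = 1/(1 − tau). The peak/volume sample below is synthetic, not one of the study's catchments.

    ```python
    # Fit a Gumbel copula parameter from Kendall's tau via theta = 1/(1 - tau)
    # (synthetic peak/volume sample for illustration).
    import numpy as np
    from scipy.stats import kendalltau

    rng = np.random.default_rng(9)
    peaks = rng.gumbel(100.0, 20.0, 300)
    volumes = 0.6 * peaks + rng.gumbel(50.0, 15.0, 300)   # dependent volumes
    tau, _ = kendalltau(peaks, volumes)
    theta = 1.0 / (1.0 - tau)
    print(f"tau = {tau:.2f}, Gumbel copula theta = {theta:.2f}")
    ```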

  4. Preparatory selection of sterilization regime for canned Natural Atlantic Mackerel with oil based on developed mathematical models of the process

    Directory of Open Access Journals (Sweden)

    Maslov A. A.

    2016-12-01

    Definition of preparatory parameters for the sterilization regime of canned "Natural Atlantic Mackerel with Oil" is the aim of the current study. The PRSC software developed at the department of automation and computer engineering is used for the preparatory selection. To determine the parameters of the process model, the pre-trial process of sterilization and cooling in water with backpressure of canned "Natural Atlantic Mackerel with Oil" in can N 3 was performed in the laboratory autoclave AVK-30M. Information about the temperature in the autoclave sterilization chamber and in the can with product was gathered using Ellab TrackSense PRO loggers. From the obtained information, three transfer functions for the product model were identified: in the least heated area of the autoclave, the average heated area, and the most heated area. In the PRSC programme, temporal temperature dependences in the sterilization chamber were built using this information. The model of the sterilization process of canned "Natural Atlantic Mackerel with Oil" was obtained after the pre-trial process. Then, in automatic mode, the sterilization regime of canned "Natural Atlantic Mackerel with Oil" was selected using a value of the actual effect close to the normative sterilizing effect (5.9 conditional minutes). Furthermore, in this study a step-mode sterilization of canned "Natural Atlantic Mackerel with Oil" was selected. Utilization of step-mode sterilization with a maximum temperature of 125 °C in the sterilization chamber allows the process duration to be reduced by 10 %. However, the application of this regime in practice requires additional research. The described approach, based on the developed mathematical models of the process, makes it possible to obtain optimal stepwise and variable canned food sterilization regimes with high energy efficiency and product quality.
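
    The sterilizing-effect arithmetic referred to here is standard thermal-processing practice: F = ∫ 10^((T − 121.1)/z) dt with z = 10 °C. The sketch below applies it to an invented temperature profile; it is not the PRSC software.

    ```python
    # Sterilising effect F = integral of 10**((T - 121.1) / z) dt with z = 10 C,
    # evaluated on an invented heat-hold-cool temperature profile.
    import numpy as np

    t = np.linspace(0.0, 50.0, 501)                   # minutes
    T = np.where(t < 20, 80.0 + 1.9 * t,              # heating ramp to 118 C
                 np.where(t < 35, 118.0,              # holding phase
                          118.0 - 3.0 * (t - 35)))    # cooling phase
    lethality = 10.0 ** ((T - 121.1) / 10.0)
    F = np.sum(0.5 * (lethality[1:] + lethality[:-1]) * np.diff(t))
    print(f"sterilising effect F ~ {F:.1f} conditional minutes")
    ```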

  5. Soft Sensing Modelling Based on Optimal Selection of Secondary Variables and Its Application

    Institute of Scientific and Technical Information of China (English)

    Qi Li; Cheng Shao

    2009-01-01

    The composition of the distillation column is a very important quality variable in refineries; unfortunately, few hardware sensors are available to measure distillation compositions on-line. In this paper, a novel method using sensitivity matrix analysis and kernel ridge regression (KRR) to implement on-line soft sensing of distillation compositions is proposed. In this approach, sensitivity matrix analysis is used to select the most suitable secondary variables to serve as the soft sensor's inputs, and KRR is used to build the composition soft sensor. Application to a simulated distillation column demonstrates the effectiveness of the method.
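
    A minimal soft-sensor sketch in this spirit follows: kernel ridge regression maps selected secondary variables to the composition. The synthetic plant data and the fixed variable choice are assumptions standing in for the sensitivity-matrix selection step.

    ```python
    # Soft sensor via kernel ridge regression on 'selected' secondary
    # variables (synthetic data; selection step replaced by a fixed choice).
    import numpy as np
    from sklearn.kernel_ridge import KernelRidge
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(10)
    n = 400
    secondary = rng.normal(size=(n, 6))          # e.g. tray temperatures, flows
    composition = (np.tanh(secondary[:, 0]) + 0.5 * secondary[:, 2]
                   + rng.normal(0.0, 0.05, n))
    X = secondary[:, [0, 2]]                     # variables a sensitivity analysis might pick
    X_tr, X_te, y_tr, y_te = train_test_split(X, composition, random_state=0)
    model = KernelRidge(kernel="rbf", alpha=0.1, gamma=0.5).fit(X_tr, y_tr)
    rmse = np.sqrt(np.mean((model.predict(X_te) - y_te) ** 2))
    print(f"soft-sensor RMSE: {rmse:.3f}")
    ```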

  6. Process-based models of feeding and prey selection in larval fish

    DEFF Research Database (Denmark)

    Fiksen, O.; MacKenzie, Brian

    2002-01-01

    rates and prey selection in larval cod. Observed pursuit times of larvae are long and approach velocity slow enough to avoid an escape response from prey, but too short to avoid loss of prey at high turbulence levels. The pause-travel search mode is predicted to promote ingestion of larger prey than
µg dry wt l(-1). The spatio-temporal fluctuation of turbulence (tidal cycle) and light (sun height) over the bank generates complex structure in the patterns of food intake of larval fish, with different patterns emerging for small and large larvae.

  7. The Bases of Speech Pathology and Audiology: Selecting a Therapy Model

    Science.gov (United States)

    Schultz, Martin C.; Carpenter, Mary A.

    1973-01-01

    Described is the basis for therapeutic relationships in speech and hearing and presented are two models of clinical interaction, one directed to behavior change and the other directed to attitude change. (Author)

  8. Attentional spreading to task-irrelevant object features: experimental support and a 3-step model of attention for object-based selection and feature-based processing modulation.

    Science.gov (United States)

    Wegener, Detlef; Galashan, Fingal Orlando; Aurich, Maike Kathrin; Kreiter, Andreas Kurt

    2014-01-01

    Directing attention to a specific feature of an object has been linked to different forms of attentional modulation. Object-based attention theory is founded on the finding that even task-irrelevant features at the selected object are subject to attentional modulation, while feature-based attention theory proposes a global processing benefit for the selected feature even at other objects. Most studies investigated either one or the other form of attention, leaving open the possibility that both object- and feature-specific attentional effects do occur at the same time and may just represent two sides of a single attention system. We here investigate this issue by testing attentional spreading within and across objects, using reaction time (RT) measurements to changes of attended and unattended features on both attended and unattended objects. We asked subjects to report color and speed changes occurring on one of two overlapping random dot patterns (RDPs), presented at the center of gaze. The key property of the stimulation was that only one of the features (e.g., motion direction) was unique for each object, whereas the other feature (e.g., color) was shared by both. The results of two experiments show that co-selection of unattended features even occurs when those features have no means for selecting the object. At the same time, they demonstrate that this processing benefit is not restricted to the selected object but spreads to the task-irrelevant one. We conceptualize these findings by a 3-step model of attention that assumes a task-dependent top-down gain, object-specific feature selection based on task- and binding characteristics, and a global feature-specific processing enhancement. The model allows for the unification of a vast amount of experimental results into a single model, and makes various experimentally testable predictions for the interaction of object- and feature-specific processes.

  9. Attentional spreading to task-irrelevant object features: Experimental support and a 3-step model of attention for object-based selection and feature-based processing modulation

    Directory of Open Access Journals (Sweden)

    Detlef eWegener

    2014-06-01

    Directing attention to a specific feature of an object has been linked to different forms of attentional modulation. Object-based attention theory is founded on the finding that even task-irrelevant features at the selected object are subject to attentional modulation, while feature-based attention theory proposes a global processing benefit for the selected feature even at other objects. Most studies investigated either one or the other form of attention, leaving open the possibility that both object- and feature-specific attentional effects do occur at the same time and may just represent two sides of a single attention system. We here investigate this issue by testing attentional spreading within and across objects, using reaction time measurements to changes of attended and unattended features on both attended and unattended objects. We asked subjects to report color and speed changes occurring on one of two overlapping random dot patterns, presented at the center of gaze. The key property of the stimulation was that only one of the features (e.g., motion direction) was unique for each object, whereas the other feature (e.g., color) was shared by both. The results of two experiments show that co-selection of unattended features even occurs when those features have no means for selecting the object. At the same time, they demonstrate that this processing benefit is not restricted to the selected object but spreads to the task-irrelevant one. We conceptualize these findings by a 3-step model of attention that assumes a task-dependent top-down gain, object-specific feature selection based on task- and binding characteristics, and a global feature-specific processing enhancement. The model allows for the unification of a vast amount of experimental results into a single model, and makes various experimentally testable predictions for the interaction of object- and feature-specific processes.

  10. Selection Input Output by Restriction Using DEA Models Based on a Fuzzy Delphi Approach and Expert Information

    Science.gov (United States)

    Arsad, Roslah; Nasir Abdullah, Mohammad; Alias, Suriana; Isa, Zaidi

    2017-09-01

    Stock evaluation has always been an interesting problem for investors. In this paper, a comparison regarding the efficiency of stocks of listed companies in Bursa Malaysia was made through the application of the estimation method of Data Envelopment Analysis (DEA). One of the interesting research subjects in DEA is the selection of appropriate input and output parameters. In this study, DEA was used to measure the efficiency of stocks of listed companies in Bursa Malaysia in terms of financial ratios, in order to evaluate the performance of stocks. Based on previous studies and the Fuzzy Delphi Method (FDM), the most important financial ratios were selected. The results indicated that return on equity, return on assets, net profit margin, operating profit margin, earnings per share, price to earnings and debt to equity were the most important ratios. Using expert information, all the parameters were classified as inputs and outputs. The main objectives were to identify the most critical financial ratios, classify them based on expert information, and compute the relative efficiency scores of stocks as well as rank them completely within the construction industry and materials sector. The method of analysis using Alirezaee and Afsharian's model was employed in this study, where the originality of Charnes, Cooper and Rhodes (CCR) with the assumption of Constant Return to Scale (CRS) still holds. This method of ranking the relative efficiency of decision making units (DMUs) was value-added by the Balance Index. The data used were for the year 2015, and the population of the research includes accepted companies in stock markets in the construction industry and materials sector (63 companies). According to the ranking, the proposed model can rank all 63 companies completely using the selected financial ratios.
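    The efficiency scores at the center of such a study come from solving one small linear program per decision making unit (DMU). Below is a minimal, generic sketch of the input-oriented CCR multiplier model in Python with invented financial-ratio data; it is not the Alirezaee-Afsharian variant with the Balance Index used in the paper.

```python
# Generic input-oriented CCR (multiplier form) DEA sketch; toy data only.
import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(X, Y, o):
    """Efficiency of DMU o. X: (n, m) inputs, Y: (n, s) outputs."""
    n, m = X.shape
    s = Y.shape[1]
    # Variables: output weights u (length s), then input weights v (length m).
    c = np.concatenate([-Y[o], np.zeros(m)])             # maximize u . y_o
    A_ub = np.hstack([Y, -X])                            # u . y_j - v . x_j <= 0
    b_ub = np.zeros(n)
    A_eq = np.concatenate([np.zeros(s), X[o]])[None, :]  # v . x_o = 1
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
                  bounds=[(0, None)] * (s + m))
    return -res.fun

# Toy data: 4 stocks, input = [debt-to-equity], outputs = [ROE, EPS].
X = np.array([[0.8], [1.2], [0.5], [0.9]])
Y = np.array([[0.12, 1.1], [0.10, 0.9], [0.15, 1.3], [0.08, 0.7]])
print([round(ccr_efficiency(X, Y, o), 3) for o in range(len(X))])
```

    A score of 1 marks a DMU on the efficient frontier; a value-added index such as the paper's Balance Index is then needed to break ties among efficient units.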

  11. Selecting among three-mode principal component models of different types and complexities : A numerical convex hull based method

    NARCIS (Netherlands)

    Ceulemans, Eva; Kiers, Henk A.L.

    Several three-mode principal component models can be considered for the modelling of three-way, three-mode data, including the Candecomp/Parafac, Tucker3, Tucker2, and Tucker I models. The following question then may be raised: given a specific data set, which of these models should be selected, and

  12. Selecting among three-mode principal component models of different types and complexities : A numerical convex hull based method

    NARCIS (Netherlands)

    Ceulemans, Eva; Kiers, Henk A.L.

    2006-01-01

    Several three-mode principal component models can be considered for the modelling of three-way, three-mode data, including the Candecomp/Parafac, Tucker3, Tucker2, and Tucker I models. The following question then may be raised: given a specific data set, which of these models should be selected, and

  13. An Exergy-Based Model for Population Dynamics: Adaptation, Mutualism, Commensalism and Selective Extinction

    Directory of Open Access Journals (Sweden)

    Enrico Sciubba

    2012-10-01

    Full Text Available Following the critical analysis of the concept of “sustainability”, developed on the basis of exergy considerations in previous works, an analysis of possible species “behavior” is presented and discussed in this paper. Once more, we make use of one single axiom: that resource consumption (material and immaterial) can be quantified solely in terms of exergy flows. This assumption leads to a model of population dynamics that is applied here to describe the general behavior of interacting populations. The resulting equations are similar to the Lotka-Volterra ones, but more strongly coupled and intrinsically non-linear: as such, their solution space is topologically richer than that of classical prey-predator models. In this paper, we address an interesting specific problem in population dynamics: if a species assumes a commensalistic behavior, does it gain an evolutionary advantage? And, what is the difference, in terms of the access to the available exergy resources, between mutualism and commensalism? The model equations can be easily rearranged to accommodate both types of behavior, and thus only a brief discussion is devoted to this facet of the problem. The solution space is explored in the simplest case of two interacting populations: the model results in population curves in phase space that can satisfactorily explain the evolutionary advantages and drawbacks of either behavior and, more importantly, identify the presence or absence of a “sustainable” solution in which both species survive.
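    The record does not reproduce the model equations, so the following is only an illustrative integration of a strongly coupled, Lotka-Volterra-like two-species system of the general kind described; all coefficients are invented, and the cross terms merely stand in for competitive, mutualistic or commensalistic coupling (setting one of them to zero mimics commensalism).

```python
# Illustrative two-population dynamics sketch (invented coefficients).
from scipy.integrate import solve_ivp

def rhs(t, z, r1, r2, k11, k12, k21, k22):
    n1, n2 = z
    # Logistic-style growth with cross-coupling: positive k12/k21 = competition
    # for the shared resource inflow, negative = mutualism, zero = commensalism.
    dn1 = r1 * n1 * (1 - k11 * n1 - k12 * n2)
    dn2 = r2 * n2 * (1 - k21 * n1 - k22 * n2)
    return [dn1, dn2]

sol = solve_ivp(rhs, (0, 500), [0.1, 0.1],
                args=(0.05, 0.04, 1.0, 0.6, 0.0, 1.0))  # species 2 commensal
print(sol.y[:, -1])  # long-run sizes; both positive means coexistence
```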

  14. Proposing a New Approach for Supplier Selection Based on Kraljic’s Model Using FMEA and Integer Linear Programming

    Directory of Open Access Journals (Sweden)

    S. Mohammad Arabzad

    2012-06-01

    Full Text Available In recent years, numerous methods have been proposed to deal with the supplier evaluation and selection problem, but a point which has usually been neglected by researchers is the role of purchasing items. The aim of this paper is to propose an integrated approach to select suppliers and allocate orders on the basis of the nature of the purchasing items, which means that this issue plays an important role in supplier selection and order allocation. Therefore, items are first categorized according to Kraljic's model by the use of the FMEA technique. Then, suppliers are categorized and evaluated in four phases with respect to different types of purchasing items (Strategic, Bottleneck, Leverage and Routine). Finally, an integer linear programming model is utilized to allocate purchasing orders to suppliers. Furthermore, an empirical example is conducted to illustrate the stages of the proposed approach. Results imply that ranking suppliers and allocating purchasing items based on the nature of the purchasing items will create more capability in managing purchasing items and suppliers.

  15. Genetic search feature selection for affective modeling

    DEFF Research Database (Denmark)

    Martínez, Héctor P.; Yannakakis, Georgios N.

    2010-01-01

    Automatic feature selection is a critical step towards the generation of successful computational models of affect. This paper presents a genetic search-based feature selection method which is developed as a global-search algorithm for improving the accuracy of the affective models built...

  16. Model-Based Dose Selection for Intravaginal Ring Formulations Releasing Anastrozole and Levonorgestrel Intended for the Treatment of Endometriosis Symptoms.

    Science.gov (United States)

    Reinecke, Isabel; Schultze-Mosgau, Marcus-Hillert; Nave, Rüdiger; Schmitz, Heinz; Ploeger, Bart A

    2017-05-01

    Pharmacokinetics (PK) of anastrozole (ATZ) and levonorgestrel (LNG) released from an intravaginal ring (IVR) intended to treat endometriosis symptoms were characterized, and the exposure-response relationship focusing on the development of large ovarian follicle-like structures was investigated by modeling and simulation to support dose selection for further studies. A population PK analysis and simulations were performed for ATZ and LNG based on clinical phase 1 study data from 66 healthy women. A PK/PD model was developed to predict the probability of a maximum follicle size ≥30 mm and the potential contribution of ATZ besides the known LNG effects. Population PK models for ATZ and LNG were established where the interaction of LNG with sex hormone-binding globulin (SHBG) as well as a stimulating effect of estradiol on SHBG were considered. Furthermore, simulations showed that doses of 40 μg/d LNG combined with 300, 600, or 1050 μg/d ATZ reached anticipated exposure levels for both drugs, facilitating selection of ATZ and LNG doses in the phase 2 dose-finding study. The main driver for the effect on maximum follicle size appears to be unbound LNG exposure. A 50% probability of maximum follicle size ≥30 mm was estimated for 40 μg/d LNG based on the exposure-response analysis. ATZ in the dose range investigated does not increase the risk for ovarian cysts as occurs with LNG at a dose that does not inhibit ovulation. © 2016, The American College of Clinical Pharmacology.

  17. Model based estimation for multi-modal user interface component selection

    CSIR Research Space (South Africa)

    Coetzee, L

    2009-12-01

    Full Text Available the use of their five senses. The five senses, namely sight, hearing, touch, smell and taste, can be translated into different perceptual pathways (or modalities). The perceptual learning styles model developed by Russell French, Daryl Gilley, and Ed... elements such as music, sounds and Text-To-Speech as audible output can be important, while these might not be important to an individual with a visual preference. Helper applications can be used to transform content from one format to another e...

  18. Model selection bias and Freedman's paradox

    Science.gov (United States)

    Lukacs, P.M.; Burnham, K.P.; Anderson, D.R.

    2010-01-01

    In situations where limited knowledge of a system exists and the ratio of data points to variables is small, variable selection methods can often be misleading. Freedman (Am Stat 37:152-155, 1983) demonstrated how common it is to select completely unrelated variables as highly "significant" when the number of data points is similar in magnitude to the number of variables. A new type of model averaging estimator based on model selection with Akaike's AIC is used with linear regression to investigate the problems of likely inclusion of spurious effects and model selection bias, the bias introduced while using the data to select a single seemingly "best" model from an (often large) set of models employing many predictor variables. The new model averaging estimator helps reduce these problems and provides confidence interval coverage at the nominal level, while traditional stepwise selection has poor inferential properties. © The Institute of Statistical Mathematics, Tokyo 2009.
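    Freedman's setting and the model-averaging remedy are easy to reproduce in outline. The sketch below is a generic illustration rather than the authors' estimator: it fits all subsets of pure-noise predictors by OLS and combines them with Akaike weights.

```python
# All-subsets OLS on pure noise, combined with Akaike weights (illustrative).
import itertools
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n, p = 30, 6
X = rng.normal(size=(n, p))
y = rng.normal(size=n)                      # no real effects: Freedman's setting

fits = []
for k in range(p + 1):
    for subset in itertools.combinations(range(p), k):
        Xs = sm.add_constant(X[:, list(subset)]) if subset else np.ones((n, 1))
        fits.append((subset, sm.OLS(y, Xs).fit()))

aic = np.array([f.aic for _, f in fits])
w = np.exp(-0.5 * (aic - aic.min()))
w /= w.sum()                                # Akaike weights over the model set

# Model-averaged coefficient of predictor 0 (counted as 0 when absent):
beta0 = sum(wi * (f.params[list(s).index(0) + 1] if 0 in s else 0.0)
            for wi, (s, f) in zip(w, fits))
print(round(beta0, 4))                      # shrunk toward 0, as it should be
```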

  19. Developing a suitable model for supplier selection based on supply chain risks: an empirical study from Iranian pharmaceutical companies.

    Science.gov (United States)

    Mehralian, Gholamhossein; Rajabzadeh Gatari, Ali; Morakabati, Mohadese; Vatanpour, Hossein

    2012-01-01

    The supply chain represents the critical link between the development of new products and the market in the pharmaceutical industry. Over the years, improvements made in supply chain operations have focused largely on ways to reduce cost and gain efficiencies in scale. In addition, powerful regulatory and market forces have provided new incentives for pharmaceutical firms to basically rethink the way they produce and distribute products, and also to re-imagine the role of the supply chain in driving strategic growth, brand differentiation and economic value in the health continuum. The purpose of this paper is to formulate the basic factors involved in risk analysis of the pharmaceutical industry, and also to determine the effective factors involved in supplier selection and their priorities. This paper is based on the results of a literature review, experts' opinion acquisition, statistical analysis and also the use of MADM models on data gathered from distributed questionnaires. The model consists of the following steps and components: first, the factors involved in supply chain risks are determined. Based on them, a framework is considered. According to the results of the statistical analysis and the MADM models, the risk factors are formulated. The paper determines the main components and influential factors involved in the supply chain risks. Results showed that delivery risk can make an important contribution to mitigating the risk of the pharmaceutical industry.

  20. The impact of an educational program based on BASNEF model on the selection of a contraceptive method in women.

    Science.gov (United States)

    Sarayloo, Khadijeh; Moghadam, Zahra Behboodi; Mansoure, Jamshidi Manesh; Mostafa, Hosseini; Mohsen, Saffari

    2015-01-01

    Quality of services, making communications with target groups, and educating them are among the most important success factors in the implementation of family planning programs. Provision of public access to contraception and related methods, counseling services, and paying attention to social specifications and cultures are important in the promotion of service quality. With regard to the applicability of health education theories and models, the present study aimed to assess the impact of an educational program based on the BASNEF model on the choice of contraceptive methods by women referring to health care centers in Minoodasht in 2012. This is a quasi-experimental study. Data were collected by the researcher using the BASNEF questionnaire from women referring to health care centers in two groups, study and control (n = 100 in each group). The educational intervention (four 1-h educational sessions once a week during 1 month and two additional review sessions) was conducted in the form of group and face-to-face discussions, and educational booklets were distributed. Data were analyzed by Chi-square, analysis of covariance (ANCOVA), paired t-test, and t-test through SPSS version 14. After the intervention, the mean score of knowledge was significantly higher in the study group compared to the control group, and the educational intervention was effective in increasing women's knowledge, attitude, and practice. As family and educational facilities are among the factors influencing contraceptive method selection, it is hoped that interventional planning will be based on educational models.

  1. Selected System Models

    Science.gov (United States)

    Schmidt-Eisenlohr, F.; Puñal, O.; Klagges, K.; Kirsche, M.

    Apart from the general issue of modeling the channel, the PHY and the MAC of wireless networks, there are specific modeling assumptions that are considered for different systems. In this chapter we consider three specific wireless standards and highlight modeling options for them. These are IEEE 802.11 (as an example for wireless local area networks), IEEE 802.16 (as an example for wireless metropolitan networks) and IEEE 802.15 (as an example for body area networks). Each section on these three systems also discusses, at the end, a set of model implementations that are available today.

  2. A hybrid multi-criteria decision modeling approach for the best biodiesel blend selection based on ANP-TOPSIS analysis

    Directory of Open Access Journals (Sweden)

    G. Sakthivel

    2015-03-01

    Full Text Available The ever-increasing demand for and depletion of fossil fuels have had an adverse impact on environmental pollution. The selection of an appropriate source of biodiesel and proper blending of biodiesel play a major role in alternate energy production. This paper describes an application of a hybrid Multi Criteria Decision Making (MCDM) technique for the selection of the optimum fuel blend in fish oil biodiesel for the IC engine. In the proposed model, the Analytical Network Process (ANP) is integrated with the Technique for Order Performance by Similarity to Ideal Solution (TOPSIS) and VlseKriterijumska Optimizacija I Kompromisno Resenje (VIKOR, in Serbian) to evaluate the optimum blend. Evaluation of the suitable blend is based on the exploratory analysis of the performance, emission and combustion parameters of a single cylinder, constant speed, direct injection diesel engine at different load conditions. Here the ANP is used to determine the relative weights of the criteria, whereas TOPSIS and VIKOR are used for obtaining the final ranking of alternative blends. An efficient pair-wise comparison process and ranking of alternatives can be achieved for optimum blend selection through the integration of ANP with TOPSIS and VIKOR. The obtained preference orders of the blends for ANP-VIKOR and ANP-TOPSIS are B20 > Diesel > B40 > B60 > B80 > B100 and B20 > B40 > Diesel > B60 > B80 > B100, respectively. Hence, by comparing both these methods, B20 is selected as the best blend to operate internal combustion engines. This paper highlights a new insight into MCDM techniques to evaluate the best fuel blend for decision makers such as engine manufacturers and R&D engineers, to meet fuel economy and emission norms and to empower the green revolution.
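    As a rough illustration of the TOPSIS stage only (the criterion weights are assumed to come from the ANP step, and all numbers below are made up rather than taken from the paper's engine data):

```python
# Generic TOPSIS closeness-coefficient ranking; toy blend data.
import numpy as np

def topsis(M, weights, benefit):
    """M: (alternatives x criteria); benefit[j] True if larger is better."""
    V = (M / np.linalg.norm(M, axis=0)) * weights   # normalize, then weight
    ideal = np.where(benefit, V.max(0), V.min(0))
    anti = np.where(benefit, V.min(0), V.max(0))
    d_pos = np.linalg.norm(V - ideal, axis=1)
    d_neg = np.linalg.norm(V - anti, axis=1)
    return d_neg / (d_pos + d_neg)                  # closeness coefficient

# Rows: Diesel, B20, B40; columns: efficiency (up), NOx (down), smoke (down).
M = np.array([[30.1, 420.0, 35.0],
              [29.4, 400.0, 30.0],
              [28.7, 380.0, 33.0]])
cc = topsis(M, np.array([0.5, 0.3, 0.2]), np.array([True, False, False]))
print(cc, cc.argmax())   # highest closeness coefficient = preferred blend
```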

  3. Secondary eclipses in the CoRoT light curves: A homogeneous search based on Bayesian model selection

    CERN Document Server

    Parviainen, Hannu; Belmonte, Juan Antonio

    2012-01-01

    We aim to identify and characterize secondary eclipses in the original light curves of all published CoRoT planets using uniform detection and evaluation criteria. Our analysis is based on a Bayesian model selection between two competing models: one with and one without an eclipse signal. The search is carried out by mapping the Bayes factor in favor of the eclipse model as a function of the eclipse center time, after which the characterization of plausible eclipse candidates is done by estimating the posterior distributions of the eclipse model parameters using Markov Chain Monte Carlo. We discover statistically significant eclipse events for two planets, CoRoT-6b and CoRoT-11b, and for one brown dwarf, CoRoT-15b. We also find marginally significant eclipse events passing our plausibility criteria for CoRoT-3b, 13b, 18b, and 21b. The previously published CoRoT-1b and CoRoT-2b eclipses are also confirmed.
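    The core comparison, an eclipse model against a constant-flux model, can be caricatured on synthetic data with a BIC-approximated Bayes factor; the paper itself maps the Bayes factor over candidate eclipse center times and characterizes candidates with MCMC, which this sketch does not attempt.

```python
# BIC-approximated Bayes factor: box eclipse vs. constant flux (synthetic).
import numpy as np

rng = np.random.default_rng(1)
t = np.linspace(0.0, 1.0, 500)
sigma = 5e-4
flux = 1.0 + sigma * rng.normal(size=t.size)
in_ecl = (t > 0.48) & (t < 0.52)           # assume center/width known here
flux[in_ecl] -= 1e-3                       # injected secondary eclipse

def chi2(model):
    return np.sum((flux - model) ** 2 / sigma ** 2)

m0 = np.full_like(t, flux.mean())                                  # k = 1
m1 = np.where(in_ecl, flux[in_ecl].mean(), flux[~in_ecl].mean())   # k = 2
n = t.size
bic0 = chi2(m0) + 1 * np.log(n)
bic1 = chi2(m1) + 2 * np.log(n)
print('ln B10 ~', 0.5 * (bic0 - bic1))     # strongly positive favors eclipse
```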

  4. Physics-based simulation modeling and optimization of microstructural changes induced by machining and selective laser melting processes in titanium and nickel based alloys

    Science.gov (United States)

    Arisoy, Yigit Muzaffer

    Manufacturing processes may significantly affect the quality of resultant surfaces and the structural integrity of metal end products. Controlling manufacturing-process-induced changes to the product's surface integrity may improve the fatigue life and overall reliability of the end product. The goal of this study is to model the phenomena that result in microstructural alterations and to improve the surface integrity of the manufactured parts by utilizing physics-based process simulations and other computational methods. Two different manufacturing processes, one conventional and one advanced, are studied: machining of titanium and nickel-based alloys, and selective laser melting of nickel-based powder alloys. 3D Finite Element (FE) process simulations are developed, and experimental data that validate these process simulation models are generated to compare against predictions. Computational process modeling and optimization have been performed for machining-induced microstructure, including: i) predicting recrystallization and grain size using FE simulations and the Johnson-Mehl-Avrami-Kolmogorov (JMAK) model, ii) predicting microhardness using non-linear regression models and the Random Forests method, and iii) multi-objective machining optimization for minimizing microstructural changes. Experimental analysis and computational process modeling of selective laser melting have also been conducted, including: i) microstructural analysis of grain sizes and growth directions using SEM imaging and machine learning algorithms, ii) analysis of thermal imaging for spattering, heating/cooling rates and meltpool size, iii) predicting thermal field, meltpool size, and growth directions via thermal gradients using 3D FE simulations, iv) predicting localized solidification using the Phase Field method. These computational process models and predictive models, once utilized by industry to optimize process parameters, have the ultimate potential to improve performance of

  5. A statistically based seasonal precipitation forecast model with automatic predictor selection and its application to central and south Asia

    Science.gov (United States)

    Gerlitz, Lars; Vorogushyn, Sergiy; Apel, Heiko; Gafurov, Abror; Unger-Shayesteh, Katy; Merz, Bruno

    2016-11-01

    The study presents a statistically based seasonal precipitation forecast model, which automatically identifies suitable predictors from globally gridded sea surface temperature (SST) and climate variables by means of an extensive data-mining procedure and explicitly avoids the utilization of typical large-scale climate indices. This leads to an enhanced flexibility of the model and enables its automatic calibration for any target area without any prior assumption concerning adequate predictor variables. Potential predictor variables are derived by means of a cell-wise correlation analysis of precipitation anomalies with gridded global climate variables under consideration of varying lead times. Significantly correlated grid cells are subsequently aggregated to predictor regions by means of a variability-based cluster analysis. Finally, for every month and lead time, an individual random-forest-based forecast model is constructed, by means of the preliminary generated predictor variables. Monthly predictions are aggregated to running 3-month periods in order to generate a seasonal precipitation forecast. The model is applied and evaluated for selected target regions in central and south Asia. Particularly for winter and spring in westerly-dominated central Asia, correlation coefficients between forecasted and observed precipitation reach values up to 0.48, although the variability of precipitation rates is strongly underestimated. Likewise, for the monsoonal precipitation amounts in the south Asian target area, correlations of up to 0.5 were detected. The skill of the model for the dry winter season over south Asia is found to be low. A sensitivity analysis with well-known climate indices, such as the El Niño-Southern Oscillation (ENSO), the North Atlantic Oscillation (NAO) and the East Atlantic (EA) pattern, reveals the major large-scale controlling mechanisms of the seasonal precipitation climate for each target area. For the central Asian target areas, both
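    The final modeling step described, a random-forest regression from aggregated SST-cluster predictors to seasonal precipitation, might look schematically as follows; the arrays are synthetic stand-ins for the paper's preprocessed predictor regions and lead times.

```python
# Schematic random-forest seasonal forecast on synthetic predictor clusters.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(42)
n_years, n_clusters = 40, 8
X = rng.normal(size=(n_years, n_clusters))   # mean SST anomaly per cluster
y = 0.6 * X[:, 0] - 0.4 * X[:, 3] + rng.normal(scale=0.5, size=n_years)

train, test = slice(0, 30), slice(30, None)  # hold out the last 10 "years"
rf = RandomForestRegressor(n_estimators=500, random_state=0)
rf.fit(X[train], y[train])
skill = np.corrcoef(rf.predict(X[test]), y[test])[0, 1]
print(round(skill, 2))                       # correlation as forecast skill
```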

  6. Prediction error variance and expected response to selection, when selection is based on the best predictor - for Gaussian and threshold characters, traits following a Poisson mixed model and survival traits

    DEFF Research Database (Denmark)

    Andersen, Anders Holst; Korsgaard, Inge Riis; Jensen, Just

    2002-01-01

    In this paper, we consider selection based on the best predictor of animal additive genetic values in Gaussian linear mixed models, threshold models, Poisson mixed models, and log normal frailty models for survival data (including models with time-dependent covariates with associated fixed or random effects). In the different models, expressions are given (when these can be found - otherwise unbiased estimates are given) for prediction error variance, accuracy of selection and expected response to selection on the additive genetic scale and on the observed scale. The expressions given for non Gaussian traits are generalisations of the well-known formulas for Gaussian traits - and reflect, for Poisson mixed models and frailty models for survival data, the hierarchal structure of the models. In general the ratio of the additive genetic variance to the total variance in the Gaussian part...

  7. Model Selection Principles in Misspecified Models

    CERN Document Server

    Lv, Jinchi

    2010-01-01

    Model selection is of fundamental importance to high dimensional modeling featured in many contemporary applications. Classical principles of model selection include the Kullback-Leibler divergence principle and the Bayesian principle, which lead to the Akaike information criterion and Bayesian information criterion when models are correctly specified. Yet model misspecification is unavoidable when we have no knowledge of the true model or when we have the correct family of distributions but miss some true predictor. In this paper, we propose a family of semi-Bayesian principles for model selection in misspecified models, which combine the strengths of the two well-known principles. We derive asymptotic expansions of the semi-Bayesian principles in misspecified generalized linear models, which give the new semi-Bayesian information criteria (SIC). A specific form of SIC admits a natural decomposition into the negative maximum quasi-log-likelihood, a penalty on model dimensionality, and a penalty on model misspecification...

  8. Bayesian Model Selection and Statistical Modeling

    CERN Document Server

    Ando, Tomohiro

    2010-01-01

    Bayesian model selection is a fundamental part of the Bayesian statistical modeling process. The quality of these solutions usually depends on the goodness of the constructed Bayesian model. Realizing how crucial this issue is, many researchers and practitioners have been extensively investigating the Bayesian model selection problem. This book provides comprehensive explanations of the concepts and derivations of the Bayesian approach for model selection and related criteria, including the Bayes factor, the Bayesian information criterion (BIC), the generalized BIC, and the pseudo marginal likelihood.

  9. A Codon-Based Model of Host-Specific Selection in Parasites, with an Application to the Influenza A Virus

    DEFF Research Database (Denmark)

    Forsberg, Ronald; Christiansen, Freddy Bugge

    2003-01-01

    involved in host-specific adaptation. We discuss the applicability of the model to the more general problem of ascertaining whether the selective regime differs between two groups of related organisms. The utility of the model is illustrated on a dataset of nucleoprotein sequences from the influenza A virus...

  10. A Codon-Based Model of Host-Specific Selection in Parasites, with an Application to the Influenza A Virus

    DEFF Research Database (Denmark)

    Forsberg, Ronald; Christiansen, Freddy Bugge

    2003-01-01

    involved in host-specific adaptation. We discuss the applicability of the model to the more general problem of ascertaining whether the selective regime differs between two groups of related organisms. The utility of the model is illustrated on a dataset of nucleoprotein sequences from the influenza A virus...

  11. Statistical Analysis of Wind Power Density Based on the Weibull and Rayleigh Models of Selected Site in Malaysia

    Directory of Open Access Journals (Sweden)

    Aliashim Albani

    2014-02-01

    Full Text Available The demand for electricity in Malaysia is growing in tandem with its Gross Domestic Product (GDP) growth. Malaysia is going to need even more energy as it strives to grow towards a high-income economy. Malaysia has taken steps towards exploring renewable energy (RE), including wind energy, as an alternative source for generating electricity. In the present study, the wind energy potential of the sites is statistically analyzed based on 1 year of measured hourly time-series wind speed data. Wind data were obtained from the Malaysian Meteorological Department (MMD) weather stations at nine selected sites in Malaysia. The data were processed using MATLAB programming to determine and generate the Weibull and Rayleigh distribution functions. Both the Weibull and Rayleigh models were fitted to and compared against the field data probability distributions of year 2011. From the analysis, it was shown that the Weibull distribution fits the field data better than the Rayleigh distribution for the whole year 2011. The wind power density of every site has been studied based on the Weibull and Rayleigh functions. The Weibull distribution shows a good approximation for the estimation of wind power density in Malaysia.
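    The fitting and wind-power-density calculation described here follow standard formulas: the mean power density is P/A = 0.5·ρ·c³·Γ(1 + 3/k) for a Weibull distribution with shape k and scale c, with Rayleigh as the k = 2 special case. A minimal sketch, with simulated hourly speeds in place of the MMD station records:

```python
# Weibull vs. Rayleigh fit and wind power density; simulated wind speeds.
import numpy as np
from math import gamma
from scipy import stats

rho = 1.225                                      # air density, kg/m^3
v = stats.weibull_min.rvs(2.0, scale=5.0, size=8760, random_state=0)

k, _, c = stats.weibull_min.fit(v, floc=0)       # Weibull shape and scale
_, sigma = stats.rayleigh.fit(v, floc=0)         # Rayleigh scale

wpd_weibull = 0.5 * rho * c**3 * gamma(1 + 3 / k)            # W/m^2
wpd_rayleigh = 0.5 * rho * (np.sqrt(2) * sigma)**3 * gamma(1 + 3 / 2)
print(round(k, 2), round(c, 2), round(wpd_weibull, 1), round(wpd_rayleigh, 1))
```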

  12. A blue-LED-based device for selective photocoagulation of superficial abrasions: theoretical modeling and in vivo validation

    Science.gov (United States)

    Rossi, Francesca; Pini, Roberto; De Siena, Gaetano; Massi, Daniela; Pavone, Francesco S.; Alfieri, Domenico; Cannarozzo, Giovanni

    2010-02-01

    The blue light (~400 nm) emitted by high power Light Emitting Diodes (LEDs) is selectively absorbed by the haemoglobin content of blood and then converted into heat. This is the basic concept in setting up a compact, low-cost, and easy-to-handle photohaemostasis device for the treatment of superficial skin abrasions. Its main application is in reducing bleeding from superficial capillary vessels during laser-induced aesthetic treatments, such as skin resurfacing, thus reducing the treatment time and improving aesthetic results (reduction of scar formation). In this work we first present the preliminary modeling study: a Finite Element Model (FEM) of the LED-induced photothermal process was set up, in order to estimate the optimal wavelength and treatment time by studying the temperature dynamics in the tissue. Then, a compact, handheld illumination device was designed: commercially available high power LEDs emitting in the blue region were mounted in a suitable and ergonomic case. The prototype was tested in the treatment of dorsal excoriations in rats. Thermal effects were monitored by an infrared thermocamera, experimentally evidencing the modest and confined heating effects and confirming the modeling predictions. Objective observations and histopathological analysis performed in a follow-up study showed no adverse reactions and no thermal damage in the treated areas and surrounding tissues. The device was then used in human patients, in order to stop bleeding during an Erbium laser skin resurfacing procedure. By inducing LED-based photocoagulation, the overall treatment time was shortened and scar formation was reduced, thus enhancing the esthetic effect of the laser procedure.

  13. Genomic selection models double the accuracy of predicted breeding values for bacterial cold water disease resistance compared to a traditional pedigree-based model in rainbow trout aquaculture.

    Science.gov (United States)

    Vallejo, Roger L; Leeds, Timothy D; Gao, Guangtu; Parsons, James E; Martin, Kyle E; Evenhuis, Jason P; Fragomeni, Breno O; Wiens, Gregory D; Palti, Yniv

    2017-02-01

    Previously, we have shown that bacterial cold water disease (BCWD) resistance in rainbow trout can be improved using traditional family-based selection, but progress has been limited to exploiting only between-family genetic variation. Genomic selection (GS) is a new alternative that enables exploitation of within-family genetic variation. We compared three GS models [single-step genomic best linear unbiased prediction (ssGBLUP), weighted ssGBLUP (wssGBLUP), and BayesB] to predict genomic-enabled breeding values (GEBV) for BCWD resistance in a commercial rainbow trout population, and compared the accuracy of GEBV to traditional estimates of breeding values (EBV) from a pedigree-based BLUP (P-BLUP) model. We also assessed the impact of sampling design on the accuracy of GEBV predictions. For these comparisons, we used BCWD survival phenotypes recorded on 7893 fish from 102 families, of which 1473 fish from 50 families had genotypes [57K single nucleotide polymorphism (SNP) array]. Naïve siblings of the training fish (n = 930 testing fish) were genotyped to predict their GEBV and mated to produce 138 progeny testing families. In the following generation, 9968 progeny were phenotyped to empirically assess the accuracy of GEBV predictions made on their non-phenotyped parents. The accuracy of GEBV from all tested GS models was substantially higher than that of the P-BLUP model EBV. The highest increase in accuracy relative to the P-BLUP model was achieved with BayesB (97.2 to 108.8%), followed by wssGBLUP at iterations 2 (94.4 to 97.1%) and 3 (88.9 to 91.2%) and ssGBLUP (83.3 to 85.3%). Reducing the training sample size to n = ~1000 had no negative impact on the accuracy (0.67 to 0.72), but with n = ~500 the accuracy dropped to 0.53 to 0.61 if the training and testing fish were full-sibs, and even substantially lower, to 0.22 to 0.25, when they were not full-sibs. Using progeny performance data, we showed that the accuracy of genomic predictions is substantially higher

  14. Model Selection and Evaluation Based on Emerging Infectious Disease Data Sets including A/H1N1 and Ebola

    Directory of Open Access Journals (Sweden)

    Wendi Liu

    2015-01-01

    Full Text Available The aim of the present study is to apply simple ODE models in the area of modeling the spread of emerging infectious diseases and show the importance of model selection in estimating parameters, the basic reproduction number, turning point, and final size. To quantify the plausibility of each model, given the data and the set of four models including the Logistic, Gompertz, Rosenzweig, and Richards models, the Bayes factors are calculated and precise estimates of the best-fitted model parameters and key epidemic characteristics have been obtained. In particular, for Ebola the basic reproduction numbers are 1.3522 (95% CI (1.3506, 1.3537)), 1.2101 (95% CI (1.2084, 1.2119)), 3.0234 (95% CI (2.6063, 3.4881)), and 1.9018 (95% CI (1.8565, 1.9478)), the turning points are November 7, November 17, October 2, and November 3, 2014, and the final sizes until December 2015 are 25794 (95% CI (25630, 25958)), 3916 (95% CI (3865, 3967)), 9886 (95% CI (9740, 10031)), and 12633 (95% CI (12515, 12750)) for West Africa, Guinea, Liberia, and Sierra Leone, respectively. The main results confirm that model selection is crucial in evaluating and predicting the important quantities describing the emerging infectious diseases, and arbitrarily picking a model without any consideration of alternatives is problematic.
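    A stripped-down version of the model-comparison step is shown below: two of the four growth curves are fitted to synthetic cumulative case counts and compared, with BIC standing in for the Bayes factors actually computed in the paper.

```python
# Fit logistic vs. Richards growth curves and compare by BIC (synthetic data).
import numpy as np
from scipy.optimize import curve_fit

def logistic(t, K, r, tm):
    return K / (1 + np.exp(-r * (t - tm)))

def richards(t, K, r, tm, a):
    return K / (1 + a * np.exp(-r * (t - tm))) ** (1 / a)

t = np.arange(60.0)
y = (richards(t, 10000, 0.25, 30, 0.5)
     + np.random.default_rng(3).normal(scale=50, size=t.size))

def bic(f, p0):
    popt, _ = curve_fit(f, t, y, p0=p0, bounds=(0, np.inf), maxfev=20000)
    rss = np.sum((y - f(t, *popt)) ** 2)
    return t.size * np.log(rss / t.size) + len(popt) * np.log(t.size)

print('logistic BIC:', round(bic(logistic, [9000, 0.2, 25]), 1))
print('richards BIC:', round(bic(richards, [9000, 0.2, 25, 1.0]), 1))  # lower wins
```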

  15. Fusion strategies for selecting multiple tuning parameters for multivariate calibration and other penalty based processes: A model updating application for pharmaceutical analysis

    Energy Technology Data Exchange (ETDEWEB)

    Tencate, Alister J. [Department of Chemistry, Idaho State University, Pocatello, ID 83209 (United States); Kalivas, John H., E-mail: kalijohn@isu.edu [Department of Chemistry, Idaho State University, Pocatello, ID 83209 (United States); White, Alexander J. [Department of Physics and Optical Engineering, Rose-Hulman Institute of Technology, Terre Huate, IN 47803 (United States)

    2016-05-19

    New multivariate calibration methods and other processes are being developed that require selection of multiple tuning parameter (penalty) values to form the final model. With one or more tuning parameters, using only one measure of model quality to select final tuning parameter values is not sufficient. Optimization of several model quality measures is challenging. Thus, three fusion ranking methods are investigated for simultaneous assessment of multiple measures of model quality for selecting tuning parameter values. One is a supervised learning fusion rule named sum of ranking differences (SRD). The other two are non-supervised learning processes based on the sum and median operations. The effect of the number of models evaluated on the three fusion rules is also evaluated using three procedures. One procedure uses all models from all possible combinations of the tuning parameters. To reduce the number of models evaluated, an iterative process (only applicable to SRD) is applied, and thresholding a model quality measure before applying the fusion rules is also used. A near infrared pharmaceutical data set requiring model updating is used to evaluate the three fusion rules. In this case, calibration of the primary conditions is for the active pharmaceutical ingredient (API) of tablets produced in a laboratory. The secondary conditions for calibration updating are for tablets produced in the full batch setting. Two model updating processes requiring selection of two unique tuning parameter values are studied. One is based on Tikhonov regularization (TR) and the other is a variation of partial least squares (PLS). The three fusion methods are shown to provide equivalent and acceptable results allowing automatic selection of the tuning parameter values. Best tuning parameter values are selected when model quality measures used with the fusion rules are for the small secondary sample set used to form the updated models. In this model updating situation, evaluation of

  16. A Hybrid Multi-Step Model for Forecasting Day-Ahead Electricity Price Based on Optimization, Fuzzy Logic and Model Selection

    Directory of Open Access Journals (Sweden)

    Ping Jiang

    2016-08-01

    Full Text Available The day-ahead electricity market is closely related to other commodity markets such as the fuel and emission markets and is increasingly playing a significant role in human life. Thus, in the electricity markets, accurate electricity price forecasting plays a significant role for power producers and consumers. Although many studies developing and proposing highly accurate forecasting models exist in the literature, there have been few investigations on improving the forecasting effectiveness of electricity price from the perspective of reducing the volatility of data with satisfactory accuracy. Based on reducing the volatility of the electricity price and the forecasting nature of the radial basis function network (RBFN), this paper successfully develops a two-stage model to forecast the day-ahead electricity price, of which the first stage is particle swarm optimization (PSO)-core mapping (CM) with self-organizing map and fuzzy set (PCMwSF), and the second stage is selection rule (SR). The PCMwSF stage applies CM, fuzzy set and optimized weights to obtain the future price, and the SR stage is inspired by the forecasting nature of RBFN and effectively selects the best forecast during the test period. The proposed model, i.e., CM-PCMwSF-SR, not only overcomes the difficulty of reducing the high volatility of the electricity price but also achieves a forecasting effectiveness superior to that of the benchmarks.

  17. Neutrosophic Decision Making Model for Clay-Brick Selection in Construction Field Based on Grey Relational Analysis

    Directory of Open Access Journals (Sweden)

    Kalyan Mondal

    2015-09-01

    Full Text Available The purpose of this paper is to present a quality clay-brick selection approach based on multi-attribute decision making with single-valued neutrosophic grey relational analysis. Brick plays a significant role in the construction field, so it is important to select quality clay-bricks for construction based on a suitable mathematical decision making tool. There are several selection methods in the literature. Among them, decision making with neutrosophic sets is very pragmatic and interesting. A neutrosophic set is one tool that can deal with indeterminacy and inconsistent data. In the proposed method, the rating of all alternatives is expressed with single-valued neutrosophic sets, which are characterized by a truth-membership degree (acceptance), an indeterminacy-membership degree and a falsity-membership degree (rejection). The weight of each attribute is determined based on experts' opinions. The neutrosophic grey relational coefficient is used based on the Hamming distance between each alternative and the ideal neutrosophic estimates reliability solution and the ideal neutrosophic estimates unreliability solution. Then the neutrosophic relational degree is used to determine the ranking order of all alternatives (bricks). An illustrative numerical example for quality brick selection is solved to show the effectiveness of the proposed method.
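    Stripped of the neutrosophic extension (i.e., using crisp scores instead of truth/indeterminacy/falsity triples and Hamming distances), the grey relational core of such a ranking reduces to a few lines:

```python
# Classical grey relational analysis against an ideal reference sequence.
import numpy as np

def grey_relational_grades(M, weights, rho=0.5):
    """M: (alternatives x benefit criteria); rho: distinguishing coefficient."""
    N = (M - M.min(0)) / (M.max(0) - M.min(0))       # normalize to [0, 1]
    delta = np.abs(N.max(0) - N)                     # distance to ideal sequence
    coef = (delta.min() + rho * delta.max()) / (delta + rho * delta.max())
    return coef @ weights                            # weighted relational grade

M = np.array([[7.0, 6.0, 8.0],                       # rows = brick alternatives
              [8.0, 7.0, 6.0],
              [6.0, 8.0, 7.0]])
grades = grey_relational_grades(M, np.array([0.4, 0.35, 0.25]))
print(grades, grades.argmax())                       # highest grade ranks first
```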

  18. Model selection for Gaussian kernel PCA denoising

    DEFF Research Database (Denmark)

    Jørgensen, Kasper Winther; Hansen, Lars Kai

    2012-01-01

    We propose kernel Parallel Analysis (kPA) for automatic kernel scale and model order selection in Gaussian kernel PCA. Parallel Analysis [1] is based on a permutation test for covariance and has previously been applied for model order selection in linear PCA; here we augment the procedure to also tune the Gaussian kernel scale of radial basis function based kernel PCA. We evaluate kPA for denoising of simulated data and the US Postal data set of handwritten digits. We find that kPA outperforms other heuristics to choose the model order and kernel scale in terms of signal-to-noise ratio (SNR)...
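    The permutation logic of parallel analysis carries over to kernel PCA roughly as sketched below. A single permutation is used for brevity where a real application would average over many, the kernel-scale tuning loop of kPA itself is omitted, and the `eigenvalues_` attribute assumes scikit-learn >= 1.0.

```python
# Parallel-analysis-style model order selection for Gaussian kernel PCA.
import numpy as np
from sklearn.decomposition import KernelPCA

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(loc=m, scale=0.3, size=(50, 5))
               for m in (0.0, 2.0)])          # two clusters: low-rank structure

def kpca_eigenvalues(Z, gamma, n_components=10):
    return KernelPCA(n_components=n_components, kernel='rbf',
                     gamma=gamma).fit(Z).eigenvalues_

gamma = 1.0 / X.shape[1]
eig_data = kpca_eigenvalues(X, gamma)
# Null spectrum: permute each column independently to destroy structure.
X_perm = np.column_stack([rng.permutation(X[:, j]) for j in range(X.shape[1])])
eig_null = kpca_eigenvalues(X_perm, gamma)
print('retained components:', int(np.sum(eig_data > eig_null)))
```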

  19. APPLICATION OF THE MODEL CERNE FOR THE ESTABLISHMENT OF CRITERIA INCUBATION SELECTION IN TECHNOLOGY BASED BUSINESSES : A STUDY IN INCUBATORS OF TECHNOLOGICAL BASE OF THE COUNTRY

    Directory of Open Access Journals (Sweden)

    Clobert Jefferson Passoni

    2017-03-01

    Full Text Available Business incubators are a great source of encouragement for innovative projects, enabling the development of new technologies and providing infrastructure, advice and support, which are key elements for the success of new businesses. There are 154 technology-based firm incubators (TBFs) in Brazil, and each has its own mechanism for selecting the companies to be incubated. Because of the different forms of management of incubators, the business model CERNE - Reference Center for Support for New Projects - was created by Anprotec and Sebrae, in order to standardize procedures and increase the chances of success of incubations. The objective of this study is to propose selection criteria for incubation, considering CERNE's five dimensions and aiming to support decision-making in the assessment of candidate companies in a TBF incubator. The research was conducted from the public notices of 20 TBF incubators, where 38 selection criteria were identified and classified. Managers of TBF incubators rated the importance of 26 criteria via online questionnaires. As a result, favorable ratings were obtained for 25 of them. Only one criterion differed from the others, with an unfavorable rating.

  20. Lung cancer risk prediction to select smokers for screening CT--a model based on the Italian COSMOS trial.

    Science.gov (United States)

    Maisonneuve, Patrick; Bagnardi, Vincenzo; Bellomi, Massimo; Spaggiari, Lorenzo; Pelosi, Giuseppe; Rampinelli, Cristiano; Bertolotti, Raffaella; Rotmensz, Nicole; Field, John K; Decensi, Andrea; Veronesi, Giulia

    2011-11-01

    Screening with low-dose helical computed tomography (CT) has been shown to significantly reduce lung cancer mortality, but the optimal target population and the time interval to subsequent screening are yet to be defined. We developed two models to stratify individual smokers according to their risk of developing lung cancer. We first used the number of lung cancers detected at baseline screening CT in the 5,203 asymptomatic participants of the COSMOS trial to recalibrate the Bach model, which we propose using to select smokers for screening. Next, we incorporated lung nodule characteristics and the presence of emphysema identified at baseline CT into the Bach model and proposed the resulting multivariable model to predict lung cancer risk in screened smokers after baseline CT. Age and smoking exposure were the main determinants of lung cancer risk. The recalibrated Bach model accurately predicted lung cancers detected during the first year of screening. Presence of nonsolid nodules (RR = 10.1, 95% CI = 5.57-18.5), nodule size more than 8 mm (RR = 9.89, 95% CI = 5.84-16.8), and emphysema (RR = 2.36, 95% CI = 1.59-3.49) at baseline CT were all significant predictors of subsequent lung cancers. Incorporation of these variables into the Bach model increased the predictive value of the multivariable model (c-index = 0.759, internal validation). The recalibrated Bach model seems suitable for selecting the higher-risk population for recruitment for large-scale CT screening. The Bach model incorporating CT findings at baseline screening could help define the time interval to subsequent screening in individual participants. Further studies are necessary to validate these models.

  1. Genetic search feature selection for affective modeling

    DEFF Research Database (Denmark)

    Martínez, Héctor P.; Yannakakis, Georgios N.

    2010-01-01

    Automatic feature selection is a critical step towards the generation of successful computational models of affect. This paper presents a genetic search-based feature selection method which is developed as a global-search algorithm for improving the accuracy of the affective models built. The method is tested and compared against sequential forward feature selection and random search in a dataset derived from a game survey experiment which contains bimodal input features (physiological and gameplay) and expressed pairwise preferences of affect. Results suggest that the proposed method...

  2. Bayesian Evidence and Model Selection

    CERN Document Server

    Knuth, Kevin H; Malakar, Nabin K; Mubeen, Asim M; Placek, Ben

    2014-01-01

    In this paper we review the concept of the Bayesian evidence and its application to model selection. The theory is presented along with a discussion of analytic, approximate and numerical techniques. Applications to several practical examples within the context of signal processing are discussed.

  3. Fusion strategies for selecting multiple tuning parameters for multivariate calibration and other penalty based processes: A model updating application for pharmaceutical analysis.

    Science.gov (United States)

    Tencate, Alister J; Kalivas, John H; White, Alexander J

    2016-05-19

    New multivariate calibration methods and other processes are being developed that require selection of multiple tuning parameter (penalty) values to form the final model. With one or more tuning parameters, using only one measure of model quality to select final tuning parameter values is not sufficient. Optimization of several model quality measures is challenging. Thus, three fusion ranking methods are investigated for simultaneous assessment of multiple measures of model quality for selecting tuning parameter values. One is a supervised learning fusion rule named sum of ranking differences (SRD). The other two are non-supervised learning processes based on the sum and median operations. The effect of the number of models evaluated on the three fusion rules is also evaluated using three procedures. One procedure uses all models from all possible combinations of the tuning parameters. To reduce the number of models evaluated, an iterative process (only applicable to SRD) is applied, and thresholding a model quality measure before applying the fusion rules is also used. A near infrared pharmaceutical data set requiring model updating is used to evaluate the three fusion rules. In this case, calibration of the primary conditions is for the active pharmaceutical ingredient (API) of tablets produced in a laboratory. The secondary conditions for calibration updating are for tablets produced in the full batch setting. Two model updating processes requiring selection of two unique tuning parameter values are studied. One is based on Tikhonov regularization (TR) and the other is a variation of partial least squares (PLS). The three fusion methods are shown to provide equivalent and acceptable results allowing automatic selection of the tuning parameter values. Best tuning parameter values are selected when model quality measures used with the fusion rules are for the small secondary sample set used to form the updated models. In this model updating situation, evaluation of

  4. Genomic selection models double the accuracy of predicted breeding values for bacterial cold water disease resistance compared to a traditional pedigree-based model in rainbow trout aquaculture

    Science.gov (United States)

    Previously we have shown that bacterial cold water disease (BCWD) resistance in rainbow trout can be improved using traditional family-based selection, but progress has been limited to exploiting only between-family genetic variation. Genomic selection (GS) is a new alternative enabling exploitation...

  5. Optimization model for the selection of materials using a LEED-based green building rating system in Colombia

    Energy Technology Data Exchange (ETDEWEB)

    Castro-Lacouture, Daniel [Building Construction Program, College of Architecture, Georgia Institute of Technology, 280 Ferst Drive, Atlanta, GA 30332 (United States); Sefair, Jorge A.; Florez, Laura; Medaglia, Andres L. [Centro de Optimizacion y Probabilidad Aplicada (COPA), Departamento de Ingenieria Industrial, Universidad de los Andes, Bogota D.C. (Colombia)

    2009-06-15

    Buildings have a significant and continuously increasing impact on the environment because they are responsible for a large portion of carbon emissions and consume a considerable amount of resources and energy. The green building movement emerged to mitigate these effects and to improve the building construction process. This paradigm shift should bring significant environmental, economic, financial, and social benefits. However, to realize such benefits, efforts are required not only in the selection of appropriate technologies but also in the choice of proper materials. Selecting inappropriate materials can be expensive, but more importantly, it may preclude the achievement of the desired environmental goals. In order to help decision-makers with the selection of the right materials, this study proposes a mixed integer optimization model that incorporates design and budget constraints while maximizing the number of credits reached under the Leadership in Energy and Environmental Design (LEED) rating system. To illustrate this model, this paper presents a case study of a building in Colombia in which a modified version of LEED is proposed. (author)

  6. Melody Track Selection Using Discriminative Language Model

    Science.gov (United States)

    Wu, Xiao; Li, Ming; Suo, Hongbin; Yan, Yonghong

    In this letter we focus on the task of selecting the melody track from a polyphonic MIDI file. Based on the intuition that music and language are similar in many aspects, we solve the selection problem by introducing an n-gram language model to learn the melody co-occurrence patterns in a statistical manner and determine the melodic degree of a given MIDI track. Furthermore, we propose the idea of using a background model and posterior probability criteria to make the modeling more discriminative. In the evaluation, the achieved 81.6% correct rate indicates the feasibility of our approach.
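    A toy rendering of the idea follows: each track is scored with an add-k-smoothed bigram model over pitch intervals trained on melody examples, and the best-scoring track wins. The letter's full system additionally uses a background model and posterior-probability scoring, which this sketch omits.

```python
# Bigram language-model scoring of MIDI tracks via pitch intervals (toy data).
import math
from collections import Counter

def bigrams(seq):
    return list(zip(seq, seq[1:]))

def train(melodies, k=1.0):
    counts = Counter(bg for m in melodies for bg in bigrams(m))
    total, vocab = sum(counts.values()), len(counts) + 1
    return lambda bg: math.log((counts[bg] + k) / (total + k * vocab))

def score(track, logp):
    bgs = bigrams(track)
    return sum(logp(bg) for bg in bgs) / max(len(bgs), 1)   # length-normalized

# Melodies move in small steps; accompaniment jumps by octaves and fifths.
logp = train([[2, 2, -1, 2, -2, 1], [1, -1, 2, 2, -2, -1]])
tracks = {'melody': [2, -1, 1, 2, -2], 'chords': [12, -12, 7, -7, 12]}
print(max(tracks, key=lambda name: score(tracks[name], logp)))  # -> 'melody'
```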

  7. Prediction error variance and expected response to selection, when selection is based on the best predictor – for Gaussian and threshold characters, traits following a Poisson mixed model and survival traits

    Directory of Open Access Journals (Sweden)

    Jensen Just

    2002-05-01

    Full Text Available In this paper, we consider selection based on the best predictor of animal additive genetic values in Gaussian linear mixed models, threshold models, Poisson mixed models, and log normal frailty models for survival data (including models with time-dependent covariates with associated fixed or random effects). In the different models, expressions are given (when these can be found – otherwise unbiased estimates are given) for prediction error variance, accuracy of selection and expected response to selection on the additive genetic scale and on the observed scale. The expressions given for non Gaussian traits are generalisations of the well-known formulas for Gaussian traits – and reflect, for Poisson mixed models and frailty models for survival data, the hierarchal structure of the models. In general the ratio of the additive genetic variance to the total variance in the Gaussian part of the model (heritability on the normally distributed level of the model) or a generalised version of heritability plays a central role in these formulas.
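    The "ratio of the additive genetic variance to the total variance" that this abstract says plays a central role is the narrow-sense heritability on the underlying Gaussian scale; in standard quantitative-genetics notation (assumed here, not spelled out in the record):

```latex
% Narrow-sense heritability on the underlying Gaussian scale
h^2 = \frac{\sigma^2_A}{\sigma^2_A + \sigma^2_E}
```

    where \sigma^2_A is the additive genetic variance and \sigma^2_E collects the remaining (environmental and residual) variance in the Gaussian part of the model.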

  8. Prediction error variance and expected response to selection, when selection is based on the best predictor – for Gaussian and threshold characters, traits following a Poisson mixed model and survival traits

    Science.gov (United States)

    Korsgaard, Inge Riis; Andersen, Anders Holst; Jensen, Just

    2002-01-01

    In this paper, we consider selection based on the best predictor of animal additive genetic values in Gaussian linear mixed models, threshold models, Poisson mixed models, and log normal frailty models for survival data (including models with time-dependent covariates with associated fixed or random effects). In the different models, expressions are given (when these can be found – otherwise unbiased estimates are given) for prediction error variance, accuracy of selection and expected response to selection on the additive genetic scale and on the observed scale. The expressions given for non Gaussian traits are generalisations of the well-known formulas for Gaussian traits – and reflect, for Poisson mixed models and frailty models for survival data, the hierarchal structure of the models. In general the ratio of the additive genetic variance to the total variance in the Gaussian part of the model (heritability on the normally distributed level of the model) or a generalised version of heritability plays a central role in these formulas. PMID:12081800

  9. Review and selection of unsaturated flow models

    Energy Technology Data Exchange (ETDEWEB)

    Reeves, M.; Baker, N.A.; Duguid, J.O. [INTERA, Inc., Las Vegas, NV (United States)

    1994-04-04

    Since the 1960s, ground-water flow models have been used for analysis of water resources problems. In the 1970s, emphasis began to shift to analysis of waste management problems. This shift in emphasis was largely brought about by site selection activities for geologic repositories for disposal of high-level radioactive wastes. Model development during the 1970s and well into the 1980s focused primarily on saturated ground-water flow because geologic repositories in salt, basalt, granite, shale, and tuff were envisioned to be below the water table. Selection of the unsaturated zone at Yucca Mountain, Nevada, for potential disposal of waste began to shift model development toward unsaturated flow models. Under the US Department of Energy (DOE), the Civilian Radioactive Waste Management System Management and Operating Contractor (CRWMS M&O) has the responsibility to review, evaluate, and document existing computer models; to conduct performance assessments; and to develop performance assessment models, where necessary. This document describes the CRWMS M&O approach to model review and evaluation (Chapter 2), and the requirements for unsaturated flow models which are the bases for selection from among the current models (Chapter 3). Chapter 4 identifies existing models, and their characteristics. Through a detailed examination of characteristics, Chapter 5 presents the selection of models for testing. Chapter 6 discusses the testing and verification of selected models. Chapters 7 and 8 give conclusions and make recommendations, respectively. Chapter 9 records the major references for each of the models reviewed. Appendix A, a collection of technical reviews for each model, contains a more complete list of references. Finally, Appendix B characterizes the problems used for model testing.

  10. MODEL SELECTION FOR LOG-LINEAR MODELS OF CONTINGENCY TABLES

    Institute of Scientific and Technical Information of China (English)

    ZHAO Lincheng; ZHANG Hong

    2003-01-01

    In this paper, we propose an information-theoretic-criterion-based model selection procedure for log-linear model of contingency tables under multinomial sampling, and establish the strong consistency of the method under some mild conditions. An exponential bound of miss detection probability is also obtained. The selection procedure is modified so that it can be used in practice. Simulation shows that the modified method is valid. To avoid selecting the penalty coefficient in the information criteria, an alternative selection procedure is given.

  11. Model Selection for Pion Photoproduction

    CERN Document Server

    Landay, J; Fernández-Ramírez, C; Hu, B; Molina, R

    2016-01-01

    Partial-wave analysis of meson and photon-induced reactions is needed to enable the comparison of many theoretical approaches to data. In both energy-dependent and independent parametrizations of partial waves, the selection of the model amplitude is crucial. Principles of the $S$-matrix are implemented to different degrees in different approaches, but an often overlooked aspect concerns the selection of undetermined coefficients and functional forms for fitting, leading to a minimal yet sufficient parametrization. We present an analysis of low-energy neutral pion photoproduction using the Least Absolute Shrinkage and Selection Operator (LASSO) in combination with criteria from information theory and $K$-fold cross validation. These methods are not yet widely known in the analysis of excited hadrons but will become relevant in the era of precision spectroscopy. The principle is first illustrated with synthetic data; then, its feasibility for real data is demonstrated by analyzing the latest available measurements.
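    The selection machinery named here, LASSO with the penalty chosen by K-fold cross validation, can be sketched generically; synthetic data stand in for the multipole parametrization, and scikit-learn's LassoCV is an assumption of the sketch, not the analysis code used in the paper.

```python
# LASSO with K-fold CV choosing the penalty; sparse synthetic ground truth.
import numpy as np
from sklearn.linear_model import LassoCV

rng = np.random.default_rng(7)
n, p = 80, 20
X = rng.normal(size=(n, p))                   # candidate expansion terms
beta = np.zeros(p)
beta[[0, 3]] = [1.5, -0.8]                    # only two terms truly active
y = X @ beta + rng.normal(scale=0.1, size=n)

fit = LassoCV(cv=5).fit(X, y)                 # 5-fold CV selects alpha
print(fit.alpha_, np.flatnonzero(fit.coef_))  # surviving coefficients: [0 3]
```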

  12. Entropic criterion for model selection

    Science.gov (United States)

    Tseng, Chih-Yuan

    2006-10-01

    Model or variable selection is usually achieved by ranking models in increasing order of preference. One such method applies the Kullback-Leibler distance, or relative entropy, as a selection criterion. Yet this raises two questions: why use this criterion, and are there any other criteria? Moreover, conventional approaches require a reference prior, which is usually difficult to obtain. Following the logic of inductive inference proposed by Caticha [Relative entropy and inductive inference, in: G. Erickson, Y. Zhai (Eds.), Bayesian Inference and Maximum Entropy Methods in Science and Engineering, AIP Conference Proceedings, vol. 707, 2004 (available from arXiv.org/abs/physics/0311093)], we show relative entropy to be a unique criterion, which requires no prior information and can be applied to different fields. We examine this criterion by considering a physical problem, simple fluids, and the results are promising.
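
    A minimal sketch of ranking by relative entropy, assuming hypothetical candidate distributions and an empirical distribution; it only illustrates the criterion D(p||q), not the inductive-inference derivation discussed in the record.

    import numpy as np
    from scipy.stats import entropy

    # Observed empirical distribution over discrete outcomes (hypothetical).
    p_data = np.array([0.40, 0.35, 0.15, 0.10])

    # Predicted distributions from two candidate models (hypothetical).
    candidates = {
        "model_A": np.array([0.38, 0.32, 0.18, 0.12]),
        "model_B": np.array([0.25, 0.25, 0.25, 0.25]),
    }

    # Rank models by relative entropy D(p_data || q); smaller is preferred.
    ranking = sorted(candidates, key=lambda m: entropy(p_data, candidates[m]))
    for name in ranking:
        print(name, float(entropy(p_data, candidates[name])))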

  13. A practical procedure for the selection of time-to-failure models based on the assessment of trends in maintenance data

    Energy Technology Data Exchange (ETDEWEB)

    Louit, D.M. [Komatsu Chile, Av. Americo Vespucio 0631, Quilicura, Santiago (Chile)], E-mail: rpascual@ing.puc.cl; Pascual, R. [Centro de Mineria, Pontificia Universidad Catolica de Chile, Av. Vicuna Mackenna 4860, Santiago (Chile); Jardine, A.K.S. [Department of Mechanical and Industrial Engineering, University of Toronto, 5 King's College Road, Toronto, Ont., M5S 3G8 (Canada)

    2009-10-15

    Reliability studies often rely on false premises, such as the assumption of independent and identically distributed times between failures (a renewal process). This can lead to erroneous model selection for the time to failure of a particular component or system, which can in turn lead to wrong conclusions and decisions. A strong statistical focus, the lack of a systematic approach, and sometimes inadequate theoretical background seem to have made it difficult for maintenance analysts to adopt the necessary stage of data testing before the selection of a suitable model. In this paper, a framework for selecting a model to represent the failure process of a component or system is presented, based on a review of available trend tests. The paper focuses only on single-time-variable models and is primarily directed at analysts responsible for reliability analyses in an industrial maintenance environment. The model selection framework discriminates between the use of statistical distributions to represent the time to failure ('renewal approach') and the use of stochastic point processes ('repairable systems approach') when system ageing or reliability growth may be present. An illustrative example based on failure data from a fleet of backhoes is included.
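
    One widely used trend test that such a framework could include is the Laplace test; the sketch below, with hypothetical failure times, illustrates the data-testing stage that precedes choosing between a renewal model and a point-process model. It is an illustration under stated assumptions, not the paper's exact framework.

    import math

    def laplace_trend_test(event_times, observation_end):
        # Laplace statistic for trend in a point process observed on
        # (0, observation_end]. Values near 0 support a stationary
        # (renewal-like) process; large positive/negative values indicate
        # deterioration/improvement.
        n = len(event_times)
        return (sum(event_times) / n - observation_end / 2.0) / (
            observation_end * math.sqrt(1.0 / (12.0 * n)))

    # Hypothetical cumulative failure times (hours) for one machine.
    times = [120.0, 310.0, 650.0, 820.0, 1130.0, 1290.0, 1410.0, 1520.0]
    u = laplace_trend_test(times, observation_end=1600.0)
    print("Laplace statistic:", round(u, 2))
    # |u| > 1.96 would reject the no-trend hypothesis at roughly the 5% level,
    # steering the analyst toward a repairable-systems (point process) model
    # rather than a single time-to-failure distribution.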

  14. A Selective Review of Group Selection in High Dimensional Models

    CERN Document Server

    Huang, Jian; Ma, Shuangge

    2012-01-01

    Grouping structures arise naturally in many statistical modeling problems. Several methods have been proposed for variable selection that respect grouping structure in variables. Examples include the group LASSO and several concave group selection methods. In this article, we give a selective review of group selection concerning methodological developments, theoretical properties, and computational algorithms. We pay particular attention to group selection methods involving concave penalties. We address both group selection and bi-level selection methods. We describe several applications of these methods in nonparametric additive models, semiparametric regression, seemingly unrelated regressions, genomic data analysis and genome wide association studies. We also highlight some issues that require further study.

  15. MODEL SELECTION FOR SPECTROPOLARIMETRIC INVERSIONS

    Energy Technology Data Exchange (ETDEWEB)

    Asensio Ramos, A.; Manso Sainz, R.; Martinez Gonzalez, M. J.; Socas-Navarro, H. [Instituto de Astrofisica de Canarias, E-38205, La Laguna, Tenerife (Spain); Viticchie, B. [ESA/ESTEC RSSD, Keplerlaan 1, 2200 AG Noordwijk (Netherlands); Orozco Suarez, D., E-mail: aasensio@iac.es [National Astronomical Observatory of Japan, Mitaka, Tokyo 181-8588 (Japan)

    2012-04-01

    Inferring magnetic and thermodynamic information from spectropolarimetric observations relies on the assumption of a parameterized model atmosphere whose parameters are tuned by comparison with observations. Often, the choice of the underlying atmospheric model is based on subjective reasons. In other cases, complex models are chosen based on objective reasons (for instance, the necessity to explain asymmetries in the Stokes profiles) but it is not clear what degree of complexity is needed. The lack of an objective way of comparing models has, sometimes, led to opposing views of the solar magnetism because the inferred physical scenarios are essentially different. We present the first quantitative model comparison based on the computation of the Bayesian evidence ratios for spectropolarimetric observations. Our results show that there is not a single model appropriate for all profiles simultaneously. Data with moderate signal-to-noise ratios (S/Ns) favor models without gradients along the line of sight. If the observations show clear circular and linear polarization signals above the noise level, models with gradients along the line are preferred. As a general rule, observations with large S/Ns favor more complex models. We demonstrate that the evidence ratios correlate well with simple proxies. Therefore, we propose to calculate these proxies when carrying out standard least-squares inversions to allow for model comparison in the future.

  16. Selected soil thermal conductivity models

    Directory of Open Access Journals (Sweden)

    Rerak Monika

    2017-01-01

    Full Text Available The paper presents soil thermal conductivity models collected from the literature. Thermal conductivity is a very important parameter, which allows one to assess how much heat can be transferred from underground power cables through the soil. The models are presented in table form, so that when the properties of the soil are given, it is possible to select the most accurate method of calculating its thermal conductivity. Precise determination of this parameter allows the cable line to be designed in such a way that cable overheating does not occur.

  17. Multi-dimensional model order selection

    Directory of Open Access Journals (Sweden)

    Roemer Florian

    2011-01-01

    Full Text Available Abstract Multi-dimensional model order selection (MOS) techniques achieve improved accuracy, reliability, and robustness, since they consider all dimensions jointly during the estimation of parameters. Additionally, from fundamental identifiability results of multi-dimensional decompositions, it is known that the number of main components can be larger than in matrix-based decompositions. In this article, we show how to use tensor calculus to extend matrix-based MOS schemes, and we also present our proposed multi-dimensional model order selection scheme based on the closed-form PARAFAC algorithm, which is only applicable to multi-dimensional data. In general, as shown by means of simulations, the Probability of correct Detection (PoD) of our proposed multi-dimensional MOS schemes is much better than the PoD of matrix-based schemes.
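
    The following sketch conveys the flavor of unfolding-based order selection for a 3-way array; it uses a crude largest-gap heuristic on the singular values of each mode unfolding rather than the closed-form PARAFAC-based scheme of the article, and all data are synthetic.

    import numpy as np

    def mode_unfold(tensor, mode):
        # Unfold a 3-way array along the given mode into a matrix.
        return np.moveaxis(tensor, mode, 0).reshape(tensor.shape[mode], -1)

    def order_by_eigengap(singular_values):
        # Pick the order at the largest multiplicative gap in the
        # singular-value profile (a crude stand-in for formal MOS criteria).
        sv = np.asarray(singular_values)
        return int(np.argmax(sv[:-1] / sv[1:])) + 1

    rng = np.random.default_rng(0)
    R = 3  # true number of components
    A, B, C = (rng.normal(size=(d, R)) for d in (8, 9, 10))
    tensor = np.einsum('ir,jr,kr->ijk', A, B, C)
    tensor += 0.01 * rng.normal(size=tensor.shape)  # additive noise

    # Estimate the order in every mode, then combine the per-mode estimates.
    estimates = [order_by_eigengap(np.linalg.svd(mode_unfold(tensor, m),
                                                 compute_uv=False))
                 for m in range(3)]
    print("per-mode estimates:", estimates, "-> joint estimate:", min(estimates))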

  18. Construction and realization of the knowledge base and inference engine of an IDSS model for air-conditioning cooling/heating sources selection

    Institute of Scientific and Technical Information of China (English)

    Liu Ying; Wang Ruzhu; Li Yunfei; Zhang Xiaosong

    2003-01-01

    The knowledge representation mode and inference control strategy were analyzed according to the specifics of air-conditioning cooling/heating source selection. The construction approach and working procedure for the knowledge base and inference engine were proposed, and the implementation in the C language was discussed. An intelligent decision support system (IDSS) model based on this knowledge representation and inference mechanism was developed by domain engineers. The model was verified to have a small kernel and powerful capabilities in list processing and data-driven inference, and it was successfully used in the design of a cooling/heating source system for a large office building.

  19. Environmental Sustainability and Effects on Urban Micro Region using Agent-Based Modeling of Urbanisation in Select Major Indian Cities

    Science.gov (United States)

    Aithal, B. H.

    2015-12-01

    Abstract: Urbanisation has gained momentum with globalization in India. Policy decisions to set up commercial and industrial hubs have fuelled large-scale migration, which, together with the population upsurge, has contributed to fast-growing urban regions that need to be monitored in order to design sustainable urban cities. Unplanned urbanization has resulted in the growth of peri-urban regions, referred to as urban sprawl, that are often devoid of basic amenities and infrastructure, leading to evident large-scale environmental problems. Remote sensing data acquired through space-borne sensors at regular intervals help in understanding urban dynamics, aided by Geoinformatics, which has proved very effective in mapping and monitoring for sustainable urban planning. Cellular automata (CA) is a robust approach for the spatially explicit simulation of land-use land-cover dynamics. CA uses rules, states, and conditions that are vital factors in modelling urbanisation. This communication introduces the simulation capabilities of CA combined with agent-based modelling, supported by fuzzy characteristics and weightages derived through the analytical hierarchy process (AHP). This has been done considering perceived agents such as industries, natural resources, etc. The role of each agent in the development of a particular region into an urban area has been examined, with weights quantifying the influence of each agent based on its characteristic functions. Validation yielded a high kappa coefficient, indicating the quality and allocation performance of the model and its validity for future projections. Prediction using the proposed model was performed for 2030. Furthermore, the environmental sustainability of each of these cities is explored, including water features, environment, greenhouse gas emissions, effects on human health, etc. Modeling suggests trends of various land-use class transformations with the spurt in urban expansion based on specific regions and...

  20. Supplier selection an MCDA-based approach

    CERN Document Server

    Mukherjee, Krishnendu

    2017-01-01

    The purpose of this book is to present a comprehensive review of the latest international research and development trends in modeling and optimization of the supplier selection process for different industrial sectors. It is targeted to serve two audiences: the MBA or PhD student interested in procurement, and the practitioner who wishes to gain a deeper understanding of procurement analysis with multi-criteria decision tools, to avoid upstream risks and get better supply chain visibility. The book is expected to serve as a ready reference for supplier selection criteria and various multi-criteria supplier evaluation methods for forward, reverse and mass-customized supply chains. It covers several criteria and methods for supplier selection in a systematic way, based on an extensive literature review from 1998 to 2012. It provides several case studies and some useful links which can serve as a starting point for interested researchers. In the appendix several computer code wri...

  1. Estimation of a multivariate mean under model selection uncertainty

    Directory of Open Access Journals (Sweden)

    Georges Nguefack-Tsague

    2014-05-01

    Full Text Available Model selection uncertainty occurs when a model is selected based on one data set and subsequently applied for statistical inference, because the "correct" model is not selected with certainty. When the selection and inference are based on the same dataset, additional problems arise due to the correlation of the two stages (selection and inference). In this paper model selection uncertainty is considered and model averaging is proposed. The proposal is related to the James-Stein theory of estimating more than three parameters from independent normal observations. We suggest that a model averaging scheme taking into account the selection procedure could be more appropriate than model selection alone. Some properties of this model averaging estimator are investigated; in particular, we show using Stein's results that it is a minimax estimator and can outperform Stein-type estimators.
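
    For concreteness, a minimal sketch of the positive-part James-Stein shrinkage estimator referenced above, applied to one hypothetical observation vector; the model-averaging estimator of the paper builds on this idea but is not reproduced here.

    import numpy as np

    def james_stein(x, sigma2=1.0):
        # Positive-part James-Stein shrinkage of a p-dimensional mean
        # estimate (p >= 3), assuming x ~ N(theta, sigma2 * I).
        p = x.size
        shrink = max(0.0, 1.0 - (p - 2) * sigma2 / float(np.dot(x, x)))
        return shrink * x

    rng = np.random.default_rng(1)
    theta = np.array([0.5, -0.2, 0.3, 0.1, -0.4])
    x = theta + rng.normal(size=theta.size)   # one noisy observation vector
    print("MLE:        ", np.round(x, 3))
    print("James-Stein:", np.round(james_stein(x), 3))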

  2. Experimental characterization and modeling of a nanofiber-based selective emitter for thermophotovoltaic energy conversion: The effect of optical properties

    Science.gov (United States)

    Aljarrah, M. T.; Wang, R.; Evans, E. A.; Clemons, C. B.; Young, G. W.

    2011-02-01

    Aluminum oxide nanofibers doped with erbium oxide have been synthesized by calcining polymer fibers made by the electrospinning technique using a mixture of aluminum acetate, erbium acetate and polyvinylpyrrolidone dissolved in ethanol. The resulting ceramic fibers are used to fabricate a free-standing selective emitter. The general equation of radiation transfer coupled with experimentally measured optical properties is used to model the net radiation obtained from these structures. It has been found that the index of refraction and the extinction coefficient are direct functions of the erbia doping level in the fibers. The fibers radiated in a selective manner at ˜1.53 μm with an efficiency of about 90%. For a fiber film on a substrate, the effect of film thickness, extinction coefficient and substrate emissivity on the overall emitter emissivity is also investigated in this study. Results show that the emissivity of the film increases as the thickness of the film increases up to a maximum value, after which increasing the film thickness had no effect on emissivity. Furthermore, it has been found that the substrate emissivity increases the amount of off-band radiation. This effect can be mitigated by controlling the film thickness.

  3. Relay Selection Based Double-Differential Transmission for Cooperative Networks with Multiple Carrier Frequency Offsets: Model, Analysis, and Optimization

    Science.gov (United States)

    Zhao, Kun; Zhang, Bangning; Pan, Kegang; Liu, Aijun; Guo, Daoxing

    2014-07-01

    Due to their distributed nature, cooperative networks are generally subject to multiple carrier frequency offsets (MCFOs), which make the channels time-varying and drastically degrade system performance. In this paper, to address the MCFO problem in detect-and-forward (DetF) multi-relay cooperative networks, a robust relay selection (RS) based double-differential (DD) transmission scheme, termed RSDDT, is proposed, where the best relay is selected to forward the source's double-differentially modulated signals to the destination with the DetF protocol. The proposed RSDDT scheme can achieve excellent performance over fading channels in the presence of unknown MCFOs. Assuming double-differential multiple phase-shift keying (DDMPSK) is applied, we first derive exact expressions for the outage probability and average bit error rate (BER) of the RSDDT scheme. Then, we look into the high signal-to-noise ratio (SNR) regime and present simple and informative asymptotic outage probability and average BER expressions, which reveal that the proposed scheme can achieve full diversity. Moreover, to further improve the BER performance of the RSDDT scheme, we investigate the optimum power allocation strategy among the source and the relay nodes, and simple analytical solutions are obtained. Numerical results are provided to corroborate the derived analytical expressions, and it is demonstrated that the proposed optimum power allocation strategy offers substantial BER performance improvement over the equal power allocation strategy.

  4. Model selection for pion photoproduction

    Science.gov (United States)

    Landay, J.; Döring, M.; Fernández-Ramírez, C.; Hu, B.; Molina, R.

    2017-01-01

    Partial-wave analysis of meson and photon-induced reactions is needed to enable the comparison of many theoretical approaches to data. In both energy-dependent and independent parametrizations of partial waves, the selection of the model amplitude is crucial. Principles of the S matrix are implemented to different degrees in different approaches, but an often overlooked aspect concerns the selection of undetermined coefficients and functional forms for fitting, leading to a minimal yet sufficient parametrization. We present an analysis of low-energy neutral pion photoproduction using the least absolute shrinkage and selection operator (LASSO) in combination with criteria from information theory and K-fold cross validation. These methods are not yet widely known in the analysis of excited hadrons but will become relevant in the era of precision spectroscopy. The principle is first illustrated with synthetic data; then, its feasibility for real data is demonstrated by analyzing the latest available measurements of differential cross sections (dσ/dΩ), photon-beam asymmetries (Σ), and target asymmetry differential cross sections (dσT/dΩ ≡ T dσ/dΩ) in the low-energy regime.
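
    A hedged sketch of the selection machinery named in the abstract (LASSO penalties tuned by information criteria and by K-fold cross validation), using scikit-learn on synthetic regression data rather than photoproduction amplitudes.

    import numpy as np
    from sklearn.linear_model import LassoCV, LassoLarsIC

    rng = np.random.default_rng(2)
    n, p = 80, 12
    X = rng.normal(size=(n, p))
    beta = np.zeros(p)
    beta[:3] = [1.5, -2.0, 1.0]              # sparse ground truth
    y = X @ beta + 0.5 * rng.normal(size=n)

    # Penalty strength chosen by information criteria ...
    for crit in ("aic", "bic"):
        fit = LassoLarsIC(criterion=crit).fit(X, y)
        print(crit, "selects", int(np.sum(fit.coef_ != 0)), "coefficients")

    # ... and by K-fold cross validation.
    cv_fit = LassoCV(cv=5).fit(X, y)
    print("5-fold CV selects", int(np.sum(cv_fit.coef_ != 0)),
          "coefficients at alpha =", round(float(cv_fit.alpha_), 4))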

  5. Selection of Polynomial Chaos Bases via Bayesian Model Uncertainty Methods with Applications to Sparse Approximation of PDEs with Stochastic Inputs

    Energy Technology Data Exchange (ETDEWEB)

    Karagiannis, Georgios; Lin, Guang

    2014-02-15

    Generalized polynomial chaos (gPC) expansions allow the representation of the solution of a stochastic system as a series of polynomial terms. The number of gPC terms increases dramatically with the dimension of the random input variables. When the number of gPC terms is larger than that of the available samples, a scenario that often occurs when evaluations of the system are expensive, the evaluation of the gPC expansion can be inaccurate due to overfitting. We propose a fully Bayesian approach that allows for global recovery of the stochastic solution, in both the spatial and random domains, by coupling Bayesian model uncertainty and regularization regression methods. It allows the evaluation of the PC coefficients on a grid of spatial points via (1) the Bayesian model average or (2) the median probability model, and their construction as functions on the spatial domain via spline interpolation. The former accounts for model uncertainty and provides Bayes-optimal predictions, while the latter additionally provides a sparse representation of the solution by evaluating the expansion on a subset of dominating gPC bases. Moreover, the method quantifies the importance of the gPC bases through inclusion probabilities. We design an MCMC sampler that evaluates all the unknown quantities without the need for ad hoc techniques. The proposed method is suitable for, but not restricted to, problems whose stochastic solution is sparse at the stochastic level with respect to the gPC bases while the deterministic solver involved is expensive. We demonstrate the good performance of the proposed method and make comparisons with others on 1D, 14D, and 40D (in random space) elliptic stochastic partial differential equations.

  6. An Integrated MCDM Model for Conveyor Equipment Evaluation and Selection in an FMC Based on a Fuzzy AHP and Fuzzy ARAS in the Presence of Vagueness.

    Science.gov (United States)

    Nguyen, Huu-Tho; Dawal, Siti Zawiah Md; Nukman, Yusoff; Rifai, Achmad P; Aoyama, Hideki

    2016-01-01

    The conveyor system plays a vital role in improving the performance of flexible manufacturing cells (FMCs). The conveyor selection problem involves the evaluation of a set of potential alternatives based on qualitative and quantitative criteria. This paper presents an integrated multi-criteria decision making (MCDM) model combining a fuzzy AHP (analytic hierarchy process) and fuzzy ARAS (additive ratio assessment) for conveyor evaluation and selection. In this model, linguistic terms represented as triangular fuzzy numbers are used to quantify experts' uncertain assessments of alternatives with respect to the criteria. The fuzzy set is then integrated into the AHP to determine the weights of the criteria. Finally, a fuzzy ARAS is used to calculate the weights of the alternatives. To demonstrate the effectiveness of the proposed model, a case study of a practical example is presented, and the results obtained demonstrate practical potential for the implementation of FMCs.
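
    A simplified numeric sketch of the two-stage idea: triangular fuzzy judgments are defuzzified into criteria weights (a crude stand-in for the full fuzzy AHP), and alternatives are then ranked with crisp ARAS utilities. All ratings and weights are hypothetical.

    import numpy as np

    def centroid(tfn):
        # Defuzzify a triangular fuzzy number (l, m, u) by its centroid.
        l, m, u = tfn
        return (l + m + u) / 3.0

    # Hypothetical linguistic judgments of criteria importance as
    # triangular fuzzy numbers (e.g. speed, flexibility, cost score).
    fuzzy_weights = [(0.6, 0.8, 1.0), (0.4, 0.6, 0.8), (0.2, 0.4, 0.6)]
    w = np.array([centroid(t) for t in fuzzy_weights])
    w /= w.sum()                          # crisp, normalized criteria weights

    # Rows: optimal alternative first (ARAS convention), then three conveyors.
    ratings = np.array([[9.0, 9.0, 9.0],
                        [7.0, 8.0, 5.0],
                        [6.0, 6.0, 8.0],
                        [8.0, 5.0, 6.0]])

    norm = ratings / ratings.sum(axis=0)  # sum normalization (benefit criteria)
    S = (norm * w).sum(axis=1)            # overall weighted scores
    K = S[1:] / S[0]                      # utility relative to the ideal row
    print("utility degrees:", np.round(K, 3),
          "-> best: conveyor", int(np.argmax(K)) + 1)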

  7. In Silico Modeling-based Identification of Glucose Transporter 4 (GLUT4)-selective Inhibitors for Cancer Therapy.

    Science.gov (United States)

    Mishra, Rama K; Wei, Changyong; Hresko, Richard C; Bajpai, Richa; Heitmeier, Monique; Matulis, Shannon M; Nooka, Ajay K; Rosen, Steven T; Hruz, Paul W; Schiltz, Gary E; Shanmugam, Mala

    2015-06-05

    Tumor cells rely on elevated glucose consumption and metabolism for survival and proliferation. Glucose transporters mediating glucose entry are key proximal rate-limiting checkpoints. Unlike GLUT1 that is highly expressed in cancer and more ubiquitously expressed in normal tissues, GLUT4 exhibits more limited normal expression profiles. We have previously determined that insulin-responsive GLUT4 is constitutively localized on the plasma membrane of myeloma cells. Consequently, suppression of GLUT4 or inhibition of glucose transport with the HIV protease inhibitor ritonavir elicited growth arrest and/or apoptosis in multiple myeloma. GLUT4 inhibition also caused sensitization to metformin in multiple myeloma and chronic lymphocytic leukemia and a number of solid tumors suggesting the broader therapeutic utility of targeting GLUT4. This study sought to identify selective inhibitors of GLUT4 to develop a more potent cancer chemotherapeutic with fewer potential off-target effects. Recently, the crystal structure of GLUT1 in an inward open conformation was reported. Although this is an important achievement, a full understanding of the structural biology of facilitative glucose transport remains elusive. To date, there is no three-dimensional structure for GLUT4. We have generated a homology model for GLUT4 that we utilized to screen for drug-like compounds from a library of 18 million compounds. Despite 68% homology between GLUT1 and GLUT4, our virtual screen identified two potent compounds that were shown to target GLUT4 preferentially over GLUT1 and block glucose transport. Our results strongly bolster the utility of developing GLUT4-selective inhibitors as anti-cancer therapeutics.

  8. Assessing Model Selection Uncertainty Using a Bootstrap Approach: An Update

    NARCIS (Netherlands)

    Lubke, Gitta H.; Campbell, Ian; McArtor, Dan; Miller, Patrick; Luningham, Justin; van den Berg, Stéphanie Martine

    2017-01-01

    Model comparisons in the behavioral sciences often aim at selecting the model that best describes the structure in the population. Model selection is usually based on fit indexes such as Akaike’s information criterion (AIC) or Bayesian information criterion (BIC), and inference is done based on the...

  11. Adverse selection model regarding tobacco consumption

    Directory of Open Access Journals (Sweden)

    Dumitru MARIN

    2006-01-01

    Full Text Available The impact of introducing a tax on tobacco consumption can be studied through an adverse selection model. The objective of the model presented in the following is to characterize the optimal contractual relationship between the governmental authorities and two types of employees, smokers and non-smokers, taking into account that the consumers' decision to smoke or not represents an element of risk and uncertainty. Two scenarios are run using the General Algebraic Modeling System (GAMS) software: one without taxes on tobacco consumption and another with taxes on tobacco consumption, based on the adverse selection model described previously. The results of the two scenarios are compared at the end of the paper: the wage earnings levels and the social welfare in the case of a smoking agent and in the case of a non-smoking agent.

  12. New Linear Partitioning Models based on Experimental Water – Supercritical CO2 Partitioning Data of Selected Organic Compounds

    Energy Technology Data Exchange (ETDEWEB)

    Burant, Aniela S.; Thompson, Christopher J.; Lowry, Gregory; Karamalidis, Athanasios

    2016-05-17

    Partitioning coefficients of organic compounds between water and supercritical CO2 (sc-CO2) are necessary to assess the risk of migration of these chemicals from subsurface CO2 storage sites. Despite the large number of potential organic contaminants, the current data set of published water-sc-CO2 partitioning coefficients is very limited. Here, the partitioning coefficients of thiophene, pyrrole, and anisole were measured in situ over a range of temperatures and pressures using a novel pressurized batch-reactor system with dual spectroscopic detectors: a near-infrared spectrometer for measuring the organic analyte in the CO2 phase, and a UV detector for quantifying the analyte in the aqueous phase. Our measured partitioning coefficients followed expected trends based on volatility and aqueous solubility. The partitioning coefficients and literature data were then used to update a published poly-parameter linear free energy relationship and to develop five new linear free energy relationships for predicting water-sc-CO2 partitioning coefficients. Four of the models targeted a single class of organic compounds. Unlike models that utilize Abraham solvation parameters, the new relationships use the vapor pressure and aqueous solubility of the organic compound at 25 °C and the CO2 density to predict partitioning coefficients over a range of temperature and pressure conditions. The compound-class models provide better estimates of partitioning behavior for compounds in that class than the model built for the entire dataset.
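
    To make the regression idea concrete, a small least-squares sketch that fits a linear free energy relationship from vapor pressure, aqueous solubility, and CO2 density to log partitioning coefficients; the numbers are invented placeholders, not the paper's data or coefficients.

    import numpy as np

    # Hypothetical training data: columns are log10 vapor pressure (Pa),
    # log10 aqueous solubility (mol/L), and CO2 density (g/mL); the target
    # is the log10 water-scCO2 partitioning coefficient.
    X_raw = np.array([[2.1, -1.0, 0.65],
                      [1.4, -0.5, 0.75],
                      [0.8, -2.2, 0.65],
                      [2.6, -1.8, 0.85],
                      [1.0, -0.8, 0.85]])
    log_k = np.array([1.9, 1.1, 1.5, 2.6, 0.9])

    A = np.column_stack([np.ones(len(log_k)), X_raw])  # intercept + predictors
    coef, *_ = np.linalg.lstsq(A, log_k, rcond=None)
    pred = A @ coef
    print("fitted coefficients:", np.round(coef, 3))
    print("residual std:", round(float(np.std(log_k - pred)), 3))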

  13. Portfolio Selection Model with Derivative Securities

    Institute of Scientific and Technical Information of China (English)

    王春峰; 杨建林; 蒋祥林

    2003-01-01

    Traditional portfolio theory assumes that the portfolio return rate follows a normal distribution. However, this assumption does not hold when derivative assets are incorporated. In this paper a portfolio selection model is developed based on a utility function which can capture asymmetries in random variable distributions. Other realistic conditions are also considered, such as liabilities and integer decision variables. Since the resulting model is a complex mixed-integer nonlinear programming problem, a simulated annealing algorithm is applied for its solution. A numerical example is given and sensitivity analysis is conducted for the model.

  14. Aerosol model selection and uncertainty modelling by adaptive MCMC technique

    Directory of Open Access Journals (Sweden)

    M. Laine

    2008-12-01

    Full Text Available We present a new technique for the model selection problem in atmospheric remote sensing. The technique is based on Monte Carlo sampling and allows model selection, calculation of model posterior probabilities, and model averaging in a Bayesian way.

    The algorithm developed here is called the Adaptive Automatic Reversible Jump Markov chain Monte Carlo method (AARJ). It uses the Markov chain Monte Carlo (MCMC) technique and its extension, Reversible Jump MCMC. Both techniques have been used extensively in statistical parameter estimation problems in a wide range of applications since the late 1990s. The novel feature of our algorithm is that it is fully automatic and easy to use.

    We show how the AARJ algorithm can be implemented and used for model selection and averaging, and to directly incorporate the model uncertainty. We demonstrate the technique by applying it to the statistical inversion problem of gas profile retrieval for the GOMOS instrument on board the ENVISAT satellite. Four simple models are used simultaneously to describe the dependence of the aerosol cross-sections on wavelength. During the AARJ estimation all the models are used, and we obtain a probability distribution characterizing how probable each model is. By using model averaging, the uncertainty related to selecting the aerosol model can be taken into account in assessing the uncertainty of the estimates.

  15. Emissions and behaviour of selected persistent organic pollutants (POPs) in the Northern environment. Part I: Development of the cycling model and emission and environmental data bases

    Energy Technology Data Exchange (ETDEWEB)

    Pacyna, J.M.; Wania, F. [Norsk Inst. for Luftforskning, Kjeller (Norway); Brorstroem-Lunden, E. [Swedish Environmental Research Inst., Goeteborg (Sweden); Paasivirta, J. [Jyvaeskylae Univ. (Finland); Runge, E.

    1996-03-01

    The overall goal of the reported project was to improve knowledge of the inputs, transport, and migration of selected POPs (Persistent Organic Pollutants) in Northern environments. A comprehensive steady-state mass balance model, the so-called B-POP model, was developed within the project. The model is based on the fugacity approach and comprises a number of compartments which are assumed to have homogeneous environmental conditions and chemical concentrations. Mass balance equations for all compartments were formulated and solved for the chemical fugacity in each compartment. The B-POP model was then tested by describing the migration of PCBs (polychlorinated biphenyls). It was concluded that the model succeeds in outlining the general picture of PCB behaviour in the Baltic Sea by identifying the major environmental pathways and reservoirs within the system. 36 refs., 6 figs., 10 tabs.
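
    A minimal two-compartment illustration of the fugacity approach: steady-state mass balances that are linear in the fugacities are assembled and solved. The D-values and emission rates below are invented, and the real B-POP model has many more compartments and processes.

    import numpy as np

    # D-values (mol/(Pa h)) couple emissions, inter-compartment transfer,
    # and degradation; solving the linear system gives the fugacities.
    D12, D21 = 50.0, 30.0        # compartment 1 -> 2 and 2 -> 1 transfer
    Dr1, Dr2 = 400.0, 120.0      # degradation in compartments 1 and 2
    E = np.array([10.0, 2.0])    # emission rates into each compartment (mol/h)

    # Steady state: E1 + D21*f2 = (D12 + Dr1)*f1, and symmetrically for f2.
    A = np.array([[D12 + Dr1, -D21],
                  [-D12, D21 + Dr2]])
    f = np.linalg.solve(A, E)
    print("fugacities (Pa):", f)
    # Sanity check: total degradation flux equals total emissions.
    print("degradation fluxes (mol/h):", Dr1 * f[0], Dr2 * f[1])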

  16. Modeling the Time-Course of Responses for the Border Ownership Selectivity Based on the Integration of Feedforward Signals and Visual Cortical Interactions

    Science.gov (United States)

    Wagatsuma, Nobuhiko; Sakai, Ko

    2017-01-01

    ...modulations for time-courses were induced by selective enhancement of early-level features due to interactions between V1 and PP. Our proposed model suggests fundamental roles of surround suppression/facilitation based on feedforward inputs, as well as of the interactions between early and parietal visual areas, with respect to the ambiguity dependence of the neural dynamics in intermediate-level vision. PMID:28163688

  17. Petri net-based approach to modeling and analysis of selected aspects of the molecular regulation of angiogenesis

    Science.gov (United States)

    Formanowicz, Dorota; Radom, Marcin; Zawierucha, Piotr; Formanowicz, Piotr

    2017-01-01

    The functioning of both normal and pathological tissues depends on an adequate supply of oxygen through the blood vessels. A process called angiogenesis, in which new endothelial cells and smooth muscle cells interact with each other, forming new blood vessels either from existing ones or from a primary vascular plexus, is particularly important and interesting due to the new therapeutic possibilities it offers. This is a multi-step and very complex process, so an accurate understanding of the underlying mechanisms is a significant task, especially in recent years, with the constantly increasing amount of new data that must be taken into account. A systems approach is necessary for these studies because it is not sufficient to analyze the properties of the building blocks separately; an analysis of the whole network of interactions is essential. This approach is based on building a mathematical model of the system, with the model expressed in the formal language of a mathematical theory. Recently, the theory of Petri nets was shown to be especially promising for the modeling and analysis of biological phenomena. This analysis, based mainly on t-invariants, has led to a particularly important finding that a direct link (close connection) exists between transforming growth factor β1 (TGF-β1), endothelial nitric oxide synthase (eNOS), nitric oxide (NO), and hypoxia-inducible factor 1, molecules that play crucial roles during angiogenesis. We have shown that TGF-β1 may participate in the inhibition of angiogenesis through the upregulation of eNOS expression, which is responsible for catalyzing NO production. The results obtained in previous studies concerning the effects of NO on angiogenesis have not been conclusive, and therefore our study may contribute to a better understanding of this phenomenon. PMID:28253310

  18. Tracking Models for Optioned Portfolio Selection

    Science.gov (United States)

    Liang, Jianfeng

    In this paper we study a target tracking problem for portfolio selection involving options. In particular, the portfolio in question contains a stock index and some European-style options on the index. A refined tracking-error-variance methodology is adopted to formulate this problem as a multi-stage optimization model. We derive the optimal solutions based on stochastic programming and optimality conditions. Attention is paid to the structure of the optimal payoff function, which is shown to possess rich properties.

  19. Model selection for radiochromic film dosimetry

    CERN Document Server

    Méndez, Ignasi

    2015-01-01

    The purpose of this study was to find the most accurate model for radiochromic film dosimetry by comparing different channel independent perturbation models. A model selection approach based on (algorithmic) information theory was followed, and the results were validated using gamma-index analysis on a set of benchmark test cases. Several questions were addressed: (a) whether incorporating the information of the non-irradiated film, by scanning prior to irradiation, improves the results; (b) whether lateral corrections are necessary when using multichannel models; (c) whether multichannel dosimetry produces better results than single-channel dosimetry; (d) which multichannel perturbation model provides more accurate film doses. It was found that scanning prior to irradiation and applying lateral corrections improved the accuracy of the results. For some perturbation models, increasing the number of color channels did not result in more accurate film doses. Employing Truncated Normal perturbations was found to...

  20. Molecular modeling of the human P2Y14 receptor: A template for structure-based design of selective agonist ligands.

    Science.gov (United States)

    Trujillo, Kevin; Paoletta, Silvia; Kiselev, Evgeny; Jacobson, Kenneth A

    2015-07-15

    The P2Y14 receptor (P2Y14R) is a Gi protein-coupled receptor that is activated by the uracil nucleotides UDP and UDP-glucose. The P2Y14R structure has yet to be solved through X-ray crystallography, but the recent agonist-bound crystal structure of the P2Y12R provides a potentially suitable template for its homology modeling for rational structure-based design of selective and high-affinity ligands. In this study, we applied ligand docking and molecular dynamics refinement to a P2Y14R homology model to qualitatively explain structure-activity relationships of previously published synthetic nucleotide analogues and to probe the quality of P2Y14R homology modeling as a template for structure-based design. The P2Y14R model supports the hypothesis of a conserved binding mode of nucleotides in the three P2Y12-like receptors involving functionally conserved residues. We predict phosphate group interactions with R253(6.55), K277(7.35), Y256(6.58) and Q260(6.62), nucleobase (anti-conformation) π-π stacking with Y102(3.33), and the role of F191(5.42) as a means for selectivity among P2Y12-like receptors. The glucose moiety of UDP-glucose docked in a secondary subpocket of the P2Y14R homology model. Thus, P2Y14R homology modeling may allow detailed prediction of interactions to facilitate the design of high-affinity, selective agonists as pharmacological tools to study the P2Y14R.

  1. Selective Maintenance Model Considering Time Uncertainty

    OpenAIRE

    Le Chen; Zhengping Shu; Yuan Li; Xuezhi Lv

    2012-01-01

    This study proposes a selective maintenance model for a weapon system during the mission interval. First, it gives relevant definitions and the operational process of the material support system. Then, it introduces current research on selective maintenance modeling. Finally, it establishes a numerical model for selecting corrective and preventive maintenance tasks, considering the time uncertainty brought by the unpredictability of maintenance procedures, the indetermination of downtime for spares and the difference of skil...

  2. A Talent Selection Decision Model Based on Intuitionistic Fuzzy Sets

    Institute of Scientific and Technical Information of China (English)

    LUO Darong; QIN Huani

    2012-01-01

    Talent selection is an essential part of personnel work such as recruitment, promotion and advancement. This paper reviews and analyzes the deficiencies of current talent selection practice. Since the information involved in talent selection is uncertain and fuzzy, intuitionistic fuzzy sets are adopted to represent the attribute decision information. Combining the principle of person-job fit with preferences over the competences required by the position, a talent selection decision model based on intuitionistic fuzzy sets with combined subjective and objective weighting is proposed. Finally, an example of recruitment for a teaching position at Wuyi University illustrates the effectiveness of the model.
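
    A small sketch of the scoring step under the stated approach: each candidate is evaluated by intuitionistic fuzzy pairs, reduced with the standard score function and combined with given weights. The evaluations and weights are hypothetical, and the paper's combined-weighting scheme is more elaborate.

    import numpy as np

    def score(mu, nu):
        # Score function of an intuitionistic fuzzy value (membership mu,
        # non-membership nu); higher is better.
        return mu - nu

    # Hypothetical evaluations of three candidates on three job criteria,
    # each entry an intuitionistic fuzzy pair (mu, nu).
    evaluations = {
        "candidate_1": [(0.7, 0.2), (0.6, 0.3), (0.8, 0.1)],
        "candidate_2": [(0.6, 0.3), (0.7, 0.2), (0.6, 0.2)],
        "candidate_3": [(0.8, 0.1), (0.5, 0.4), (0.7, 0.2)],
    }
    weights = np.array([0.5, 0.3, 0.2])   # combined criteria weights

    totals = {name: float(np.dot(weights, [score(mu, nu) for mu, nu in vals]))
              for name, vals in evaluations.items()}
    best = max(totals, key=totals.get)
    print(totals, "-> select", best)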

  3. Development of Physics-Based Numerical Models for Uncertainty Quantification of Selective Laser Melting Processes - 2015 Annual Progress Report

    Energy Technology Data Exchange (ETDEWEB)

    Anderson, A. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Delplanque, Jean-Pierre [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2015-10-08

    The primary goal of the proposed research is to characterize the influence of process parameter variability inherent to Selective Laser Melting (SLM) on components manufactured with the SLM technique for space flight systems and their performance.

  4. The Ouroboros Model, selected facets.

    Science.gov (United States)

    Thomsen, Knud

    2011-01-01

    The Ouroboros Model features a biologically inspired cognitive architecture. At its core lies a self-referential recursive process with alternating phases of data acquisition and evaluation. Memory entries are organized in schemata. The activation at a time of part of a schema biases the whole structure and, in particular, missing features, thus triggering expectations. An iterative recursive monitor process termed 'consumption analysis' is then checking how well such expectations fit with successive activations. Mismatches between anticipations based on previous experience and actual current data are highlighted and used for controlling the allocation of attention. A measure for the goodness of fit provides feedback as (self-) monitoring signal. The basic algorithm works for goal directed movements and memory search as well as during abstract reasoning. It is sketched how the Ouroboros Model can shed light on characteristics of human behavior including attention, emotions, priming, masking, learning, sleep and consciousness.

  5. Skewed factor models using selection mechanisms

    KAUST Repository

    Kim, Hyoung-Moon

    2015-12-21

    Traditional factor models explicitly or implicitly assume that the factors follow a multivariate normal distribution; that is, only moments up to order two are involved. However, it may happen in real data problems that the first two moments cannot explain the factors. Based on this motivation, here we devise three new skewed factor models, the skew-normal, the skew-t, and the generalized skew-normal factor models, depending on a selection mechanism on the factors. The ECME algorithms are adopted to estimate related parameters for statistical inference. Monte Carlo simulations validate our new models and we demonstrate the need for skewed factor models using the classic open/closed book exam scores dataset.

  6. Novel rat Alzheimer's disease models based on AAV-mediated gene transfer to selectively increase hippocampal Aβ levels

    Directory of Open Access Journals (Sweden)

    Dicker Bridget L

    2007-06-01

    Full Text Available Abstract Background Alzheimer's disease (AD) is characterized by a decline in cognitive function and accumulation of amyloid-β peptide (Aβ) in extracellular plaques. Mutations in amyloid precursor protein (APP) and presenilins alter APP metabolism, resulting in accumulation of Aβ42, a peptide essential for the formation of amyloid deposits and proposed to initiate the cascade leading to AD. However, the role of Aβ40, the more prevalent Aβ peptide secreted by cells and a major component of cerebral Aβ deposits, is less clear. In this study, virally mediated gene transfer was used to selectively increase hippocampal levels of human Aβ42 and Aβ40 in adult Wistar rats, allowing examination of the contribution of each to the cognitive deficits and pathology seen in AD. Results Adeno-associated viral (AAV) vectors encoding BRI-Aβ cDNAs were generated, resulting in high-level hippocampal expression and secretion of the specific encoded Aβ peptide. As a comparison, the effect of AAV-mediated overexpression of APPsw was also examined. Animals were tested for development of learning and memory deficits (open field, Morris water maze, passive avoidance, novel object recognition) three months after infusion of AAV. A range of impairments was found, with the most pronounced deficits observed in animals co-injected with both AAV-BRI-Aβ40 and AAV-BRI-Aβ42. Brain tissue was analyzed by ELISA and immunohistochemistry to quantify levels of detergent-soluble and detergent-insoluble Aβ peptides. BRI-Aβ42 and the combination of BRI-Aβ40+42 overexpression resulted in elevated levels of detergent-insoluble Aβ. No significant increase in detergent-insoluble Aβ was seen in the rats expressing APPsw or BRI-Aβ40. No pathological features were noted in any rats, except the AAV-BRI-Aβ42 rats, which showed focal, amorphous, Thioflavin-negative Aβ42 deposits. Conclusion The results show that AAV-mediated gene transfer is a valuable tool to model aspects of AD pathology in...

  7. A scenario based project portfolio selection

    Directory of Open Access Journals (Sweden)

    Kamran Pourahmadi

    2015-09-01

    Full Text Available One of the primary assumptions in many project portfolio selection models is the availability of all parameters. However, in real-world cases, many parameters are uncertain and the exact values are unknown in advance. This paper presents a scenario-based mathematical model for project portfolio selection when parameters are under uncertainty. The problem considers two objective functions: the first maximizes the net present value, while the second minimizes the positive deviations from the allocation of resources, i.e., it seeks project resource leveling. The resulting model is formulated as a mixed-integer program and the problem is analyzed under different conditions.

  8. A new kinetic model based on the remote control mechanism to fit experimental data in the selective oxidation of propene into acrolein on biphasic catalysts

    Energy Technology Data Exchange (ETDEWEB)

    Abdeldayem, H.M.; Ruiz, P.; Delmon, B. [Unite de Catalyse et Chimie des Materiaux Divises, Universite Catholique de Louvain, Louvain-La-Neuve (Belgium); Thyrion, F.C. [Unite des Procedes Faculte des Sciences Appliquees, Universite Catholique de Louvain, Louvain-La-Neuve (Belgium)

    1998-12-31

    A new kinetic model for a more accurate and detailed fitting of the experimental data is proposed. The model is based on the remote control mechanism (RCM). The RCM assumes that some oxides (called 'donors') are able to activate molecular oxygen, transforming it to very active mobile species (spillover oxygen, Oso). Oso migrates onto the surface of the other oxide (called the 'acceptor'), where it creates and/or regenerates the active sites during the reaction. The model contains two terms, one considering the creation of selective sites and the other the catalytic reaction at each site. The model has been tested in the selective oxidation of propene into acrolein (T = 380, 400, 420 °C; oxygen and propene partial pressures between 38 and 152 Torr). Catalysts were prepared as pure MoO3 (acceptor) and its mechanical mixtures with α-Sb2O4 (donor) in different proportions. The presence of α-Sb2O4 changes the reaction order, the activation energy of the reaction and the number of active sites of MoO3 produced by oxygen spillover. These changes are consistent with a modification in the degree of irrigation of the surface by oxygen spillover. The fitting of the model to experimental results shows that the number of sites created by Oso increases with the amount of α-Sb2O4. (orig.)

  9. Behavioral optimization models for multicriteria portfolio selection

    Directory of Open Access Journals (Sweden)

    Mehlawat Mukesh Kumar

    2013-01-01

    Full Text Available In this paper, the behavioral construct of suitability is used to develop a multicriteria decision-making framework for portfolio selection. To achieve this purpose, we rely on multiple methodologies. The analytic hierarchy process technique is used to model the suitability considerations, with a view to obtaining a suitability performance score for each asset. A fuzzy multiple criteria decision-making method is used to obtain the financial quality score of each asset based upon the investor's ratings on the financial criteria. Two optimization models are developed for optimal asset allocation, considering financial and suitability criteria simultaneously. An empirical study is conducted on randomly selected assets from the National Stock Exchange, Mumbai, India, to demonstrate the effectiveness of the proposed methodology.

  10. Model selection and comparison for independents sinusoids

    DEFF Research Database (Denmark)

    Nielsen, Jesper Kjær; Christensen, Mads Græsbøll; Jensen, Søren Holdt

    2014-01-01

    In the signal processing literature, many methods have been proposed for estimating the number of sinusoidal basis functions from a noisy data set. The most popular method is the asymptotic MAP criterion, which is sometimes also referred to as the BIC. In this paper, we extend and improve this method by considering the problem in a full Bayesian framework instead of the approximate formulation on which the asymptotic MAP criterion is based. This leads to a new model selection and comparison method, the lp-BIC, whose computational complexity is of the same order as the asymptotic MAP criterion. Through simulations, we demonstrate that the lp-BIC outperforms the asymptotic MAP criterion and other state of the art methods in terms of model selection, de-noising and prediction performance. The simulation code is available online.
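
    To illustrate the baseline that the lp-BIC improves upon, the sketch below applies a plain BIC rule to choose the number of sinusoids in a synthetic signal, assuming the candidate frequencies are known; the lp-BIC itself involves a fuller Bayesian treatment not reproduced here.

    import numpy as np

    def residual_after_fit(y, t, freqs):
        # Least-squares fit of an intercept plus sine/cosine pairs at the
        # given frequencies; returns the fit residual.
        cols = [np.ones_like(t)]
        for f in freqs:
            cols += [np.cos(2 * np.pi * f * t), np.sin(2 * np.pi * f * t)]
        A = np.column_stack(cols)
        coef, *_ = np.linalg.lstsq(A, y, rcond=None)
        return y - A @ coef

    rng = np.random.default_rng(3)
    n = 256
    t = np.arange(n) / n
    y = (np.sin(2 * np.pi * 12 * t) + 0.7 * np.sin(2 * np.pi * 31 * t)
         + 0.5 * rng.normal(size=n))

    candidate_freqs = [12.0, 31.0, 47.0, 63.0]
    best = None
    for k in range(len(candidate_freqs) + 1):
        resid = residual_after_fit(y, t, candidate_freqs[:k])
        k_params = 1 + 2 * k                  # intercept + amplitude pairs
        bic = n * np.log(np.mean(resid ** 2)) + k_params * np.log(n)
        print("k =", k, "BIC =", round(float(bic), 1))
        if best is None or bic < best[1]:
            best = (k, bic)
    print("selected number of sinusoids:", best[0])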

  11. Ant colony optimization as a descriptor selection in QSPR modeling: Estimation of the λmax of anthraquinones-based dyes

    Directory of Open Access Journals (Sweden)

    Morteza Atabati

    2016-09-01

    Full Text Available Quantitative structure–property relationship (QSPR) studies based on ant colony optimization (ACO) were carried out for the prediction of λmax of 9,10-anthraquinone derivatives. ACO is a meta-heuristic algorithm derived from the observation of real ants and proposed for feature selection. After optimization of the 3D geometry of the structures by semi-empirical quantum-chemical calculation at the AM1 level, different descriptors were calculated with the HyperChem and Dragon software packages (1514 descriptors). A major problem of QSPR is the high dimensionality of the descriptor space; therefore, descriptor selection is the most important step. In this paper, an ACO algorithm was used to select the best descriptors. The selected descriptors were then applied for model development using multiple linear regression. The average absolute relative deviation and correlation coefficient for the calibration set were 3.3% and 0.9591, respectively, while those for the prediction set were 5.0% and 0.9526, respectively. The results show that the applied procedure is suitable for prediction of λmax of 9,10-anthraquinone derivatives.
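
    A toy pheromone-based selection loop in the spirit of ACO feature selection, scoring descriptor subsets by the R^2 of a multiple linear regression on synthetic data; it is a simplification, not the study's implementation.

    import numpy as np

    rng = np.random.default_rng(4)
    n_samples, n_desc, subset_size = 60, 20, 3
    X = rng.normal(size=(n_samples, n_desc))
    y = (X[:, 0] - 2.0 * X[:, 3] + 0.5 * X[:, 7]
         + 0.3 * rng.normal(size=n_samples))

    def r_squared(subset):
        # R^2 of a linear regression restricted to the chosen descriptors.
        A = np.column_stack([np.ones(n_samples), X[:, subset]])
        coef, *_ = np.linalg.lstsq(A, y, rcond=None)
        resid = y - A @ coef
        return 1.0 - resid.var() / y.var()

    tau = np.ones(n_desc)                    # pheromone level per descriptor
    best_r2, best_subset = -np.inf, None
    for iteration in range(30):
        for ant in range(10):
            # Each ant samples a subset with pheromone-proportional odds.
            subset = rng.choice(n_desc, size=subset_size, replace=False,
                                p=tau / tau.sum())
            r2 = r_squared(subset)
            if r2 > best_r2:
                best_r2, best_subset = r2, subset
        tau *= 0.9                           # pheromone evaporation
        tau[best_subset] += best_r2          # reinforce the best trail found

    print("selected descriptors:", sorted(best_subset.tolist()),
          "R^2 =", round(float(best_r2), 3))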

  12. Bayesian Constrained-Model Selection for Factor Analytic Modeling

    OpenAIRE

    Peeters, Carel F.W.

    2016-01-01

    My dissertation revolves around Bayesian approaches towards constrained statistical inference in the factor analysis (FA) model. Two interconnected types of restricted-model selection are considered. These types have a natural connection to selection problems in the exploratory FA (EFA) and confirmatory FA (CFA) model and are termed Type I and Type II model selection. Type I constrained-model selection is taken to mean the determination of the appropriate dimensionality of a model. This type ...

  13. On Model Selection Criteria in Multimodel Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Ye, Ming; Meyer, Philip D.; Neuman, Shlomo P.

    2008-03-21

    Hydrologic systems are open and complex, rendering them prone to multiple conceptualizations and mathematical descriptions. There has been a growing tendency to postulate several alternative hydrologic models for a site and use model selection criteria to (a) rank these models, (b) eliminate some of them and/or (c) weigh and average predictions and statistics generated by multiple models. This has led to some debate among hydrogeologists about the merits and demerits of common model selection (also known as model discrimination or information) criteria such as AIC [Akaike, 1974], AICc [Hurvich and Tsai, 1989], BIC [Schwarz, 1978] and KIC [Kashyap, 1982], and some lack of clarity about the proper interpretation and mathematical representation of each criterion. In particular, whereas we [Neuman, 2003; Ye et al., 2004, 2005; Meyer et al., 2007] have based our approach to multimodel hydrologic ranking and inference on the Bayesian criterion KIC (which reduces asymptotically to BIC), Poeter and Anderson [2005] and Poeter and Hill [2007] have voiced a preference for the information-theoretic criterion AICc (which reduces asymptotically to AIC). Their preference stems in part from a perception that KIC and BIC require a "true" or "quasi-true" model to be in the set of alternatives while AIC and AICc are free of such an unreasonable requirement. We examine the model selection literature to find that (a) all published rigorous derivations of AIC and AICc require that the (true) model having generated the observational data be in the set of candidate models; (b) though BIC and KIC were originally derived by assuming that such a model is in the set, BIC has been rederived by Cavanaugh and Neath [1999] without the need for such an assumption; (c) KIC reduces to BIC as the number of observations becomes large relative to the number of adjustable model parameters, implying that it likewise does not require the existence of a true model in the set of alternatives; (d) if a true...
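
    For reference, the three formula-based criteria named above, computed for hypothetical fits; KIC additionally involves prior and Fisher-information terms and is omitted here.

    import math

    def aic(loglik, k):
        return -2.0 * loglik + 2.0 * k

    def aicc(loglik, k, n):
        # Small-sample corrected AIC (requires n > k + 1).
        return aic(loglik, k) + 2.0 * k * (k + 1) / (n - k - 1)

    def bic(loglik, k, n):
        return -2.0 * loglik + k * math.log(n)

    # Hypothetical fits: name -> (maximized log-likelihood, parameter count).
    fits = {"model_1": (-120.3, 3), "model_2": (-117.9, 6), "model_3": (-117.2, 9)}
    n = 40
    for name, (ll, k) in fits.items():
        print(name,
              "AIC", round(aic(ll, k), 1),
              "AICc", round(aicc(ll, k, n), 1),
              "BIC", round(bic(ll, k, n), 1))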

  14. Selection and impedance based model of a lithium ion battery technology for integration with virtual power plant

    DEFF Research Database (Denmark)

    Swierczynski, Maciej Jozef; Stroe, Daniel Ioan; Stan, Ana-Irina;

    2013-01-01

    The penetration of wind power into the power system has been increasing in recent years. Therefore, a lot of concerns related to the reliable operation of the power system have been addressed. An attractive solution to minimize the limitations faced by wind power grid integration is to integrate lithium-ion batteries into virtual power plants; thus, the power system stability and the energy quality can be increased. The selection of the best lithium-ion battery candidate for integration with wind power plants is a key aspect for the economic feasibility of the virtual power plant...

  15. A Selection Model to Logistic Centers Based on TOPSIS and MCGP Methods: The Case of Airline Industry

    Directory of Open Access Journals (Sweden)

    Kou-Huang Chen

    2014-01-01

    Full Text Available The location selection of a logistics center is a crucial decision involving cost and benefit analysis in the airline industry. However, the problem is difficult to solve because location problems involve many conflicting and multiple objectives. To solve the problem, this paper integrates the fuzzy technique for order preference by similarity to an ideal solution (TOPSIS) and multi-choice goal programming (MCGP) to obtain an appropriate logistics center from many alternative locations for the airline industry. The proposed method allows the decision makers (DMs) to set multiple aspiration levels for the decision criteria. A numerical example of the application is also presented.
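
    A compact sketch of the crisp TOPSIS step (the fuzzy extension and the MCGP stage are omitted), with invented location data for cost, capacity, and accessibility criteria.

    import numpy as np

    def topsis(matrix, weights, benefit):
        # Rank alternatives with classical (crisp) TOPSIS.
        # matrix: alternatives x criteria; benefit: True where larger is better.
        norm = matrix / np.sqrt((matrix ** 2).sum(axis=0))  # vector normalization
        v = norm * weights
        ideal = np.where(benefit, v.max(axis=0), v.min(axis=0))
        anti = np.where(benefit, v.min(axis=0), v.max(axis=0))
        d_pos = np.sqrt(((v - ideal) ** 2).sum(axis=1))
        d_neg = np.sqrt(((v - anti) ** 2).sum(axis=1))
        return d_neg / (d_pos + d_neg)   # closeness coefficient, higher is better

    # Hypothetical candidate locations scored on cost (lower is better),
    # capacity and accessibility (higher is better).
    m = np.array([[450.0, 80.0, 7.0],
                  [380.0, 65.0, 9.0],
                  [520.0, 90.0, 6.0]])
    cc = topsis(m, weights=np.array([0.5, 0.3, 0.2]),
                benefit=np.array([False, True, True]))
    print("closeness:", np.round(cc, 3),
          "-> best location:", int(np.argmax(cc)) + 1)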

  16. Selected Logistics Models and Techniques.

    Science.gov (United States)

    1984-09-01

    ACCESS PROCEDURE: On-Line System (OLS), UNINET. RCA maintains proprietary control of this model, and the model is available only through a lease arrangement. • SPONSOR: ASD/ACCC

  17. Studies of benzamide- and thiol-based histone deacetylase inhibitors in models of oxidative-stress-induced neuronal death: identification of some HDAC3-selective inhibitors.

    Science.gov (United States)

    Chen, Yufeng; He, Rong; Chen, Yihua; D'Annibale, Melissa A; Langley, Brett; Kozikowski, Alan P

    2009-05-01

    We compare three structurally different classes of histone deacetylase (HDAC) inhibitors that contain benzamide, hydroxamate, or thiol groups as the zinc binding group (ZBG) for their ability to protect cortical neurons in culture from cell death induced by oxidative stress. This study reveals that none of the benzamide-based HDAC inhibitors (HDACIs) provides any neuroprotection whatsoever, in distinct contrast to HDACIs that contain other ZBGs. Some of the sulfur-containing HDACIs, namely the thiols, thioesters, and disulfides present modest neuroprotective activity but show toxicity at higher concentrations. Taken together, these data demonstrate that the HDAC6-selective mercaptoacetamides that were reported previously provide the best protection in the homocysteic acid model of oxidative stress, thus further supporting their study in animal models of neurodegenerative diseases.

  18. Rationalizing the selection of oral lipid based drug delivery systems by an in vitro dynamic lipolysis model for improved oral bioavailability of poorly water soluble drugs.

    Science.gov (United States)

    Dahan, Arik; Hoffman, Amnon

    2008-07-02

    As a consequence of modern drug discovery techniques, there has been a consistent increase in the number of new pharmacologically active lipophilic compounds that are poorly water soluble. A great challenge facing the pharmaceutical scientist is making these molecules into orally administered medications with sufficient bioavailability. One of the most popular approaches to improve the oral bioavailability of these molecules is the utilization of a lipid based drug delivery system. Unfortunately, current development strategies in the area of lipid based delivery systems are mostly empirical. Hence, there is a need for a simplified in vitro method to guide the selection of a suitable lipidic vehicle composition and to rationalize the delivery system design. To address this need, a dynamic in vitro lipolysis model, which provides a very good simulation of the in vivo lipid digestion process, has been developed over the past few years. This model has been extensively used for in vitro assessment of different lipid based delivery systems, leading to enhanced understanding of the suitability of different lipids and surfactants as a delivery system for a given poorly water soluble drug candidate. A key goal in the development of the dynamic in vitro lipolysis model has been correlating the in vitro data of various drug-lipidic delivery system combinations to the resultant in vivo drug profile. In this paper, we discuss and review the need for this model, its underlying theory, practice and limitations, and the available data accumulated in the literature. Overall, the dynamic in vitro lipolysis model seems to provide highly useful initial guidelines in the development process of oral lipid based drug delivery systems for poorly water soluble drugs, and it predicts phenomena that occur in the pre-enterocyte stages of the intestinal absorption cascade.

  19. Information criteria for astrophysical model selection

    CERN Document Server

    Liddle, A R

    2007-01-01

    Model selection is the problem of distinguishing competing models, perhaps featuring different numbers of parameters. The statistics literature contains two distinct sets of tools, those based on information theory such as the Akaike Information Criterion (AIC), and those on Bayesian inference such as the Bayesian evidence and Bayesian Information Criterion (BIC). The Deviance Information Criterion combines ideas from both heritages; it is readily computed from Monte Carlo posterior samples and, unlike the AIC and BIC, allows for parameter degeneracy. I describe the properties of the information criteria, and as an example compute them from WMAP3 data for several cosmological models. I find that at present the information theory and Bayesian approaches give significantly different conclusions from that data.
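
    For readers comparing records in this collection, the two criteria named here have simple closed forms, AIC = -2 ln L + 2k and BIC = -2 ln L + k ln n. A minimal sketch follows; the log-likelihood values are invented for illustration, not the WMAP3 numbers.

    ```python
    import numpy as np

    def aic(log_likelihood, k):
        # Akaike Information Criterion: -2 ln L + 2k
        return -2.0 * log_likelihood + 2.0 * k

    def bic(log_likelihood, k, n):
        # Bayesian Information Criterion: -2 ln L + k ln n
        return -2.0 * log_likelihood + k * np.log(n)

    # Hypothetical fits of two cosmological models to the same dataset of n points:
    # model A has 2 parameters, model B has 4.
    n = 500
    print(aic(-1240.3, 2), aic(-1238.9, 4))        # lower is better
    print(bic(-1240.3, 2, n), bic(-1238.9, 4, n))  # BIC penalizes the extra parameters harder
    ```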

  20. Multiple receptor-ligand based pharmacophore modeling and molecular docking to screen the selective inhibitors of matrix metalloproteinase-9 from natural products

    Science.gov (United States)

    Gao, Qi; Wang, Yijun; Hou, Jiaying; Yao, Qizheng; Zhang, Ji

    2017-07-01

    Matrix metalloproteinase-9 (MMP-9) is an attractive target for cancer therapy. In this study, the pharmacophore model of MMP-9 inhibitors is built based on the experimental binding structures of multiple receptor-ligand complexes. It is found that the pharmacophore model consists of six chemical features, including two hydrogen bond acceptors, one hydrogen bond donor, one ring aromatic regions, and two hydrophobic (HY) features. Among them, the two HY features are especially important because they can enter the S1' pocket of MMP-9 which determines the selectivity of MMP-9 inhibitors. The reliability of pharmacophore model is validated based on the two different decoy sets and relevant experimental data. The virtual screening, combining pharmacophore model with molecular docking, is performed to identify the selective MMP-9 inhibitors from a database of natural products. The four novel MMP-9 inhibitors of natural products, NP-000686, NP-001752, NP-014331, and NP-015905, are found; one of them, NP-000686, is used to perform the experiment of in vitro bioassay inhibiting MMP-9, and the IC50 value was estimated to be only 13.4 µM, showing the strongly inhibitory activity of NP-000686 against MMP-9, which suggests that our screening results should be reliable. The binding modes of screened inhibitors with MMP-9 active sites were discussed. In addition, the ADMET properties and physicochemical properties of screened four compounds were assessed. The found MMP-9 inhibitors of natural products could serve as the lead compounds for designing the new MMP-9 inhibitors by carrying out structural modifications in the future.

  1. Acetylcholine-Based Entropy in Response Selection: A Model of How Striatal Interneurons Modulate Exploration, Exploitation, and Response Variability in Decision Making

    Directory of Open Access Journals (Sweden)

    Andrea eStocco

    2012-02-01

    The basal ganglia play a fundamental role in decision making. Their contribution is typically modeled within a reinforcement learning framework, with the basal ganglia learning to select the options associated with highest value and their dopamine inputs conveying performance feedback. This basic framework, however, does not account for the role of cholinergic interneurons in the striatum, and does not easily explain certain dynamic aspects of decision-making and skill acquisition like the generation of exploratory actions. This paper describes BABE (Basal ganglia Acetylcholine-Based Entropy), a model of the acetylcholine system in the striatum that provides a unified explanation for these phenomena. According to this model, cholinergic interneurons in the striatum control the level of variability in behavior by modulating the number of possible responses that are considered by the basal ganglia, as well as the level of competition between them. This mechanism provides a natural way to account for the role of basal ganglia in generating behavioral variability during the acquisition of certain cognitive skills, as well as for modulating exploration and exploitation in decision making. Compared to a typical reinforcement learning model, BABE showed a greater modulation of response variability in the face of changes in the reward contingencies, allowing for faster learning (and re-learning) of option values. Finally, the paper discusses the possible applications of the model to other domains.
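
    As a toy illustration of the mechanism described, restricting the number of candidate responses and sharpening the competition between them lowers the entropy of a softmax choice distribution. The sketch below is not the BABE model itself; the option values and the entropy measure are illustrative assumptions.

    ```python
    import numpy as np

    def response_entropy(values, n_candidates, competition):
        """Entropy (bits) of a softmax policy restricted to the top-n candidate responses;
        `competition` plays the role of an inverse temperature."""
        v = np.sort(np.asarray(values, dtype=float))[::-1][:n_candidates]
        p = np.exp(competition * v)
        p /= p.sum()
        return -np.sum(p * np.log2(p))

    values = [0.9, 0.7, 0.6, 0.2, 0.1]          # learned option values
    print(response_entropy(values, 5, 1.0))     # many candidates, weak competition -> high entropy
    print(response_entropy(values, 2, 5.0))     # few candidates, strong competition -> low entropy
    ```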

  2. A mixture model with a reference-based automatic selection of components for disease classification from protein and/or gene expression levels

    Directory of Open Access Journals (Sweden)

    Kopriva Ivica

    2011-12-01

    Background: Bioinformatics data analysis often uses a linear mixture model representing samples as an additive mixture of components. Properly constrained blind matrix factorization methods extract those components using mixture samples only. However, automatic selection of the extracted components to be retained for classification analysis remains an open issue. Results: The method proposed here is applied to well-studied protein and genomic datasets of ovarian, prostate and colon cancers to extract components for disease prediction. It achieves average sensitivities of 96.2% (sd = 2.7%), 97.6% (sd = 2.8%) and 90.8% (sd = 5.5%) and average specificities of 93.6% (sd = 4.1%), 99% (sd = 2.2%) and 79.4% (sd = 9.8%) in 100 independent two-fold cross-validations. Conclusions: We propose an additive mixture model of a sample for feature extraction using, in principle, sparseness constrained factorization on a sample-by-sample basis. As opposed to that, existing methods factorize the complete dataset simultaneously. The sample model is composed of a reference sample representing the control and/or case (disease) groups and a test sample. Each sample is decomposed into two or more components that are selected automatically (without using label information) as control specific, case specific and not differentially expressed (neutral). The number of components is determined by cross-validation. Automatic assignment of features (m/z ratios or genes) to a particular component is based on thresholds estimated from each sample directly. Due to the locality of decomposition, the strength of the expression of each feature across the samples can vary, yet features will still be allocated to the related disease and/or control specific component. Since label information is not used in the selection process, case and control specific components can be used for classification; that is not the case with standard factorization methods. Moreover, the component selected by the proposed method

  3. Research on PDM Selection Model Based on QFD

    Institute of Scientific and Technical Information of China (English)

    王建正; 王莹; 魏双庆

    2014-01-01

    Using a PDM system can improve product development and design ability and promote the overall competitiveness of an enterprise, but the implementation of PDM is complicated system engineering. Selecting appropriate PDM software is a prerequisite and key to successful PDM implementation in an enterprise. On the basis of analyzing the PDM selection steps, this paper puts forward a PDM selection model based on QFD. Starting from the enterprise's demand for PDM, it determines the PDM requirement items, PDM system characteristics, PDM vendors, and the relationships between them. The PDM requirement weights, PDM system characteristic weights and supplier competitiveness indices are determined respectively by fuzzy AHP, the independent point collocation method and a five-level Likert scale, which together constitute the QFD-based PDM selection model. A PDM selection example from company X is provided to validate the model.

  4. Cloud service selection model based on trust and personality preferences

    Institute of Scientific and Technical Information of China (English)

    杜瑞忠; 田俊峰; 张焕国

    2013-01-01

    A selection model for the cloud computing environment was proposed based on trustworthiness and personality preference, in order to select a trusted service that satisfies the requester's personality preferences from among many services with similar or identical functions but different quality of service. In this model, a hierarchical trust management architecture based on agents and trust domains serves as the platform, and a fuzzy clustering method based on personality preferences is applied. A service selection algorithm was proposed to determine the classification closest to the requester's personality preferences. A trust evaluation mechanism was introduced, combining direct trust and domain-recommended trust, so that within the determined classification the requester can select a service resource that is secure and trusted and satisfies the personality preferences. When the transaction is completed, the service satisfaction is evaluated and the trust degree is updated. Simulation results show that the model can effectively improve service requesters' satisfaction and has a certain resilience against the fraudulent behavior of malicious entities.

  5. Mass Spectrometric-Based Selected Reaction Monitoring of Protein Phosphorylation during Symbiotic Signaling in the Model Legume, Medicago truncatula.

    Directory of Open Access Journals (Sweden)

    Lori K Van Ness

    Unlike the major cereal crops corn, rice, and wheat, leguminous plants such as soybean and alfalfa can meet their nitrogen requirement via endosymbiotic associations with soil bacteria. The establishment of this symbiosis is a complex process playing out over several weeks and is facilitated by the exchange of chemical signals between these partners from different kingdoms. Several plant components that are involved in this signaling pathway have been identified, but there is still a great deal of uncertainty regarding the early events in symbiotic signaling, i.e., within the first minutes and hours after the rhizobial signals (Nod factors) are perceived at the plant plasma membrane. The presence of several protein kinases in this pathway suggests a mechanism of signal transduction via posttranslational modification of proteins in which phosphate is added to the hydroxyl groups of serine, threonine and tyrosine amino acid side chains. To monitor the phosphorylation dynamics and complement our previous untargeted 'discovery' approach, we report here the results of experiments using a targeted mass spectrometric technique, Selected Reaction Monitoring (SRM), that enables the quantification of phosphorylation targets with great sensitivity and precision. Using this approach, we confirm a rapid change in the level of phosphorylation in 4 phosphosites of at least 4 plant phosphoproteins that have not been previously characterized. This detailed analysis reveals aspects of the symbiotic signaling mechanism in legumes that, in the long term, will inform efforts to engineer this nitrogen-fixing symbiosis in important non-legume crops such as rice, wheat and corn.

  6. Mass Spectrometric-Based Selected Reaction Monitoring of Protein Phosphorylation during Symbiotic Signaling in the Model Legume, Medicago truncatula.

    Science.gov (United States)

    Van Ness, Lori K; Jayaraman, Dhileepkumar; Maeda, Junko; Barrett-Wilt, Gregory A; Sussman, Michael R; Ané, Jean-Michel

    2016-01-01

    Unlike the major cereal crops corn, rice, and wheat, leguminous plants such as soybean and alfalfa can meet their nitrogen requirement via endosymbiotic associations with soil bacteria. The establishment of this symbiosis is a complex process playing out over several weeks and is facilitated by the exchange of chemical signals between these partners from different kingdoms. Several plant components that are involved in this signaling pathway have been identified, but there is still a great deal of uncertainty regarding the early events in symbiotic signaling, i.e., within the first minutes and hours after the rhizobial signals (Nod factors) are perceived at the plant plasma membrane. The presence of several protein kinases in this pathway suggests a mechanism of signal transduction via posttranslational modification of proteins in which phosphate is added to the hydroxyl groups of serine, threonine and tyrosine amino acid side chains. To monitor the phosphorylation dynamics and complement our previous untargeted 'discovery' approach, we report here the results of experiments using a targeted mass spectrometric technique, Selected Reaction Monitoring (SRM) that enables the quantification of phosphorylation targets with great sensitivity and precision. Using this approach, we confirm a rapid change in the level of phosphorylation in 4 phosphosites of at least 4 plant phosphoproteins that have not been previously characterized. This detailed analysis reveals aspects of the symbiotic signaling mechanism in legumes that, in the long term, will inform efforts to engineer this nitrogen-fixing symbiosis in important non-legume crops such as rice, wheat and corn.

  7. Fundamental Vocabulary Selection Based on Word Familiarity

    Science.gov (United States)

    Sato, Hiroshi; Kasahara, Kaname; Kanasugi, Tomoko; Amano, Shigeaki

    This paper proposes a new method for selecting fundamental vocabulary. We are presently constructing the Fundamental Vocabulary Knowledge-base of Japanese that contains integrated information on syntax, semantics and pragmatics, for the purposes of advanced natural language processing. This database mainly consists of a lexicon and a treebank: Lexeed (a Japanese Semantic Lexicon) and the Hinoki Treebank. Fundamental vocabulary selection is the first step in the construction of Lexeed. The vocabulary should include sufficient words to describe general concepts for self-expandability, and should not be prohibitively large to construct and maintain. There are two conventional methods for selecting fundamental vocabulary. The first is intuition-based selection by experts. This is the traditional method for making dictionaries. A weak point of this method is that the selection strongly depends on personal intuition. The second is corpus-based selection. This method is superior in objectivity to intuition-based selection; however, it is difficult to compile a sufficiently balanced corpus. We propose a psychologically-motivated selection method that adopts word familiarity as the selection criterion. Word familiarity is a rating that represents the familiarity of a word as a real number ranging from 1 (least familiar) to 7 (most familiar). We determined the word familiarity ratings statistically based on psychological experiments with 32 subjects. We selected about 30,000 words as the fundamental vocabulary, based on a minimum word familiarity threshold of 5. We also evaluated the vocabulary by comparing its word coverage with conventional intuition-based and corpus-based selection over dictionary definition sentences and novels, and demonstrated the superior coverage of our lexicon. Based on this, we conclude that the proposed method is superior to conventional methods for fundamental vocabulary selection.

  8. Improving randomness characterization through Bayesian model selection

    CERN Document Server

    Díaz-H. R., Rafael; Angulo Martínez, Alí M.; U'Ren, Alfred B.; Hirsch, Jorge G.; Marsili, Matteo; Pérez Castillo, Isaac

    2016-01-01

    Nowadays random number generation plays an essential role in technology with important applications in areas ranging from cryptography, which lies at the core of current communication protocols, to Monte Carlo methods, and other probabilistic algorithms. In this context, a crucial scientific endeavour is to develop effective methods that allow the characterization of random number generators. However, commonly employed methods either lack formality (e.g. the NIST test suite), or are inapplicable in principle (e.g. the characterization derived from the Algorithmic Theory of Information (ATI)). In this letter we present a novel method based on Bayesian model selection, which is both rigorous and effective, for characterizing randomness in a bit sequence. We derive analytic expressions for a model's likelihood which is then used to compute its posterior probability distribution. Our method proves to be more rigorous than NIST's suite and the Borel-Normality criterion and its implementation is straightforward. We...
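
    The letter derives analytic likelihoods for its own model class, which are not reproduced in this abstract. As a loosely related toy example of Bayesian model selection on a bit sequence, one can compare a fair-coin model against a biased-coin model with a uniform Beta(1,1) prior, whose marginal likelihood has a closed form via the Beta function.

    ```python
    from math import lgamma, exp, log

    def log_beta(a, b):
        # log of the Beta function B(a, b)
        return lgamma(a) + lgamma(b) - lgamma(a + b)

    def posterior_fair(k, n):
        """P(fair | data) for k ones in n bits, with equal prior odds.
        M0: p = 0.5 exactly.  M1: p ~ Beta(1, 1), marginal likelihood B(k+1, n-k+1)."""
        log_m0 = n * log(0.5)
        log_m1 = log_beta(k + 1, n - k + 1) - log_beta(1, 1)
        return 1.0 / (1.0 + exp(log_m1 - log_m0))

    print(posterior_fair(k=52, n=100))   # near-balanced sequence favours the fair model
    print(posterior_fair(k=80, n=100))   # strongly biased sequence favours the biased model
    ```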

  9. Quasar Selection Based on Photometric Variability

    CERN Document Server

    MacLeod, C L; Ivezic, Z; Kochanek, C S; Gibson, R; Meisner, A; Kozlowski, S; Sesar, B; Becker, A C; de Vries, W

    2010-01-01

    We develop a method for separating quasars from other variable point sources using SDSS Stripe 82 light curve data for ~10,000 variable objects. To statistically describe quasar variability, we use a damped random walk model parametrized by a damping time scale, tau, and an asymptotic amplitude (structure function), SF_inf. With the aid of an SDSS spectroscopically confirmed quasar sample, we demonstrate that variability selection in typical extragalactic fields with low stellar density can deliver complete samples with reasonable purity (or efficiency, E). Compared to a selection method based solely on the slope of the structure function, the inclusion of the tau information boosts E from 60% to 75% while maintaining a highly complete sample (98%) even in the absence of color information. For a completeness of C=90%, E is boosted from 80% to 85%. Conversely, C improves from 90% to 97% while maintaining E=80% when imposing a lower limit on tau. With the aid of color selection, the purity can be further boosted...
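
    The damped random walk model implies the structure function SF(dt) = SF_inf * (1 - exp(-dt/tau))^(1/2). A minimal sketch of a selection cut in the (tau, SF_inf) plane follows; the thresholds are invented placeholders, not the paper's boundaries.

    ```python
    import numpy as np

    def drw_structure_function(dt, tau, sf_inf):
        """Damped-random-walk structure function: SF(dt) = SF_inf * sqrt(1 - exp(-dt/tau))."""
        return sf_inf * np.sqrt(1.0 - np.exp(-np.asarray(dt, dtype=float) / tau))

    def select_quasar_candidates(tau, sf_inf, tau_min=100.0, sf_min=0.05):
        """Flag objects whose fitted DRW parameters fall in a quasar-like region.
        The thresholds here are illustrative, not the paper's values."""
        return (np.asarray(tau) > tau_min) & (np.asarray(sf_inf) > sf_min)

    taus    = np.array([300.0, 20.0, 800.0])   # damping time scales in days, fitted per object
    sf_infs = np.array([0.2, 0.3, 0.02])       # asymptotic amplitudes in mag
    print(select_quasar_candidates(taus, sf_infs))  # [ True False False]
    ```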

  10. Model selection for amplitude analysis

    CERN Document Server

    Guegan, Baptiste; Stevens, Justin; Williams, Mike

    2015-01-01

    Model complexity in amplitude analyses is often a priori under-constrained since the underlying theory permits a large number of amplitudes to contribute to most physical processes. The use of an overly complex model results in reduced predictive power and worse resolution on unknown parameters of interest. Therefore, it is common to reduce the complexity by removing from consideration some subset of the allowed amplitudes. This paper studies a data-driven method for limiting model complexity through regularization during regression in the context of a multivariate (Dalitz-plot) analysis. The regularization technique applied greatly improves the performance. A method is also proposed for obtaining the significance of a resonance in a multivariate amplitude analysis.

  11. The Informed Guide to Climate Data Sets, a web-based community resource to facilitate the discussion and selection of appropriate datasets for Earth System Model Evaluation

    Science.gov (United States)

    Schneider, D. P.; Deser, C.; Shea, D.

    2011-12-01

    When comparing CMIP5 model output to observations, researchers will be faced with a bewildering array of choices. Considering just a few of the different products available for commonly analyzed climate variables, for reanalysis there are at least half a dozen different products, for sea ice concentrations there are NASA Team or Bootstrap versions, for sea surface temperatures there are HadISST or NOAA ERSST data, and for precipitation there are CMAP and GPCP data sets. While many data centers exist to host data, there is little centralized guidance on discovering and choosing appropriate climate data sets for the task at hand. Common strategies like googling "sea ice data" yield results that at best are substantially incomplete. Anecdotal evidence suggests that individual researchers often base their selections on non-scientific criteria: either the data are in a convenient format that the user is comfortable with, a co-worker has the data handy on her local server, or a mentor discourages or recommends the use of particular products for legacy or other non-objective reasons. Sometimes these casual recommendations are sound, but they are not accessible to the broader community or adequately captured in the peer-reviewed literature. These issues are addressed by the establishment of a web-based Informed Guide with the specific goals of (1) evaluating and assessing selected climate datasets and (2) providing expert user guidance on the strengths and limitations of selected climate datasets. The Informed Guide is based at NCAR's Climate and Global Dynamics Division, Climate Analysis Section and is funded by NSF. The Informed Guide is an interactive website that welcomes participation from the broad scientific community and is scalable to grow as participation increases. In this presentation, we will present the website, discuss how you can participate, and address the broader issues about its role in the evaluation of CMIP5 and other climate model simulations. A link to the

  12. Polystyrene Based Silver Selective Electrodes

    Directory of Open Access Journals (Sweden)

    Shiva Agarwal

    2002-06-01

    Silver(I) selective sensors have been fabricated from polystyrene matrix membranes containing the macrocycle Me6(14)diene·2HClO4 as ionophore. The best performance was exhibited by the membrane having a macrocycle : polystyrene composition in the ratio 15:1. This membrane worked well over a wide concentration range, 5.0×10⁻⁶–1.0×10⁻¹ M of Ag+, with a near-Nernstian slope of 53.0 ± 1.0 mV per decade of Ag+ activity. The response time of the sensor is <15 s and the membrane can be used over a period of four months with good reproducibility. The proposed electrode works well in a wide pH range, 2.5-9.0, and demonstrates good discriminating power over a number of mono-, di-, and trivalent cations. The sensor has also been used as an indicator electrode in the potentiometric titration of silver(I) ions against NaCl solution. The sensor can also be used in non-aqueous media, with no significant change in the value of the slope or working concentration range, for the estimation of Ag+ in solutions having up to 25% (v/v) non-aqueous fraction.

  13. A DFT and Semiempirical Model-Based Study of Opioid Receptor Affinity and Selectivity in a Group of Molecules with a Morphine Structural Core

    Directory of Open Access Journals (Sweden)

    Tamara Bruna-Larenas

    2012-01-01

    We report the results of a search for model-based relationships between mu, delta, and kappa opioid receptor binding affinity and molecular structure for a group of molecules having in common a morphine structural core. The wave functions and local reactivity indices were obtained at the ZINDO/1 and B3LYP/6-31 levels of theory for comparison. New developments in the expression for the drug-receptor interaction energy allowed several local atomic reactivity indices to be included, such as local electronic chemical potential, local hardness, and local electrophilicity. These indices, together with a new proposal for the ordering of the independent variables, were incorporated in the statistical study. We found and discussed several statistically significant relationships for mu, delta, and kappa opioid receptor binding affinity at both levels of theory. Some of the new local reactivity indices incorporated in the theory appear in several equations for the first time in the history of model-based equations. Interaction pharmacophores were generated for mu, delta, and kappa receptors. We discuss possible differences regulating binding and selectivity in opioid receptor subtypes. In contrast to purely statistical studies, this study is able to provide microscopic insight into the mechanisms involved in the binding process.

  14. The Optimal Selection for Restricted Linear Models with Average Estimator

    Directory of Open Access Journals (Sweden)

    Qichang Xie

    2014-01-01

    The essential task of risk investment is to select an optimal tracking portfolio among various portfolios. Statistically, this process can be achieved by choosing an optimal restricted linear model. This paper develops a statistical procedure to do this, based on selecting appropriate weights for averaging approximately restricted models. The method of weighted average least squares is adopted to estimate the approximately restricted models under a dependent error setting. The optimal weights are selected by minimizing a k-class generalized information criterion (k-GIC), which is an estimate of the average squared error from the model average fit. This model selection procedure is shown to be asymptotically optimal in the sense of obtaining the lowest possible average squared error. Monte Carlo simulations illustrate that the suggested method has comparable efficiency to some alternative model selection techniques.
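
    The k-GIC machinery cannot be reproduced from the abstract alone; as a simplified stand-in, the sketch below chooses the weight of a convex combination of two restricted least-squares fits by minimizing hold-out squared error on simulated data. The data-generating process and the grid search are illustrative assumptions.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n = 200
    X = rng.normal(size=(n, 3))
    y = X @ np.array([1.0, 0.5, 0.0]) + rng.normal(scale=0.5, size=n)

    # Two candidate restricted models: use only the first column, or the first two.
    train, test = slice(0, 150), slice(150, None)
    fits = []
    for cols in ([0], [0, 1]):
        beta, *_ = np.linalg.lstsq(X[train][:, cols], y[train], rcond=None)
        fits.append(X[test][:, cols] @ beta)

    # Pick the averaging weight on a grid by minimizing hold-out squared error.
    grid = np.linspace(0.0, 1.0, 101)
    errors = [np.mean((w * fits[0] + (1 - w) * fits[1] - y[test]) ** 2) for w in grid]
    print("best weight on model 1:", grid[int(np.argmin(errors))])
    ```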

  15. RESEARCH ON NEGOTIATION-BASED PARTNER SELECTION APPROACH

    Institute of Scientific and Technical Information of China (English)

    2002-01-01

    The key problem in the construction of virtual enterprises (VEs) is how to select appropriate partners. A negotiation-based approach is proposed to support partner selection in the construction of VEs. The negotiation model is discussed from three main aspects, i.e., the negotiation protocol, the negotiation goal and the negotiation decision-making model, and a generic mathematical description of the negotiation model is formally presented. Finally, a simple example is used to validate the approach.

  16. Quantile hydrologic model selection and model structure deficiency assessment: 1. Theory

    NARCIS (Netherlands)

    Pande, S.

    2013-01-01

    A theory for quantile based hydrologic model selection and model structure deficiency assessment is presented. The paper demonstrates that the degree to which a model selection problem is constrained by the model structure (measured by the Lagrange multipliers of the constraints) quantifies

  17. Quantile hydrologic model selection and model structure deficiency assessment: 1. Theory

    NARCIS (Netherlands)

    Pande, S.

    2013-01-01

    A theory for quantile based hydrologic model selection and model structure deficiency assessment is presented. The paper demonstrates that the degree to which a model selection problem is constrained by the model structure (measured by the Lagrange multipliers of the constraints) quantifies structur

  18. Fuzzy modelling for selecting headgear types.

    Science.gov (United States)

    Akçam, M Okan; Takada, Kenji

    2002-02-01

    The purpose of this study was to develop a computer-assisted inference model for selecting appropriate types of headgear appliance for orthodontic patients and to investigate its clinical versatility as a decision-making aid for inexperienced clinicians. Fuzzy rule bases were created for degrees of overjet, overbite, and mandibular plane angle variables, respectively, according to subjective criteria based on the clinical experience and knowledge of the authors. The rules were then transformed into membership functions and geometric mean aggregation was performed to develop the inference model. The resultant fuzzy logic was then tested on 85 cases in which the patients had been diagnosed as requiring headgear appliances. Eight experienced orthodontists judged each of the cases and decided if they 'agreed', 'accepted', or 'disagreed' with the recommendations of the computer system. Intra-examiner agreement was investigated using repeated judgements of a set of 30 orthodontic cases and the kappa statistic. All of the examiners exceeded a kappa score of 0.7, allowing them to participate in the test run of the validity of the proposed inference model. The examiners' agreement with the system's recommendations was evaluated statistically. The average satisfaction rate of the examiners was 95.6 per cent, and for 83 out of the 85 cases (97.6 per cent) the majority of the examiners (i.e. six or more out of the eight) were satisfied with the recommendations of the system. Thus, the usefulness of the proposed inference logic was confirmed.
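
    As an illustration of the two steps named here (membership functions and geometric mean aggregation), the following sketch uses invented triangular memberships for the three variables; the breakpoints are placeholders, not the authors' clinical rules.

    ```python
    import numpy as np

    def triangular(x, a, b, c):
        """Triangular fuzzy membership: rises from a to a peak at b, falls to c."""
        if x <= a or x >= c:
            return 0.0
        return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

    def headgear_suitability(overjet, overbite, mp_angle):
        """Geometric-mean aggregation of per-variable memberships.
        The breakpoints below are illustrative, not clinically derived."""
        m = [triangular(overjet, 3.0, 7.0, 12.0),    # mm
             triangular(overbite, 1.0, 4.0, 8.0),    # mm
             triangular(mp_angle, 20.0, 28.0, 38.0)] # degrees
        return float(np.prod(m) ** (1.0 / len(m)))

    print(headgear_suitability(overjet=6.5, overbite=3.5, mp_angle=27.0))
    ```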

  19. Rough set-based feature selection method

    Institute of Scientific and Technical Information of China (English)

    ZHAN Yanmei; ZENG Xiangyang; SUN Jincai

    2005-01-01

    A new feature selection method is proposed in this paper based on the discernibility matrix in rough set theory. The main idea of this method is that the most effective feature, if used for classification, can distinguish the largest number of samples belonging to different classes. Experiments are performed using this method to select relevant features for artificial datasets and real-world datasets. Results show that the proposed selection method can correctly select all the relevant features of the artificial datasets and drastically reduce the number of features at the same time. In addition, when this method is used for the selection of classification features of real-world underwater targets, the number of classification features after selection drops to 20% of the original feature set, and the classification accuracy increases by about 6% using the dataset after feature selection.
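
    In the spirit of the discernibility-matrix idea, a simplified greedy heuristic can repeatedly pick the feature that distinguishes the most not-yet-discerned pairs of samples from different classes. Below is a sketch with toy categorical data; it is an illustrative reading of the idea, not the paper's exact algorithm.

    ```python
    from itertools import combinations

    def greedy_discernibility_selection(samples, labels, n_features):
        """Repeatedly pick the feature that distinguishes the most not-yet-discerned
        pairs of samples from different classes (simplified discernibility heuristic)."""
        pairs = [(i, j) for i, j in combinations(range(len(samples)), 2)
                 if labels[i] != labels[j]]
        selected, remaining = [], set(range(n_features))
        while pairs and remaining:
            best = max(remaining,
                       key=lambda f: sum(samples[i][f] != samples[j][f] for i, j in pairs))
            selected.append(best)
            remaining.discard(best)
            # Keep only pairs still indistinguishable on the selected features.
            pairs = [(i, j) for i, j in pairs if samples[i][best] == samples[j][best]]
        return selected

    samples = [('a', 'x', 1), ('a', 'y', 1), ('b', 'x', 0), ('b', 'y', 1)]
    labels  = [0, 0, 1, 1]
    print(greedy_discernibility_selection(samples, labels, n_features=3))  # e.g. [0]
    ```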

  20. Random Effect and Latent Variable Model Selection

    CERN Document Server

    Dunson, David B

    2008-01-01

    Presents various methods for accommodating model uncertainty in random effects and latent variable models. This book focuses on frequentist likelihood ratio and score tests for zero variance components. It also focuses on Bayesian methods for random effects selection in linear mixed effects and generalized linear mixed models

  1. The examination of quality of pregnancy care based on the World Health Organization's "Responsiveness" model of selected pregnant women in Tehran.

    Science.gov (United States)

    Marhamati, Tahereh; Torkzahrani, Shahnaz; Nasiri, Malihe; Lotfi, Razieh

    2017-02-01

    The World Health Organization (WHO) responsiveness model, which reflects the ability of health systems to fulfill people's expectations with regard to nonclinical aspects of care, is an appropriate framework for assessing healthcare. The purpose of this study was to determine the status of pregnancy care provision based on the responsiveness model. This was a cross-sectional study conducted on a random sample of 130 women visiting selected hospitals in Tehran in 2015. A researcher-made questionnaire based on the WHO responsiveness model was used to collect data. We determined the face validity and content validity of the questionnaire, and its reliability was confirmed by Cronbach's alpha coefficient (0.94) and test-retest analysis (0.96). The obtained data were analyzed with SPSS version 20 using descriptive statistics, the t-test, one-way ANOVA, the Pearson product-moment correlation coefficient, and the Spearman correlation. Total responsiveness from the perspective of service recipients was 69.46±14.65 out of 100. On a range of 0 to 100, scores were 73.02 for basic amenities (the highest score), 72.93 for dignity, 70.91 for communication, 70.76 for confidentiality, 66.30 for provision of social needs, 65.96 for choice of provider, 65.92 for autonomy, and 52.65 for prompt attention (the lowest score), representing an average level of service quality. There were significant relationships between participation in labor preparation classes and both dignity and responsiveness (p=0.03), and a significant linear relationship between the scores given to hospitals and the dimensions of responsiveness (p=0.05). Findings also indicated a significant relationship between insurance type and the dimensions of choice of provider (p=0.03) and communication (p=0.03). The mean score of service quality in the present investigation illustrated that nonclinical dimensions have been disregarded and have the potential to improve, so comprehensive improvement plans are needed.

  2. GEOGRAPHIC INFORMATION SYSTEM-BASED MODELING AND ANALYSIS FOR SITE SELECTION OF GREEN MUSSEL, Perna viridis, MARICULTURE IN LADA BAY, PANDEGLANG, BANTEN PROVINCE

    Directory of Open Access Journals (Sweden)

    I Nyoman Radiarta

    2011-06-01

    Green mussel is one of the important species cultured in Lada Bay, Pandeglang. To provide necessary guidance regarding green mussel mariculture development, finding a suitable site is an important step. This study was conducted to identify suitable sites for green mussel mariculture development using geographic information system (GIS) based models. Seven important parameters were grouped into two submodels, namely environmental (water temperature, salinity, suspended solids, dissolved oxygen, and bathymetry) and infrastructural (distance to settlements and pond aquaculture). Constraint data were used to exclude from the suitability maps areas where green mussel mariculture cannot be developed, including areas of floating net fishing activity and areas near the electricity station. Analyses of factors and constraints indicated that about 31% of the potential area with bottom depth less than 25 m was most suitable. This area was shown to have ideal conditions for green mussel mariculture in the study region. This study shows that a GIS model is a powerful tool for site selection decision making and can be valuable in solving problems at local, regional, and/or continental scales.

  3. Research and Application of Hybrid Forecasting Model Based on an Optimal Feature Selection System—A Case Study on Electrical Load Forecasting

    Directory of Open Access Journals (Sweden)

    Yunxuan Dong

    2017-04-01

    The process of modernizing the smart grid prominently increases the complexity and uncertainty in the scheduling and operation of power systems, and, in order to develop a more reliable, flexible, efficient and resilient grid, electrical load forecasting is not only important but remains a difficult and challenging task. In this paper, a short-term electrical load forecasting model, with a feature learning unit named the Pyramid System and recurrent neural networks, has been developed, and it can effectively promote the stability and security of the power grid. Nine types of methods for feature learning are compared in this work to select the best one for the learning target, and two criteria have been employed to evaluate the accuracy of the prediction intervals. Furthermore, an electrical load forecasting method based on recurrent neural networks has been formed to capture the relational structure of the historical data; specifically, the proposed techniques are applied to electrical load forecasting using data collected from New South Wales, Australia. The simulation results show that the proposed hybrid models can not only satisfactorily approximate the actual values but can also be effective tools in the planning of smart grids.

  4. A Decision Model for Selecting Participants in Supply Chain

    Institute of Scientific and Technical Information of China (English)

    2001-01-01

    In order to satisfy the rapidly changing requirements of customers, enterprises must cooperate with each other to form a supply chain. The first and most important stage in the forming of a supply chain is the selection of participants. The article proposes a two-stage decision model to select partners. The first stage is an inter-company comparison in each business process to select high-efficiency candidates based on internal variables. The next stage is to analyse the combinations of different candidates in order to select the best partners according to a goal-programming model.

  5. Model-based analysis and design of nerve cuff electrodes for restoring bladder function by selective stimulation of the pudendal nerve

    Science.gov (United States)

    Kent, Alexander R.; Grill, Warren M.

    2013-06-01

    Objective. Electrical stimulation of the pudendal nerve (PN) is being developed as a means to restore bladder function in persons with spinal cord injury. A single nerve cuff electrode placed on the proximal PN trunk may enable selective stimulation of distinct fascicles to maintain continence or evoke micturition. The objective of this study was to design a nerve cuff that enabled selective stimulation of the PN. Approach. We evaluated the performance of both flat interface nerve electrode (FINE) cuff and round cuff designs, with a range of FINE cuff heights and number of contacts, as well as multiple contact orientations. This analysis was performed using a computational model, in which the nerve and fascicle cross-sectional positions from five human PN trunks were systematically reshaped within the nerve cuff. These cross-sections were used to create finite element models, with electric potentials calculated and applied to a cable model of a myelinated axon to evaluate stimulation selectivity for different PN targets. Subsequently, the model was coupled to a genetic algorithm (GA) to identify solutions that used multiple contact activation to maximize selectivity and minimize total stimulation voltage. Main results. Simulations did not identify any significant differences in selectivity between FINE and round cuffs, although the latter required smaller stimulation voltages for target activation due to preserved localization of targeted fascicle groups. Further, it was found that a ten contact nerve cuff generated sufficient selectivity for all PN targets, with the degree of selectivity dependent on the relative position of the target within the nerve. The GA identified solutions that increased fitness by 0.7-45.5% over single contact activation by decreasing stimulation of non-targeted fascicles. Significance. This study suggests that using an optimal nerve cuff design and multiple contact activation could enable selective stimulation of the human PN trunk for

  6. Hidden Markov Model for Stock Selection

    Directory of Open Access Journals (Sweden)

    Nguyet Nguyen

    2015-10-01

    The hidden Markov model (HMM) is typically used to predict the hidden regimes of observation data. Therefore, this model finds applications in many different areas, such as speech recognition systems, computational molecular biology and financial market predictions. In this paper, we use HMM for stock selection. We first use HMM to make monthly regime predictions for the four macroeconomic variables: inflation (consumer price index, CPI), the industrial production index (INDPRO), the stock market index (S&P 500) and market volatility (VIX). At the end of each month, we calibrate the HMM's parameters for each of these economic variables and predict its regimes for the next month. We then look back into historical data to find the time periods for which the four variables had regimes similar to the forecasted regimes. Within those similar periods, we analyze all of the S&P 500 stocks to identify which stock characteristics have been well rewarded during the time periods and assign scores and corresponding weights for each of the stock characteristics. A composite score of each stock is calculated based on the scores and weights of its features. Based on this algorithm, we choose the 50 top ranking stocks to buy. We compare the performance of the portfolio with the benchmark index, the S&P 500. With an initial investment of $100 in December 1999, over 15 years, in December 2014, our portfolio had an average gain per annum of 14.9% versus 2.3% for the S&P 500.
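
    A minimal sketch of the regime-prediction step using the hmmlearn package (assumed to be installed); the series is simulated, and the look-back matching, scoring, and portfolio construction described in the abstract are omitted.

    ```python
    import numpy as np
    from hmmlearn.hmm import GaussianHMM  # assumes the hmmlearn package is available

    rng = np.random.default_rng(1)
    # Hypothetical monthly observations of one macro variable (e.g. VIX), shape (T, 1):
    # a calm regime followed by a volatile regime.
    series = np.concatenate([rng.normal(15, 2, 120), rng.normal(35, 5, 60)]).reshape(-1, 1)

    model = GaussianHMM(n_components=2, covariance_type="diag", n_iter=200, random_state=0)
    model.fit(series)
    regimes = model.predict(series)             # most likely regime per month
    print("current regime:", regimes[-1])

    # A one-step-ahead regime forecast from the fitted transition matrix:
    next_probs = model.transmat_[regimes[-1]]
    print("next-month regime probabilities:", next_probs)
    ```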

  7. Efficient Model-Based Exploration

    NARCIS (Netherlands)

    Wiering, M.A.; Schmidhuber, J.

    1998-01-01

    Model-Based Reinforcement Learning (MBRL) can greatly profit from using world models for estimating the consequences of selecting particular actions: an animat can construct such a model from its experiences and use it for computing rewarding behavior. We study the problem of collecting useful experiences

  8. Development of SPAWM: selection program for available watershed models.

    Science.gov (United States)

    Cho, Yongdeok; Roesner, Larry A

    2014-01-01

    A selection program for available watershed models (also known as SPAWM) was developed. Thirty-three commonly used watershed models were analyzed in depth and classified according to their attributes. These attributes consist of: (1) land use; (2) event or continuous; (3) time steps; (4) water quality; (5) distributed or lumped; (6) subsurface; (7) overland sediment; and (8) best management practices. Each of these attributes was further classified into sub-attributes. Based on user-selected sub-attributes, the most appropriate watershed model is selected from the library of watershed models. SPAWM is implemented using Excel Visual Basic and is designed for use by novices as well as by experts on watershed modeling. It ensures that the necessary sub-attributes required by the user are captured and made available in the selected watershed model.
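
    SPAWM itself is implemented in Excel Visual Basic; as an illustration of the attribute-matching logic only, here is a small Python stand-in with an invented model library (the model names and attribute keys are placeholders, not SPAWM's actual catalog).

    ```python
    # A minimal attribute-matching selector in the spirit of SPAWM; the entries
    # below are illustrative only.
    MODEL_LIBRARY = {
        "ModelA": {"land_use": "urban", "simulation": "continuous", "water_quality": True},
        "ModelB": {"land_use": "rural", "simulation": "event", "water_quality": False},
        "ModelC": {"land_use": "urban", "simulation": "event", "water_quality": True},
    }

    def select_models(**required):
        """Return the models whose attributes match every user-selected sub-attribute."""
        return [name for name, attrs in MODEL_LIBRARY.items()
                if all(attrs.get(k) == v for k, v in required.items())]

    print(select_models(land_use="urban", water_quality=True))  # ['ModelA', 'ModelC']
    ```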

  9. Parametric or nonparametric? A parametricness index for model selection

    CERN Document Server

    Liu, Wei; 10.1214/11-AOS899

    2012-01-01

    In the model selection literature, two classes of criteria perform well asymptotically in different situations: the Bayesian information criterion (BIC) (as a representative) is consistent in selection when the true model is finite dimensional (the parametric scenario), while Akaike's information criterion (AIC) performs well in terms of asymptotic efficiency when the true model is infinite dimensional (the nonparametric scenario). But there is little work that addresses whether it is possible, and how, to detect which situation a specific model selection problem is in. In this work, we differentiate the two scenarios theoretically under some conditions. We develop a measure, the parametricness index (PI), to assess whether a model selected by a potentially consistent procedure can be practically treated as the true model, which also hints at whether AIC or BIC is better suited to the data for the goal of estimating the regression function. A consequence is that by switching between AIC and BIC based on the PI, the resulting regression estimator is si...
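
    Conceptually, the PI acts as a switch between the two criteria. A sketch of that switching rule follows, with a placeholder threshold and the PI value supplied from outside; the paper defines precisely how PI is computed, whereas here it is just an input.

    ```python
    from math import log

    def select_criterion(pi, log_likelihoods, ks, n, pi_threshold=1.2):
        """Switch between BIC (parametric scenario) and AIC (nonparametric scenario)
        based on a parametricness index `pi`; the threshold is a placeholder,
        not the paper's calibrated rule."""
        if pi > pi_threshold:   # data look parametric: consistency matters, use BIC
            scores = [-2 * ll + k * log(n) for ll, k in zip(log_likelihoods, ks)]
        else:                   # data look nonparametric: efficiency matters, use AIC
            scores = [-2 * ll + 2 * k for ll, k in zip(log_likelihoods, ks)]
        return min(range(len(scores)), key=scores.__getitem__)

    # Hypothetical fits of two nested regression models.
    print(select_criterion(1.5, log_likelihoods=[-510.2, -508.8], ks=[3, 6], n=400))
    ```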

  10. Information-theoretic model selection applied to supernovae data

    CERN Document Server

    Biesiada, M

    2007-01-01

    There are several different theoretical ideas invoked to explain dark energy, with relatively little guidance as to which one of them might be right. Therefore the emphasis of ongoing and forthcoming research in this field shifts from estimating specific parameters of a cosmological model to model selection. In this paper we apply an information-theoretic model selection approach based on the Akaike criterion as an estimator of the Kullback-Leibler entropy. In particular, we present the proper way of ranking the competing models based on Akaike weights (in Bayesian language, posterior probabilities of the models). Out of many particular models of dark energy we focus on four: quintessence, quintessence with a time-varying equation of state, the brane-world model and the generalized Chaplygin gas model, and test them on Riess' Gold sample. As a result we obtain that the best model, in terms of the Akaike criterion, is the quintessence model. The odds suggest that although there exist differences in the support given to specific scenario...
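
    Akaike weights are computed as w_i = exp(-Δ_i/2) / Σ_j exp(-Δ_j/2), where Δ_i = AIC_i − min_j AIC_j. A minimal sketch with invented AIC values (not the paper's fitted numbers):

    ```python
    import numpy as np

    def akaike_weights(aic_values):
        """w_i = exp(-delta_i / 2) / sum_j exp(-delta_j / 2), delta_i = AIC_i - min AIC."""
        a = np.asarray(aic_values, dtype=float)
        delta = a - a.min()
        w = np.exp(-0.5 * delta)
        return w / w.sum()

    # Hypothetical AIC values for four dark-energy models (placeholders only).
    models = ["quintessence", "varying-w quintessence", "brane-world", "Chaplygin gas"]
    for name, w in zip(models, akaike_weights([180.1, 181.9, 183.4, 184.0])):
        print(f"{name}: {w:.2f}")
    ```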

  11. State-based surveillance for selected hemoglobinopathies.

    Science.gov (United States)

    Hulihan, Mary M; Feuchtbaum, Lisa; Jordan, Lanetta; Kirby, Russell S; Snyder, Angela; Young, William; Greene, Yvonne; Telfair, Joseph; Wang, Ying; Cramer, William; Werner, Ellen M; Kenney, Kristy; Creary, Melissa; Grant, Althea M

    2015-02-01

    The lack of an ongoing surveillance system for hemoglobinopathies in the United States impedes the ability of public health organizations to identify individuals with these conditions, monitor their health-care utilization and clinical outcomes, and understand the effect these conditions have on the health-care system. This article describes the results of a pilot program that supported the development of the infrastructure and data collection methods for a state-based surveillance system for selected hemoglobinopathies. The system was designed to identify and gather information on all people living with a hemoglobinopathy diagnosis (sickle cell diseases or thalassemias) in the participating states during 2004-2008. Novel, three-level case definitions were developed, and multiple data sets were used to collect information. In total, 31,144 individuals who had a hemoglobinopathy diagnosis during the study period were identified in California; 39,633 in Florida; 20,815 in Georgia; 12,680 in Michigan; 34,853 in New York, and 8,696 in North Carolina. This approach provides a possible model for the development of state-based hemoglobinopathy surveillance systems.

  12. APPLICATION OF THE CERNE MODEL FOR THE ESTABLISHMENT OF INCUBATION SELECTION CRITERIA IN TECHNOLOGY-BASED BUSINESSES: A STUDY IN TECHNOLOGY-BASED INCUBATORS OF THE COUNTRY

    National Research Council Canada - National Science Library

    Clobert Jefferson Passoni; Izabel Cristina Zattar; Jessica Werner Boschetto; Rosangela Rosa Luciane da Silva

    2017-01-01

    Because of the different forms of management of incubators, the business model CERNE - Reference Center for Support for New Projects - was created by Anprotec and Sebrae, in order to standardize...

  13. Expert System Model for Educational Personnel Selection

    Directory of Open Access Journals (Sweden)

    Héctor A. Tabares-Ospina

    2013-06-01

    Staff selection is a difficult task due to the subjectivity involved in evaluation. This process can be complemented with a decision support system. This paper presents the implementation of an expert system to systematize the selection process for professors. The management of the software development is divided into four parts: requirements, design, implementation and commissioning. The proposed system models specific knowledge through relationships between evidence and objective variables.

  14. Boosting model performance and interpretation by entangling preprocessing selection and variable selection.

    Science.gov (United States)

    Gerretzen, Jan; Szymańska, Ewa; Bart, Jacob; Davies, Antony N; van Manen, Henk-Jan; van den Heuvel, Edwin R; Jansen, Jeroen J; Buydens, Lutgarde M C

    2016-09-28

    The aim of data preprocessing is to remove data artifacts, such as a baseline, scatter effects or noise, and to enhance the contextually relevant information. Many preprocessing methods exist to deliver one or more of these benefits, but which method or combination of methods should be used for the specific data being analyzed is difficult to determine. Recently, we have shown that a preprocessing selection approach based on Design of Experiments (DoE) enables correct selection of highly appropriate preprocessing strategies within reasonable time frames. In that approach, the focus was solely on improving the predictive performance of the chemometric model. This is, however, only one of the two relevant criteria in modeling: interpretation of the model results can be just as important. Variable selection is often used to achieve such interpretation. Data artifacts, however, may hamper proper variable selection by masking the true relevant variables. The choice of preprocessing therefore has a huge impact on the outcome of variable selection methods and may thus hamper an objective interpretation of the final model. To enhance such objective interpretation, we here integrate variable selection into the preprocessing selection approach that is based on DoE. We show that the entanglement of preprocessing selection and variable selection not only improves the interpretation, but also the predictive performance of the model. This is achieved by analyzing several experimental data sets for which the true relevant variables are available as prior knowledge. We show that a selection of variables is provided that complies more with the true informative variables compared to individual optimization of both model aspects. Importantly, the approach presented in this work is generic. Different types of models (e.g. PCR, PLS, …) can be incorporated into it, as well as different variable selection methods and different preprocessing methods, according to the taste and experience of

  15. ERP system selection approach based on improved BOCR and FANP model

    Institute of Scientific and Technical Information of China (English)

    周晓光; 吕波; 朱蓉

    2012-01-01

    An ERP system can significantly affect the future competitiveness and performance of an enterprise, and selecting an optimal ERP system is a typical multi-attribute decision-making problem. Taking into account the interaction and feedback relationships between criteria and indices, this paper establishes an evaluation index system for ERP system selection based on an improved BOCR model. Because of the imprecision and vagueness of information during evaluation, triangular fuzzy numbers are used to denote the preference opinions of experts or decision makers, and local priorities of the triangular fuzzy comparison matrices are calculated by the fuzzy preference programming method. An unweighted supermatrix is built according to the network structure of the criteria and indices, and the limit supermatrix after convergence is computed to obtain the comprehensive priority of each index. Finally, a case illustrates how to apply the proposed method.

  16. Bayesian model evidence for order selection and correlation testing.

    Science.gov (United States)

    Johnston, Leigh A; Mareels, Iven M Y; Egan, Gary F

    2011-01-01

    Model selection is a critical component of data analysis procedures, and is particularly difficult for small numbers of observations such as is typical of functional MRI datasets. In this paper we derive two Bayesian evidence-based model selection procedures that exploit the existence of an analytic form for the linear Gaussian model class. Firstly, an evidence information criterion is proposed as a model order selection procedure for auto-regressive models, outperforming the commonly employed Akaike and Bayesian information criteria in simulated data. Secondly, an evidence-based method for testing change in linear correlation between datasets is proposed, which is demonstrated to outperform both the traditional statistical test of the null hypothesis of no correlation change and the likelihood ratio test.

  17. An integrated model for supplier selection process

    Institute of Scientific and Technical Information of China (English)

    2003-01-01

    In today's highly competitive manufacturing environment, the supplier selection process becomes one of the crucial activities in supply chain management. In order to select the best supplier(s), it is not only necessary to continuously track and benchmark the performance of suppliers but also to make a tradeoff between tangible and intangible factors, some of which may conflict. In this paper an integration of case-based reasoning (CBR), the analytic network process (ANP) and linear programming (LP) is proposed to solve the supplier selection problem.

  18. Bayesian variable selection for latent class models.

    Science.gov (United States)

    Ghosh, Joyee; Herring, Amy H; Siega-Riz, Anna Maria

    2011-09-01

    In this article, we develop a latent class model with class probabilities that depend on subject-specific covariates. One of our major goals is to identify important predictors of latent classes. We consider methodology that allows estimation of latent classes while allowing for variable selection uncertainty. We propose a Bayesian variable selection approach and implement a stochastic search Gibbs sampler for posterior computation to obtain model-averaged estimates of quantities of interest such as marginal inclusion probabilities of predictors. Our methods are illustrated through simulation studies and application to data on weight gain during pregnancy, where it is of interest to identify important predictors of latent weight gain classes.
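
    The paper's stochastic search Gibbs sampler is beyond a short sketch, but for a handful of predictors one can approximate marginal inclusion probabilities by enumerating all subsets and weighting each model by exp(-BIC/2). The sketch below does this for ordinary linear regression, a deliberately simpler setting than the latent class models treated in the paper.

    ```python
    import numpy as np
    from itertools import combinations

    def inclusion_probabilities(X, y):
        """Approximate marginal inclusion probabilities by enumerating all predictor
        subsets and weighting each model by exp(-BIC/2); a cheap stand-in for a
        stochastic-search Gibbs sampler, valid only for small p."""
        n, p = X.shape
        weighted_models, incl = [], np.zeros(p)
        subsets = [s for r in range(p + 1) for s in combinations(range(p), r)]
        for s in subsets:
            Xs = np.column_stack([np.ones(n)] + [X[:, j] for j in s])
            beta, *_ = np.linalg.lstsq(Xs, y, rcond=None)
            rss = np.sum((y - Xs @ beta) ** 2)
            bic = n * np.log(rss / n) + (len(s) + 1) * np.log(n)
            weighted_models.append((s, np.exp(-0.5 * bic)))
        total = sum(w for _, w in weighted_models)
        for s, w in weighted_models:
            for j in s:
                incl[j] += w / total
        return incl

    rng = np.random.default_rng(2)
    X = rng.normal(size=(100, 3))
    y = 2.0 * X[:, 0] + rng.normal(size=100)   # only the first predictor matters
    print(inclusion_probabilities(X, y).round(2))  # first entry near 1.0
    ```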

  19. GATE TYPE SELECTION BASED ON FUZZY MAPPING

    Institute of Scientific and Technical Information of China (English)

    2002-01-01

    Gate type selection is very important for mould design. An improper gate type may lead to poor product quality and low production efficiency. Although numerical simulation approaches can be used to optimize gate location, the determination of gate type is still up to the designer's experience. A novel method for selecting gate type based on fuzzy logic is proposed. The proposed methodology follows three steps: design requirements for the gate are extracted and generalized; possible gate types (design schemes) are presented; and the fuzzy mapping relationship between gate design requirements and gate design schemes is established based on fuzzy composition and fuzzy relation transition matrices assigned by domain experts.

  20. Enhancing selective capacity through venture bases

    DEFF Research Database (Denmark)

    Vintergaard, Christian; Husted, Kenneth

    2003-01-01

    Corporate venturing managers have the rule of thumb that only approximately one out of ten investments really pays off in financial measures. These low odds of success, of course, put extremely high expectations on the profit yielded from the few investments that become successful. In other words, the few successful investments carry the costs of many more investment decisions. It would obviously be attractive to improve the ability to 'pick the winners'. In this paper, we develop a conceptual framework for understanding how firms' involvement in establishing and nurturing the venture base (the idea creation phase) enhances their ability to select ventures. Keywords: corporate venturing, venture base, selection, network.

  1. Personnel Selection Based on Fuzzy Methods

    Directory of Open Access Journals (Sweden)

    Lourdes Cañós

    2011-03-01

    Full Text Available The decisions of managers regarding the selection of staff strongly determine the success of the company. A correct choice of employees is a source of competitive advantage. We propose a fuzzy method for staff selection, based on competence management and on comparison with the valuation that the company considers ideal in each competence (the ideal candidate). Our method is based on the Hamming distance and a Matching Level Index. The algorithms, implemented in the software StaffDesigner, allow us to rank the candidates, even when the competences of the ideal candidate have been evaluated only in part. Our approach is applied in a numerical example.
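
    The abstract does not spell out the Matching Level Index, so the following is only a hedged sketch of the Hamming-distance half of the idea: candidates are ranked by a weighted normalized Hamming distance between their fuzzy competence profile and the company's ideal profile. All names, weights and valuations are invented for illustration.

      import numpy as np

      # Fuzzy competence valuations on [0, 1]; the "ideal" row is the profile the
      # company considers best in each competence (all values illustrative).
      ideal = np.array([0.9, 0.8, 1.0, 0.7])
      weights = np.array([0.3, 0.2, 0.3, 0.2])   # importance of each competence

      candidates = {
          "Ana":   np.array([0.8, 0.9, 0.7, 0.6]),
          "Luis":  np.array([0.9, 0.5, 0.9, 0.8]),
          "Marta": np.array([0.6, 0.8, 0.8, 0.7]),
      }

      def weighted_hamming(profile):
          """Weighted normalized Hamming distance to the ideal candidate."""
          return float(np.sum(weights * np.abs(profile - ideal)) / np.sum(weights))

      for name, prof in sorted(candidates.items(), key=lambda kv: weighted_hamming(kv[1])):
          print(f"{name}: distance to ideal = {weighted_hamming(prof):.3f}")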

  2. IT vendor selection model by using structural equation model & analytical hierarchy process

    Science.gov (United States)

    Maitra, Sarit; Dominic, P. D. D.

    2012-11-01

    Selecting and evaluating the right vendors is imperative for an organization's global marketplace competitiveness. Improper selection and evaluation of potential vendors can dwarf an organization's supply chain performance. Numerous studies have demonstrated that firms consider multiple criteria when selecting key vendors. This research intends to develop a new hybrid model for the vendor selection process with better decision making. The proposed model provides a suitable tool for assisting decision makers and managers to make the right decisions and select the most suitable vendor. This paper proposes a hybrid model based on the Structural Equation Model (SEM) and the Analytical Hierarchy Process (AHP) for long-term strategic vendor selection problems. The five-step framework of the model was designed after a thorough literature study. The proposed hybrid model will be applied to a real-life case study to assess its effectiveness. In addition, the what-if analysis technique will be used for model validation purposes.

  3. Digital curation: a proposal of a semi-automatic digital object selection-based model for digital curation in Big Data environments

    Directory of Open Access Journals (Sweden)

    Moisés Lima Dutra

    2016-08-01

    Full Text Available Introduction: This work presents a new approach to digital curation from a Big Data perspective. Objective: The objective is to propose digital curation techniques for selecting and evaluating digital objects that take into account the volume, velocity, variety, veracity, and value of the data collected from multiple knowledge domains. Methodology: This is exploratory research of an applied nature, which addresses the research problem in a qualitative way. Heuristics allow this semi-automatic process to be carried out either by human curators or by software agents. Results: As a result, a model is proposed for searching, processing, evaluating and selecting digital objects to be processed by digital curation. Conclusions: It is possible to use Big Data environments as a source of information resources for digital curation; moreover, Big Data techniques and tools can support the search for and selection of information resources by digital curators.

  4. Adaptive Covariance Estimation with model selection

    CERN Document Server

    Biscay, Rolando; Loubes, Jean-Michel

    2012-01-01

    We provide in this paper a fully adaptive penalized procedure to select a covariance among a collection of models, observing i.i.d. replications of the process at fixed observation points. For this we generalize previous results of Bigot et al. and propose to use a data-driven penalty to obtain an oracle inequality for the estimator. We prove that this method is an extension to the matricial regression model of the work by Baraud.

  5. A Theoretical Model for Selective Exposure Research.

    Science.gov (United States)

    Roloff, Michael E.; Noland, Mark

    This study tests the basic assumptions underlying Fishbein's Model of Attitudes by correlating an individual's selective exposure to types of television programs (situation comedies, family drama, and action/adventure) with the attitudinal similarity between individual attitudes and attitudes characterized on the programs. Twenty-three college…

  6. Source-Based Modeling Of Urban Stormwater Quality Response to the Selected Scenarios Combining Future Changes in Climate and Socio-Economic Factors

    Science.gov (United States)

    Borris, Matthias; Leonhardt, Günther; Marsalek, Jiri; Österlund, Heléne; Viklander, Maria

    2016-08-01

    The assessment of future trends in urban stormwater quality should be most helpful for ensuring the effectiveness of the existing stormwater quality infrastructure in the future and mitigating the associated impacts on receiving waters. The combined effects of expected changes in climate and socio-economic factors on stormwater quality were examined in two urban test catchments by applying a source-based computer model (WinSLAMM) for TSS and three heavy metals (copper, lead, and zinc) under various future scenarios. Generally, both catchments showed similar responses to the future scenarios, and pollutant loads were generally more sensitive to changes in socio-economic factors (i.e., increasing traffic intensities, growth and intensification of the individual land uses) than in the climate. Specifically, for the selected Intermediate socio-economic scenario and two climate change scenarios (RCP 2.6 and 8.5), the TSS loads from both catchments increased by about 10 % on average, but when applying the Intermediate climate change scenario (RCP 4.5) for two SSPs, the Sustainability and Security scenarios (SSP1 and SSP3), the TSS loads increased on average by 70 %. Furthermore, it was observed that well-designed and maintained stormwater treatment facilities targeting local pollution hotspots exhibited the potential to significantly improve stormwater quality, albeit at potentially high costs. In fact, it was possible to reduce pollutant loads from both catchments under the future Sustainability scenario (on average, e.g., TSS were reduced by 20 %) compared to the current conditions. The methodology developed in this study was found useful for planning climate change adaptation strategies in the context of local conditions.

  7. Role of melt behavior in modifying oxidation distribution using an interface incorporated model in selective laser melting of aluminum-based material

    Science.gov (United States)

    Gu, Dongdong; Dai, Donghua

    2016-08-01

    A transient three-dimensional model for describing the molten pool dynamics and the response of oxidation film evolution in the selective laser melting of aluminum-based material is proposed. The physical difference on both sides of the scan track, the powder-solid transformation and temperature-dependent physical properties are taken into account. It shows that the heat energy tends to accumulate in the powder material rather than in the as-fabricated part, leading to the formation of asymmetrical patterns of the temperature contour and the attendant larger dimensions of the molten pool in the powder phase. When a higher volumetric energy density is applied (≥1300 J/mm3), severe evaporation is produced with an upward direction of the velocity vector in the irradiated powder region, while a restricted operating temperature is obtained in the as-fabricated part. The velocity vector continuously changes from the upward direction to the downward one as the scan speed increases from 100 mm/s to 300 mm/s, promoting the break-up of the oxidation films into debris and the resultant homogeneous distribution state in the matrix. For the applied hatch spacing of 50 μm, a restricted remelting phenomenon of the as-fabricated part is produced with an upward direction of the convection flow, significantly reducing the turbulence of the thermo-capillary convection that breaks up the oxidation films; therefore, connected oxidation films through the neighboring layers are typically formed. The morphology and distribution of the oxidation are experimentally acquired and are in good agreement with the results predicted by simulation.

  8. Selective recovery of dissolved Fe, Al, Cu, and Zn in acid mine drainage based on modeling to predict precipitation pH.

    Science.gov (United States)

    Park, Sang-Min; Yoo, Jong-Chan; Ji, Sang-Woo; Yang, Jung-Seok; Baek, Kitae

    2015-02-01

    Mining activities have caused serious environmental problems, including acid mine drainage (AMD), the dispersion of mine tailings and dust, and extensive mine waste. In particular, AMD contaminates soil and water downstream of mines and generally contains valuable metals such as Cu, Zn, and Ni as well as Fe and Al. In this study, we investigated the selective recovery of Fe, Al, Cu, Zn, and Ni from AMD. First, the speciation of Fe, Al, Cu, Zn, and Ni as a function of the equilibrium solution pH was simulated with Visual MINTEQ. Based on the simulation results, the pH values for the selective precipitation of Fe, Al, Cu, and Zn/Ni were predicted, and the simulated recovery yield of the metals exceeded 99 %. Experiments using artificial AMD based on the simulation results confirmed the selective recovery of Fe, Al, Cu, and Zn/Ni; the recovery yields of Fe/Al/Cu/Zn and Fe/Al/Cu/Ni mixtures using Na2CO3 were 99.6/86.8/71.9/77.0 % and 99.2/85.7/73.3/86.1 %, respectively. The simulation results were then applied to an actual AMD for the selective recovery of metals, and the recovery yields of Fe, Al, Cu, and Zn using NaOH were 97.2, 74.9, 66.9, and 89.7 %, respectively. Based on these results, it was concluded that the selective recovery of dissolved metals from AMD is possible by adjusting the solution pH using NaOH or Na2CO3 as neutralizing agents.
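
    The study relies on full Visual MINTEQ speciation, which is not reproduced here; as a rough back-of-the-envelope sketch of why pH separates these metals, the solubility product of each hydroxide M(OH)n gives the pH at which precipitation begins for a given dissolved concentration. The Ksp values below are illustrative textbook figures, not numbers from the paper.

      import math

      # Illustrative solubility products for M(OH)_n (assumed textbook values).
      ksp = {"Fe(III)": (2.8e-39, 3), "Al": (3.0e-34, 3),
             "Cu": (2.2e-20, 2), "Zn": (3.0e-17, 2)}

      def precipitation_ph(ksp_value, n, metal_conc):
          """pH where M(OH)_n starts to precipitate: Ksp = [M][OH-]^n,
          so [OH-] = (Ksp/[M])^(1/n) and pH = 14 + log10([OH-])."""
          oh = (ksp_value / metal_conc) ** (1.0 / n)
          return 14.0 + math.log10(oh)

      conc = 1e-3  # assume 1 mM of each dissolved metal
      for metal, (k, n) in ksp.items():
          print(f"{metal}: precipitation begins near pH {precipitation_ph(k, n, conc):.2f}")

    Raising the pH stepwise therefore drops Fe(III) first, then Al, then Cu, and finally Zn, which is the ordering the recovery experiments exploit.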

  9. Feature subset selection based on relevance

    Science.gov (United States)

    Wang, Hui; Bell, David; Murtagh, Fionn

    In this paper an axiomatic characterisation of feature subset selection is presented. Two axioms are presented: the sufficiency axiom (preservation of learning information) and the necessity axiom (minimisation of encoding length). The sufficiency axiom concerns the existing dataset and is derived from the following understanding: any selected feature subset should be able to describe the training dataset without losing information, i.e., it is consistent with the training dataset. The necessity axiom concerns predictability and is derived from Occam's razor, which states that the simplest among different alternatives is preferred for prediction. The two axioms are then restated in terms of relevance in a concise form: maximising both the r(X; Y) and r(Y; X) relevance. Based on the relevance characterisation, four feature subset selection algorithms are presented and analysed: one is exhaustive and the remaining three are heuristic. Experimentation is also presented and the results are encouraging. Comparison is also made with some well-known feature subset selection algorithms, in particular with the built-in feature selection mechanism in C4.5.
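
    The relevance measure r(X; Y) is not defined in the abstract, so the sketch below uses discrete mutual information as a hedged stand-in, scoring a feature in both directions by normalizing I(X; Y) by H(Y) and by H(X). Data and names are invented.

      import numpy as np
      from collections import Counter

      def entropy(labels):
          counts = np.array(list(Counter(labels).values()), dtype=float)
          p = counts / counts.sum()
          return -np.sum(p * np.log2(p))

      def mutual_information(x, y):
          """I(X;Y) = H(X) + H(Y) - H(X,Y) for discrete variables."""
          return entropy(x) + entropy(y) - entropy(list(zip(x, y)))

      rng = np.random.default_rng(1)
      y = rng.integers(0, 2, 500)                   # class label
      relevant = (y + rng.integers(0, 2, 500)) % 3  # feature correlated with y
      noise = rng.integers(0, 3, 500)               # irrelevant feature

      for name, x in [("relevant", relevant), ("noise", noise)]:
          mi = mutual_information(x, y)
          print(f"{name}: r(X;Y)~{mi/entropy(y):.3f}  r(Y;X)~{mi/entropy(x):.3f}")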

  10. Partner Selection Optimization Model of Agricultural Enterprises in Supply Chain

    Directory of Open Access Journals (Sweden)

    Feipeng Guo

    2013-10-01

    Full Text Available With the growing importance of correctly selecting partners in the supply chains of agricultural enterprises, a large number of partner evaluation techniques are widely used in agricultural science research. This study established a partner selection model to optimize agricultural supply chain partner selection. Firstly, it constructed a comprehensive evaluation index system after analyzing the real characteristics of agricultural supply chains. Secondly, a heuristic method for attribute reduction based on rough set theory and principal component analysis was proposed, which can reduce multiple attributes to a few principal components while retaining effective evaluation information. Finally, an improved BP neural network with a self-learning capability was used to select partners. An empirical analysis of an agricultural enterprise shows that this model is effective and feasible for practical partner selection.

  11. Statistical model selection with “Big Data”

    Directory of Open Access Journals (Sweden)

    Jurgen A. Doornik

    2015-12-01

    Full Text Available Big Data offer potential benefits for statistical modelling, but confront problems including an excess of false positives, mistaking correlations for causes, ignoring sampling biases and selecting by inappropriate methods. We consider the many important requirements when searching for a data-based relationship using Big Data, and the possible role of Autometrics in that context. Paramount considerations include embedding relationships in general initial models, possibly restricting the number of variables to be selected over by non-statistical criteria (the formulation problem), using good quality data on all variables, analyzed with tight significance levels by a powerful selection procedure, retaining available theory insights (the selection problem) while testing for relationships being well specified and invariant to shifts in explanatory variables (the evaluation problem), using a viable approach that resolves the computational problem of immense numbers of possible models.
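
    Autometrics itself is a multi-path search and is not reproduced here; the toy sketch below only illustrates the "tight significance levels by a powerful selection procedure" point with a plain backward elimination over many irrelevant candidate regressors, using an assumed significance level of 0.1%.

      import numpy as np
      import statsmodels.api as sm

      rng = np.random.default_rng(2)
      n, p_total = 500, 40                 # many candidates, only two relevant
      X = rng.normal(size=(n, p_total))
      y = 2.0*X[:, 0] - 1.5*X[:, 1] + rng.normal(size=n)

      def backward_eliminate(X, y, alpha=0.001):
          """Drop the least significant regressor until all p-values < alpha."""
          keep = list(range(X.shape[1]))
          while keep:
              res = sm.OLS(y, sm.add_constant(X[:, keep])).fit()
              pvals = res.pvalues[1:]      # skip the intercept
              worst = int(np.argmax(pvals))
              if pvals[worst] < alpha:
                  break
              keep.pop(worst)
          return keep

      print("selected columns:", backward_eliminate(X, y))  # ideally [0, 1]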

  12. Feature Selection for Neural Network Based Stock Prediction

    Science.gov (United States)

    Sugunnasil, Prompong; Somhom, Samerkae

    We propose a new methodology of feature selection for stock movement prediction. The methodology is based upon finding those features which minimize the correlation relation function. We first produce all the combinations of features and evaluate each of them using our evaluation function. We search through the generated set with a hill-climbing approach. A self-organizing-map-based stock prediction model is utilized as the prediction method. We conduct experiments on data sets of the Microsoft Corporation, General Electric Co. and Ford Motor Co. The results show that our feature selection method can improve the efficiency of neural network based stock prediction.
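
    The paper's correlation relation function is not specified in the abstract; the sketch below substitutes a simple stand-in objective (reward correlation with the target, penalize redundancy among chosen features) and shows only the hill-climbing search over feature masks on synthetic data.

      import numpy as np

      rng = np.random.default_rng(3)
      n, p = 300, 10
      X = rng.normal(size=(n, p))
      X[:, 1] = X[:, 0] + 0.1*rng.normal(size=n)   # redundant copy of feature 0
      y = X[:, 0] + X[:, 4] + 0.5*rng.normal(size=n)

      def objective(mask):
          """Stand-in score: mean |corr(feature, y)| minus mean pairwise |corr|."""
          idx = np.flatnonzero(mask)
          if idx.size == 0:
              return -np.inf
          rel = np.mean([abs(np.corrcoef(X[:, j], y)[0, 1]) for j in idx])
          if idx.size == 1:
              return rel
          red = np.mean([abs(np.corrcoef(X[:, a], X[:, b])[0, 1])
                         for i, a in enumerate(idx) for b in idx[i+1:]])
          return rel - red

      mask = rng.integers(0, 2, p).astype(bool)    # random starting subset
      improved = True
      while improved:                              # single-bit-flip hill climbing
          improved = False
          for j in range(p):
              trial = mask.copy()
              trial[j] = ~trial[j]
              if objective(trial) > objective(mask):
                  mask, improved = trial, True
      print("selected features:", np.flatnonzero(mask))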

  13. A Neurodynamical Model for Selective Visual Attention

    Institute of Scientific and Technical Information of China (English)

    QU Jing-Yi; WANG Ru-Bin; ZHANG Yuan; DU Ying

    2011-01-01

    A neurodynamical model for selective visual attention considering orientation preference is proposed. Since orientation preference is one of the most important properties of neurons in the primary visual cortex, it should be fully considered besides external stimulus intensity. By tuning the parameter of orientation preference, the regimes of synchronous dynamics associated with the development of the attention focus are studied. The attention focus is represented by those peripheral neurons that generate spikes synchronously with the central neuron, while the activity of other peripheral neurons is suppressed. Such dynamics correspond to the partial synchronization mode. Simulation results show that the model can sequentially select objects with different orientation preferences and has a reliable shift of attention from one object to another, which is consistent with the experimental finding that neurons with different orientation preferences are laid out in pinwheel patterns.

  14. Post-model selection inference and model averaging

    Directory of Open Access Journals (Sweden)

    Georges Nguefack-Tsague

    2011-07-01

    Full Text Available Although model selection is routinely used in practice nowadays, little is known about its precise effects on any subsequent inference that is carried out. The same goes for the effects induced by the closely related technique of model averaging. This paper is concerned with the use of the same data first to select a model and then to carry out inference, in particular point estimation and point prediction. The properties of the resulting estimator, called a post-model-selection estimator (PMSE), are hard to derive. Using selection criteria such as hypothesis testing, AIC, BIC, HQ and Cp, we illustrate that, in terms of risk function, no single PMSE dominates the others. The same conclusion holds more generally for any penalised likelihood information criterion. We also compare various model averaging schemes and show that no single one dominates the others in terms of risk function. Since PMSEs can be regarded as a special case of model averaging, with 0-1 random weights, we propose a connection between the two theories, in the frequentist approach, by taking account of the selection procedure when performing model averaging. We illustrate the point by simulating a simple linear regression model.
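
    The closing simulation is easy to reproduce in spirit: the hedged sketch below compares the risk of a pretest (post-model-selection) estimator of a regression slope with the always-full and always-restricted estimators, showing that which one wins depends on the unknown true slope. Settings are assumptions, not the paper's.

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(4)
      n, reps, alpha = 50, 5000, 0.05
      x = rng.normal(size=n)

      for beta in [0.0, 0.15, 0.5]:           # true slope near / far from zero
          full, restr, pretest = [], [], []
          for _ in range(reps):
              y = beta*x + rng.normal(size=n)
              slope, _, _, pval, _ = stats.linregress(x, y)
              full.append((slope - beta)**2)
              restr.append(beta**2)           # restricted model fixes slope at 0
              chosen = slope if pval < alpha else 0.0   # PMSE with 0-1 weights
              pretest.append((chosen - beta)**2)
          print(f"beta={beta:4.2f}  MSE full={np.mean(full):.4f}  "
                f"restricted={np.mean(restr):.4f}  pretest={np.mean(pretest):.4f}")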

  15. Model structure selection in convolutive mixtures

    DEFF Research Database (Denmark)

    Dyrholm, Mads; Makeig, Scott; Hansen, Lars Kai

    2006-01-01

    The CICAAR algorithm (convolutive independent component analysis with an auto-regressive inverse model) allows separation of white (i.i.d.) source signals from convolutive mixtures. We introduce a source color model as a simple extension to the CICAAR which allows for a more parsimonious representation in many practical mixtures. The new filter-CICAAR allows Bayesian model selection and can help answer questions like: 'Are we actually dealing with a convolutive mixture?'. We try to answer this question for EEG data.

  17. Utility-based criteria for selecting patients with hepatocellular carcinoma for liver transplantation: A multicenter cohort study using the alpha-fetoprotein model as a survival predictor.

    Science.gov (United States)

    Vitale, Alessandro; Farinati, Fabio; Burra, Patrizia; Trevisani, Franco; Giannini, Edoardo G; Ciccarese, Francesca; Piscaglia, Fabio; Rapaccini, Gian Lodovico; Di Marco, Mariella; Caturelli, Eugenio; Zoli, Marco; Borzio, Franco; Cabibbo, Giuseppe; Felder, Martina; Sacco, Rodolfo; Morisco, Filomena; Missale, Gabriele; Foschi, Francesco Giuseppe; Gasbarrini, Antonio; Svegliati Baroni, Gianluca; Virdone, Roberto; Chiaramonte, Maria; Spolverato, Gaya; Cillo, Umberto

    2015-10-01

    The lifetime utility of liver transplantation (LT) in patients with hepatocellular carcinoma (HCC) is still controversial. The aim of this study was to ascertain when LT is cost-effective for HCC patients, with a view to proposing new transplant selection criteria. The study involved a real cohort of potentially transplantable Italian HCC patients (n = 2419 selected from the Italian Liver Cancer group database) who received nontransplant therapies. A non-LT survival analysis was conducted, the direct costs of therapies were calculated, and a Markov model was used to compute the cost utility of LT over non-LT therapies in Italian and US cost scenarios. Post-LT survival was calculated using the alpha-fetoprotein (AFP) model on the basis of AFP values and radiological size and number of nodules. The primary endpoint was the net health benefit (NHB), defined as LT survival benefit in quality-adjusted life years minus incremental costs (US $)/willingness to pay. The calculated median cost of non-LT therapies per patient was US $53,042 in Italy and US $62,827 in the United States. In Monte Carlo simulations, the NHB of LT was always positive for AFP model values ≤ 3 and always negative for values > 7 in both countries. A multivariate model showed that nontumor variables (patient's age, Child-Turcotte-Pugh [CTP] class, and alternative therapies) had the potential to shift the AFP model threshold of LT cost-ineffectiveness from 3 to 7. LT always proved cost-effective for HCC patients with AFP model values ≤ 3, whereas the cost-ineffectiveness threshold ranged between 3 and 7 when nontumor variables were taken into account.

  18. New insights in portfolio selection modeling

    OpenAIRE

    Zareei, Abalfazl

    2016-01-01

    Recent advances in the field of network theory have opened a new line of development in portfolio selection techniques, one that perceives the financial market as a network with assets as nodes and links accounting for various types of relationships among financial assets. In the first chapter, we model the shock propagation mechanism among assets via network theory and provide an approach to construct well-diversified portfolios that are resilient to shock propagation and c...

  19. Fuzzy MCDM Model for Risk Factor Selection in Construction Projects

    Directory of Open Access Journals (Sweden)

    Pejman Rezakhani

    2012-11-01

    Full Text Available Risk factor selection is an important step in a successful risk management plan. There are many risk factors in a construction project, and through an effective and systematic risk selection process the most critical risks can be distinguished for further attention. In this paper, through a comprehensive literature survey, the most significant risk factors in a construction project are classified in a hierarchical structure. For effective risk factor selection, a modified rational multi-criteria decision-making (MCDM) model is developed. This model is a consensus-rule-based model and has the optimization property of rational models. By applying fuzzy logic to this model, uncertainty factors in group decision making, such as experts' influence weights and their preferences and judgments for risk selection criteria, are assessed. Also, an intelligent checking process to verify the logical consistency of experts' preferences is implemented during the decision-making process. The solution inferred from this method achieves the highest degree of acceptance among group members, and the consistency of individual preferences is checked by inference rules. This is an efficient and effective approach to prioritizing and selecting risks based on decisions made by a group of experts in construction projects. The applicability of the presented method is assessed through a case study.

  20. Multicriteria framework for selecting a process modelling language

    Science.gov (United States)

    Scanavachi Moreira Campos, Ana Carolina; Teixeira de Almeida, Adiel

    2016-01-01

    The choice of process modelling language can affect business process management (BPM), since each modelling language shows different features of a given process and may limit the ways in which a process can be described and analysed. However, choosing the appropriate modelling language for process modelling has become a difficult task because of the availability of a large number of modelling languages and also due to the lack of guidelines for evaluating and comparing languages so as to assist in selecting the most appropriate one. This paper proposes a framework for selecting a modelling language in accordance with the purposes of modelling. This framework is based on the semiotic quality framework (SEQUAL) for evaluating process modelling languages and a multicriteria decision aid (MCDA) approach in order to select the most appropriate language for BPM. This study does not attempt to set out new forms of assessment and evaluation criteria, but does attempt to demonstrate how two existing approaches can be combined so as to solve the problem of selecting a modelling language. The framework is described in this paper and then demonstrated by means of an example. Finally, the advantages and disadvantages of using SEQUAL and MCDA in an integrated manner are discussed.

  1. Robust inference in sample selection models

    KAUST Repository

    Zhelonkin, Mikhail

    2015-11-20

    The problem of non-random sample selectivity often occurs in practice in many fields. The classical estimators introduced by Heckman are the backbone of the standard statistical analysis of these models. However, these estimators are very sensitive to small deviations from the distributional assumptions which are often not satisfied in practice. We develop a general framework to study the robustness properties of estimators and tests in sample selection models. We derive the influence function and the change-of-variance function of Heckman's two-stage estimator, and we demonstrate the non-robustness of this estimator and its estimated variance to small deviations from the model assumed. We propose a procedure for robustifying the estimator, prove its asymptotic normality and give its asymptotic variance. Both cases with and without an exclusion restriction are covered. This allows us to construct a simple robust alternative to the sample selection bias test. We illustrate the use of our new methodology in an analysis of ambulatory expenditures and we compare the performance of the classical and robust methods in a Monte Carlo simulation study.
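
    For readers unfamiliar with the estimator being robustified, a minimal sketch of the classical (non-robust) Heckman two-step procedure on simulated data follows; the robust version developed in the paper is more involved and is not attempted here. All data-generating values are assumptions.

      import numpy as np
      from scipy.stats import norm
      import statsmodels.api as sm

      rng = np.random.default_rng(5)
      n = 2000
      z = rng.normal(size=n)        # exclusion restriction: drives selection only
      x = rng.normal(size=n)
      u, e = rng.multivariate_normal([0, 0], [[1, .6], [.6, 1]], size=n).T
      selected = (0.5 + z + u) > 0  # selection equation
      y = 1.0 + 2.0*x + e           # outcome equation, observed only if selected

      # Step 1: probit of selection, then inverse Mills ratio for selected rows.
      W = sm.add_constant(np.column_stack([x, z]))
      probit = sm.Probit(selected.astype(float), W).fit(disp=0)
      xb = (W @ probit.params)[selected]
      imr = norm.pdf(xb) / norm.cdf(xb)

      # Step 2: OLS of the observed outcome on x plus the inverse Mills ratio.
      X2 = sm.add_constant(np.column_stack([x[selected], imr]))
      ols = sm.OLS(y[selected], X2).fit()
      print("slope estimate (true value 2.0):", round(ols.params[1], 3))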

  2. Bayesian model selection in Gaussian regression

    CERN Document Server

    Abramovich, Felix

    2009-01-01

    We consider a Bayesian approach to model selection in Gaussian linear regression, where the number of predictors might be much larger than the number of observations. From a frequentist view, the proposed procedure results in the penalized least squares estimation with a complexity penalty associated with a prior on the model size. We investigate the optimality properties of the resulting estimator. We establish the oracle inequality and specify conditions on the prior that imply its asymptotic minimaxity within a wide range of sparse and dense settings for "nearly-orthogonal" and "multicollinear" designs.

  3. Model Selection in Data Analysis Competitions

    DEFF Research Database (Denmark)

    Wind, David Kofoed; Winther, Ole

    2014-01-01

    The use of data analysis competitions for selecting the most appropriate model for a problem is a recent innovation in the field of predictive machine learning. Two of the most well-known examples of this trend were the Netflix Competition and, more recently, the competitions hosted on the online platform Kaggle. In this paper, we will state and try to verify a set of qualitative hypotheses about predictive modelling, both in general and in the scope of data analysis competitions. To verify our hypotheses we will look at previous competitions and their outcomes, and use qualitative interviews with top...

  4. Transmit antenna selection based on shadowing side information

    KAUST Repository

    Yilmaz, Ferkan

    2011-05-01

    In this paper, we propose a new transmit antenna selection scheme based on shadowing side information. In the proposed scheme, the single transmit antenna with the highest shadowing coefficient is selected. With the proposed technique, usage of the feedback channel and the channel estimation complexity at the receiver can be reduced. We consider an independent but not identically distributed Generalized-K composite fading model, which is a general composite fading and shadowing channel model for wireless environments. Exact closed-form outage probability, moment generating function and symbol error probability expressions are derived. In addition, the theoretical performance results are validated by Monte Carlo simulations. © 2011 IEEE.
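
    The closed-form expressions are the paper's contribution and are not rederived here; the hedged Monte Carlo sketch below only illustrates the selection rule, using the fact that a Generalized-K power gain can be simulated as the product of two Gamma variates (multipath times shadowing). All fading parameters are assumptions.

      import numpy as np

      rng = np.random.default_rng(6)
      L, trials = 4, 200_000          # transmit antennas, Monte Carlo trials
      m, k = 2.0, 1.5                 # assumed multipath / shadowing severities
      snr_th, avg_snr = 1.0, 5.0      # assumed outage threshold and mean SNR

      # Generalized-K power gain = normalized Gamma (shadowing) * Gamma (multipath).
      shadow = rng.gamma(k, 1.0/k, size=(trials, L))
      multi = rng.gamma(m, 1.0/m, size=(trials, L))

      best = np.argmax(shadow, axis=1)   # antenna with largest shadowing coefficient
      rows = np.arange(trials)
      snr_sel = avg_snr * shadow[rows, best] * multi[rows, best]
      snr_fixed = avg_snr * shadow[:, 0] * multi[:, 0]   # no-selection baseline

      print("outage with shadowing-based selection:", np.mean(snr_sel < snr_th))
      print("outage with a single fixed antenna:   ", np.mean(snr_fixed < snr_th))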

  5. A method for selecting training samples based on camera response

    Science.gov (United States)

    Zhang, Leihong; Li, Bei; Pan, Zilan; Liang, Dong; Kang, Yi; Zhang, Dawei; Ma, Xiuhua

    2016-09-01

    In the process of spectral reflectance reconstruction, sample selection plays an important role in the accuracy of the constructed model and in reconstruction effects. In this paper, a method for training sample selection based on camera response is proposed. It has been proved that the camera response value has a close correlation with the spectral reflectance. Consequently, in this paper we adopt the technique of drawing a sphere in camera response value space to select the training samples which have a higher correlation with the test samples. In addition, the Wiener estimation method is used to reconstruct the spectral reflectance. Finally, we find that the method of sample selection based on camera response value has the smallest color difference and root mean square error after reconstruction compared to the method using the full set of Munsell color charts, the Mohammadi training sample selection method, and the stratified sampling method. Moreover, the goodness of fit coefficient of this method is also the highest among the four sample selection methods. Taking all the factors mentioned above into consideration, the method of training sample selection based on camera response value enhances the reconstruction accuracy from both the colorimetric and spectral perspectives.
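
    On synthetic data, the sphere-selection idea looks roughly as follows; this is a hedged sketch, with random "camera sensitivities" standing in for a real characterized camera and a plain least-squares mapping standing in for the full Wiener estimator (which also models noise covariance).

      import numpy as np

      rng = np.random.default_rng(7)
      n_train, n_bands = 400, 31
      R_train = rng.uniform(0, 1, size=(n_train, n_bands))   # training reflectances
      sens = rng.uniform(0, 1, size=(n_bands, 3))            # assumed RGB sensitivities
      C_train = R_train @ sens + 0.01*rng.normal(size=(n_train, 3))  # camera responses

      r_test = rng.uniform(0, 1, n_bands)
      c_test = r_test @ sens

      # Keep only training samples whose responses fall inside a sphere around
      # the test response (radius chosen here as the 10% distance quantile).
      dist = np.linalg.norm(C_train - c_test, axis=1)
      inside = dist < np.quantile(dist, 0.1)

      def reconstruct(R, C, c):
          """Least-squares response-to-reflectance mapping (noise term omitted)."""
          W = R.T @ C @ np.linalg.pinv(C.T @ C)
          return W @ c

      for name, m in [("sphere-selected", inside), ("all samples", np.ones(n_train, bool))]:
          rec = reconstruct(R_train[m], C_train[m], c_test)
          print(f"{name}: RMSE = {np.sqrt(np.mean((rec - r_test)**2)):.4f}")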

  6. Testing exclusion restrictions and additive separability in sample selection models

    DEFF Research Database (Denmark)

    Huber, Martin; Mellace, Giovanni

    2014-01-01

    Standard sample selection models with non-randomly censored outcomes assume (i) an exclusion restriction (i.e., a variable affecting selection, but not the outcome) and (ii) additive separability of the errors in the selection process. This paper proposes tests for the joint satisfaction of these assumptions by applying the approach of Huber and Mellace (Testing instrument validity for LATE identification based on inequality moment constraints, 2011) (for testing instrument validity under treatment endogeneity) to the sample selection framework. We show that the exclusion restriction and additive separability imply two testable inequality constraints that come from both point identifying and bounding the outcome distribution of the subpopulation that is always selected/observed. We apply the tests to two variables for which the exclusion restriction is frequently invoked in female wage regressions: non...

  7. Simulation Study of Graphical Model Selection with Missing Data Based on E-MS Algorithm

    Institute of Scientific and Technical Information of China (English)

    孙聚波; 徐平峰

    2015-01-01

    Graphical models are a powerful tool for dealing with high-dimensional data, and graphical model selection is an important aspect of statistical inference. In this paper, the E-MS algorithm is used to select an appropriate graphical model in the presence of missing data. The E-MS algorithm is based on the idea of EM iteration and unifies the parameters of the model and the model selection parameters into a new set of parameters, so that model selection becomes part of the E-M iterative process. Finally, a simulation study for the cases of 3, 4 and 5 variables is given.

  8. Inflation model selection meets dark radiation

    Science.gov (United States)

    Tram, Thomas; Vallance, Robert; Vennin, Vincent

    2017-01-01

    We investigate how inflation model selection is affected by the presence of additional free-streaming relativistic degrees of freedom, i.e. dark radiation. We perform a full Bayesian analysis of both inflation parameters and cosmological parameters taking reheating into account self-consistently. We compute the Bayesian evidence for a few representative inflation scenarios in both the standard ΛCDM model and an extension including dark radiation parametrised by its effective number of relativistic species Neff. Using a minimal dataset (Planck low-l polarisation, temperature power spectrum and lensing reconstruction), we find that the observational status of most inflationary models is unchanged. The exceptions are potentials such as power-law inflation that predict large values for the scalar spectral index that can only be realised when Neff is allowed to vary. Adding baryon acoustic oscillations data and the B-mode data from BICEP2/Keck makes power-law inflation disfavoured, while adding local measurements of the Hubble constant H0 makes power-law inflation slightly favoured compared to the best single-field plateau potentials. This illustrates how the dark radiation solution to the H0 tension would have deep consequences for inflation model selection.

  9. Stimulator Selection in SSVEP-Based Spatial Selective Attention Study.

    Science.gov (United States)

    Xie, Songyun; Liu, Chang; Obermayer, Klaus; Zhu, Fangshi; Wang, Linan; Xie, Xinzhou; Wang, Wei

    2016-01-01

    Steady-State Visual Evoked Potentials (SSVEPs) are widely used in spatial selective attention. In this process, two kinds of visual stimulators, Light Emitting Diode (LED) and Liquid Crystal Display (LCD), are commonly used to evoke SSVEPs. In this paper, the differences in SSVEPs caused by these two stimulators in the study of spatial selective attention were investigated. Results indicated that LED could stimulate a strong SSVEP component on the occipital lobe, and the frequency of the evoked SSVEP had high precision and a wide range as compared to LCD. Moreover, a significant difference between noticed and unnoticed frequencies in the spectrum was observed, whereas in LCD mode this difference was limited and the selectable frequencies were also limited. Our experimental findings suggest that the average classification accuracies among all the test subjects in our experiments were 0.938 and 0.853 in LED and LCD mode, respectively. These results indicate that the LED stimulator is appropriate for evoking SSVEPs for the study of spatial selective attention.

  10. Efficiently adapting graphical models for selectivity estimation

    DEFF Research Database (Denmark)

    Tzoumas, Kostas; Deshpande, Amol; Jensen, Christian S.

    2013-01-01

    Query optimizers rely on statistical models that succinctly describe the underlying data. Models are used to derive cardinality estimates for intermediate relations, which in turn guide the optimizer to choose the best query execution plan. The quality of the resulting plan is highly dependent on the accuracy of these estimates, which are traditionally computed by taking the product of the selectivities of the constituent predicates. However, this independence assumption is more often than not wrong, and is considered to be the most common cause of sub-optimal query execution plans chosen by modern query optimizers. We take a step towards a principled and practical approach to performing cardinality estimation without making the independence assumption. By carefully using concepts from the field of graphical models, we are able to factor the joint probability distribution over all the attributes in the database into small, usually two-dimensional distributions, without a significant loss of estimation accuracy.
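
    A toy example makes the independence problem concrete: for two correlated attributes, the product of single-predicate selectivities badly underestimates a conjunctive predicate, while one small two-dimensional distribution gets it right. This is a hedged illustration of the motivation, not the paper's algorithm.

      import numpy as np

      rng = np.random.default_rng(8)
      n = 100_000
      city = rng.integers(0, 4, n)
      climate = (city + (rng.random(n) < 0.1)) % 4   # climate depends on city

      pred = (city == 2) & (climate == 2)            # conjunctive predicate
      true_sel = pred.mean()

      indep_est = (city == 2).mean() * (climate == 2).mean()  # independence assumption
      joint, _, _ = np.histogram2d(city, climate, bins=[4, 4])
      joint_est = joint[2, 2] / n                    # small 2-D distribution

      print(f"true={true_sel:.4f}  independence={indep_est:.4f}  2-D model={joint_est:.4f}")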

  11. The Markowitz model for portfolio selection

    Directory of Open Access Journals (Sweden)

    MARIAN ZUBIA ZUBIAURRE

    2002-06-01

    Full Text Available Since its first appearance, the Markowitz model for portfolio selection has been a basic theoretical reference, opening several new lines of development. In practice, however, it has hardly been used among portfolio managers and investment analysts in spite of its success in the theoretical field. With our paper we would like to show how the Markowitz model may be of great help in real stock markets. Through an empirical study we want to verify the capability of Markowitz's model to produce portfolios with higher profitability and lower risk than the portfolios represented by the IBEX-35 and IGBM indexes. Furthermore, we want to test the suggested efficiency of these indexes as representatives of the theoretical market portfolio.
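
    The machinery behind such an empirical test is compact enough to sketch: with a mean vector and covariance matrix estimated from returns, the minimum-variance and frontier portfolios have closed forms. The sketch below uses synthetic returns; real index constituents would replace them in a study like this one.

      import numpy as np

      rng = np.random.default_rng(9)
      returns = rng.normal(0.0005, 0.01, size=(1000, 5))  # synthetic daily returns
      mu = returns.mean(axis=0)
      cov = np.cov(returns, rowvar=False)

      ones = np.ones(len(mu))
      inv = np.linalg.inv(cov)

      # Global minimum-variance portfolio: w = C^-1 1 / (1' C^-1 1).
      w_gmv = inv @ ones / (ones @ inv @ ones)

      # Frontier portfolio for a target return (Lagrange solution in closed form).
      A, B, C = ones @ inv @ ones, ones @ inv @ mu, mu @ inv @ mu
      target = 1.2 * mu.mean()
      lam = (C - B*target) / (A*C - B*B)
      gam = (A*target - B) / (A*C - B*B)
      w_frontier = inv @ (lam*ones + gam*mu)

      for name, w in [("min-variance", w_gmv), ("frontier", w_frontier)]:
          print(f"{name}: return={w @ mu:.5f}  volatility={np.sqrt(w @ cov @ w):.5f}")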

  12. Model selection for Poisson processes with covariates

    CERN Document Server

    Sart, Mathieu

    2011-01-01

    We observe $n$ inhomogeneous Poisson processes with covariates and aim at estimating their intensities. To handle this problem, we assume that the intensity of each Poisson process is of the form $s(\cdot, x)$ where $x$ is the covariate and where $s$ is an unknown function. We propose a model selection approach where the models are used to approximate the multivariate function $s$. We show that our estimator satisfies an oracle-type inequality under very weak assumptions on both the intensities and the models. By using a Hellinger-type loss, we establish non-asymptotic risk bounds and specify them under various kinds of assumptions on the target function $s$, such as being smooth or composite. Besides, we show that our estimation procedure is robust with respect to these assumptions.

  13. Entropic Priors and Bayesian Model Selection

    CERN Document Server

    Brewer, Brendon J

    2009-01-01

    We demonstrate that the principle of maximum relative entropy (ME), used judiciously, can ease the specification of priors in model selection problems. The resulting effect is that models that make sharp predictions are disfavoured, weakening the usual Bayesian "Occam's Razor". This is illustrated with a simple example involving what Jaynes called a "sure thing" hypothesis. Jaynes' resolution of the situation involved introducing a large number of alternative "sure thing" hypotheses that were possible before we observed the data. However, in more complex situations, it may not be possible to explicitly enumerate large numbers of alternatives. The entropic priors formalism produces the desired result without modifying the hypothesis space or requiring explicit enumeration of alternatives; all that is required is a good model for the prior predictive distribution for the data. This idea is illustrated with a simple rigged-lottery example, and we outline how this idea may help to resolve a recent debate amongst ...

  14. Clonal Selection Algorithm Based Iterative Learning Control with Random Disturbance

    Directory of Open Access Journals (Sweden)

    Yuanyuan Ju

    2013-01-01

    Full Text Available The clonal selection algorithm is improved and proposed as a method to solve optimization problems in iterative learning control, and a clonal selection algorithm based optimal iterative learning control algorithm with random disturbance is proposed. In the algorithm, the size of the search space is decreased and the convergence speed of the algorithm is increased. In addition, a model-modifying device is used in the algorithm to cope with uncertainty in the plant model. Simulations show that the convergence speed is satisfactory regardless of whether or not the plant model is precise, including for nonlinear plants. The simulation tests verify that the controlled system with random disturbance can reach stability using the improved iterative learning control law but not the traditional control law.
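
    The ILC plant itself is out of scope for a snippet, so the hedged sketch below shows only the clonal selection engine (CLONALG-style cloning, hypermutation and reselection) minimizing a generic cost that stands in for the tracking error; all sizes and rates are assumptions.

      import numpy as np

      rng = np.random.default_rng(10)

      def cost(u):
          """Stand-in objective; in ILC this would be the plant's tracking error."""
          return np.sum((u - np.sin(np.linspace(0, np.pi, u.size)))**2)

      pop_size, dim = 20, 50
      pop = rng.uniform(-1, 2, size=(pop_size, dim))

      for _ in range(200):
          pop = pop[np.argsort([cost(p) for p in pop])]   # best individuals first
          clones = []
          for rank, parent in enumerate(pop[:10]):        # clone the better half
              n_clones = 10 - rank                        # more clones for better members
              scale = 0.02 * (rank + 1)                   # heavier mutation for worse ones
              clones.append(parent + scale*rng.normal(size=(n_clones, dim)))
          pool = np.vstack(clones + [pop])                # keep the originals too
          pop = pool[np.argsort([cost(p) for p in pool])][:pop_size]

      print("best cost after clonal selection:", round(cost(pop[0]), 5))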

  15. Appropriate model selection methods for nonstationary generalized extreme value models

    Science.gov (United States)

    Kim, Hanbeen; Kim, Sooyoung; Shin, Hongjoon; Heo, Jun-Haeng

    2017-04-01

    Considerable evidence that hydrologic data series are nonstationary in nature has been found to date. This has motivated many studies in the area of nonstationary frequency analysis. Nonstationary probability distribution models involve parameters that vary over time; therefore, it is not a straightforward process to apply conventional goodness-of-fit tests to the selection of an appropriate nonstationary probability distribution model. Tests that are generally recommended for such a selection include the Akaike information criterion (AIC), the corrected Akaike information criterion (AICc), the Bayesian information criterion (BIC), and the likelihood ratio test (LRT). In this study, Monte Carlo simulation was performed to compare the performance of these four tests with regard to nonstationary as well as stationary generalized extreme value (GEV) distributions. Proper model selection ratios and sample sizes were taken into account to evaluate the performance of all four tests. The BIC demonstrated the best performance with regard to stationary GEV models. In the case of nonstationary GEV models, the AIC proved to be better than the other three methods when relatively small sample sizes were considered; with larger sample sizes, the AIC, BIC, and LRT presented the best performances for GEV models with nonstationary location and/or scale parameters, respectively. The simulation results were then evaluated by applying all four tests to annual maximum rainfall data from selected sites, as observed by the Korea Meteorological Administration.
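
    A hedged sketch of the comparison at the heart of the study: fit a stationary GEV and a GEV with a linear trend in the location parameter by maximum likelihood, then compare AIC, BIC and the LRT. Note that scipy's genextreme uses the shape convention c = -xi; the synthetic data and starting values are assumptions.

      import numpy as np
      from scipy import stats, optimize

      t = np.arange(60, dtype=float)
      # Synthetic annual maxima with a linear trend in the location parameter.
      x = stats.genextreme.rvs(c=-0.1, loc=30 + 0.08*t, scale=5, random_state=12)

      def nll_stat(theta):
          c, loc, logs = theta
          return -np.sum(stats.genextreme.logpdf(x, c, loc=loc, scale=np.exp(logs)))

      def nll_trend(theta):
          c, b0, b1, logs = theta
          return -np.sum(stats.genextreme.logpdf(x, c, loc=b0 + b1*t, scale=np.exp(logs)))

      r0 = optimize.minimize(nll_stat, [-0.1, x.mean(), np.log(x.std())],
                             method="Nelder-Mead")
      r1 = optimize.minimize(nll_trend, [-0.1, x.mean(), 0.0, np.log(x.std())],
                             method="Nelder-Mead")

      for name, r, k in [("stationary", r0, 3), ("trend in location", r1, 4)]:
          print(f"{name}: AIC={2*r.fun + 2*k:.1f}  BIC={2*r.fun + k*np.log(x.size):.1f}")

      lrt = 2*(r0.fun - r1.fun)   # ~ chi-square with 1 df under the stationary null
      print("LRT p-value:", 1 - stats.chi2.cdf(lrt, df=1))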

  16. Models of microbiome evolution incorporating host and microbial selection.

    Science.gov (United States)

    Zeng, Qinglong; Wu, Steven; Sukumaran, Jeet; Rodrigo, Allen

    2017-09-25

    Numerous empirical studies suggest that hosts and microbes exert reciprocal selective effects on their ecological partners. Nonetheless, we still lack an explicit framework to model the dynamics of both hosts and microbes under selection. In a previous study, we developed an agent-based forward-time computational framework to simulate the neutral evolution of host-associated microbial communities in a constant-sized, unstructured population of hosts. These neutral models allowed offspring to sample microbes randomly from parents and/or from the environment. Additionally, the environmental pool of available microbes was constituted by fixed and persistent microbial OTUs and by contributions from host individuals in the preceding generation. In this paper, we extend our neutral models to allow selection to operate on both hosts and microbes. We do this by constructing a phenome for each microbial OTU consisting of a sample of traits that influence host and microbial fitnesses independently. Microbial traits can influence the fitness of hosts ("host selection") and the fitness of microbes ("trait-mediated microbial selection"). Additionally, the fitness effects of traits on microbes can be modified by their hosts ("host-mediated microbial selection"). We simulate the effects of these three types of selection, individually or in combination, on microbiome diversities and the fitnesses of hosts and microbes over several thousand generations of hosts. We show that microbiome diversity is strongly influenced by selection acting on microbes. Selection acting on hosts only influences microbiome diversity when there is near-complete direct or indirect parental contribution to the microbiomes of offspring. Unsurprisingly, microbial fitness increases under microbial selection. Interestingly, when host selection operates, host fitness only increases under two conditions: (1) when there is a strong parental contribution to microbial communities or (2) in the absence of a strong

  17. Comparison of climate envelope models developed using expert-selected variables versus statistical selection

    Science.gov (United States)

    Brandt, Laura A.; Benscoter, Allison; Harvey, Rebecca G.; Speroterra, Carolina; Bucklin, David N.; Romanach, Stephanie; Watling, James I.; Mazzotti, Frank J.

    2017-01-01

    Climate envelope models are widely used to describe the potential future distribution of species under different climate change scenarios. It is broadly recognized that there are both strengths and limitations to using climate envelope models and that outcomes are sensitive to initial assumptions, inputs, and modeling methods. Selection of predictor variables, a central step in modeling, is one of the areas where different techniques can yield varying results. Selection of climate variables to use as predictors is often done using statistical approaches that develop correlations between occurrences and climate data. These approaches have received criticism in that they rely on the statistical properties of the data rather than directly incorporating biological information about species responses to temperature and precipitation. We evaluated and compared models and prediction maps for 15 threatened or endangered species in Florida based on two variable selection techniques: expert opinion and a statistical method. We compared model performance between these two approaches for contemporary predictions, and the spatial correlation, spatial overlap and area predicted for contemporary and future climate predictions. In general, experts identified more variables as being important than the statistical method, and there was low overlap in the variable sets, yet models from both approaches had high performance metrics (>0.9 for area under the curve (AUC) and >0.7 for true skill statistic (TSS)). Spatial overlap, which compares the spatial configuration between maps constructed using the different variable selection techniques, was only moderate overall (about 60%), with a great deal of variability across species. The difference in spatial overlap was even greater under future climate projections, indicating additional divergence of model outputs from different variable selection techniques. Our work is in agreement with other studies which have found that for broad-scale species distribution modeling, using...

  18. Selecting global climate models for regional climate change studies

    OpenAIRE

    Pierce, David W.; Barnett, Tim P.; Santer, Benjamin D.; Gleckler, Peter J.

    2009-01-01

    Regional or local climate change modeling studies currently require starting with a global climate model, then downscaling to the region of interest. How should global models be chosen for such studies, and what effect do such choices have? This question is addressed in the context of a regional climate detection and attribution (D&A) study of January-February-March (JFM) temperature over the western U.S. Models are often selected for a regional D&A analysis based on the quality of the simula...

  19. QOS Aware Formalized Model for Semantic Web Service Selection

    Directory of Open Access Journals (Sweden)

    Divya Sachan

    2014-10-01

    Full Text Available Selecting the most relevant Web Service according to a client requirement is an onerous task, as innumerable functionally identical Web Services (WS) are listed in the UDDI registry. WS are functionally identical, but their quality and performance vary across service providers. A Web Service selection process involves two major points: recommending the pertinent Web Service and avoiding unjustifiable ones. The deficiency of keyword-based searching is that it does not handle the client request accurately, as a keyword may have ambiguous meanings in different scenarios. UDDI and search engines are all based on keyword search, which lags behind on pertinent Web Service selection. So the search mechanism must incorporate the semantic behavior of Web Services. In order to strengthen this approach, the proposed model incorporates Quality of Service (QoS) based ranking of semantic Web Services.

  20. Ancestral process and diffusion model with selection

    CERN Document Server

    Mano, Shuhei

    2008-01-01

    The ancestral selection graph in population genetics introduced by Krone and Neuhauser (1997) is an analogue of the coalescent genealogy. The number of ancestral particles, backward in time, of a sample of genes is an ancestral process, which is a birth and death process with quadratic death and linear birth rates. In this paper an explicit form of the number of ancestral particles is obtained, using the density of the allele frequency in the corresponding diffusion model obtained by Kimura (1955). It is shown that fixation corresponds to convergence of the ancestral process to its stationary measure. The time to fixation of an allele is studied in terms of the ancestral process.

  1. ModelOMatic: fast and automated model selection between RY, nucleotide, amino acid, and codon substitution models.

    Science.gov (United States)

    Whelan, Simon; Allen, James E; Blackburne, Benjamin P; Talavera, David

    2015-01-01

    Molecular phylogenetics is a powerful tool for inferring both the process and pattern of evolution from genomic sequence data. Statistical approaches, such as maximum likelihood and Bayesian inference, are now established as the preferred methods of inference. The choice of models that a researcher uses for inference is of critical importance, and there are established methods for model selection conditioned on a particular type of data, such as nucleotides, amino acids, or codons. A major limitation of existing model selection approaches is that they can only compare models acting upon a single type of data. Here, we extend model selection to allow comparisons between models describing different types of data by introducing the idea of adapter functions, which project aggregated models onto the originally observed sequence data. These projections are implemented in the program ModelOMatic and used to perform model selection on 3722 families from the PANDIT database, 68 genes from an arthropod phylogenomic data set, and 248 genes from a vertebrate phylogenomic data set. For the PANDIT and arthropod data, we find that amino acid models are selected for the overwhelming majority of alignments, with progressively smaller numbers of alignments selecting codon and nucleotide models, and no families selecting RY-based models. In contrast, nearly all alignments from the vertebrate data set select codon-based models. The sequence divergence, the number of sequences, and the degree of selection acting upon the protein sequences may contribute to explaining this variation in model selection. Our ModelOMatic program is fast, with most families from PANDIT taking fewer than 150 s to complete, and should therefore be easily incorporated into existing phylogenetic pipelines. ModelOMatic is available at https://code.google.com/p/modelomatic/.

  3. A novel manganese-dependent ATM-p53 signaling pathway is selectively impaired in patient-based neuroprogenitor and murine striatal models of Huntington's disease.

    Science.gov (United States)

    Tidball, Andrew M; Bryan, Miles R; Uhouse, Michael A; Kumar, Kevin K; Aboud, Asad A; Feist, Jack E; Ess, Kevin C; Neely, M Diana; Aschner, Michael; Bowman, Aaron B

    2015-04-01

    The essential micronutrient manganese is enriched in brain, especially in the basal ganglia. We sought to identify neuronal signaling pathways responsive to neurologically relevant manganese levels, as previous data suggested that alterations in striatal manganese handling occur in Huntington's disease (HD) models. We found that p53 phosphorylation at serine 15 is the most responsive cell signaling event to manganese exposure (of 18 tested) in human neuroprogenitors and a mouse striatal cell line. Manganese-dependent activation of p53 was severely diminished in HD cells. Inhibitors of ataxia telangiectasia mutated (ATM) kinase decreased manganese-dependent phosphorylation of p53. Likewise, analysis of ATM autophosphorylation and additional ATM kinase targets, H2AX and CHK2, support a role for ATM in the activation of p53 by manganese and that a defect in this process occurs in HD. Furthermore, the deficit in Mn-dependent activation of ATM kinase in HD neuroprogenitors was highly selective, as DNA damage and oxidative injury, canonical activators of ATM, did not show similar deficits. We assessed cellular manganese handling to test for correlations with the ATM-p53 pathway, and we observed reduced Mn accumulation in HD human neuroprogenitors and HD mouse striatal cells at manganese exposures associated with altered p53 activation. To determine if this phenotype contributes to the deficit in manganese-dependent ATM activation, we used pharmacological manipulation to equalize manganese levels between HD and control mouse striatal cells and rescued the ATM-p53 signaling deficit. Collectively, our data demonstrate selective alterations in manganese biology in cellular models of HD manifest in ATM-p53 signaling.

  4. Genetic signatures of natural selection in a model invasive ascidian

    Science.gov (United States)

    Lin, Yaping; Chen, Yiyong; Yi, Changho; Fong, Jonathan J.; Kim, Won; Rius, Marc; Zhan, Aibin

    2017-01-01

    Invasive species represent promising models to study species' responses to rapidly changing environments. Although local adaptation frequently occurs during contemporary range expansion, the associated genetic signatures at both population and genomic levels remain largely unknown. Here, we use genome-wide gene-associated microsatellites to investigate genetic signatures of natural selection in a model invasive ascidian, Ciona robusta. Population genetic analyses of 150 individuals sampled in Korea, New Zealand, South Africa and Spain showed significant genetic differentiation among populations. Based on outlier tests, we found a high incidence of signatures of directional selection at 19 loci. Hitchhiking mapping analyses identified 12 directional selective sweep regions, and all selective sweep windows on chromosomes were narrow (~8.9 kb). Further analyses identified 132 candidate genes under selection. When we compared our genetic data with six crucial environmental variables, 16 putatively selected loci showed significant correlation with these environmental variables. This suggests that the local environmental conditions have left significant signatures of selection at both population and genomic levels. Finally, we identified "plastic" genomic regions and genes that are promising regions to investigate evolutionary responses to rapid environmental change in C. robusta.

  5. Assessment of future changes in the maximum temperature at selected stations in Iran based on HADCM3 and CGCM3 models

    Science.gov (United States)

    Abbasnia, Mohsen; Tavousi, Taghi; Khosravi, Mahmood

    2016-08-01

    Identification and assessment of climate change in the coming decades, with the aim of appropriate environmental planning to adapt to and mitigate its effects, are quite necessary. In this study, changes in the maximum temperature of Iran were comparatively examined for two future periods (2041-2070 and 2071-2099), based on two general circulation model outputs (CGCM3 and HADCM3) and under the existing emission scenarios (A2, A1B, B1 and B2). For this purpose, after examining the ability of the SDSM statistical downscaling method to simulate the observational period (1981-2010), the daily maximum temperature of future decades was downscaled, with uncertainty taken into account, at seven synoptic stations serving as representatives of the climate of Iran. In the uncertainty analysis of model-scenario combinations, it was found that the CGCM3 model under scenario B1 had the best performance in simulating future maximum temperature among all of the examined scenario-model combinations. The findings also showed that the maximum temperature at the study stations will increase by between 1°C and 2°C by the middle and the end of the 21st century. These maximum temperature changes are also more severe in the HADCM3 model than in the CGCM3 model.

  6. Stimulator Selection in SSVEP-Based Spatial Selective Attention Study

    Directory of Open Access Journals (Sweden)

    Songyun Xie

    2016-01-01

    Full Text Available Steady-State Visual Evoked Potentials (SSVEPs) are widely used in spatial selective attention. In this process, two kinds of visual stimulators, the Light Emitting Diode (LED) and the Liquid Crystal Display (LCD), are commonly used to evoke SSVEPs. In this paper, the differences in SSVEPs elicited by these two stimulators in the study of spatial selective attention were investigated. Results indicated that the LED could evoke a strong SSVEP component over the occipital lobe, and the frequency of the evoked SSVEP had high precision and a wide range compared to the LCD. Moreover, a significant difference between noticed and unnoticed frequencies was observed in the spectrum, whereas in LCD mode this difference was limited and the selectable frequencies were also limited. Our experimental findings showed that average classification accuracies among all test subjects in our experiments were 0.938 and 0.853 in LED and LCD mode, respectively. These results indicate that the LED stimulator is appropriate for evoking SSVEPs for the study of spatial selective attention.

  7. Model-based geostatistics

    CERN Document Server

    Diggle, Peter J

    2007-01-01

    Model-based geostatistics refers to the application of general statistical principles of modeling and inference to geostatistical problems. This volume provides a treatment of model-based geostatistics and emphasizes statistical methods and applications. It also features analyses of datasets from a range of scientific contexts.

  8. Linear regression model selection using p-values when the model dimension grows

    CERN Document Server

    Pokarowski, Piotr; Teisseyre, Paweł

    2012-01-01

    We consider a new criterion-based approach to model selection in linear regression. Properties of selection criteria based on p-values of a likelihood ratio statistic are studied for families of linear regression models. We prove that such procedures are consistent, i.e., the minimal true model is chosen with probability tending to 1, even when the number of models under consideration slowly increases with the sample size. A simulation study indicates that the introduced methods perform promisingly compared with the Akaike and Bayesian Information Criteria.
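
    To make the idea concrete, the following minimal Python sketch (not the authors' code; the synthetic data and the 0.05 threshold are illustrative assumptions) selects among nested linear models by the p-value of a likelihood-ratio test against the full model, accepting the smallest model that is not rejected:

        import numpy as np
        from scipy import stats

        def rss(X, y):
            # residual sum of squares of the least-squares fit
            beta, *_ = np.linalg.lstsq(X, y, rcond=None)
            r = y - X @ beta
            return r @ r

        rng = np.random.default_rng(0)
        n, p = 200, 6
        X = rng.normal(size=(n, p))
        y = 2.0 * X[:, 0] - 1.5 * X[:, 1] + rng.normal(size=n)  # true model: 2 regressors

        rss_full = rss(X, y)
        chosen = None
        for k in range(1, p + 1):                  # nested candidates: first k regressors
            subset = list(range(k))
            lr = n * np.log(rss(X[:, subset], y) / rss_full)   # likelihood-ratio statistic
            df = p - k
            pval = stats.chi2.sf(lr, df) if df > 0 else 1.0
            if pval > 0.05:                        # smallest model not rejected wins
                chosen = subset
                break
        print("selected regressors:", chosen)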

  9. How Many Separable Sources? Model Selection In Independent Components Analysis

    DEFF Research Database (Denmark)

    Woods, Roger P.; Hansen, Lars Kai; Strother, Stephen

    2015-01-01

    Unlike mixtures consisting solely of non-Gaussian sources, mixtures including two or more Gaussian components cannot be separated using standard independent components analysis methods that are based on higher order statistics and independent observations. The mixed Independent Components Analysis/Principal Components Analysis (mixed ICA/PCA) model described here accommodates one or more Gaussian components in the independent components analysis model and uses principal components analysis to characterize contributions from this inseparable Gaussian subspace. Information theory can then be used to select from among potential model categories with differing numbers of Gaussian components. Mixed ICA/PCA is of potential interest in any field of scientific investigation where the authenticity of blindly separated non-Gaussian sources might otherwise be questionable. Failure of the Akaike Information Criterion in model selection also has relevance in traditional independent components analysis where all sources are assumed non-Gaussian.

  10. Supply Chain Partner Selection Model Based on Reputation

    Institute of Scientific and Technical Information of China (English)

    卢志刚; 林卡

    2015-01-01

    In order to find the best composition of supply chain partners efficiently and accurately, a multi-objective optimization model for supply chain partner selection is proposed. Maximizing the stability of each enterprise's reputation, the fit between each enterprise and the alliance leader, and the overall reputation of the composition of supply chain partners are used as the objective functions; an elitist non-dominated sorting genetic algorithm (GA) is employed to solve the problem, reducing the probability that excellent enterprises are weeded out in the iterative process, so as to find the Pareto-optimal solutions. Experimental results show that, compared with an independent decision model for supply chain partner selection and a TOPSIS model, the proposed model not only helps to select more stable partner combinations, but also maximizes the comprehensive utility of the supply chain as a whole.

  11. A Genetic Algorithm-Based Feature Selection

    Directory of Open Access Journals (Sweden)

    Babatunde Oluleye

    2014-07-01

    Full Text Available This article details the exploration and application of a Genetic Algorithm (GA) for feature selection. In particular, a binary GA was used for dimensionality reduction to enhance the performance of the concerned classifiers. In this work, one hundred (100) features were extracted from the set of images found in the Flavia dataset (a publicly available dataset). The extracted features are Zernike Moments (ZM), Fourier Descriptors (FD), Legendre Moments (LM), Hu 7 Moments (Hu7M), Texture Properties (TP) and Geometrical Properties (GP). The main contributions of this article are (1) detailed documentation of the GA Toolbox in MATLAB and (2) the development of a GA-based feature selector using a novel fitness function (kNN-based classification error) which enabled the GA to obtain a combinatorial set of features giving rise to optimal accuracy. The results obtained were compared with various feature selectors from the WEKA software, and the GA-based selector obtained better results in terms of classification accuracy.
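
    The following short Python sketch illustrates the core loop of such a binary GA with a kNN-based fitness; as assumptions, it uses scikit-learn and the wine dataset rather than the MATLAB GA Toolbox and the Flavia dataset of the article, and the population size, mutation rate and generation count are arbitrary choices:

        import numpy as np
        from sklearn.datasets import load_wine
        from sklearn.model_selection import cross_val_score
        from sklearn.neighbors import KNeighborsClassifier

        X, y = load_wine(return_X_y=True)
        rng = np.random.default_rng(1)
        n_pop, n_gen, n_feat = 30, 20, X.shape[1]

        def fitness(mask):
            # kNN cross-validated accuracy of the feature subset; empty subset scores 0
            if not mask.any():
                return 0.0
            knn = KNeighborsClassifier(n_neighbors=5)
            return cross_val_score(knn, X[:, mask], y, cv=5).mean()

        pop = rng.integers(0, 2, size=(n_pop, n_feat)).astype(bool)
        for _ in range(n_gen):
            scores = np.array([fitness(ind) for ind in pop])
            parents = pop[np.argsort(scores)[-(n_pop // 2):]]   # keep the better half
            children = []
            while len(children) < n_pop - len(parents):
                a, b = parents[rng.integers(len(parents), size=2)]
                cut = int(rng.integers(1, n_feat))              # one-point crossover
                child = np.concatenate([a[:cut], b[cut:]])
                child ^= rng.random(n_feat) < 0.05              # bit-flip mutation
                children.append(child)
            pop = np.vstack([parents, children])
        best = pop[np.argmax([fitness(ind) for ind in pop])]
        print("selected features:", np.flatnonzero(best))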

  12. An Integrated Structure for Supplier Selection and Configuration of Knowledge-Based Networks Using QFD, ANP, and Mixed-Integer Programming Model

    Directory of Open Access Journals (Sweden)

    M. Abbasi

    2013-01-01

    Full Text Available Today’s competitive world conditions and shortened product life cycles have increased attention to new product development, which can guarantee both the growth and survival of organizations. The agility of new product development is driven by the efficiency and efficacy of an organization's knowledge management skills. A key issue in the overall success of such networks is preserving the knowledge developed amongst the members. Thus, it is important that reliable relations can be established between the members in order to promote further interactions. To this end, an integrated framework is developed in this paper to configure the new product development network so that sustainable collaborations can be maintained amongst the entities. The proposed framework consists of the network configuration in addition to the supplier selection phase. They are taken into consideration using a bi-objective mathematical model in which incurred costs and suppliers' superiority determine the final configuration of the network. Finally, different numerical instances are solved to demonstrate the applicability of the proposed model.

  13. Spatial Fleming-Viot models with selection and mutation

    CERN Document Server

    Dawson, Donald A

    2014-01-01

    This book constructs a rigorous framework for analysing selected phenomena in evolutionary theory of populations arising due to the combined effects of migration, selection and mutation in a spatial stochastic population model, namely the evolution towards fitter and fitter types through punctuated equilibria. The discussion is based on a number of new methods, in particular multiple scale analysis, nonlinear Markov processes and their entrance laws, atomic measure-valued evolutions and new forms of duality (for state-dependent mutation and multitype selection) which are used to prove ergodic theorems in this context and are applicable for many other questions and renormalization analysis for a variety of phenomena (stasis, punctuated equilibrium, failure of naive branching approximations, biodiversity) which occur due to the combination of rare mutation, mutation, resampling, migration and selection and make it necessary to mathematically bridge the gap (in the limit) between time and space scales.

  14. Model selection and inference a practical information-theoretic approach

    CERN Document Server

    Burnham, Kenneth P

    1998-01-01

    This book is unique in that it covers the philosophy of model-based data analysis and an omnibus strategy for the analysis of empirical data. The book introduces information theoretic approaches and focuses critical attention on a priori modeling and the selection of a good approximating model that best represents the inference supported by the data. Kullback-Leibler information represents a fundamental quantity in science and is Hirotugu Akaike's basis for model selection. The maximized log-likelihood function can be bias-corrected to provide an estimate of expected, relative Kullback-Leibler information. This leads to Akaike's Information Criterion (AIC) and various extensions, and these are relatively simple and easy to use in practice, but little taught in statistics classes and far less understood in the applied sciences than should be the case. The information theoretic approaches provide a unified and rigorous theory, an extension of likelihood theory, an important application of information theory, and are ...
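
    As a minimal worked illustration of the criterion at the heart of the book, the Python sketch below (an assumed example, not taken from the text) computes AIC = -2 log L + 2k for several candidate distributions fitted by maximum likelihood; the model with the smallest AIC is the preferred approximating model:

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(42)
        data = rng.gamma(shape=2.0, scale=3.0, size=500)   # synthetic sample

        candidates = {"gamma": stats.gamma, "lognorm": stats.lognorm, "expon": stats.expon}
        for name, dist in candidates.items():
            params = dist.fit(data)                        # maximum-likelihood fit
            loglik = np.sum(dist.logpdf(data, *params))
            k = len(params)                                # number of estimated parameters
            print(f"{name:8s} AIC = {-2 * loglik + 2 * k:.1f}")
        # the smallest AIC marks the preferred approximating model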

  15. Selecting an optimal mixed products using grey relationship model

    Directory of Open Access Journals (Sweden)

    Farshad Faezy Razi

    2013-06-01

    Full Text Available This paper presents an integrated supplier selection and inventory management model using the grey relationship model (GRM) as well as a multi-objective decision making process. The proposed model first ranks different suppliers based on the GRM technique and then determines the optimum level of inventory by considering different objectives. To show the implementation of the proposed model, we use some benchmark data presented by Talluri and Baker [Talluri, S., & Baker, R. C. (2002). A multi-phase mathematical programming approach for effective supply chain design. European Journal of Operational Research, 141(3), 544-558.]. The preliminary results indicate that the proposed model is capable of handling different criteria for supplier selection.
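
    A compact Python sketch of the grey relational ranking step is given below; the supplier scores, the equal criterion weights and the distinguishing coefficient of 0.5 are conventional illustrative assumptions, not values from the paper:

        import numpy as np

        # rows: suppliers, columns: benefit criteria (e.g. quality, delivery, service)
        X = np.array([[0.85, 0.92, 0.78],
                      [0.90, 0.80, 0.85],
                      [0.75, 0.88, 0.90]])

        norm = (X - X.min(axis=0)) / (X.max(axis=0) - X.min(axis=0))  # benefit normalisation
        ref = norm.max(axis=0)                      # ideal reference sequence
        delta = np.abs(ref - norm)                  # deviations from the reference
        rho = 0.5                                   # distinguishing coefficient
        xi = (delta.min() + rho * delta.max()) / (delta + rho * delta.max())
        grades = xi.mean(axis=1)                    # grey relational grade per supplier
        print("supplier ranking (best first):", np.argsort(-grades))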

  16. Inflation Model Selection meets Dark Radiation

    CERN Document Server

    Tram, Thomas; Vennin, Vincent

    2016-01-01

    We investigate how inflation model selection is affected by the presence of additional free-streaming relativistic degrees of freedom, i.e. dark radiation. We perform a full Bayesian analysis of both inflation parameters and cosmological parameters taking reheating into account self-consistently. We compute the Bayesian evidence for a few representative inflation scenarios in both the standard $\\Lambda\\mathrm{CDM}$ model and an extension including dark radiation parametrised by its effective number of relativistic species $N_\\mathrm{eff}$. We find that the observational status of most inflationary models is unchanged, with the exception of potentials such as power-law inflation that predict a value for the scalar spectral index that is too large in $\\Lambda\\mathrm{CDM}$ but which can be accommodated when $N_\\mathrm{eff}$ is allowed to vary. In this case, cosmic microwave background data indicate that power-law inflation is one of the best models together with plateau potentials. However, contrary to plateau p...

  17. Feature selection with neighborhood entropy-based cooperative game theory.

    Science.gov (United States)

    Zeng, Kai; She, Kun; Niu, Xinzheng

    2014-01-01

    Feature selection plays an important role in machine learning and data mining. In recent years, various feature measurements have been proposed to select significant features from high-dimensional datasets. However, most traditional feature selection methods will ignore some features which have strong classification ability as a group but are weak as individuals. To deal with this problem, we redefine the redundancy, interdependence, and independence of features by using neighborhood entropy. Then the neighborhood entropy-based feature contribution is proposed under the framework of cooperative game. The evaluative criteria of features can be formalized as the product of contribution and other classical feature measures. Finally, the proposed method is tested on several UCI datasets. The results show that the neighborhood entropy-based cooperative game theory model (NECGT) yields better performance than classical ones.
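
    The cooperative-game view of feature contribution can be illustrated with the Python sketch below; as a simplifying assumption it uses plain cross-validated kNN accuracy as the coalition payoff instead of the paper's neighborhood-entropy measure, and the exhaustive Shapley computation is only practical for a handful of features:

        from itertools import combinations
        from math import factorial

        import numpy as np
        from sklearn.datasets import load_iris
        from sklearn.model_selection import cross_val_score
        from sklearn.neighbors import KNeighborsClassifier

        X, y = load_iris(return_X_y=True)
        n = X.shape[1]

        def payoff(coalition):
            # cross-validated kNN accuracy of a feature coalition; empty coalition pays 0
            if not coalition:
                return 0.0
            knn = KNeighborsClassifier(n_neighbors=5)
            return cross_val_score(knn, X[:, list(coalition)], y, cv=5).mean()

        for i in range(n):
            others = [j for j in range(n) if j != i]
            shapley = 0.0
            for size in range(n):
                for S in combinations(others, size):
                    w = factorial(size) * factorial(n - size - 1) / factorial(n)
                    shapley += w * (payoff(S + (i,)) - payoff(S))
            print(f"feature {i}: Shapley contribution = {shapley:.3f}")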

  18. Live imaging-based model selection reveals periodic regulation of the stochastic G1/S phase transition in vertebrate axial development.

    Directory of Open Access Journals (Sweden)

    Mayu Sugiyama

    2014-12-01

    Full Text Available In multicellular organism development, a stochastic cellular response is observed, even when a population of cells is exposed to the same environmental conditions. Retrieving the spatiotemporal regulatory mode hidden in the heterogeneous cellular behavior is a challenging task. The G1/S transition observed in cell cycle progression is a highly stochastic process. By taking advantage of a fluorescence cell cycle indicator, Fucci technology, we aimed to unveil a hidden regulatory mode of cell cycle progression in developing zebrafish. Fluorescence live imaging of Cecyil, a zebrafish line genetically expressing Fucci, demonstrated that newly formed notochordal cells from the posterior tip of the embryonic mesoderm exhibited the red (G1) fluorescence signal in the developing notochord. Prior to their initial vacuolation, these cells showed a fluorescence color switch from red to green, indicating G1/S transitions. This G1/S transition did not occur in a synchronous manner, but rather exhibited a stochastic process, since a mixed population of red and green cells was always inserted between newly formed red (G1) notochordal cells and vacuolating green cells. We termed this mixed population of notochordal cells the G1/S transition window. We first performed quantitative analyses of live imaging data and a numerical estimation of the probability of the G1/S transition, which demonstrated the existence of a posteriorly traveling regulatory wave of the G1/S transition window. To obtain a better understanding of this regulatory mode, we constructed a mathematical model and performed a model selection by comparing the results obtained from the models with those from the experimental data. Our analyses demonstrated that the stochastic G1/S transition window in the notochord travels posteriorly in a periodic fashion, with double the periodicity of the neighboring paraxial mesoderm segmentation. This approach may have implications for the characterization of

  19. Fuzzification of ASAT's rule based aimpoint selection

    Science.gov (United States)

    Weight, Thomas H.

    1993-06-01

    The aimpoint algorithms being developed at Dr. Weight and Associates are based on the concept of fuzzy logic. This approach does not require a particular type of sensor data or algorithm type, but allows the user to develop a fuzzy logic algorithm based on existing aimpoint algorithms and models. This provides an opportunity for the user to upgrade an existing system design to achieve higher performance at minimal cost. Many projects use aimpoint algorithms based on 'crisp' rule-based logic. These algorithms are sensitive to glint, corner reflectors, or intermittent thruster firings, and to uncertainties in the a priori estimates of angle of attack. If these projects are continued through to a demonstration involving a launch to hit a target, it is quite possible that the crisp logic approaches will need to be upgraded to handle these important error sources.

  20. A simple application of FIC to model selection

    CERN Document Server

    Wiggins, Paul A

    2015-01-01

    We have recently proposed a new information-based approach to model selection, the Frequentist Information Criterion (FIC), which reconciles information-based and frequentist inference. The purpose of the current paper is to provide a simple example of the application of this criterion and a demonstration of the natural emergence of model complexities with both AIC-like ($N^0$) and BIC-like ($\log N$) scaling with observation number $N$. The application developed is deliberately simplified to make the analysis analytically tractable.

  1. A topic evolution model with sentiment and selective attention

    Science.gov (United States)

    Si, Xia-Meng; Wang, Wen-Dong; Zhai, Chun-Qing; Ma, Yan

    2017-04-01

    Topic evolution is a hybrid dynamics of information propagation and opinion interaction. The dynamics of opinion interaction is inherently interwoven with the dynamics of information propagation in the network, owing to the bidirectional influences between interaction and diffusion. The degree of sentiment determines whether the topic can continue to spread from a node, and selective attention determines the direction of information flow and the selection of communicatees. To this end, we put forward a sentiment-based mixed dynamics model with selective attention, and applied Bayesian updating rules to it. Our model can indirectly describe isolated users who, for various reasons, seem isolated from a topic even when everybody around them has heard about it. Numerical simulations show that more insiders initially and fewer simultaneous spreaders can lessen extremism. To promote topic diffusion or restrain the prevailing of extremism, fewer agents with constructive motivation and more agents with no involving motivation are encouraged.

  2. Robust model selection and the statistical classification of languages

    Science.gov (United States)

    García, J. E.; González-López, V. A.; Viola, M. L. L.

    2012-10-01

    In this paper we address the problem of model selection for the set of finite memory stochastic processes with finite alphabet, when the data is contaminated. We consider m independent samples, with more than half of them being realizations of the same stochastic process with law Q, which is the one we want to retrieve. We devise a model selection procedure such that for a sample size large enough, the selected process is the one with law Q. Our model selection strategy is based on estimating relative entropies to select a subset of samples that are realizations of the same law. Although the procedure is valid for any family of finite order Markov models, we focus on the family of variable length Markov chain models, which includes the family of fixed order Markov chain models. We define the asymptotic breakdown point (ABDP) for a model selection procedure, and we derive the ABDP for our procedure. This means that if the proportion of contaminated samples is smaller than the ABDP, then, as the sample size grows, our procedure selects a model for the process with law Q. We also use our procedure in a setting where we have one sample formed by the concatenation of sub-samples of two or more stochastic processes, with most of the sub-samples having law Q. We conducted a simulation study. In the application section we address the question of the statistical classification of languages according to their rhythmic features using speech samples. This is an important open problem in phonology. A persistent difficulty with this problem is that the speech samples correspond to several sentences produced by diverse speakers, corresponding to a mixture of distributions. The usual procedure to deal with this problem has been to choose a subset of the original sample which seems to best represent each language. The selection is made by listening to the samples. In our application we use the full dataset without any preselection of samples. We apply our robust methodology estimating

  3. Site selection assessment model for civil aerodrome based on catastrophe assessment method

    Institute of Scientific and Technical Information of China (English)

    李明捷; 石荣

    2011-01-01

    Most assessment methods for airport site selection use fuzzy evaluation to determine the index weights, which makes the evaluation results strongly subjective. This paper applies the catastrophe method to the rationality and sustainability assessment of civil aerodrome site selection. Based on an analysis of the factors influencing airport site selection, the bottom-level indicators are decomposed and sorted. After the bottom-level indicators are standardized and quantified, the catastrophe assessment values for the rationality and sustainability of the site are obtained using normalization formulas. To address the drawback that the catastrophe method yields relatively high assessment values, conversion formulas are given and the catastrophe assessment values are converted into improved values using Matlab. A practical example verifies the feasibility, effectiveness and practicality of the method.

  4. Astrophysical Model Selection in Gravitational Wave Astronomy

    Science.gov (United States)

    Adams, Matthew R.; Cornish, Neil J.; Littenberg, Tyson B.

    2012-01-01

    Theoretical studies in gravitational wave astronomy have mostly focused on the information that can be extracted from individual detections, such as the mass of a binary system and its location in space. Here we consider how the information from multiple detections can be used to constrain astrophysical population models. This seemingly simple problem is made challenging by the high dimensionality and high degree of correlation in the parameter spaces that describe the signals, and by the complexity of the astrophysical models, which can also depend on a large number of parameters, some of which might not be directly constrained by the observations. We present a method for constraining population models using a hierarchical Bayesian modeling approach which simultaneously infers the source parameters and population model and provides the joint probability distributions for both. We illustrate this approach by considering the constraints that can be placed on population models for galactic white dwarf binaries using a future space-based gravitational wave detector. We find that a mission that is able to resolve approximately 5000 of the shortest period binaries will be able to constrain the population model parameters, including the chirp mass distribution and a characteristic galaxy disk radius to within a few percent. This compares favorably to existing bounds, where electromagnetic observations of stars in the galaxy constrain disk radii to within 20%.

  5. High-dimensional model estimation and model selection

    CERN Document Server

    CERN. Geneva

    2015-01-01

    I will review concepts and algorithms from high-dimensional statistics for linear model estimation and model selection. I will particularly focus on the so-called p>>n setting where the number of variables p is much larger than the number of samples n. I will focus mostly on regularized statistical estimators that produce sparse models. Important examples include the LASSO and its matrix extension, the Graphical LASSO, and more recent non-convex methods such as the TREX. I will show the applicability of these estimators in a diverse range of scientific applications, such as sparse interaction graph recovery and high-dimensional classification and regression problems in genomics.
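
    A minimal Python sketch of the p >> n setting described above, using scikit-learn's LassoCV (an assumed tool choice; the synthetic data and sparsity level are illustrative) to recover a sparse model by cross-validated regularisation:

        import numpy as np
        from sklearn.linear_model import LassoCV

        rng = np.random.default_rng(7)
        n, p = 60, 500                            # far more variables than samples
        X = rng.normal(size=(n, p))
        beta = np.zeros(p)
        beta[:5] = [3.0, -2.0, 1.5, -1.0, 2.5]    # only 5 truly active variables
        y = X @ beta + 0.5 * rng.normal(size=n)

        model = LassoCV(cv=5).fit(X, y)           # cross-validation picks the penalty
        support = np.flatnonzero(model.coef_)
        print("variables selected by the LASSO:", support)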

  6. Supplier Selection in Virtual Enterprise Model of Manufacturing Supply Network

    Science.gov (United States)

    Kaihara, Toshiya; Opadiji, Jayeola F.

    The market-based approach to manufacturing supply network planning focuses on the competitive attitudes of various enterprises in the network to generate plans that seek to maximize the throughput of the network. It is this competitive behaviour of the member units that we explore in proposing a solution model for a supplier selection problem in convergent manufacturing supply networks. We present a formulation of autonomous units of the network as trading agents in a virtual enterprise network interacting to deliver value to market consumers and discuss the effect of internal and external trading parameters on the selection of suppliers by enterprise units.

  7. How many separable sources? Model selection in independent components analysis.

    Science.gov (United States)

    Woods, Roger P; Hansen, Lars Kai; Strother, Stephen

    2015-01-01

    Unlike mixtures consisting solely of non-Gaussian sources, mixtures including two or more Gaussian components cannot be separated using standard independent components analysis methods that are based on higher order statistics and independent observations. The mixed Independent Components Analysis/Principal Components Analysis (mixed ICA/PCA) model described here accommodates one or more Gaussian components in the independent components analysis model and uses principal components analysis to characterize contributions from this inseparable Gaussian subspace. Information theory can then be used to select from among potential model categories with differing numbers of Gaussian components. Based on simulation studies, the assumptions and approximations underlying the Akaike Information Criterion do not hold in this setting, even with a very large number of observations. Cross-validation is a suitable, though computationally intensive alternative for model selection. Application of the algorithm is illustrated using Fisher's iris data set and Howells' craniometric data set. Mixed ICA/PCA is of potential interest in any field of scientific investigation where the authenticity of blindly separated non-Gaussian sources might otherwise be questionable. Failure of the Akaike Information Criterion in model selection also has relevance in traditional independent components analysis where all sources are assumed non-Gaussian.
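
    The record's conclusion, that cross-validation is a workable substitute for AIC when choosing the model order, can be illustrated with a simplified analogue in Python: selecting the dimensionality of probabilistic PCA (rather than mixed ICA/PCA, for which no off-the-shelf implementation is assumed here) by cross-validated log-likelihood with scikit-learn:

        import numpy as np
        from sklearn.datasets import load_iris
        from sklearn.decomposition import PCA
        from sklearn.model_selection import cross_val_score

        X, _ = load_iris(return_X_y=True)
        scores = []
        ks = range(1, X.shape[1])    # full rank omitted: the noise variance degenerates
        for k in ks:
            pca = PCA(n_components=k)
            # PCA.score is the average log-likelihood under the probabilistic PCA model
            scores.append(cross_val_score(pca, X, cv=5).mean())
        best_k = ks[int(np.argmax(scores))]
        print("cross-validated choice of dimensionality:", best_k)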

  8. Dynamic Educational e-Content Selection Using Multiple Criteria in Web-Based Personalized Learning Environments.

    Science.gov (United States)

    Manouselis, Nikos; Sampson, Demetrios

    This paper focuses on the way a multi-criteria decision making methodology is applied in the case of agent-based selection of offered learning objects. The problem of selection is modeled as a decision making one, with the decision variables being the learner model and the learning objects' educational description. In this way, selection of…

  9. Heterogeneous Model of Local Selective Network Load Based on BSC Model

    Institute of Scientific and Technical Information of China (English)

    许莉

    2014-01-01

    To improve the load capacity of network nodes and the overall security performance of the network, and given that the limited node resources in a network topology exhibit local selectivity and diverse security properties, a security modeling approach for heterogeneous network node topology is proposed based on the business-service-component (BSC) model. A hybrid kernel function is constructed, an optimal model of locally selective network load is built, and a simulation platform is set up to carry out experiments. The simulation results show that the locally selective heterogeneous design can reduce the maximum load rate of network nodes at the cost of an increased average load rate, thereby balancing the network load. After the heterogeneous redesign, the security bearing capacity of the network nodes increases by about 5%, the node loads are more balanced, and both the usability and the security of the network are preserved.

  10. Simultaneous data pre-processing and SVM classification model selection based on a parallel genetic algorithm applied to spectroscopic data of olive oils.

    Science.gov (United States)

    Devos, Olivier; Downey, Gerard; Duponchel, Ludovic

    2014-04-01

    Classification is an important task in chemometrics. For several years now, support vector machines (SVMs) have proven to be powerful for infrared spectral data classification. However, such methods require optimisation of parameters in order to control the risk of overfitting and the complexity of the boundary. Furthermore, it is established that the prediction ability of classification models can be improved by using pre-processing to remove unwanted variance in the spectra. In this paper we propose a new methodology based on a genetic algorithm (GA) for the simultaneous optimisation of SVM parameters and pre-processing (GENOPT-SVM). The method has been tested for the discrimination of the geographical origin of Italian olive oil (Ligurian and non-Ligurian) on the basis of near infrared (NIR) or mid infrared (FTIR) spectra. Different classification models (PLS-DA, SVM with mean-centred data, GENOPT-SVM) have been tested and statistically compared using McNemar's test. For the two datasets, SVM with optimised pre-processing gives models with higher accuracy than those obtained with PLS-DA on pre-processed data. In the case of the NIR dataset, most of this accuracy improvement (86.3% compared with 82.8% for PLS-DA) occurred using only a single pre-processing step. For the FTIR dataset, three optimised pre-processing steps are required to obtain an SVM model with a significant accuracy improvement (82.2%) compared to the one obtained with PLS-DA (78.6%). Furthermore, this study demonstrates that even SVM models have to be developed on the basis of well-corrected spectral data in order to obtain higher classification rates.
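
    The sketch below conveys the coupled search over pre-processing and SVM parameters in plain Python with scikit-learn; unlike GENOPT-SVM it uses an exhaustive search over a deliberately tiny space instead of a genetic algorithm, and the data, pre-processing set and parameter grids are illustrative assumptions:

        import numpy as np
        from sklearn.datasets import make_classification
        from sklearn.model_selection import cross_val_score
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler
        from sklearn.svm import SVC

        X, y = make_classification(n_samples=200, n_features=50, random_state=0)

        preprocessings = {"none": lambda Z: Z,
                          "deriv": lambda Z: np.gradient(Z, axis=1)}  # crude spectral derivative
        best_setting, best_acc = None, -np.inf
        for name, pre in preprocessings.items():
            Xp = pre(X)
            for C in (0.1, 1.0, 10.0):
                for gamma in (1e-3, 1e-2, 1e-1):
                    clf = make_pipeline(StandardScaler(), SVC(C=C, gamma=gamma))
                    acc = cross_val_score(clf, Xp, y, cv=5).mean()
                    if acc > best_acc:
                        best_setting, best_acc = (name, C, gamma), acc
        print("best (pre-processing, C, gamma):", best_setting, "accuracy:", round(best_acc, 3))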

  11. SLAM: A Connectionist Model for Attention in Visual Selection Tasks.

    Science.gov (United States)

    Phaf, R. Hans; And Others

    1990-01-01

    The SeLective Attention Model (SLAM) performs visual selective attention tasks and demonstrates that object selection and attribute selection are both necessary and sufficient for visual selection. The SLAM is described, particularly with regard to its ability to represent an individual subject performing filtering tasks. (TJH)

  12. IMAGE SELECTION FOR 3D MEASUREMENT BASED ON NETWORK DESIGN

    Directory of Open Access Journals (Sweden)

    T. Fuse

    2015-05-01

    Full Text Available 3D models have come into wide use with the spread of free software. At the same time, enormous numbers of images can easily be acquired, and images have recently been used for creating 3D models. However, the creation of 3D models from a huge number of images takes a lot of time and effort, so efficiency in 3D measurement is required, and the accuracy of the measurement must also be maintained. This paper develops an image selection method based on network design, in the sense of surveying network construction. The proposed method uses an image connectivity graph. In this way, the image selection problem is regarded as a combinatorial optimization problem and the graph-cuts technique can be applied. Additionally, in the process of 3D reconstruction, low-quality images and similar images are extracted and removed. Through experiments, the significance of the proposed method is confirmed, implying its potential for efficient and accurate 3D measurement.

  13. Proceedings Tenth Workshop on Model Based Testing

    OpenAIRE

    Pakulin, Nikolay; Petrenko, Alexander K.; Schlingloff, Bernd-Holger

    2015-01-01

    The workshop is devoted to model-based testing of both software and hardware. Model-based testing uses models describing the required behavior of the system under consideration to guide such efforts as test selection and test results evaluation. Testing validates the real system behavior against models and checks that the implementation conforms to them, but it is also capable of finding errors in the models themselves. The intent of this workshop is to bring together researchers and users of model...

  14. Model-Based Integrated Methods for Quantitative Estimation of Soil Salinity from Hyperspectral Remote Sensing Data:A Case Study of Selected South African Soils

    Institute of Scientific and Technical Information of China (English)

    Z. E. MASHIMBYE; M. A. CHO; J. P.NELL; W. P. DE CLERCQ; A. VAN NIEKERK; D. P.TURNER

    2012-01-01

    Soil salinization is a land degradation process that leads to reduced agricultural yields. This study investigated the method that can best predict electrical conductivity (EC) in dry soils using individual bands, a normalized difference salinity index (NDSI), partial least squares regression (PLSR), and bagging PLSR. Soil spectral reflectance of dried, ground, and sieved soil samples containing varying amounts of EC was measured using an ASD FieldSpec spectrometer in a darkroom. Predictive models were computed using a training dataset. An independent validation dataset was used to validate the models. The results showed that good predictions could be made based on bagging PLSR using first derivative reflectance (validation R2 = 0.85), PLSR using untransformed reflectance (validation R2 = 0.70), NDSI (validation R2 = 0.65), and the untransformed individual band at 2257 nm (validation R2 = 0.60) predictive models. These results suggest the potential of mapping soil salinity using airborne and/or satellite hyperspectral data during dry seasons.
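
    A minimal Python sketch of the central modeling step, PLS regression on first-derivative spectra, is given below; the spectra and EC values are simulated stand-ins for the ASD FieldSpec measurements, and the number of latent variables is an arbitrary choice:

        import numpy as np
        from sklearn.cross_decomposition import PLSRegression
        from sklearn.metrics import r2_score
        from sklearn.model_selection import train_test_split

        rng = np.random.default_rng(3)
        n, wl = 120, 200                                   # samples x wavelengths
        spectra = rng.normal(size=(n, wl)).cumsum(axis=1)  # smooth-ish synthetic spectra
        ec = 0.05 * spectra[:, 150] + rng.normal(scale=0.1, size=n)  # synthetic EC signal

        deriv = np.gradient(spectra, axis=1)               # first-derivative pre-processing
        Xtr, Xte, ytr, yte = train_test_split(deriv, ec, random_state=0)
        pls = PLSRegression(n_components=5).fit(Xtr, ytr)
        print("validation R2 =", round(r2_score(yte, pls.predict(Xte).ravel()), 2))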

  15. Model selection in kernel ridge regression

    DEFF Research Database (Denmark)

    Exterkate, Peter

    2013-01-01

    Kernel ridge regression is a technique to perform ridge regression with a potentially infinite number of nonlinear transformations of the independent variables as regressors. This method is gaining popularity as a data-rich nonlinear forecasting tool, which is applicable in many different contexts. The influence of the choice of kernel and the setting of tuning parameters on forecast accuracy is investigated. Several popular kernels are reviewed, including polynomial kernels, the Gaussian kernel, and the Sinc kernel. The latter two kernels are interpreted in terms of their smoothing properties, and the tuning parameters associated with all these kernels are related to smoothness measures of the prediction function and to the signal-to-noise ratio. Based on these interpretations, guidelines are provided for selecting the tuning parameters from small grids using cross-validation. A Monte Carlo study

  16. Model Selection in Kernel Ridge Regression

    DEFF Research Database (Denmark)

    Exterkate, Peter

    Kernel ridge regression is gaining popularity as a data-rich nonlinear forecasting tool, which is applicable in many different contexts. This paper investigates the influence of the choice of kernel and the setting of tuning parameters on forecast accuracy. We review several popular kernels, including polynomial kernels, the Gaussian kernel, and the Sinc kernel. We interpret the latter two kernels in terms of their smoothing properties, and we relate the tuning parameters associated with all these kernels to smoothness measures of the prediction function and to the signal-to-noise ratio. Based on these interpretations, we provide guidelines for selecting the tuning parameters from small grids using cross-validation. A Monte Carlo study confirms the practical usefulness of these rules of thumb. Finally, the flexible and smooth functional forms provided by the Gaussian and Sinc kernels make them widely
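
    The recipe these two records describe, picking the kernel and its tuning parameters from a small grid by cross-validation, can be sketched in Python with scikit-learn's KernelRidge; the Sinc kernel is omitted because it is not built in, and the data and grids are illustrative assumptions:

        import numpy as np
        from sklearn.kernel_ridge import KernelRidge
        from sklearn.model_selection import GridSearchCV

        rng = np.random.default_rng(5)
        X = rng.uniform(-3, 3, size=(150, 1))
        y = np.sinc(X).ravel() + 0.1 * rng.normal(size=150)   # noisy nonlinear target

        grid = [{"kernel": ["rbf"], "gamma": [0.1, 1.0, 10.0],
                 "alpha": [1e-3, 1e-2, 1e-1]},
                {"kernel": ["polynomial"], "degree": [2, 3],
                 "alpha": [1e-3, 1e-2, 1e-1]}]
        search = GridSearchCV(KernelRidge(), grid, cv=5).fit(X, y)
        print("selected kernel and tuning parameters:", search.best_params_)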

  17. Accurate model selection of relaxed molecular clocks in Bayesian phylogenetics.

    Science.gov (United States)

    Baele, Guy; Li, Wai Lok Sibon; Drummond, Alexei J; Suchard, Marc A; Lemey, Philippe

    2013-02-01

    Recent implementations of path sampling (PS) and stepping-stone sampling (SS) have been shown to outperform the harmonic mean estimator (HME) and a posterior simulation-based analog of Akaike's information criterion through Markov chain Monte Carlo (AICM), in Bayesian model selection of demographic and molecular clock models. Almost simultaneously, a Bayesian model averaging approach was developed that avoids conditioning on a single model but averages over a set of relaxed clock models. This approach returns estimates of the posterior probability of each clock model through which one can estimate the Bayes factor in favor of the maximum a posteriori (MAP) clock model; however, this Bayes factor estimate may suffer when the posterior probability of the MAP model approaches 1. Here, we compare these two recent developments with the HME, stabilized/smoothed HME (sHME), and AICM, using both synthetic and empirical data. Our comparison shows reassuringly that MAP identification and its Bayes factor provide similar performance to PS and SS and that these approaches considerably outperform HME, sHME, and AICM in selecting the correct underlying clock model. We also illustrate the importance of using proper priors on a large set of empirical data sets.

  18. Efficiency of model selection criteria in flood frequency analysis

    Science.gov (United States)

    Calenda, G.; Volpi, E.

    2009-04-01

    The estimation of high flood quantiles requires the extrapolation of the probability distributions far beyond the usual sample length, involving high estimation uncertainties. The choice of the probability law, traditionally based on hypothesis testing, is critical on this point. In this study the efficiency of different model selection criteria, seldom applied in flood frequency analysis, is investigated. The efficiency of each criterion in identifying the probability distribution of the hydrological extremes is evaluated by numerical simulations for different parent distributions, coefficients of variation and skewness, and sample sizes. The compared model selection procedures are the Akaike Information Criterion (AIC), the Bayesian Information Criterion (BIC), the Anderson-Darling Criterion (ADC) recently discussed by Di Baldassarre et al. (2008), and the Sample Quantile Criterion (SQC) recently proposed by the authors (Calenda et al., 2009). The SQC is based on the principle of maximising the probability density of the elements of the sample that are considered relevant to the problem, and takes into account both the accuracy and the uncertainty of the estimate. Since the stress is mainly on extreme events, the SQC involves upper-tail probabilities, where the effect of the model assumption is more critical. The proposed index is equal to the sum of the logarithms of the inverse of the sample probability density of the observed quantiles. The definition of this index is based on the principle that the more centred the sample value is with respect to its density distribution (accuracy of the estimate) and the less spread this distribution is (uncertainty of the estimate), the greater the probability density of the sample quantile. Thus, lower values of the index indicate a better performance of the distribution law. This criterion can operate the selection of the optimum distribution among competing probability models that are estimated using different samples. The
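
    Of the four criteria compared, the two classical ones are easy to demonstrate; the Python sketch below (an illustration, not the study's code) fits candidate flood-frequency distributions to a synthetic annual-maximum series and scores them with AIC and BIC:

        import numpy as np
        from scipy import stats

        # synthetic annual-maximum flood series drawn from a GEV parent
        floods = stats.genextreme.rvs(c=-0.1, loc=100, scale=30, size=80, random_state=11)

        candidates = {"GEV": stats.genextreme,
                      "Gumbel": stats.gumbel_r,
                      "Lognormal": stats.lognorm}
        for name, dist in candidates.items():
            params = dist.fit(floods)                  # maximum-likelihood fit
            ll = np.sum(dist.logpdf(floods, *params))
            k, n = len(params), len(floods)
            aic = -2 * ll + 2 * k
            bic = -2 * ll + k * np.log(n)
            print(f"{name:10s} AIC = {aic:7.1f}   BIC = {bic:7.1f}")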

  19. Selective electromembrane extraction based on isoelectric point

    DEFF Research Database (Denmark)

    Huang, Chuixiu; Gjelstad, Astrid; Pedersen-Bjergaard, Stig

    2015-01-01

    For the first time, selective isolation of a target peptide based on the isoelectric point (pI) was achieved using a two-step electromembrane extraction (EME) approach with a thin flat membrane-based EME device. In this approach, step #1 was an extraction process, where both the target peptide angiotensin II antipeptide (AT2 AP, pI=5.13) and the matrix peptides (pI>5.13) angiotensin II (AT2), neurotensin (NT), angiotensin I (AT1) and leu-enkephalin (L-Enke) were all extracted as net positive species from the sample (pH 3.50), through a supported liquid membrane (SLM) of 1-nonanol diluted with 2..., and the target remained in the acceptor solution. The acceptor solution pH, the SLM composition, the extraction voltage, and the extraction time during the clean-up process (step #2) were important factors influencing the separation performance. An acceptor solution pH of 5.25 for the clean-up process slightly...

  20. Bayesian Model Selection for LISA Pathfinder

    CERN Document Server

    Karnesis, Nikolaos; Sopuerta, Carlos F; Gibert, Ferran; Armano, Michele; Audley, Heather; Congedo, Giuseppe; Diepholz, Ingo; Ferraioli, Luigi; Hewitson, Martin; Hueller, Mauro; Korsakova, Natalia; Plagnol, Eric; Vitale, Stefano

    2013-01-01

    The main goal of the LISA Pathfinder (LPF) mission is to fully characterize the acceleration noise models and to test key technologies for future space-based gravitational-wave observatories similar to the LISA/eLISA concept. The Data Analysis (DA) team has developed complex three-dimensional models of the LISA Technology Package (LTP) experiment on-board LPF. These models are used for simulations, but more importantly, they will be used for parameter estimation purposes during flight operations. One of the tasks of the DA team is to identify the physical effects that contribute significantly to the properties of the instrument noise. A way of approaching this problem is to recover the essential parameters of the LTP which describe the data. Thus, we want to define the simplest model that efficiently explains the observations. To do so, adopting a Bayesian framework, one has to estimate the so-called Bayes factor between two competing models. In our analysis, we use three main different methods to estimate...

  1. MIS-based sensors with hydrogen selectivity

    Science.gov (United States)

    Li, Dongmei; Medlin, J. William; McDaniel, Anthony H.; Bastasz, Robert J.

    2008-03-11

    The invention provides hydrogen-selective metal-insulator-semiconductor sensors which include a layer of hydrogen-selective material. The hydrogen-selective material can be a polyimide layer having a thickness between 200 and 800 nm. Suitable polyimide materials include reaction products of benzophenone tetracarboxylic dianhydride, 4,4-oxydianiline, m-phenylene diamine, and other structurally similar materials.

  2. Parameter Inversion of SPH Bird Model Based on DOE Parameter Selection

    Institute of Scientific and Technical Information of China (English)

    罗军; 刘长虹; 洪清泉; 鞠锋; 丁敏

    2012-01-01

    The accuracy of the bird body parameters has a significant effect on the precision of bird strike simulation. Parameter inversion can overcome the limitations of manual trial-and-error, find reasonable bird body parameters, and improve the accuracy of bird strike simulation. On the HyperStudy multidisciplinary optimization platform, an orthogonal design of experiments (DOE) method is first used to select the bird body parameters to which the displacement results are sensitive, in order to reduce the complexity and the computational cost of the optimization problem. The selected parameters are then inverted by multi-objective optimization with the adaptive response surface method (ARSM), the objective being to minimize the squared difference between the simulated and the experimental displacements at the bird strike location. The RADIOSS solution results show that with the optimized bird body parameters, the simulated curves fit the experimental curves much better.

  3. Selection of key terrain attributes for SOC model

    DEFF Research Database (Denmark)

    Greve, Mogens Humlekrog; Adhikari, Kabindra; Chellasamy, Menaka

    As an important component of the global carbon pool, soil organic carbon (SOC) plays an important role in the global carbon cycle. The SOC pool provides basic information for global warming research and for the sustainable use of land resources. Digital terrain attributes are often used... In total, 2,514,820 data mining models were constructed from 71 different grids, from 12 m to 2304 m, and 22 attributes (21 attributes derived from the DTM plus the original elevation). The relative importance and usage of each attribute in every model were calculated. Comprehensive impact rates of each attribute... (standh) are among the first three key terrain attributes in the 5-attribute models at all resolutions; the remaining two of the five attributes are Normal Height (NormalH) and Valley Depth (Vall_depth) at resolutions finer than 40 m, and Elevation and Channel Base (Chnl_base) at resolutions coarser than 40 m. The models at pixel sizes of 88 m...

  4. An Investigation of the Linkage between Technology-Based Activities and STEM Major Selection in 4-Year Postsecondary Institutions in the United States: Multilevel Structural Equation Modelling

    Science.gov (United States)

    Lee, Ahlam

    2015-01-01

    Among the disciplines of science, technology, engineering, and math (STEM), much attention has been paid to the influences of math- and science-related learning contexts on students' STEM major selection. However, the technology and engineering learning contexts that are linked to STEM major selection have been overlooked. In response, a…

  5. Selective refinement and selection of near-native models in protein structure prediction.

    Science.gov (United States)

    Zhang, Jiong; Barz, Bogdan; Zhang, Jingfen; Xu, Dong; Kosztin, Ioan

    2015-10-01

    In recent years in silico protein structure prediction has reached a level where fully automated servers can generate large pools of near-native structures. However, the identification and further refinement of the best structures from the pool of models remain problematic. To address these issues, we have developed (i) a target-specific selective refinement (SR) protocol and (ii) a molecular dynamics (MD) simulation-based ranking (SMDR) method. In SR the all-atom refinement of structures is accomplished via the Rosetta Relax protocol, subject to specific constraints determined by the size and complexity of the target. The best-refined models are selected with SMDR by testing their relative stability against gradual heating through all-atom MD simulations. Through extensive testing we have found that Mufold-MD, our fully automated protein structure prediction server updated with the SR and SMDR modules, consistently outperformed its previous versions.

  6. Selective Activation of Resting-State Networks following Focal Stimulation in a Connectome-Based Network Model of the Human Brain

    Science.gov (United States)

    2016-01-01

    When the brain is stimulated, for example, by sensory inputs or goal-oriented tasks, the brain initially responds with activities in specific areas. The subsequent pattern formation of functional networks is constrained by the structural connectivity (SC) of the brain. The extent to which information is processed over short- or long-range SC is unclear. Whole-brain models based on long-range axonal connections, for example, can partly describe measured functional connectivity dynamics at rest. Here, we study the effect of SC on the network response to stimulation. We use a human whole-brain network model comprising long- and short-range connections. We systematically activate each cortical or thalamic area, and investigate the network response as a function of its short- and long-range SC. We show that when the brain is operating at the edge of criticality, stimulation causes a cascade of network recruitments, collapsing onto a smaller space that is partly constrained by SC. We found both short- and long-range SC essential to reproduce experimental results. In particular, the stimulation of specific areas results in the activation of one or more resting-state networks. We suggest that the stimulus-induced brain activity, which may indicate information and cognitive processing, follows specific routes imposed by structural networks explaining the emergence of functional networks. We provide a lookup table linking stimulation targets and functional network activations, which potentially can be useful in diagnostics and treatments with brain stimulation. PMID:27752540

  7. Model selection for the extraction of movement primitives.

    Science.gov (United States)

    Endres, Dominik M; Chiovetto, Enrico; Giese, Martin A

    2013-01-01

    A wide range of blind source separation methods have been used in motor control research for the extraction of movement primitives from EMG and kinematic data. Popular examples are principal component analysis (PCA), independent component analysis (ICA), anechoic demixing, and the time-varying synergy model (d'Avella and Tresch, 2002). However, choosing the parameters of these models, or indeed choosing the type of model, is often done in a heuristic fashion, driven by result expectations as much as by the data. We propose an objective criterion which allows selection of the model type, the number of primitives and the temporal smoothness prior. Our approach is based on a Laplace approximation to the posterior distribution of the parameters of a given blind source separation model, re-formulated as a Bayesian generative model. We first validate our criterion on ground truth data, showing that it performs at least as well as traditional model selection criteria [the Bayesian information criterion, BIC (Schwarz, 1978), and the Akaike Information Criterion (AIC) (Akaike, 1974)]. Then, we analyze human gait data, finding that an anechoic mixture model with a temporal smoothness constraint on the sources can best account for the data.

  8. Model selection for the extraction of movement primitives

    Directory of Open Access Journals (Sweden)

    Dominik M Endres

    2013-12-01

    Full Text Available A wide range of blind source separation methods have been used in motor control research for the extraction of movement primitives from EMG and kinematic data. Popular examples are principal component analysis (PCA), independent component analysis (ICA), anechoic demixing, and the time-varying synergy model. However, choosing the parameters of these models, or indeed choosing the type of model, is often done in a heuristic fashion, driven by result expectations as much as by the data. We propose an objective criterion which allows selection of the model type, the number of primitives and the temporal smoothness prior. Our approach is based on a Laplace approximation to the posterior distribution of the parameters of a given blind source separation model, re-formulated as a Bayesian generative model. We first validate our criterion on ground truth data, showing that it performs at least as well as traditional model selection criteria (the Bayesian information criterion, BIC, and the Akaike Information Criterion, AIC). Then, we analyze human gait data, finding that an anechoic mixture model with a temporal smoothness constraint on the sources can best account for the data.

  9. Ensemble feature selection integrating elitist roles and quantum game model

    Institute of Scientific and Technical Information of China (English)

    Weiping Ding; Jiandong Wang; Zhijin Guan; Quan Shi

    2015-01-01

    To accelerate the selection process of feature subsets in the rough set theory (RST), an ensemble elitist roles based quantum game (EERQG) algorithm is proposed for feature selection. Firstly, the multilevel elitist roles based dynamics equilibrium strategy is established, and both immigration and emigration of elitists are able to be self-adaptive to balance between exploration and exploitation for feature selection. Secondly, the utility matrix of trust margins is introduced to the model of multilevel elitist roles to enhance various elitist roles' performance of searching the optimal feature subsets, and the win-win utility solutions for feature selection can be attained. Meanwhile, a novel ensemble quantum game strategy is designed as an intriguing exhibiting structure to perfect the dynamics equilibrium of multilevel elitist roles. Finally, the ensemble manner of multilevel elitist roles is employed to achieve the global minimal feature subset, which will greatly improve the feasibility and effectiveness. Experiment results show the proposed EERQG algorithm has superiority compared to the existing feature selection algorithms.

  10. Fuzzy Programming Models for Vendor Selection Problem in a Supply Chain

    Institute of Scientific and Technical Information of China (English)

    WANG Junyan; ZHAO Ruiqing; TANG Wansheng

    2008-01-01

    This paper characterizes quality, budget, and demand as fuzzy variables in a fuzzy vendor selection expected value model and a fuzzy vendor selection chance-constrained programming model, to maximize the total quality level. The two models have distinct advantages over existing methods for selecting vendors in fuzzy environments. A genetic algorithm based on fuzzy simulations is designed to solve these two models. Numerical examples show the effectiveness of the algorithm.

  11. Stationary solutions for metapopulation Moran models with mutation and selection

    Science.gov (United States)

    Constable, George W. A.; McKane, Alan J.

    2015-03-01

    We construct an individual-based metapopulation model of population genetics featuring migration, mutation, selection, and genetic drift. In the case of a single "island," the model reduces to the Moran model. Using the diffusion approximation and time-scale separation arguments, an effective one-variable description of the model is developed. The effective description bears similarities to the well-mixed Moran model with effective parameters that depend on the network structure and island sizes, and it is amenable to analysis. Predictions from the reduced theory match the results from stochastic simulations across a range of parameters. The nature of the fast-variable elimination technique we adopt is further studied by applying it to a linear system, where it provides a precise description of the slow dynamics in the limit of large time-scale separation.

  12. Feature selection and survival modeling in The Cancer Genome Atlas

    Directory of Open Access Journals (Sweden)

    Kim H

    2013-09-01

    Full Text Available Purpose: Personalized medicine is predicated on the concept of identifying subgroups of a common disease for better treatment. Identifying biomarkers that predict disease subtypes has been a major focus of biomedical science. In the era of genome-wide profiling, there is controversy as to the optimal number of genes as an input of a feature selection algorithm for survival modeling. Patients and methods: The expression profiles and outcomes of 544 patients were retrieved from The Cancer Genome Atlas. We compared four different survival prediction methods: (1) the 1-nearest neighbor (1-NN) survival prediction method; (2) a random patient selection method and a Cox-based regression method with nested cross-validation; (3) least absolute shrinkage and selection operator (LASSO) optimization using whole-genome gene expression profiles; or (4) gene expression profiles of cancer pathway genes. Results: The 1-NN method performed better than the random patient selection method in terms of survival predictions, although it does not include a feature selection step. The Cox-based regression method with LASSO optimization using whole-genome gene expression data demonstrated higher survival prediction power than the 1-NN method, but was outperformed by the same method when using gene expression profiles of cancer pathway genes alone. Conclusion: The 1-NN survival prediction method may require more patients for better performance, even when omitting censored data. Using preexisting biological knowledge for survival prediction is reasonable as a means to understand the biological system of a cancer, unless the analysis goal is to identify completely unknown genes relevant to cancer biology. Keywords: brain, feature selection

  13. Variable selection based cotton bollworm odor spectroscopic detection

    Science.gov (United States)

    Lü, Chengxu; Gai, Shasha; Luo, Min; Zhao, Bo

    2016-10-01

    Aiming at rapid automatic pest detection for efficient, targeted pesticide application, and to overcome the problem that reflectance spectral signals are masked and attenuated by the solid plant body, the possibility of near-infrared spectroscopy (NIRS) detection of cotton bollworm odor is studied. The physical basis is that insects volatilize specific odors, including pheromones and allelochemicals, used for intra-specific and inter-specific communication, which can be detected by NIR spectroscopy. Three cotton bollworm odor samples and 3 blank air samples were prepared, and different concentrations of cotton bollworm odor were obtained by mixing these gas samples, resulting in a calibration group of 62 samples and a validation group of 31 samples. The spectral collection system includes a light source, optical fiber, sample chamber, and spectrometer. Spectra were pretreated by baseline correction, modeled with partial least squares (PLS), and optimized by a genetic algorithm (GA) and competitive adaptive reweighted sampling (CARS). Only minor count differences are found among spectra of different cotton bollworm odor concentrations. A PLS model on all variables was built, giving an RMSEV of 14 and a validation R2 of 0.89. GA selected 28 sensitive variables, with a model performance of RMSEV of 14 and validation R2 of 0.90. Comparably, CARS selected 8 sensitive variables, with a model performance of RMSEV of 13 and validation R2 of 0.92. The CARS model thus employs only 1.5% of the variables while yielding a smaller error than the all-variable model. The odor-gas-based NIR technique shows potential for cotton bollworm detection.
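
    The variable selection step can be illustrated with a much-simplified stand-in for GA/CARS: fit a PLS model, rank wavelength variables by absolute regression coefficient, and iteratively discard the weakest ones, keeping the subset with the best cross-validated score (real CARS additionally uses Monte Carlo sampling and an exponentially decreasing retention schedule). Function names and parameters are assumptions.

    ```python
    import numpy as np
    from sklearn.cross_decomposition import PLSRegression
    from sklearn.model_selection import cross_val_score

    def shrink_variables(X, y, keep_fraction=0.8, rounds=10, n_components=5):
        """Iteratively drop the spectral variables with the smallest |PLS
        coefficient|, returning the subset with the best cross-validated R2."""
        idx = np.arange(X.shape[1])
        best_idx, best_score = idx, -np.inf
        for _ in range(rounds):
            ncomp = min(n_components, len(idx))
            pls = PLSRegression(n_components=ncomp).fit(X[:, idx], y)
            score = cross_val_score(PLSRegression(n_components=ncomp),
                                    X[:, idx], y, cv=5).mean()
            if score > best_score:
                best_idx, best_score = idx, score
            order = np.argsort(np.abs(pls.coef_.ravel()))[::-1]
            idx = idx[order[: max(2, int(keep_fraction * len(idx)))]]
        return best_idx, best_score
    ```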

  14. Estimation and variable selection for generalized additive partial linear models

    KAUST Repository

    Wang, Li

    2011-08-01

    We study generalized additive partial linear models, proposing the use of polynomial spline smoothing for estimation of nonparametric functions, and deriving quasi-likelihood based estimators for the linear parameters. We establish asymptotic normality for the estimators of the parametric components. The procedure avoids solving large systems of equations as in kernel-based procedures and thus results in gains in computational simplicity. We further develop a class of variable selection procedures for the linear parameters by employing a nonconcave penalized quasi-likelihood, which is shown to have an asymptotic oracle property. Monte Carlo simulations and an empirical example are presented for illustration. © Institute of Mathematical Statistics, 2011.
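
    In the notation commonly used for this model class (our reconstruction, not a quotation from the paper), the generalized additive partial linear model combines nonparametric additive components with a parametric linear part:

    ```latex
    g\{\mathbb{E}(Y \mid X, Z)\} \;=\; \sum_{j=1}^{p} f_j(X_j) \;+\; Z^{\top}\beta ,
    ```

    where g is a known link function, the unknown smooth functions f_j are approximated by polynomial splines, and the linear coefficients β are estimated by maximizing the quasi-likelihood, with a nonconcave penalty (e.g., SCAD) added to shrink irrelevant components of β exactly to zero.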

  15. The Monte Carlo validation framework for the discriminant partial least squares model extended with variable selection methods applied to authenticity studies of Viagra® based on chromatographic impurity profiles.

    Science.gov (United States)

    Krakowska, B; Custers, D; Deconinck, E; Daszykowski, M

    2016-02-07

    The aim of this work was to develop a general framework for the validation of discriminant models, based on the Monte Carlo approach, for use in authenticity studies based on chromatographic impurity profiles. The validation approach was applied to evaluate the usefulness of the diagnostic logic rule obtained from the partial least squares discriminant model (PLS-DA) that was built to discriminate authentic Viagra® samples from counterfeits (a two-class problem). The major advantage of the proposed validation framework stems from the possibility of obtaining distributions for different figures of merit that describe the PLS-DA model, such as sensitivity, specificity, correct classification rate, and area under the curve, as a function of model complexity. Therefore, one can quickly evaluate their uncertainty estimates. Moreover, the Monte Carlo model validation allows balanced sets of training samples to be designed, which is required at the stage of the construction of PLS-DA and is recommended in order to obtain fair estimates that are based on an independent set of samples. In this study, as an illustrative example, 46 authentic Viagra® samples and 97 counterfeit samples were analyzed and described by their impurity profiles, determined using high performance liquid chromatography with photodiode array detection, and further discriminated using the PLS-DA approach. In addition, we demonstrated how to extend the Monte Carlo validation framework with four different variable selection schemes: the elimination of uninformative variables, the importance of a variable in projections, selectivity ratio and significance multivariate correlation. The best PLS-DA model was based on a subset of variables that were selected using the variable importance in the projection approach. For an independent test set, average estimates with the corresponding standard deviation (based on 1000 Monte Carlo runs) of the correct
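
    The core of the Monte Carlo validation can be sketched as a loop of repeated random splits: refit PLS-DA on each training portion and collect figures of merit on the held-out portion, so that their distributions, rather than single point estimates, can be inspected. The 0/1 class coding, split size, and decision threshold below are assumptions, and stratified splitting only approximates the balanced designs discussed above.

    ```python
    import numpy as np
    from sklearn.cross_decomposition import PLSRegression
    from sklearn.model_selection import train_test_split

    def mc_validate(X, y, n_runs=1000, n_components=2):
        """Monte Carlo validation of a PLS-DA rule (y is a 0/1 array):
        distributions of sensitivity and specificity over random splits."""
        sens, spec = [], []
        for seed in range(n_runs):
            Xtr, Xte, ytr, yte = train_test_split(
                X, y, test_size=0.3, stratify=y, random_state=seed)
            pls = PLSRegression(n_components=n_components).fit(Xtr, ytr)
            pred = (pls.predict(Xte).ravel() > 0.5).astype(int)
            sens.append(((pred == 1) & (yte == 1)).sum() / (yte == 1).sum())
            spec.append(((pred == 0) & (yte == 0)).sum() / (yte == 0).sum())
        return np.array(sens), np.array(spec)
    ```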

  16. Modeling selective attention using a neuromorphic analog VLSI device.

    Science.gov (United States)

    Indiveri, G

    2000-12-01

    Attentional mechanisms are required to overcome the problem of flooding a limited processing capacity system with information. They are present in biological sensory systems and can be a useful engineering tool for artificial visual systems. In this article we present a hardware model of a selective attention mechanism implemented on a very large-scale integration (VLSI) chip, using analog neuromorphic circuits. The chip exploits a spike-based representation to receive, process, and transmit signals. It can be used as a transceiver module for building multichip neuromorphic vision systems. We describe the circuits that carry out the main processing stages of the selective attention mechanism and provide experimental data for each circuit. We demonstrate the expected behavior of the model at the system level by stimulating the chip with both artificially generated control signals and signals obtained from a saliency map, computed from an image containing several salient features.

  17. Autoregressive model selection with simultaneous sparse coefficient estimation

    CERN Document Server

    Sang, Hailin

    2011-01-01

    In this paper we propose a sparse coefficient estimation procedure for autoregressive (AR) models based on penalized conditional maximum likelihood. The penalized conditional maximum likelihood estimator (PCMLE) thus developed has the advantage of performing simultaneous coefficient estimation and model selection. Mild conditions are given on the penalty function and the innovation process, under which the PCMLE satisfies strong consistency, local $N^{-1/2}$ consistency, and an oracle property, respectively, where N is the sample size. Two penalty functions, the least absolute shrinkage and selection operator (LASSO) and smoothly clipped absolute deviation (SCAD), are considered as examples, and SCAD is shown to perform better than LASSO. A simulation study confirms our theoretical results. At the end, we provide an application of our method to historical price data of the US Industrial Production Index for consumer goods, and the result is very promising.
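
    The flavor of the procedure can be illustrated by regressing x_t on its first p lags with an l1 penalty, so that irrelevant lags receive exactly zero coefficients; scikit-learn's Lasso (least squares) stands in here for the paper's penalized conditional maximum likelihood with LASSO/SCAD penalties, and all parameter values are illustrative.

    ```python
    import numpy as np
    from sklearn.linear_model import Lasso

    def fit_sparse_ar(x, max_lag=10, alpha=0.05):
        """Sparse AR fit: regress x_t on (x_{t-1}, ..., x_{t-p}) with an l1
        penalty; zero coefficients correspond to dropped lags."""
        X = np.column_stack([x[max_lag - k:-k] for k in range(1, max_lag + 1)])
        y = x[max_lag:]
        return Lasso(alpha=alpha).fit(X, y).coef_

    # Toy AR(2) series: ideally only lags 1 and 2 survive the penalty.
    rng = np.random.default_rng(1)
    x = np.zeros(500)
    for t in range(2, 500):
        x[t] = 0.6 * x[t - 1] - 0.3 * x[t - 2] + rng.normal()
    print(fit_sparse_ar(x).round(2))
    ```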

  18. Web Based VRML Modelling

    NARCIS (Netherlands)

    Kiss, S.

    2001-01-01

    Presents a method to connect VRML (Virtual Reality Modeling Language) and Java components in a Web page using EAI (External Authoring Interface), which makes it possible to interactively generate and edit VRML meshes. The meshes used are based on regular grids, to provide an interaction and modeling

  19. Pay-based Screening Mechanism: Personnel Selection in the View of Economics Theory

    Institute of Scientific and Technical Information of China (English)

    刘帮成; 唐宁玉

    2003-01-01

    Based on economic theories, the paper studies personnel selection in an asymmetric job market using a signaling and screening model. The authors hold the opinion that an organization can screen candidates' signaling, based on the self-selection principle, by providing an appropriate compensation choice. A pay-based screening mechanism is proposed to identify qualified applicants and retain the excellent ones.

  20. The detection of observations possibly influential for model selection

    NARCIS (Netherlands)

    Ph.H.B.F. Franses (Philip Hans)

    1991-01-01

    Model selection can involve several variables and selection criteria. A simple method to detect observations possibly influential for model selection is proposed. The potentials of this method are illustrated with three examples, each of which is taken from related studies.

  1. Gravitational lens models based on submillimeter array imaging of Herschel -selected strongly lensed sub-millimeter galaxies at z > 1.5

    Energy Technology Data Exchange (ETDEWEB)

    Bussmann, R. S.; Gurwell, M. A. [Harvard-Smithsonian Center for Astrophysics, 60 Garden Street, Cambridge, MA 02138 (United States); Pérez-Fournon, I. [Instituto de Astrofísica de Canarias (IAC), E-38200 La Laguna, Tenerife (Spain); Amber, S. [Department of Physical Sciences, The Open University, Milton Keynes MK7 6AA (United Kingdom); Calanog, J.; De Bernardis, F.; Wardlow, J. [Department of Physics and Astronomy, University of California, Irvine, CA 92697 (United States); Dannerbauer, H. [Laboratoire AIM-Paris-Saclay, CEA/DSM/Irfu-CNRS-Université Paris Diderot, CE-Saclay, pt courrier 131, F-91191 Gif-sur-Yvette (France); Fu, Hai [Department of Physics and Astronomy, The University of Iowa, 203 Van Allen Hall, Iowa City, IA 52242 (United States); Harris, A. I. [Department of Astronomy, University of Maryland, College Park, MD 20742-2421 (United States); Krips, M. [Institut de RadioAstronomie Millimétrique, 300 Rue de la Piscine, Domaine Universitaire, 38406 Saint Martin d'Hères (France); Lapi, A. [Department Fisica, Univ. Tor Vergata, Via Ricerca Scientifica 1, 00133 Rome, Italy and SISSA, Via Bonomea 265, 34136 Trieste (Italy); Maiolino, R. [Cavendish Laboratory, University of Cambridge, 19 J.J. Thomson Ave., Cambridge CB3 OHE (United Kingdom); Omont, A. [Institut d'Astrophysique de Paris, UMR 7095, CNRS, UPMC Univ. Paris 06, 98bis boulevard Arago, F-75014 Paris (France); Riechers, D. [Department of Astronomy, Space Science Building, Cornell University, Ithaca, NY 14853-6801 (United States); Baker, A. J. [Department of Physics and Astronomy, Rutgers, The State University of New Jersey, 136 Frelinghuysen Rd, Piscataway, NJ 08854 (United States); Birkinshaw, M. [HH Wills Physics Laboratory, University of Bristol, Tyndall Avenue, Bristol BS8 1TL (United Kingdom); Bock, J. [California Institute of Technology, 1200 E. California Blvd., Pasadena, CA 91125 (United States); and others

    2013-12-10

    Strong gravitational lenses are now being routinely discovered in wide-field surveys at (sub-)millimeter wavelengths. We present Submillimeter Array (SMA) high-spatial resolution imaging and Gemini-South and Multiple Mirror Telescope optical spectroscopy of strong lens candidates discovered in the two widest extragalactic surveys conducted by the Herschel Space Observatory: the Herschel-Astrophysical Terahertz Large Area Survey (H-ATLAS) and the Herschel Multi-tiered Extragalactic Survey (HerMES). From a sample of 30 Herschel sources with S_500 > 100 mJy, 21 are strongly lensed (i.e., multiply imaged), 4 are moderately lensed (i.e., singly imaged), and the remainder require additional data to determine their lensing status. We apply a visibility-plane lens modeling technique to the SMA data to recover information about the masses of the lenses as well as the intrinsic (i.e., unlensed) sizes (r_half) and far-infrared luminosities (L_FIR) of the lensed submillimeter galaxies (SMGs). The sample of lenses comprises primarily isolated massive galaxies, but includes some groups and clusters as well. Several of the lenses are located at z_lens > 0.7, a redshift regime that is inaccessible to lens searches based on Sloan Digital Sky Survey spectroscopy. The lensed SMGs are amplified by factors that are significantly below statistical model predictions given the 500 μm flux densities of our sample. We speculate that this may reflect a deficiency in our understanding of the intrinsic sizes and luminosities of the brightest SMGs. The lensed SMGs span nearly one decade in L_FIR (median L_FIR = 7.9 × 10^12 L_⊙) and two decades in FIR luminosity surface density (median Σ_FIR = 6.0 × 10^11 L_⊙ kpc^-2). The strong lenses in this sample and others identified via (sub-)mm surveys will provide a wealth of information regarding the astrophysics of galaxy formation and evolution over a wide range in redshift.

  2. Model Selection Through Sparse Maximum Likelihood Estimation

    CERN Document Server

    Banerjee, Onureena; D'Aspremont, Alexandre

    2007-01-01

    We consider the problem of estimating the parameters of a Gaussian or binary distribution in such a way that the resulting undirected graphical model is sparse. Our approach is to solve a maximum likelihood problem with an added l_1-norm penalty term. The problem as formulated is convex but the memory requirements and complexity of existing interior point methods are prohibitive for problems with more than tens of nodes. We present two new algorithms for solving problems with at least a thousand nodes in the Gaussian case. Our first algorithm uses block coordinate descent, and can be interpreted as recursive l_1-norm penalized regression. Our second algorithm, based on Nesterov's first order method, yields a complexity estimate with a better dependence on problem size than existing interior point methods. Using a log determinant relaxation of the log partition function (Wainwright & Jordan (2006)), we show that these same algorithms can be used to solve an approximate sparse maximum likelihood problem for...
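
    The l1-penalized Gaussian maximum likelihood problem described here is what scikit-learn's GraphicalLasso estimator solves (with a coordinate-descent solver rather than the authors' block-coordinate or Nesterov-type algorithms), so the Gaussian case can be sketched in a few lines; the data below are a random placeholder.

    ```python
    import numpy as np
    from sklearn.covariance import GraphicalLasso

    rng = np.random.default_rng(0)
    X = rng.standard_normal((200, 10))      # placeholder data matrix

    model = GraphicalLasso(alpha=0.1).fit(X)
    precision = model.precision_            # sparse inverse covariance estimate
    edges = np.abs(precision) > 1e-4        # nonzero entries = graph edges
    print(edges.sum() - edges.shape[0])     # off-diagonal nonzeros (each edge twice)
    ```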

  3. A Network Intrusion Detection Model Based on Data Mining and Feature Selection Schemes

    Institute of Scientific and Technical Information of China (English)

    康世瑜

    2011-01-01

    This paper proposes an efficient intrusion detection model based on SVM feature selection and the C4.5 data mining algorithm. By training on attack data after feature extraction, the model can effectively identify various intrusions and improve detection speed. Experiments on the classic KDD 1999 intrusion detection dataset show that the model can efficiently learn attack patterns and use the selected features to detect network attacks correctly and effectively.

  4. A qualitative model structure sensitivity analysis method to support model selection

    Science.gov (United States)

    Van Hoey, S.; Seuntjens, P.; van der Kwast, J.; Nopens, I.

    2014-11-01

    The selection and identification of a suitable hydrological model structure is a more challenging task than fitting parameters of a fixed model structure to reproduce a measured hydrograph. The suitable model structure is highly dependent on various criteria, i.e. the modeling objective, the characteristics and the scale of the system under investigation and the available data. Flexible environments for model building are available, but need to be assisted by proper diagnostic tools for model structure selection. This paper introduces a qualitative method for model component sensitivity analysis. Traditionally, model sensitivity is evaluated for model parameters. In this paper, the concept is translated into an evaluation of model structure sensitivity. Similarly to the one-factor-at-a-time (OAT) methods for parameter sensitivity, this method varies the model structure components one at a time and evaluates the change in sensitivity towards the output variables. As such, the effect of model component variations can be evaluated with respect to different objective functions or output variables. The methodology is presented for a simple lumped hydrological model environment, introducing different possible model building variations. By comparing the effect of changes in model structure for different model objectives, model selection can be better evaluated. Based on the presented component sensitivity analysis of a case study, some suggestions with regard to model selection are formulated for the system under study: (1) a non-linear storage component is recommended, since it ensures more sensitive (identifiable) parameters for this component and less parameter interaction; (2) interflow is mainly important for the low flow criteria; (3) the excess infiltration process is most influential when focusing on the lower flows; (4) a simpler routing component is advisable; and (5) baseflow parameters have in general low sensitivity values, except for the low flow criteria.

  5. Selective experimental review of the Standard Model

    Energy Technology Data Exchange (ETDEWEB)

    Bloom, E.D.

    1985-02-01

    Before discussing experimental comparisons with the Standard Model (S-M), it is probably wise to define more completely what is commonly meant by this popular term. This model is a gauge theory of SU(3)_f × SU(2)_L × U(1) with 18 parameters. The parameters are α_s, α_QED, θ_W, M_W (M_Z = M_W/cos θ_W, and thus is not an independent parameter), M_Higgs; the lepton masses, M_e, M_μ, M_τ; the quark masses, M_d, M_s, M_b, and M_u, M_c, M_t; and finally, the quark mixing angles, θ_1, θ_2, θ_3, and the CP-violating phase δ. The latter four parameters appear in the quark mixing matrix in the Kobayashi-Maskawa and Maiani forms. Clearly, the present S-M covers an enormous range of physics topics, and the author can only lightly cover a few such topics in this report. The measurement of R_hadron is fundamental as a test of the running coupling constant α_s in QCD. The author discusses a selection of recent precision measurements of R_hadron, as well as some other techniques for measuring α_s. QCD also requires the self-interaction of gluons. The search for the three-gluon vertex may be practically realized in the clear identification of gluonic mesons. The author presents a limited review of recent progress in the attempt to untangle such mesons from the plethora of q q̄ states of the same quantum numbers which exist in the same mass range. The electroweak interactions provide some of the strongest evidence supporting the S-M that exists. Given the recent progress in this subfield, and particularly the discovery of the W and Z bosons at CERN, many recent reviews obviate the need for further discussion in this report. In attempting to validate a theory, one frequently searches for new phenomena which would clearly invalidate it. 49 references, 28 figures.

  6. Research on a Portfolio Selection Model and Its Application Based on Fuzzy Space Distance

    Institute of Scientific and Technical Information of China (English)

    付云鹏; 马树才; 宋琪

    2012-01-01

    Taking the distance in fuzzy space as its point of departure, this paper gives a definition of variance for random variables whose values are fuzzy numbers, and takes this variance as the measure of investment risk when returns are fuzzy numbers. In addition, the extension operations of fuzzy numbers are used to define the mean of such random variables, which serves as the measure of expected return. On this basis, a portfolio selection model based on fuzzy space distance is established, and the fuzzy constraints of the model are transformed, via the ordering of fuzzy numbers, into relations between real numbers so that the model can be solved. Finally, an example from the Chinese securities market illustrates the application of the model, and a comparison with the traditional mean-variance model shows that the proposed model is a reasonable extension of the traditional one.

  7. Development of in Silico Models for Predicting P-Glycoprotein Inhibitors Based on a Two-Step Approach for Feature Selection and Its Application to Chinese Herbal Medicine Screening.

    Science.gov (United States)

    Yang, Ming; Chen, Jialei; Shi, Xiufeng; Xu, Liwen; Xi, Zhijun; You, Lisha; An, Rui; Wang, Xinhong

    2015-10-01

    P-glycoprotein (P-gp) is regarded as an important factor in determining the ADMET (absorption, distribution, metabolism, elimination, and toxicity) characteristics of drugs and drug candidates. Successful prediction of P-gp inhibitors can thus lead to an improved understanding of the underlying mechanisms of both changes in the pharmacokinetics of drugs and drug-drug interactions. Therefore, there has been considerable interest in the development of in silico modeling of P-gp inhibitors in recent years. Considering that a large number of molecular descriptors are used to characterize structurally diverse molecules, efficient feature selection methods are required to extract the most informative predictors. In this work, we constructed an extensive data set of 2428 molecules that includes 1518 P-gp inhibitors and 910 P-gp noninhibitors from multiple resources. Importantly, a two-step feature selection approach based on a genetic algorithm and a greedy forward-searching algorithm was employed to select the minimum set of the most informative descriptors that contribute to the prediction of P-gp inhibitors. To determine the best machine learning algorithm, 18 classifiers coupled with the feature selection method were compared. The top three best-performing models (flexible discriminant analysis, support vector machine, and random forest) and their ensemble model, using respectively only 3, 9, 7, and 14 descriptors, achieve an overall accuracy of 83.2%-86.7% for the training set containing 1040 compounds, an overall accuracy of 82.3%-85.5% for the test set containing 1039 compounds, and a prediction accuracy of 77.4%-79.9% for the external validation set containing 349 compounds. The models were further extensively validated on the DrugBank database (1890 compounds). The proposed models are competitive with, and in some cases better than, other published models in terms of prediction accuracy and minimum number of descriptors. The applicability domain was then addressed
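
    The greedy forward-searching stage can be sketched as follows: starting from the GA-screened pool, repeatedly add the descriptor that most improves cross-validated accuracy and stop when no candidate helps. The classifier and all parameters below are placeholders, not the paper's exact configuration.

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    def greedy_forward(X, y, max_features=10, cv=5):
        """Greedy forward descriptor selection by cross-validated accuracy."""
        selected, best = [], 0.0
        while len(selected) < max_features:
            scores = {}
            for j in range(X.shape[1]):
                if j in selected:
                    continue
                cols = selected + [j]
                clf = LogisticRegression(max_iter=1000)
                scores[j] = cross_val_score(clf, X[:, cols], y, cv=cv).mean()
            if not scores:
                break
            j_best = max(scores, key=scores.get)
            if scores[j_best] <= best:      # stop when no descriptor helps
                break
            best = scores[j_best]
            selected.append(j_best)
        return selected, best
    ```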

  8. Research on a Project Portfolio Selection Model Based on Organizational Strategy

    Institute of Scientific and Technical Information of China (English)

    王亚萍

    2015-01-01

    Based on existing research in project portfolio management, this paper analyzes the key factors influencing project portfolio selection and evaluates them using the fuzzy comprehensive evaluation method. Finally, a 0/1 integer programming model for project portfolio selection is built, so that projects are selected according to the organization's strategic orientation.
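
    A toy version of the final step, shown below: with the fuzzy comprehensive evaluation scores as project values, selection becomes a 0/1 knapsack-style integer program, solved here by exhaustive enumeration (feasible only for a handful of projects; a MIP solver would be used in practice). All numbers are illustrative.

    ```python
    from itertools import combinations

    def select_portfolio(values, costs, budget):
        """Exhaustive 0/1 selection: maximize total strategic value subject
        to a budget constraint."""
        n = len(values)
        best_value, best_set = 0.0, ()
        for r in range(n + 1):
            for subset in combinations(range(n), r):
                if sum(costs[i] for i in subset) <= budget:
                    value = sum(values[i] for i in subset)
                    if value > best_value:
                        best_value, best_set = value, subset
        return best_set, best_value

    # Four candidate projects with fuzzy-evaluation scores and costs.
    print(select_portfolio(values=[7.2, 5.1, 6.4, 3.8],
                           costs=[40, 30, 35, 20], budget=80))
    ```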

  9. Reliability-Centered Maintenance Selection Model for Distribution Network Based on Condition Monitoring

    Institute of Scientific and Technical Information of China (English)

    黄嘉健; 王昌照; 郑文杰; 汪隆君

    2015-01-01

    To address the subjective selection of maintenance objects and the reliance of reliability assessment on historical average data, a reliability-centered maintenance selection model for distribution networks based on condition monitoring is put forward, together with a solving algorithm. According to the quantitative relationship between the condition-monitoring variables of equipment and its failure rate, a multi-objective reliability-centered maintenance selection model is established that maximizes the change, before and after maintenance, in system average interruption frequency, system average interruption duration and expected energy not supplied, subject to budget and human-resource constraints. The normalized normal constraint method is employed to solve the model, quickly and precisely obtaining a complete and uniformly distributed Pareto frontier, and a fuzzy selection strategy is used to determine the best compromise solution. Numerical results on the RBTS-BUS6 system validate the effectiveness of the proposed reliability-centered maintenance selection model and its solving algorithm.

  10. Dealing with selection bias in educational transition models

    DEFF Research Database (Denmark)

    Holm, Anders; Jæger, Mads Meier

    2011-01-01

    This paper proposes the bivariate probit selection model (BPSM) as an alternative to the traditional Mare model for analyzing educational transitions. The BPSM accounts for selection on unobserved variables by allowing unobserved variables which affect the probability of making educational transitions to be correlated across transitions. We use simulated and real data to illustrate how the BPSM improves on the traditional Mare model in terms of correcting for selection bias and providing credible estimates of the effect of family background on educational success. We conclude that models which account for selection on unobserved variables and high-quality data are both required in order to estimate credible educational transition models.
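
    In standard notation (our reconstruction), the BPSM pairs two probit equations for consecutive transitions and lets their error terms correlate, which is what absorbs selection on unobservables:

    ```latex
    y_1^{*} = x_1^{\top}\beta_1 + \varepsilon_1, \qquad
    y_2^{*} = x_2^{\top}\beta_2 + \varepsilon_2, \qquad
    (\varepsilon_1, \varepsilon_2) \sim N\!\left(0, \begin{pmatrix} 1 & \rho \\ \rho & 1 \end{pmatrix}\right),
    ```

    where y_k = 1{y_k^* > 0} indicates passing transition k and the second equation is observed only for those with y_1 = 1; an estimated ρ ≠ 0 signals selection on unobserved variables across transitions.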

  11. Variable Selection for Nonlinear Modeling Based on False Nearest Neighbours in KPCA Subspace

    Institute of Scientific and Technical Information of China (English)

    李太福; 易军; 苏盈盈; 胡文金; 高婷

    2012-01-01

    Feature variable selection is an effective way to reduce redundant information and improve accuracy in nonlinear system modeling. A novel method combining kernel principal component analysis (KPCA) and the false nearest neighbor (FNN) method is proposed to select the most suitable secondary process variables for use as nonlinear modeling inputs. In the proposed approach, the kernel method maps the nonlinear raw data into a linear space, where principal component analysis effectively removes the multicollinearity between factors. Inspired by the false-nearest-neighbor method of chaotic phase space, the projection distances of the raw data in the KPCA subspace are computed to judge each variable's explanatory power for the dominant variable, and variables are selected accordingly. The method is verified on a nonlinear model from a hydrogen cyanide production process and compared with the fully parametric model; the results show good variable selection ability. The study thus provides a new variable selection method for nonlinear system modeling.

  12. Model for personal computer system selection.

    Science.gov (United States)

    Blide, L

    1987-12-01

    Successful computer software and hardware selection is best accomplished by following an organized approach such as the one described in this article. The first step is to decide what you want to be able to do with the computer. Secondly, select software that is user friendly, well documented, bug free, and that does what you want done. Next, you select the computer, printer and other needed equipment from the group of machines on which the software will run. Key factors here are reliability and compatibility with other microcomputers in your facility. Lastly, you select a reliable vendor who will provide good, dependable service in a reasonable time. The ability to correctly select computer software and hardware is a key skill needed by medical record professionals today and in the future. Professionals can make quality computer decisions by selecting software and systems that are compatible with other computers in their facility, allow for future networking, ease of use, and adaptability for expansion as new applications are identified. The key to success is to not only provide for your present needs, but to be prepared for future rapid expansion and change in your computer usage as technology and your skills grow.

  13. Supplier Selection Based on Intuitionistic Fuzzy Sets Group Decision Making

    Directory of Open Access Journals (Sweden)

    Rui Wang

    2013-01-01

    Full Text Available The selection of suppliers has always been a key issue in supply chain management, directly impacting the operation of the supply chain. In this context, the paper first reviews the state of research on supplier selection and establishes an evaluation index system, and then puts forward a new method for supplier selection based on intuitionistic fuzzy sets. Finally, an example illustrates the application of the indicators and the method, providing a new approach for supplier selection.

  14. Selection of unidimensional scales from a multidimensional item bank in the polytomous Mokken IRT model

    NARCIS (Netherlands)

    Hemker, BT; Sijtsma, Klaas; Molenaar, Ivo W

    1995-01-01

    An automated item selection procedure for selecting unidimensional scales of polytomous items from multidimensional datasets is developed for use in the context of the Mokken item response theory model of monotone homogeneity (Mokken & Lewis, 1982). The selection procedure is directly based on the s

  15. Selecting, weeding, and weighting biased climate model ensembles

    Science.gov (United States)

    Jackson, C. S.; Picton, J.; Huerta, G.; Nosedal Sanchez, A.

    2012-12-01

    In the Bayesian formulation, the "log-likelihood" is a test statistic for selecting, weeding, or weighting climate model ensembles with observational data. This statistic has the potential to synthesize the physical and data constraints on quantities of interest. One of the thorny issues for formulating the log-likelihood is how one should account for biases. While in the past we have included a generic discrepancy term, not all biases affect predictions of quantities of interest. We make use of a 165-member ensemble of CAM3.1/slab ocean climate models with different parameter settings to think through the issues that are involved with predicting each model's sensitivity to greenhouse gas forcing given what can be observed from the base state. In particular we use multivariate empirical orthogonal functions to decompose the differences that exist among this ensemble to discover what fields and regions matter to the model's sensitivity. We find that the differences that matter are a small fraction of the total discrepancy. Moreover, weighting members of the ensemble using this knowledge does a relatively poor job of adjusting the ensemble mean toward the known answer. This points out the shortcomings of using weights to correct for biases in climate model ensembles created by a selection process that does not emphasize the priorities of your log-likelihood.
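
    The weighting step itself can be sketched as follows: each member's misfit to base-state observations is converted into a Gaussian log-likelihood and then into a normalized weight. The error scale sigma, which would carry any discrepancy term for biases that do not matter to the prediction, is an assumption of this sketch.

    ```python
    import numpy as np

    def ensemble_weights(model_fields, obs, sigma):
        """Gaussian log-likelihood weights for ensemble members.

        model_fields : (n_members, n_points) simulated base-state fields
        obs          : (n_points,) observations of the same fields
        sigma        : (n_points,) observational/structural error scale
        """
        loglik = -0.5 * (((model_fields - obs) / sigma) ** 2).sum(axis=1)
        w = np.exp(loglik - loglik.max())   # subtract max for numerical safety
        return w / w.sum()
    ```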

  16. A MODEL SELECTION PROCEDURE IN MIXTURE-PROCESS EXPERIMENTS FOR INDUSTRIAL PROCESS OPTIMIZATION

    Directory of Open Access Journals (Sweden)

    Márcio Nascimento de Souza Leão

    2015-08-01

    Full Text Available We present a model selection procedure for use in Mixture and Mixture-Process Experiments. Certain combinations of restrictions on the proportions of the mixture components can result in a very constrained experimental region. This results in collinearity among the covariates of the model, which can make it difficult to fit the model using the traditional method based on the significance of the coefficients. For this reason, a model selection methodology based on information criteria will be proposed for process optimization. Two examples are presented to illustrate this model selection procedure.
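
    For a Gaussian error model, the information criteria used in such a procedure can be computed directly from the residual sum of squares of each candidate mixture-process model, as in this sketch (the fitting of the candidates themselves is left out; names are assumptions).

    ```python
    import numpy as np

    def aic_bic(y, y_hat, n_params):
        """Gaussian AIC and BIC from the residuals of a fitted model."""
        n = len(y)
        rss = ((y - y_hat) ** 2).sum()
        loglik = -0.5 * n * (np.log(2 * np.pi * rss / n) + 1)
        aic = 2 * n_params - 2 * loglik
        bic = n_params * np.log(n) - 2 * loglik
        return aic, bic   # the candidate with the smallest value is preferred
    ```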

  17. Risk-based audit selection of dairy farms.

    Science.gov (United States)

    van Asseldonk, M A P M; Velthuis, A G J

    2014-02-01

    Dairy farms are audited in the Netherlands on numerous process standards. Each farm is audited once every 2 years. Increasing demands for cost-effectiveness in farm audits can be met by introducing risk-based principles. This implies targeting subpopulations with a higher risk of poor process standards. To select farms for an audit that present higher risks, a statistical analysis was conducted to test the relationship between the outcome of farm audits and bulk milk laboratory results before the audit. The analysis comprised 28,358 farm audits and all conducted laboratory tests of bulk milk samples 12 mo before the audit. The overall outcome of each farm audit was classified as approved or rejected. Laboratory results included somatic cell count (SCC), total bacterial count (TBC), antimicrobial drug residues (ADR), level of butyric acid spores (BAB), freezing point depression (FPD), level of free fatty acids (FFA), and cleanliness of the milk (CLN). The bulk milk laboratory results were significantly related to audit outcomes. Rejected audits are likely to occur on dairy farms with higher mean levels of SCC, TBC, ADR, and BAB. Moreover, in a multivariable model, maxima for TBC, SCC, and FPD as well as standard deviations for TBC and FPD are risk factors for negative audit outcomes. The efficiency curve of a risk-based selection approach, on the basis of the derived regression results, dominated the current random selection approach. To capture 25, 50, or 75% of the population with poor process standards (i.e., audit outcome of rejected), respectively, only 8, 20, or 47% of the population had to be sampled based on a risk-based selection approach. Milk quality information can thus be used to preselect high-risk farms to be audited more frequently.
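
    The risk-based targeting idea can be sketched as a two-stage procedure: learn a rejection-risk model from past audit outcomes and bulk-milk statistics, then preselect the riskiest fraction of farms for auditing. The feature construction and classifier below are hypothetical stand-ins for the paper's regression analysis of SCC, TBC, FPD, and related variables.

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    def rank_farms_for_audit(X_hist, y_hist, X_new, fraction=0.2):
        """Fit rejection risk on past audits (y_hist: 1 = rejected), then
        return the indices of the riskiest `fraction` of farms to audit.

        Rows of X would hold per-farm summaries of 12 months of bulk-milk
        tests, e.g. mean/max/sd of SCC, TBC and FPD (hypothetical layout).
        """
        clf = LogisticRegression(max_iter=1000).fit(X_hist, y_hist)
        risk = clf.predict_proba(X_new)[:, 1]
        n_audit = int(np.ceil(fraction * len(risk)))
        return np.argsort(risk)[::-1][:n_audit]
    ```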

  18. Parameter estimation and model selection in computational biology.

    Directory of Open Access Journals (Sweden)

    Gabriele Lillacci

    2010-03-01

    Full Text Available A central challenge in computational modeling of biological systems is the determination of the model parameters. Typically, only a fraction of the parameters (such as kinetic rate constants) are experimentally measured, while the rest are often fitted. The fitting process is usually based on experimental time course measurements of observables, which are used to assign parameter values that minimize some measure of the error between these measurements and the corresponding model prediction. The measurements, which can come from immunoblotting assays, fluorescent markers, etc., tend to be very noisy and taken at a limited number of time points. In this work we present a new approach to the problem of parameter selection of biological models. We show how one can use a dynamic recursive estimator, known as the extended Kalman filter, to arrive at estimates of the model parameters. The proposed method proceeds as follows. First, we use a variation of the Kalman filter that is particularly well suited to biological applications to obtain a first guess for the unknown parameters. Secondly, we employ an a posteriori identifiability test to check the reliability of the estimates. Finally, we solve an optimization problem to refine the first guess in case it should not be accurate enough. The final estimates are guaranteed to be statistically consistent with the measurements. Furthermore, we show how the same tools can be used to discriminate among alternate models of the same biological process. We demonstrate these ideas by applying our methods to two examples, namely a model of the heat shock response in E. coli, and a model of a synthetic gene regulation system. The methods presented are quite general and may be applied to a wide class of biological systems where noisy measurements are used for parameter estimation or model selection.
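
    A minimal joint state-parameter EKF of the kind described: the unknown parameter is appended to the state vector and updated from noisy time-course measurements. The toy system x' = -k*x (estimating the decay rate k) stands in for a biological model, and all tuning values are assumptions.

    ```python
    import numpy as np

    def ekf_estimate_rate(y_meas, dt=0.1, r=0.05 ** 2):
        """Joint state/parameter EKF for x' = -k*x with state z = [x, k]."""
        z = np.array([y_meas[0], 0.5])      # initial guesses for x and k
        P = np.diag([0.1, 1.0])             # initial covariance
        Q = np.diag([1e-6, 1e-6])           # small process noise keeps k adaptive
        H = np.array([[1.0, 0.0]])          # we measure x only
        for y in y_meas[1:]:
            x, k = z
            z = np.array([x - k * x * dt, k])     # Euler prediction, k constant
            F = np.array([[1 - k * dt, -x * dt],  # Jacobian wrt [x, k]
                          [0.0, 1.0]])
            P = F @ P @ F.T + Q
            S = H @ P @ H.T + r
            K = P @ H.T / S                       # Kalman gain
            z = z + (K * (y - z[0])).ravel()
            P = (np.eye(2) - K @ H) @ P
        return z[1]

    # Simulated measurements from a true rate k = 1.2 (illustrative); the
    # estimate should land near 1.2, up to a small Euler discretization bias.
    t = np.arange(0, 5, 0.1)
    y = np.exp(-1.2 * t) + np.random.default_rng(3).normal(0, 0.05, t.size)
    print(ekf_estimate_rate(y))
    ```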

  19. Selecting global climate models for regional climate change studies.

    Science.gov (United States)

    Pierce, David W; Barnett, Tim P; Santer, Benjamin D; Gleckler, Peter J

    2009-05-26

    Regional or local climate change modeling studies currently require starting with a global climate model, then downscaling to the region of interest. How should global models be chosen for such studies, and what effect do such choices have? This question is addressed in the context of a regional climate detection and attribution (D&A) study of January-February-March (JFM) temperature over the western U.S. Models are often selected for a regional D&A analysis based on the quality of the simulated regional climate. Accordingly, 42 performance metrics based on seasonal temperature and precipitation, the El Nino/Southern Oscillation (ENSO), and the Pacific Decadal Oscillation are constructed and applied to 21 global models. However, no strong relationship is found between the score of the models on the metrics and results of the D&A analysis. Instead, the importance of having ensembles of runs with enough realizations to reduce the effects of natural internal climate variability is emphasized. Also, the superiority of the multimodel ensemble average (MM) to any 1 individual model, already found in global studies examining the mean climate, is true in this regional study that includes measures of variability as well. Evidence is shown that this superiority is largely caused by the cancellation of offsetting errors in the individual global models. Results with both the MM and models picked randomly confirm the original D&A results of anthropogenically forced JFM temperature changes in the western U.S. Future projections of temperature do not depend on model performance until the 2080s, after which the better performing models show warmer temperatures.

  1. Using PSO-Based Hierarchical Feature Selection Algorithm

    Directory of Open Access Journals (Sweden)

    Zhiwei Ji

    2014-01-01

    Full Text Available Hepatocellular carcinoma (HCC) is one of the most common malignant tumors. Clinical symptoms attributable to HCC are usually absent, so the best therapeutic opportunities are often missed. Traditional Chinese Medicine (TCM) plays an active role in diagnosis and treatment of HCC. In this paper, we proposed a particle swarm optimization-based hierarchical feature selection (PSOHFS) model to infer potential syndromes for diagnosis of HCC. Firstly, the hierarchical feature representation is developed by a three-layer tree. The clinical symptoms and positive score of a patient are the leaf nodes and root of the tree, respectively, while each syndrome feature on the middle layer is extracted from a group of symptoms. Secondly, an improved PSO-based algorithm is applied in a new reduced feature space to search an optimal syndrome subset. Based on the result of feature selection, the causal relationships of symptoms and syndromes are inferred via Bayesian networks. In our experiment, 147 symptoms were aggregated into 27 groups and 27 syndrome features were extracted. The proposed approach discovered 24 syndromes which clearly improved the diagnosis accuracy. Finally, the Bayesian approach was applied to represent the causal relationships both at symptom and syndrome levels. The results show that our computational model can facilitate the clinical diagnosis of HCC.
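
    A plain binary PSO over feature masks conveys the search mechanics (the paper's hierarchical syndrome layer and its fitness definition are omitted): velocities pass through a sigmoid to give bit-flip probabilities, and fitness is cross-validated accuracy with a placeholder classifier.

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)

    def fitness(mask, X, y):
        """Cross-validated accuracy on the masked feature subset."""
        if not mask.any():
            return 0.0
        return cross_val_score(LogisticRegression(max_iter=500),
                               X[:, mask], y, cv=3).mean()

    def binary_pso(X, y, n_particles=20, iters=30, w=0.7, c1=1.5, c2=1.5):
        d = X.shape[1]
        pos = rng.random((n_particles, d)) < 0.5        # boolean feature masks
        vel = rng.normal(0.0, 1.0, (n_particles, d))
        pbest = pos.copy()
        pbest_fit = np.array([fitness(p, X, y) for p in pos])
        gbest = pbest[pbest_fit.argmax()].copy()
        for _ in range(iters):
            r1 = rng.random((n_particles, d))
            r2 = rng.random((n_particles, d))
            vel = (w * vel
                   + c1 * r1 * (pbest.astype(float) - pos.astype(float))
                   + c2 * r2 * (gbest.astype(float) - pos.astype(float)))
            pos = rng.random((n_particles, d)) < 1 / (1 + np.exp(-vel))
            fit = np.array([fitness(p, X, y) for p in pos])
            better = fit > pbest_fit
            pbest[better], pbest_fit[better] = pos[better], fit[better]
            gbest = pbest[pbest_fit.argmax()].copy()
        return gbest, pbest_fit.max()
    ```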

  2. Investigation of PDE5/PDE6 and PDE5/PDE11 selective potent tadalafil-like PDE5 inhibitors using combination of molecular modeling approaches, molecular fingerprint-based virtual screening protocols and structure-based pharmacophore development.

    Science.gov (United States)

    Kayık, Gülru; Tüzün, Nurcan Ş; Durdagi, Serdar

    2017-12-01

    The essential biological function of phosphodiesterase (PDE) type enzymes is to regulate the cytoplasmic levels of the intracellular second messengers, 3',5'-cyclic guanosine monophosphate (cGMP) and/or 3',5'-cyclic adenosine monophosphate (cAMP). The PDE family has 11 isoenzymes. Of these enzymes, PDE5 has attracted special attention over the years after its recognition as the target enzyme in treating erectile dysfunction. Due to the amino acid sequence and the secondary structural similarity of PDE6 and PDE11 with the catalytic domain of PDE5, first-generation PDE5 inhibitors (i.e. sildenafil and vardenafil) are also competitive inhibitors of PDE6 and PDE11. Since the major challenge of designing novel PDE5 inhibitors is to decrease their cross-reactivity with PDE6 and PDE11, in this study, we attempt to identify potent tadalafil-like PDE5 inhibitors that have PDE5/PDE6 and PDE5/PDE11 selectivity. For this aim, a similarity-based virtual screening protocol is applied to the "clean drug-like" subset of the ZINC database, which contains more than 20 million small compounds. Moreover, molecular dynamics (MD) simulations of selected hits complexed with PDE5 and the off-targets were performed in order to gain insight into the structural and dynamical behavior of the selected molecules as selective PDE5 inhibitors. Since tadalafil blocks hERG1 K channels in a concentration-dependent manner, the cardiotoxicity of the hit molecules was also predicted. The results of this study can be useful for the design of novel, safe and selective PDE5 inhibitors.

  3. Finite element model selection using Particle Swarm Optimization

    CERN Document Server

    Mthembu, Linda; Friswell, Michael I; Adhikari, Sondipon

    2009-01-01

    This paper proposes the application of particle swarm optimization (PSO) to the problem of finite element model (FEM) selection. This problem arises when a choice of the best model for a system has to be made from a set of competing models, each developed a priori from engineering judgment. PSO is a population-based stochastic search algorithm inspired by the behaviour of biological entities in nature when they are foraging for resources. Each potentially correct model is represented as a particle that exhibits both individualistic and group behaviour. Each particle moves within the model search space looking for the best solution by updating the parameter values that define it. The most important step in the particle swarm algorithm is the method of representing models, which should take into account the number, location and variables of parameters to be updated. One example structural system is used to show the applicability of PSO in finding an optimal FEM. An optimal model is defined as the model that has t...

  4. 基于多目标粒子群的土地整理项目选址模型%Site selection model of land consolidation projects based on multi-objective optimization PSO

    Institute of Scientific and Technical Information of China (English)

    王华; 朱付保

    2015-01-01

    were the maximum potential of newly-increased cultivated land, the higher connectivity among zones and the best land suitability. Two types of constraint conditions, the minimum newly-increased cultivated land ratio and the area limitation of a land consolidation project, were also considered. A site selection model of land consolidation projects based on the multi-objective particle swarm optimization algorithm was proposed to solve the multi-objective spatial optimization problem with the assistance of GIS (geographic information system). The mapping relationships between different concepts in the site selection of land consolidation projects and intelligence algorithms were analyzed. Each vector parcel was considered to be a decision-making unit, and the value of the decision variable was set to 1 when the corresponding parcel was chosen for the project areas, otherwise it was zero. Each particle in the particle swarm optimization (PSO) algorithm represented a site selection scheme of a land consolidation project, which comprised all the multi-dimensional decision-making units. The particle was encoded based on the identification of the parcels and the values of the decision variable. The structures of the velocity calculation operator and the position updating operator were designed based on the spatial coding scheme of the individual. The mechanism of particle status update and the procedure of evolution had to be adapted because of the discrete solution space of the site selection model. Finally, taking Jiayu County, Hubei Province as a case study, different weight schemes were chosen. Jiayu County is an important base for producing agricultural by-products and aquatic products, which made it our choice of study area to test the model. The model was expected to reasonably select spatial units in accordance with multiple objectives and constraints, and to optimize the newly-increased cultivated land ratio and the spatial pattern

  5. Acridine-intercalator based hypoxia selective cytotoxins

    Science.gov (United States)

    Papadopoulou-Rosenzweig, Maria; Bloomer, William D.; Bloomer, William D.

    1994-01-01

    Hypoxia selective cytotoxins of the general formula ##STR1## wherein n is from 1 to 5, and NO_2 is in at least one of the 2-, 4- or 5-positions of the imidazole. Such compounds have utility as radiosensitizers and chemosensitizers.

  6. Acridine-intercalator based hypoxia selective cytotoxins

    Energy Technology Data Exchange (ETDEWEB)

    Papadopoulou-Rosenzweig, M.; Bloomer, W.D.

    1994-03-15

    Hypoxia selective cytotoxins of the general formula STR1, wherein n is from 1 to 5, and NO_2 is in at least one of the 2-, 4- or 5-positions of the imidazole, are developed. Such compounds have utility as radiosensitizers and chemosensitizers. 9 figs.

  7. Antibiotic Selection Pressure Determination through Sequence-Based Metagenomics.

    Science.gov (United States)

    Willmann, Matthias; El-Hadidi, Mohamed; Huson, Daniel H; Schütz, Monika; Weidenmaier, Christopher; Autenrieth, Ingo B; Peter, Silke

    2015-12-01

    The human gut forms a dynamic reservoir of antibiotic resistance genes (ARGs). Treatment with antimicrobial agents has a significant impact on the intestinal resistome and leads to enhanced horizontal transfer and selection of resistance. We have monitored the development of intestinal ARGs over a 6-day course of ciprofloxacin (Cp) treatment in two healthy individuals by using sequence-based metagenomics and different ARG quantification methods. Fixed- and random-effect models were applied to determine the change in ARG abundance per defined daily dose of Cp as an expression of the respective selection pressure. Among various shifts in the composition of the intestinal resistome, we found in one individual a strong positive selection for class D beta-lactamases which were partly located on a mobile genetic element. Furthermore, a trend toward negative selection was observed with class A beta-lactamases (-2.66 hits per million sample reads/defined daily dose; P = 0.06). By 4 weeks after the end of treatment, the composition of ARGs returned toward their initial state but to a different degree in both subjects. We present here a novel analysis algorithm for the determination of antibiotic selection pressure which can be applied in clinical settings to compare therapeutic regimens regarding their effect on the intestinal resistome. This information is of critical importance for clinicians to choose antimicrobial agents with a low selective force on their patients' intestinal ARGs, likely resulting in a diminished spread of resistance and a reduced burden of hospital-acquired infections with multidrug-resistant pathogens.

  8. A Portfolio Selection Model Based on the Mean Semi-absolute Value Deviation

    Institute of Scientific and Technical Information of China (English)

    高锦; 印凡成; 黄健元

    2012-01-01

    Starting from investors' psychological feelings and considering the loss-aversion phenomenon, the value-function idea of prospect theory is applied and a tradeoff factor is introduced. Membership functions are used to describe the value satisfaction of returns and risks, and a portfolio selection model based on the mean semi-absolute value deviation is established. Twenty stocks are selected from the Shanghai Stock 30 Index, and an empirical analysis compares the model with the semi-absolute deviation model. The results show that the model is feasible and better reflects investors' psychological response to losses.
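
    For reference, the crisp quantity that this model generalizes is the semi-absolute deviation of the portfolio return, which penalizes only downside deviations (our reconstruction in standard notation; the paper replaces the returns by fuzzy numbers and measures deviation through a fuzzy-space distance):

    ```latex
    \mathrm{SAD}(x) \;=\; \mathbb{E}\!\left[\bigl(\mathbb{E}[R(x)] - R(x)\bigr)_{+}\right],
    \qquad R(x) \;=\; \sum_{i=1}^{n} x_i r_i ,
    ```

    where x_i are portfolio weights and r_i the asset returns; minimizing SAD(x) subject to a target mean return gives the classical semi-absolute deviation portfolio against which the fuzzy model is compared.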

  9. Quality Quandaries- Time Series Model Selection and Parsimony

    DEFF Research Database (Denmark)

    Bisgaard, Søren; Kulahci, Murat

    2009-01-01

    Some of the issues involved in selecting adequate models for time series data are discussed using an example concerning the number of users of an Internet server. The process of selecting an appropriate model is subjective and requires experience and judgment. The authors believe an important consideration in model selection should be parameter parsimony. They favor the use of parsimonious mixed ARMA models, noting that research has shown that a model building strategy that considers only autoregressive representations will lead to non-parsimonious models and to loss of forecasting accuracy.

  11. Covariance-Based Measurement Selection Criterion for Gaussian-Based Algorithms

    Directory of Open Access Journals (Sweden)

    Fernando A. Auat Cheein

    2013-01-01

    Full Text Available Process modeling by means of Gaussian-based algorithms often suffers from redundant information which usually increases the estimation computational complexity without significantly improving the estimation performance. In this article, a non-arbitrary measurement selection criterion for Gaussian-based algorithms is proposed. The measurement selection criterion is based on the determination of the most significant measurement from both an estimation convergence perspective and the covariance matrix associated with the measurement. The selection criterion is independent from the nature of the measured variable. This criterion is used in conjunction with three Gaussian-based algorithms: the EIF (Extended Information Filter, the EKF (Extended Kalman Filter and the UKF (Unscented Kalman Filter. Nevertheless, the measurement selection criterion shown herein can also be applied to other Gaussian-based algorithms. Although this work is focused on environment modeling, the results shown herein can be applied to other Gaussian-based algorithm implementations. Mathematical descriptions and implementation results that validate the proposal are also included in this work.
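
    A sketch of the selection rule for the Kalman-filter family: among candidate measurements, pick the one whose (linearized) update most reduces the trace of the state covariance. The matrices below are placeholders; the article's criterion also weighs estimation convergence, which is omitted here.

    ```python
    import numpy as np

    def select_measurement(P, H_list, R_list):
        """Return the index of the candidate measurement that minimizes
        tr(P_posterior) after a Kalman update.

        P      : (n, n) prior state covariance
        H_list : candidate observation matrices, each (m_i, n)
        R_list : matching measurement-noise covariances, each (m_i, m_i)
        """
        def posterior_trace(H, R):
            S = H @ P @ H.T + R
            K = P @ H.T @ np.linalg.inv(S)
            return np.trace((np.eye(P.shape[0]) - K @ H) @ P)

        traces = [posterior_trace(H, R) for H, R in zip(H_list, R_list)]
        return int(np.argmin(traces))
    ```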

  12. Applying a Hybrid MCDM Model for Six Sigma Project Selection

    Directory of Open Access Journals (Sweden)

    Fu-Kwun Wang

    2014-01-01

    Full Text Available Six Sigma is a project-driven methodology; the projects that provide the maximum financial benefits and other impacts to the organization must be prioritized. Project selection (PS) is a type of multiple criteria decision making (MCDM) problem. In this study, we present a hybrid MCDM model combining the decision-making trial and evaluation laboratory (DEMATEL) technique, analytic network process (ANP), and the VIKOR method to evaluate and improve Six Sigma projects for reducing performance gaps in each criterion and dimension. We consider the film printing industry of Taiwan as an empirical case. The results show that our model can not only be used to select the best project, but also to analyze the gaps between existing performance values and aspiration levels, and to improve those gaps in each dimension and criterion based on the influential network relation map.
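
    The VIKOR stage can be sketched on its own: given a decision matrix of projects by criteria and criterion weights (which DEMATEL/ANP would supply upstream), it ranks alternatives by a compromise between group utility and individual regret. Benefit-type criteria and all names are assumptions of this sketch.

    ```python
    import numpy as np

    def vikor_rank(F, weights, v=0.5):
        """Basic VIKOR: rows of F are alternatives, columns are benefit
        criteria; returns alternative indices, best compromise first."""
        f_best, f_worst = F.max(axis=0), F.min(axis=0)
        span = np.where(f_best == f_worst, 1.0, f_best - f_worst)
        norm = (f_best - F) / span
        S = (weights * norm).sum(axis=1)    # group utility (lower is better)
        R = (weights * norm).max(axis=1)    # individual regret
        Q = (v * (S - S.min()) / (S.max() - S.min() + 1e-12)
             + (1 - v) * (R - R.min()) / (R.max() - R.min() + 1e-12))
        return np.argsort(Q)
    ```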

  13. Research on a Project Portfolio Selection Model and Optimization Considering Interdependencies and Decision-Maker Preference Incorporation

    Institute of Scientific and Technical Information of China (English)

    罗淑娟; 白思俊; 郭云涛

    2016-01-01

    Project portfolio selection is known as an essential element of strategic project management and decision making. Existing models and methods for project portfolio selection that consider interdependencies and the incorporation of decision-maker preferences still have shortcomings. This paper proposes a novel outranking model that finely classifies the preference relationships between different projects, and brings synergies and interdependencies into consideration, which makes the model more complete. Moreover, an improved multi-objective particle swarm optimization algorithm based on the model is proposed, which speeds up convergence and simultaneously expands the diversity of non-dominated solutions. Under the constraints of preference incorporation and project interdependencies, simulation experiments were conducted to verify the model and the method respectively. The results indicate that the non-dominated solutions obtained by the outranking model are closer to the optimum of project portfolio selection, and that the improved particle swarm optimization searches faster.

  14. Automation of Endmember Pixel Selection in SEBAL/METRIC Model

    Science.gov (United States)

    Bhattarai, N.; Quackenbush, L. J.; Im, J.; Shaw, S. B.

    2015-12-01

    The commonly applied surface energy balance for land (SEBAL) and its variant, mapping evapotranspiration (ET) at high resolution with internalized calibration (METRIC) models require manual selection of endmember (i.e. hot and cold) pixels to calibrate sensible heat flux. Current approaches for automating this process are based on statistical methods and do not appear to be robust under varying climate conditions and seasons. In this paper, we introduce a new approach based on simple machine learning tools and search algorithms that provides an automatic and time efficient way of identifying endmember pixels for use in these models. The fully automated models were applied on over 100 cloud-free Landsat images with each image covering several eddy covariance flux sites in Florida and Oklahoma. Observed land surface temperatures at automatically identified hot and cold pixels were within 0.5% of those from pixels manually identified by an experienced operator (coefficient of determination, R2, ≥ 0.92, Nash-Sutcliffe efficiency, NSE, ≥ 0.92, and root mean squared error, RMSE, ≤ 1.67 K). Daily ET estimates derived from the automated SEBAL and METRIC models were in good agreement with their manual counterparts (e.g., NSE ≥ 0.91 and RMSE ≤ 0.35 mm day-1). Automated and manual pixel selection resulted in similar estimates of observed ET across all sites. The proposed approach should reduce time demands for applying SEBAL/METRIC models and allow for their more widespread and frequent use. This automation can also reduce potential bias that could be introduced by an inexperienced operator and extend the domain of the models to new users.
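
    For contrast with the machine-learning approach above, the classical heuristic it automates can be sketched in a few lines: hot candidates are bare, dry pixels (low NDVI, high surface temperature) and cold candidates are well-watered vegetation (high NDVI, low temperature). Thresholds and names are illustrative assumptions.

    ```python
    import numpy as np

    def endmember_pixels(ndvi, lst, ndvi_hot=0.2, ndvi_cold=0.7):
        """Candidate hot/cold calibration pixels for SEBAL/METRIC-style
        models; assumes both NDVI masks are non-empty."""
        flat_ndvi, flat_lst = ndvi.ravel(), lst.ravel()
        hot_mask = flat_ndvi < ndvi_hot
        cold_mask = flat_ndvi > ndvi_cold
        hot = np.flatnonzero(hot_mask)[np.argmax(flat_lst[hot_mask])]
        cold = np.flatnonzero(cold_mask)[np.argmin(flat_lst[cold_mask])]
        return hot, cold   # flat indices into the image arrays
    ```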

  15. Road Network Selection Based on Road Hierarchical Structure Control

    Directory of Open Access Journals (Sweden)

    HE Haiwei

    2015-04-01

    Full Text Available A new road network selection method based on hierarchical structure is studied. Firstly, the road network is built as strokes, which are then classified into hierarchical collections according to the criterion of betweenness centrality value (BC value). Secondly, the hierarchical structure of the strokes is enhanced using a structural characteristic identification technique. Thirdly, an importance calculation model is established according to the relationships among the hierarchical collections of strokes. Finally, the importance values of the strokes are obtained through the model's hierarchical calculation, and the road network is selected accordingly. Tests are done to verify the advantages of this method by comparing it with other common stroke-oriented methods on three kinds of typical road network data. Comparison of the results shows that this method has little need for semantic data and can effectively eliminate the negative influence of edge strokes caused by the BC-value criterion. It therefore better maintains the global hierarchical structure of the road network and is suitable for the selection of various kinds of road networks.
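    A toy illustration of BC-value-based selection, assuming the networkx package and using plain edges as a stand-in for strokes (the paper's actual selection unit):

```python
import networkx as nx

# Toy road graph; in the paper strokes (chains of edges) are classified by
# betweenness centrality, here single edges stand in for them for brevity.
G = nx.random_geometric_graph(60, 0.2, seed=1)
bc = nx.edge_betweenness_centrality(G)

# Keep the top 30% of edges by BC value as the selected network.
keep = sorted(bc, key=bc.get, reverse=True)[: int(0.3 * G.number_of_edges())]
selected = G.edge_subgraph(keep)
print(selected.number_of_edges(), "of", G.number_of_edges(), "edges kept")
```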

  16. Unifying models for X-ray selected and Radio selected BL Lac Objects

    CERN Document Server

    Fossati, G; Ghisellini, G; Maraschi, L; Brera-Merate, O A

    1997-01-01

    We discuss alternative interpretations of the differences in the Spectral Energy Distributions (SEDs) of BL Lacs found in complete radio or X-ray surveys. A large body of observations in different bands suggests that the SEDs of BL Lac objects appearing in X-ray surveys differ from those appearing in radio surveys mainly in having a (synchrotron) spectral cut-off (or break) at much higher frequency. In order to explain the different properties of radio and X-ray selected BL Lacs, Giommi and Padovani proposed a model based on a common radio luminosity function. At each radio luminosity, objects with high frequency spectral cut-offs are assumed to be a minority. Nevertheless, they dominate the X-ray selected population due to their larger X-ray-to-radio flux ratio. An alternative model explored here (reminiscent of the orientation models previously proposed) is that the X-ray luminosity function is "primary" and that at each X-ray luminosity a minority of objects has a larger radio-to-X-ray flux ratio. The prediction...

  17. An Opportunistic Relaying Selection Scheme Based on Relay Fairness

    Directory of Open Access Journals (Sweden)

    Ting An

    2013-07-01

    Full Text Available The opportunistic relaying scheme is a single cooperative relay selection method based on channel state information. However, the failure probability of best relay selection may become unacceptable when the number of relays increases. Although most existing solutions can reduce the failure probability of relay selection, they ignore the fairness of the selection. In order to improve the fairness of relay selection without affecting the failure probability of best relay selection, we propose a modified practical best relay selection scheme in this paper: by introducing a proportional fair algorithm, the relay timer is given a smaller correlation coefficient, which improves the selection probability of rarely chosen relays. Relay fairness is quantified in terms of a defined fairness factor. Simulation results show that the new algorithm can improve the fairness of relay selection while maintaining the original failure probability of relay selection.
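    A rough sketch of a fairness-weighted relay timer in this spirit: each relay's timer is inversely proportional to its channel gain (the classical opportunistic timer) and scaled by its running selection rate, so frequently chosen relays wait longer. The exponential channel model and the fairness exponent are assumptions, not the paper's exact scheme:

```python
import numpy as np

rng = np.random.default_rng(2)
n_relays, n_slots, T = 8, 5000, 1.0
avg_rate = np.full(n_relays, 1e-3)   # exponentially averaged selection rate
counts = np.zeros(n_relays, dtype=int)

for _ in range(n_slots):
    h = rng.exponential(1.0, n_relays)      # instantaneous channel gains
    timers = T / h * (avg_rate ** 0.3)      # fairness-scaled timers: often-
    winner = int(np.argmin(timers))         # chosen relays get longer timers
    counts[winner] += 1
    chosen = np.zeros(n_relays)
    chosen[winner] = 1.0
    avg_rate = 0.99 * avg_rate + 0.01 * chosen  # proportional-fair averaging

print("selection shares:", np.round(counts / n_slots, 3))
```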

  18. Cardinality constrained portfolio selection via factor models

    OpenAIRE

    Monge, Juan Francisco

    2017-01-01

    In this paper we propose and discuss different 0-1 linear models for solving the cardinality constrained portfolio problem by using factor models. Factor models are used to build portfolios that track indexes, among other objectives, and require fewer parameters to estimate than the classical Markowitz model. The addition of the cardinality constraints limits the number of securities in the portfolio. Restricting the number of securities in the portfolio allows us to o...

  19. Fuzzy sets, rough sets, and modeling evidence: Theory and Application. A Dempster-Shafer based approach to compromise decision making with multiattributes applied to product selection

    Science.gov (United States)

    Dekorvin, Andre

    1992-01-01

    The Dempster-Shafer theory of evidence is applied to a multiattribute decision making problem whereby the decision maker (DM) must compromise with available alternatives, none of which exactly satisfies his ideal. The decision mechanism is constrained by the uncertainty inherent in the determination of the relative importance of each attribute element and the classification of existing alternatives. The classification of alternatives is addressed through expert evaluation of the degree to which each element is contained in each available alternative. The relative importance of each attribute element is determined through pairwise comparisons of the elements by the decision maker and implementation of a ratio scale quantification method. Then the 'belief' and 'plausibility' that an alternative will satisfy the decision maker's ideal are calculated and combined to rank order the available alternatives. Application to the problem of selecting computer software is given.
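    The belief/plausibility machinery the record describes can be sketched with Dempster's rule of combination over a small, assumed frame of alternatives:

```python
from itertools import product

def dempster_combine(m1, m2):
    """Combine two mass functions (dicts: frozenset -> mass) with
    Dempster's rule; the conflicting mass is normalised away."""
    combined, conflict = {}, 0.0
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + wa * wb
        else:
            conflict += wa * wb
    return {s: v / (1.0 - conflict) for s, v in combined.items()}

# Illustrative evidence that alternative 'A' or 'B' satisfies the DM's ideal.
theta = frozenset({"A", "B"})
m_expert = {frozenset({"A"}): 0.6, theta: 0.4}
m_weights = {frozenset({"A"}): 0.3, frozenset({"B"}): 0.3, theta: 0.4}

m = dempster_combine(m_expert, m_weights)
belief_A = sum(v for s, v in m.items() if s <= frozenset({"A"}))  # Bel: s subset of {A}
plaus_A = sum(v for s, v in m.items() if s & frozenset({"A"}))    # Pl: s intersects {A}
print(f"Bel(A)={belief_A:.3f}, Pl(A)={plaus_A:.3f}")
```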

  20. Evidence accumulation as a model for lexical selection

    NARCIS (Netherlands)

    Anders, R.; Riès, S.; van Maanen, L.; Alario, F.-X.

    2015-01-01

    We propose and demonstrate evidence accumulation as a plausible theoretical and/or empirical model for the lexical selection process of lexical retrieval. A number of current psycholinguistic theories consider lexical selection as a process related to selecting a lexical target from a number of

  1. MCA Based Performance Evaluation of Project Selection

    CERN Document Server

    Bakshi, Tuli

    2011-01-01

    Multi-criteria decision support systems are used in various fields of human activity. Every alternative in a multi-criteria decision making problem can be represented by a set of properties or constraints. The properties can be qualitative or quantitative; they are measured in different units and optimized with different techniques. Depending upon the desired goal, normalization aims to obtain reference scales for the values of these properties. This paper deals with a new additive ratio assessment (ARAS) method. In order to make the appropriate decision and a proper comparison among the available alternatives, the Analytic Hierarchy Process (AHP) and ARAS have been used: AHP is used to analyze the structure of the project selection problem and to assign the weights of the properties, and the ARAS method is used to obtain the final ranking and to select the best project. To illustrate the above-mentioned methods, survey data on the expansion of optic...
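    A minimal sketch of the AHP weighting step (normalized principal eigenvector of an assumed pairwise comparison matrix, with Saaty's consistency check); the ARAS ranking step would then consume these weights:

```python
import numpy as np

# Illustrative pairwise comparison matrix for three criteria (Saaty scale);
# the judgements are assumptions, not the paper's survey data.
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
w = np.abs(eigvecs[:, k].real)
w /= w.sum()                # AHP weights = normalised principal eigenvector

# Consistency ratio guards against contradictory judgements
# (random index RI = 0.58 for a 3x3 matrix).
ci = (eigvals.real[k] - len(A)) / (len(A) - 1)
print("weights:", np.round(w, 3), " CR:", round(ci / 0.58, 3))
```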

  2. A CLIPS-based expert system for the evaluation and selection of robots

    Science.gov (United States)

    Nour, Mohamed A.; Offodile, Felix O.; Madey, Gregory R.

    1994-01-01

    This paper describes the development of a prototype expert system for intelligent selection of robots for manufacturing operations. The paper first develops a comprehensive, three-stage process to model the robot selection problem. The decisions involved in this model easily lend themselves to an expert system application. A rule-based system, based on the selection model, is developed using the CLIPS expert system shell. Data about actual robots is used to test the performance of the prototype system. Further extensions to the rule-based system for data handling and interfacing capabilities are suggested.

  3. The application of foreign exchange portfolio selection based on the GARCH model

    Institute of Scientific and Technical Information of China (English)

    雷震; 韦增欣

    2014-01-01

    This article uses time series models to analyze the foreign exchange rates of seven main currency pairs: EUR/USD, GBP/USD, USD/CHF, USD/JPY, AUD/USD, USD/CAD and NZD/USD. A GARCH model is fitted to forecast the rate of each currency pair; the forecasts are used to calculate the expected profit, and the residual variance of each model is used to measure risk. When making the investment strategy, whether leverage exists or not should be considered; Markowitz's mean-variance model is then used to obtain the strategy for the next day.
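    A rough sketch of the per-currency GARCH(1,1) forecasting step, assuming the third-party arch package and synthetic returns in place of real exchange rate data; the closing allocation step is collapsed to an uncorrelated-assets shortcut rather than Markowitz's full covariance formulation:

```python
import numpy as np
from arch import arch_model  # third-party 'arch' package

rng = np.random.default_rng(7)
# Synthetic daily % returns standing in for the currency pairs.
returns = {pair: rng.normal(0.0, 0.6, 1000)
           for pair in ["EURUSD", "GBPUSD", "USDCHF", "USDJPY"]}

mu, var = [], []
for pair, r in returns.items():
    res = arch_model(r, vol="Garch", p=1, q=1).fit(disp="off")
    f = res.forecast(horizon=1)
    mu.append(f.mean.values[-1, 0])        # next-day expected return
    var.append(f.variance.values[-1, 0])   # next-day variance = risk proxy

# Mean-variance-style weights, assuming uncorrelated pairs for brevity.
w = np.array(mu) / np.array(var)
w = w / np.abs(w).sum()
print(dict(zip(returns, np.round(w, 3))))
```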

  4. Monte Carlo Method Based QSAR Modeling of Coumarin Derivates as Potent HIV‐1 Integrase Inhibitors and Molecular Docking Studies of Selected 4‐phenyl Hydroxycoumarins

    Directory of Open Access Journals (Sweden)

    Veselinović Jovana

    2014-06-01

    Full Text Available In the search for new and promising coumarin compounds as HIV-1 integrase inhibitors, chemoinformatic methods like quantitative structure-activity relationship (QSAR) modeling and molecular docking play an important role, since they can predict the desired activity and propose how a molecule binds to the enzyme.

  5. Crowdsourcing Based 3d Modeling

    Science.gov (United States)

    Somogyi, A.; Barsi, A.; Molnar, B.; Lovas, T.

    2016-06-01

    Web-based photo albums that support organizing and viewing the users' images are widely used. These services provide a convenient solution for storing, editing and sharing images. In many cases, the users attach geotags to the images in order to enable using them e.g. in location based applications on social networks. Our paper discusses a procedure that collects open access images from a site frequently visited by tourists. Geotagged pictures showing the image of a sight or tourist attraction are selected and processed in photogrammetric processing software that produces the 3D model of the captured object. For the particular investigation we selected three attractions in Budapest. To assess the geometrical accuracy, we used laser scanner and DSLR as well as smart phone photography to derive reference values to enable verifying the spatial model obtained from the web-album images. The investigation shows how detailed and accurate models could be derived applying photogrammetric processing software, simply by using images of the community, without visiting the site.

  6. RECEIVE ANTENNA SUBSET SELECTION BASED ON ORTHOGONAL COMPONENTS

    Institute of Scientific and Technical Information of China (English)

    Lan Peng; Liu Ju; Gu Bo; Zhang Wei

    2007-01-01

    A new receive antenna subset selection algorithm with low complexity for wireless Multiple-Input Multiple-Output (MIMO) systems is proposed, which is based on the orthogonal components of the channel matrix. Larger capacity is achieved compared with the existing antenna selection methods. Simulation results of quasi-static flat fading channel demonstrate the significant performance of the proposed selection algorithm.
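    The record's orthogonal-components algorithm is not reproduced here; as a reference point, a standard greedy capacity-based receive antenna subset selection sketch in numpy:

```python
import numpy as np

def greedy_select(H, n_sel, snr=10.0):
    """Greedily pick receive antennas (rows of H) maximising
    log det(I + snr/Nt * Hs Hs^H), a standard capacity surrogate."""
    nr, nt = H.shape
    chosen, best_cap = [], -np.inf
    for _ in range(n_sel):
        best, best_cap = None, -np.inf
        for i in set(range(nr)) - set(chosen):
            Hs = H[chosen + [i], :]
            cap = np.linalg.slogdet(np.eye(len(chosen) + 1)
                                    + snr / nt * Hs @ Hs.conj().T)[1]
            if cap > best_cap:
                best, best_cap = i, cap
        chosen.append(best)
    return chosen, best_cap

rng = np.random.default_rng(3)
# 8 receive x 4 transmit Rayleigh channel; select the best 4 receive antennas.
H = (rng.normal(size=(8, 4)) + 1j * rng.normal(size=(8, 4))) / np.sqrt(2)
sel, cap = greedy_select(H, n_sel=4)
print("antennas:", sel, " capacity (nats):", round(cap, 2))
```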

  7. Prediction of Farmers’ Income and Selection of Model ARIMA

    Institute of Scientific and Technical Information of China (English)

    2010-01-01

    Based on the research techniques of scholars' predictions of farmers' income and the data on per capita annual net income in rural households in the Henan Statistical Yearbook from 1979 to 2009, it is found that the time series of farmers' income follows an I(2) non-stationary process. The order determination and identification of the model are achieved by adopting the correlogram-based analytical method of Box-Jenkins. On the basis of comparing a group of model properties with different parameters, the model ARIMA(4,2,2) is built. The testing result shows that the residual error of the selected model is white noise and accords with the normal distribution, so the model can be used to predict farmers' income. The model prediction indicates that income in rural households will continue to increase from 2009 to 2012 and will reach the values of 2,282.4, 2,502.9, 2,686.9 and 2,884.5 respectively. The growth speed will go down from fast to slow with weak sustainability.
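    A minimal sketch of fitting the ARIMA(4,2,2) model named in the record, assuming the statsmodels package and a synthetic income series in place of the Henan Statistical Yearbook data:

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

# Synthetic stand-in for per capita annual net income, 1979-2009
# (double cumulative sum gives an I(2)-like trend).
y = 100 + np.cumsum(np.cumsum(np.random.default_rng(4).normal(8, 3, 31)))

model = ARIMA(y, order=(4, 2, 2))   # ARIMA(p=4, d=2, q=2) as in the record
res = model.fit()
print(res.forecast(steps=4))        # income forecasts for the next four years
```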

  8. Dynamic modelling and analysis of biochemical networks: mechanism-based models and model-based experiments.

    Science.gov (United States)

    van Riel, Natal A W

    2006-12-01

    Systems biology applies quantitative, mechanistic modelling to study genetic networks, signal transduction pathways and metabolic networks. Mathematical models of biochemical networks can look very different. An important reason is that the purpose and application of a model are essential for the selection of the best mathematical framework. Fundamental aspects of selecting an appropriate modelling framework and a strategy for model building are discussed. Concepts and methods from system and control theory provide a sound basis for the further development of improved and dedicated computational tools for systems biology. Identification of the network components and rate constants that are most critical to the output behaviour of the system is one of the major problems raised in systems biology. Current approaches and methods of parameter sensitivity analysis and parameter estimation are reviewed. It is shown how these methods can be applied in the design of model-based experiments which iteratively yield models that are decreasingly wrong and increasingly gain predictive power.

  9. Antimutagenic and antioxidant activity of a selected lectin-free common bean (Phaseolus vulgaris L.) in two cell-based models.

    Science.gov (United States)

    Frassinetti, Stefania; Gabriele, Morena; Caltavuturo, Leonardo; Longo, Vincenzo; Pucci, Laura

    2015-03-01

    Legumes, and particularly beans, are a key food of the Mediterranean diet, representing an important source of proteins, fiber, some minerals and vitamins, and bioactive compounds. We evaluated the antioxidant and antimutagenic effects of a new fermented powder of a selected lectin-free and phaseolamin-enriched variety of common bean (Phaseolus vulgaris L.), named Lady Joy. Lady Joy lysate (Lys LJ) was studied in human erythrocytes and in Saccharomyces cerevisiae yeast cells. The antioxidant and anti-hemolytic properties of Lys LJ, studied in an ex vivo erythrocyte system using the cellular antioxidant assay (CAA-RBC) and the hemolysis test, showed a dose-dependent antioxidant activity as well as a significant inhibition of hemolysis. Besides, the results showed that Lys LJ treatment significantly decreased the intracellular ROS concentration and the mutagenesis induced by hydrogen peroxide in the S. cerevisiae D7 strain. In conclusion, Lys LJ showed both an antimutagenic effect in yeast and a strong scavenging activity in yeast and human cells.

  10. Worm Propagation Model Based on Selective-Random Scan

    Institute of Scientific and Technical Information of China (English)

    张祥德; 丁春燕; 朱和贵

    2006-01-01

    This paper analyzes the process by which a worm propagates through a closed population of computers and proposes a discrete worm propagation model. The model is compared with real propagation data of the Code Red v2 worm, and the comparison shows that the model reflects the propagation behavior of random-scanning worms well. The model is then extended to describe the propagation of selective-random-scan worms. The extended model shows that, in a closed population of computers, the greater the variation in the number of susceptible hosts across the constituent subnetworks, the faster the worm propagates.
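    A toy discrete-time simulation in the spirit of such a random-scan worm model; all parameters (population size, scan rate, address space) are illustrative, loosely Code-Red-like values, not taken from the paper:

```python
import numpy as np

def si_worm(N=360_000, infected0=1, scan_rate=358, space=2**32, steps=25):
    """Discrete-time random-scan worm model: each infected host scans
    `scan_rate` random addresses per step; a hit on a susceptible host
    infects it (collisions between scanners are ignored for brevity)."""
    infected = infected0
    trace = [infected]
    for _ in range(steps):
        # Probability one infected host hits at least one susceptible host.
        p_hit = 1.0 - (1.0 - (N - infected) / space) ** scan_rate
        new = np.random.binomial(infected, p_hit)
        infected = min(N, infected + new)
        trace.append(infected)
    return trace

print(si_worm()[-5:])   # infected counts over the last five steps
```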

  11. Selection of Temporal Lags When Modeling Economic and Financial Processes.

    Science.gov (United States)

    Matilla-Garcia, Mariano; Ojeda, Rina B; Marin, Manuel Ruiz

    2016-10-01

    This paper suggests new nonparametric statistical tools and procedures for modeling linear and nonlinear univariate economic and financial processes. In particular, the tools presented help in selecting relevant lags in the model description of a general linear or nonlinear time series; that is, nonlinear models are not a restriction. The tests seem to be robust to the selection of free parameters. We also show that the test can be used as a diagnostic tool for well-defined models.

  12. Mathematical Model for the Selection of Processing Parameters in Selective Laser Sintering of Polymer Products

    Directory of Open Access Journals (Sweden)

    Ana Pilipović

    2014-03-01

    Full Text Available Additive manufacturing (AM) is increasingly applied in development projects from the initial idea to the finished product. The reasons are multiple, but what should be emphasised is the possibility of relatively rapid manufacturing of products of complicated geometry based on a computer 3D model of the product. There are numerous limitations, primarily in the number of available materials and their properties, which may be quite different from the properties of the material of the finished product. Therefore, it is necessary to know the properties of the product materials. In AM procedures the mechanical properties of materials are affected by the manufacturing procedure and the production parameters. During SLS procedures it is possible to adjust various manufacturing parameters which are used to improve various mechanical and other properties of the products. The paper establishes a new mathematical model to determine the influence of individual manufacturing parameters on a polymer product made by selective laser sintering. The old mathematical model is checked by a statistical method with a central composite plan, and it is established that the old model must be expanded with a new parameter, the beam overlay ratio. Verification of the new mathematical model and optimization of the processing parameters are carried out on an SLS machine.

  13. Design of a Competency-Model-Based Selection and Evaluation System for University Teachers

    Institute of Scientific and Technical Information of China (English)

    徐智华

    2014-01-01

    Based on existing research on university teachers' competency models, an extensive survey of university teachers and students, and an analysis of university teachers' positions, this paper first builds a competency model for university teachers consisting of 26 specific elements ascribed to three dimensions: requisite knowledge, skills and abilities, and personality traits. Under the guidance of the principles of orientation toward competency, comprehensiveness, emphasis on key points and economy, it designs a competency-model-based selection and evaluation system for university teachers comprising a series of evaluation methods: knowledge test, behavioral interview, work sample, leaderless group discussion, big five personality test and vocational preference inventory, which is hoped to contribute to university human resource recruiting and selection practices.

  14. The Properties of Model Selection when Retaining Theory Variables

    DEFF Research Database (Denmark)

    Hendry, David F.; Johansen, Søren

    Economic theories are often fitted directly to data to avoid possible model selection biases. We show that embedding a theory model that specifies the correct set of m relevant exogenous variables, x{t}, within the larger set of m+k candidate variables, (x{t},w{t}), then selection over the second...

  15. Development of Solar Drying Model for Selected Cambodian Fish Species

    Directory of Open Access Journals (Sweden)

    Anna Hubackova

    2014-01-01

    Full Text Available Solar drying was investigated as one of the prospective techniques for fish processing in Cambodia. The solar drying was compared to conventional drying in an electric oven. Five typical Cambodian fish species were selected for this study. Mean solar drying temperature and drying air relative humidity were 55.6°C and 19.9%, respectively. The overall solar dryer efficiency was 12.37%, which is typical for natural convection solar dryers. The average evaporative capacity of the solar dryer was 0.049 kg·h−1. Based on the coefficient of determination (R2), the chi-square (χ2) test, and the root-mean-square error (RMSE), the most suitable models describing natural convection solar drying kinetics were the Logarithmic model, the Diffusion approximate model, and the Two-term model for climbing perch and Nile tilapia, swamp eel and walking catfish, and Channa fish, respectively. In the case of electric oven drying, the Modified Page 1 model shows the best results for all investigated fish species except Channa fish, where the Two-term model is the best one. Sensory evaluation shows that the most preferred fish is climbing perch, followed by Nile tilapia and walking catfish. This study brings new knowledge about the drying kinetics of freshwater fish species in Cambodia and confirms solar drying as an acceptable technology for fish processing.
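    A small sketch of how candidate thin-layer drying models like those named in the record can be fitted and ranked by RMSE and R2, assuming scipy and a synthetic moisture-ratio series (the Page model stands in for the record's Modified Page 1 variant):

```python
import numpy as np
from scipy.optimize import curve_fit

# Candidate thin-layer drying models for the moisture ratio MR(t).
models = {
    "Logarithmic": lambda t, a, k, c: a * np.exp(-k * t) + c,
    "Two-term": lambda t, a, k1, b, k2: a * np.exp(-k1 * t) + b * np.exp(-k2 * t),
    "Page": lambda t, k, n: np.exp(-k * t ** n),
}
p0s = {"Logarithmic": [1.0, 0.5, 0.1],
       "Two-term": [0.5, 0.6, 0.5, 0.1],
       "Page": [0.5, 1.0]}

t = np.linspace(0.25, 8, 25)   # drying time, h (synthetic experiment)
mr = 0.9 * np.exp(-0.5 * t) + 0.1 \
     + np.random.default_rng(5).normal(0, 0.01, t.size)

for name, f in models.items():
    p, _ = curve_fit(f, t, mr, p0=p0s[name], maxfev=10_000)
    resid = mr - f(t, *p)
    rmse = np.sqrt(np.mean(resid ** 2))
    r2 = 1 - (resid ** 2).sum() / ((mr - mr.mean()) ** 2).sum()
    print(f"{name:12s} RMSE={rmse:.4f} R2={r2:.4f}")
```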

  17. On Optimal Input Design and Model Selection for Communication Channels

    Energy Technology Data Exchange (ETDEWEB)

    Li, Yanyan [ORNL; Djouadi, Seddik M [ORNL; Olama, Mohammed M [ORNL

    2013-01-01

    In this paper, the optimal model (structure) selection and input design which minimize the worst case identification error for communication systems are provided. The problem is formulated using metric complexity theory in a Hilbert space setting. It is pointed out that model selection and input design can be handled independently. The Kolmogorov n-width is used to characterize the representation error introduced by model selection, while the Gelfand and time n-widths are used to represent the inherent error introduced by input design. After the model is selected, an optimal input which minimizes the worst case identification error is shown to exist. In particular, it is proven that the optimal model for reducing the representation error is a Finite Impulse Response (FIR) model, and the optimal input is an impulse at the start of the observation interval. FIR models are widely popular in communication systems, for example in Orthogonal Frequency Division Multiplexing (OFDM) systems.

  18. Selection Strategies for Social Influence in the Threshold Model

    Science.gov (United States)

    Karampourniotis, Panagiotis; Szymanski, Boleslaw; Korniss, Gyorgy

    The ubiquity of online social networks makes the study of social influence extremely significant for its applications to marketing, politics and security. Maximizing the spread of influence by strategically selecting nodes as initiators of a new opinion or trend is a challenging problem. We study the performance of various strategies for selection of large fractions of initiators on a classical social influence model, the Threshold model (TM). Under the TM, a node adopts a new opinion only when the fraction of its first neighbors possessing that opinion exceeds a pre-assigned threshold. The strategies we study are of two kinds: strategies based solely on the initial network structure (Degree-rank, Dominating Sets, PageRank etc.) and strategies that take into account the change of the states of the nodes during the evolution of the cascade, e.g. the greedy algorithm. We find that the performance of these strategies depends largely on both the network structure properties, e.g. the assortativity, and the distribution of the thresholds assigned to the nodes. We conclude that the optimal strategy needs to combine the network specifics and the model specific parameters to identify the most influential spreaders. Supported in part by ARL NS-CTA, ARO, and ONR.
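    A compact simulation of the Threshold Model cascade under the Degree-rank seeding strategy named in the record, assuming networkx, a Barabasi-Albert toy graph, and a uniform threshold of 0.4:

```python
import networkx as nx

def threshold_cascade(G, seeds, thresholds):
    """Run the Threshold Model: a node adopts once the fraction of its
    adopted neighbours reaches its threshold; iterate to a fixed point."""
    active = set(seeds)
    changed = True
    while changed:
        changed = False
        for v in G.nodes:
            if v in active or G.degree(v) == 0:
                continue
            frac = sum(u in active for u in G.neighbors(v)) / G.degree(v)
            if frac >= thresholds[v]:
                active.add(v)
                changed = True
    return active

G = nx.barabasi_albert_graph(500, 3, seed=0)
thresholds = {v: 0.4 for v in G}          # uniform thresholds (assumed)
# Degree-rank strategy: seed the top 5% highest-degree nodes.
seeds = sorted(G, key=G.degree, reverse=True)[: len(G) // 20]
print("cascade size:", len(threshold_cascade(G, seeds, thresholds)))
```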

  19. Selection of models to calculate the LLW source term

    Energy Technology Data Exchange (ETDEWEB)

    Sullivan, T.M. (Brookhaven National Lab., Upton, NY (United States))

    1991-10-01

    Performance assessment of a LLW disposal facility begins with an estimation of the rate at which radionuclides migrate out of the facility (i.e., the source term). The focus of this work is to develop a methodology for calculating the source term. In general, the source term is influenced by the radionuclide inventory, the wasteforms and containers used to dispose of the inventory, and the physical processes that lead to release from the facility (fluid flow, container degradation, wasteform leaching, and radionuclide transport). In turn, many of these physical processes are influenced by the design of the disposal facility (e.g., infiltration of water). The complexity of the problem and the absence of appropriate data prevent development of an entirely mechanistic representation of radionuclide release from a disposal facility. Typically, a number of assumptions, based on knowledge of the disposal system, are used to simplify the problem. This document provides a brief overview of disposal practices and reviews existing source term models as background for selecting appropriate models for estimating the source term. The selection rationale and the mathematical details of the models are presented. Finally, guidance is presented for combining the inventory data with appropriate mechanisms describing release from the disposal facility. 44 refs., 6 figs., 1 tab.

  20. Continuum model for chiral induced spin selectivity in helical molecules

    Energy Technology Data Exchange (ETDEWEB)

    Medina, Ernesto [Centro de Física, Instituto Venezolano de Investigaciones Científicas, 21827, Caracas 1020 A (Venezuela, Bolivarian Republic of); Groupe de Physique Statistique, Institut Jean Lamour, Université de Lorraine, 54506 Vandoeuvre-les-Nancy Cedex (France); Department of Chemistry and Biochemistry, Arizona State University, Tempe, Arizona 85287 (United States); González-Arraga, Luis A. [IMDEA Nanoscience, Cantoblanco, 28049 Madrid (Spain); Finkelstein-Shapiro, Daniel; Mujica, Vladimiro [Department of Chemistry and Biochemistry, Arizona State University, Tempe, Arizona 85287 (United States); Berche, Bertrand [Centro de Física, Instituto Venezolano de Investigaciones Científicas, 21827, Caracas 1020 A (Venezuela, Bolivarian Republic of); Groupe de Physique Statistique, Institut Jean Lamour, Université de Lorraine, 54506 Vandoeuvre-les-Nancy Cedex (France)

    2015-05-21

    A minimal model is exactly solved for electron spin transport on a helix. Electron transport is assumed to be supported by well oriented p_z type orbitals on base molecules forming a staircase of definite chirality. In a tight binding interpretation, the spin-orbit coupling (SOC) opens up an effective π_z − π_z coupling via interbase p_{x,y} − p_z hopping, introducing spin coupled transport. The resulting continuum model spectrum shows two Kramers doublet transport channels with a gap proportional to the SOC. Each doubly degenerate channel satisfies time reversal symmetry; nevertheless, a bias chooses a transport direction and thus selects for spin orientation. The model predicts (i) which spin orientation is selected depending on chirality and bias, (ii) changes in spin preference as a function of input Fermi level and (iii) back-scattering suppression protected by the SO gap. We compute the spin current with a definite helicity and find it to be proportional to the torsion of the chiral structure and the non-adiabatic Aharonov-Anandan phase. To describe room temperature transport, we assume that the total transmission is the result of a product of coherent steps.

  1. A Successive Selection Method for finite element model updating

    Science.gov (United States)

    Gou, Baiyong; Zhang, Weijie; Lu, Qiuhai; Wang, Bo

    2016-03-01

    A Finite Element (FE) model can be updated effectively and efficiently by using the Response Surface Method (RSM). However, this often involves performance trade-offs, such as high computational cost for better accuracy or loss of efficiency when many design parameters are updated. This paper proposes a Successive Selection Method (SSM), which is based on the linear Response Surface (RS) function and orthogonal design. SSM rewrites the linear RS function into a number of linear equations to adjust the Design of Experiment (DOE) after every FE calculation. SSM aims to interpret the implicit information provided by the FE analysis, to locate the DOE points more quickly and accurately, and thereby to alleviate the computational burden. This paper introduces the SSM and its application, describes the solution steps of point selection for the DOE in detail, and analyzes SSM's high efficiency and accuracy in FE model updating. A numerical example of a simply supported beam and a practical example of a vehicle brake disc show that the SSM can provide higher speed and precision in FE model updating for engineering problems than the traditional RSM.

  2. Model and Variable Selection Procedures for Semiparametric Time Series Regression

    Directory of Open Access Journals (Sweden)

    Risa Kato

    2009-01-01

    Full Text Available Semiparametric regression models are very useful for time series analysis. They facilitate the detection of features resulting from external interventions. The complexity of semiparametric models poses new challenges for issues of nonparametric and parametric inference and model selection that frequently arise from time series data analysis. In this paper, we propose penalized least squares estimators which can simultaneously select significant variables and estimate unknown parameters. An innovative class of variable selection procedure is proposed to select significant variables and basis functions in a semiparametric model. The asymptotic normality of the resulting estimators is established. Information criteria for model selection are also proposed. We illustrate the effectiveness of the proposed procedures with numerical simulations.
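    The flavor of penalized least squares variable selection can be illustrated with an L1 penalty; this sketch uses scikit-learn's LassoCV on synthetic data and stands in for, rather than reproduces, the paper's semiparametric estimator:

```python
import numpy as np
from sklearn.linear_model import LassoCV

rng = np.random.default_rng(6)
n, p = 200, 12
X = rng.normal(size=(n, p))
beta = np.zeros(p)
beta[[0, 3, 7]] = [1.5, -2.0, 1.0]    # only three variables are relevant
y = X @ beta + rng.normal(0, 0.5, n)

fit = LassoCV(cv=5).fit(X, y)         # penalty level chosen by cross-validation
selected = np.flatnonzero(fit.coef_ != 0)
print("selected variables:", selected)
```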

  3. Availability-based Importance Framework for Supplier Selection

    Science.gov (United States)

    2015-05-01

    Availability-Based Importance Framework for Supplier Selection. Presented at the Acquisition Research Symposium, May 13-14, 2015, by Kash Barker (Industrial and ...). The framework provides a means to incorporate an ability to meet system availability needs into the supplier selection process, addressing "how do we build in system ..."

  4. Development of base transceiver station selection ...

    African Journals Online (AJOL)

    eobe

    Placement of base transceiver stations (BTSs) by different operators on a particular site as ... Although the International Commission on Non-Ionizing Radiation Protection (ICNIRP) viewed that ... vacant space on the AIRTEL mast which could ...

  5. Selection of the Profit Model of Online Recruitment Based on SWOT Analysis

    Institute of Scientific and Technical Information of China (English)

    刘曦

    2012-01-01

    In the new era, the competition among enterprises is actually a competition for talent. This paper first performs a SWOT analysis of traditional online recruitment and identifies the constraints on its development; it then analyzes the profit models of emerging online recruitment in light of the current state of development of online recruitment, and finally summarizes the findings.

  6. Using multilevel models to quantify heterogeneity in resource selection

    Science.gov (United States)

    Wagner, T.; Diefenbach, D.R.; Christensen, S.A.; Norton, A.S.

    2011-01-01

    Models of resource selection are being used increasingly to predict or model the effects of management actions rather than simply quantifying habitat selection. Multilevel, or hierarchical, models are an increasingly popular method to analyze animal resource selection because they impose a relatively weak stochastic constraint to model heterogeneity in habitat use and also account for unequal sample sizes among individuals. However, few studies have used multilevel models to model coefficients as a function of predictors that may influence habitat use at different scales or quantify differences in resource selection among groups. We used an example with white-tailed deer (Odocoileus virginianus) to illustrate how to model resource use as a function of distance to road that varies among deer by road density at the home range scale. We found that deer avoidance of roads decreased as road density increased. Also, we used multilevel models with sika deer (Cervus nippon) and white-tailed deer to examine whether resource selection differed between species. We failed to detect differences in resource use between these two species and showed how information-theoretic and graphical measures can be used to assess how resource use may have differed. Multilevel models can improve our understanding of how resource selection varies among individuals and provides an objective, quantifiable approach to assess differences or changes in resource selection. © The Wildlife Society, 2011.

  7. Python Program to Select HII Region Models

    Science.gov (United States)

    Miller, Clare; Lamarche, Cody; Vishwas, Amit; Stacey, Gordon J.

    2016-01-01

    HII regions are areas of singly ionized hydrogen formed by the ionizing radiation of upper main sequence stars. The infrared fine-structure line emissions, particularly those of oxygen, nitrogen, and neon, can give important information about HII regions, including gas temperature and density, elemental abundances, and the effective temperature of the stars that form them. The processes involved in calculating this information from observational data are complex. Models, such as those provided in Rubin 1984 and those produced by Cloudy (Ferland et al., 2013), enable one to extract physical parameters from observational data. However, the multitude of search parameters can make sifting through models tedious. I digitized Rubin's models and wrote a Python program that is able to take observed line ratios and their uncertainties and find the Rubin or Cloudy model that best matches the observational data. By creating a Python script that is user friendly and able to quickly sort through models with a high level of accuracy, this work increases efficiency and reduces human error in matching HII region models to observational data.
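    A minimal sketch of the matching step such a program performs: pick the model grid entry that minimizes chi-square against the observed line ratios and their uncertainties (the grid and observed values here are illustrative, not Rubin's or Cloudy's):

```python
import numpy as np

# Model grid: each row holds a model's predicted line ratios
# (illustrative values for two ratios).
model_grid = np.array([[0.55, 0.30],
                       [0.80, 0.25],
                       [1.10, 0.40]])
observed = np.array([0.78, 0.27])
sigma = np.array([0.05, 0.03])       # observational uncertainties

chi2 = (((model_grid - observed) / sigma) ** 2).sum(axis=1)
best = int(np.argmin(chi2))
print(f"best model index: {best}, chi2 = {chi2[best]:.2f}")
```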

  8. PaaS Compositing Mode Selection for Cloud Computing Based on Model Simulation

    Institute of Scientific and Technical Information of China (English)

    徐星; 周剑雄; 王明哲

    2014-01-01

    Platform as a Service (PaaS) is a hotspot of enterprise cloud platform product selection and cloud technology research. This paper discusses PaaS compositing modes from both microscopic and macroscopic perspectives and proposes a cross-level method to select a PaaS compositing mode through the composition of an SOA-based PaaS platform. Feasible service compositions are found by Petri net structural analysis, and PaaS environments of various architecture models are simulated with CloudSim. Interactive simulation between the Petri net and CloudSim is used to measure the technical performance of the service compositions of each PaaS compositing mode, and the Analytic Network Process (ANP) is applied to evaluate the comprehensive performance of the compositing modes. The method can thus guide enterprises in PaaS product selection and in the construction of cloud service platforms.

  9. Methods for model selection in applied science and engineering.

    Energy Technology Data Exchange (ETDEWEB)

    Field, Richard V., Jr.

    2004-10-01

    Mathematical models are developed and used to study the properties of complex systems and/or modify these systems to satisfy some performance requirements in just about every area of applied science and engineering. A particular reason for developing a model, e.g., performance assessment or design, is referred to as the model use. Our objective is the development of a methodology for selecting a model that is sufficiently accurate for an intended use. Information on the system being modeled is, in general, incomplete, so that there may be two or more models consistent with the available information. The collection of these models is called the class of candidate models. Methods are developed for selecting the optimal member from a class of candidate models for the system. The optimal model depends on the available information, the selected class of candidate models, and the model use. Classical methods for model selection, including the method of maximum likelihood and Bayesian methods, as well as a method employing a decision-theoretic approach, are formulated to select the optimal model for numerous applications. There is no requirement that the candidate models be random. Classical methods for model selection ignore model use and require data to be available. Examples are used to show that these methods can be unreliable when data is limited. The decision-theoretic approach to model selection does not have these limitations, and model use is included through an appropriate utility function. This is especially important when modeling high risk systems, where the consequences of using an inappropriate model for the system can be disastrous. The decision-theoretic method for model selection is developed and applied for a series of complex and diverse applications. These include the selection of the: (1) optimal order of the polynomial chaos approximation for non-Gaussian random variables and stationary stochastic processes, (2) optimal pressure load model to be

  10. Geostatistical interpolation model selection based on ArcGIS and spatio-temporal variability analysis of groundwater level in piedmont plains, northwest China.

    Science.gov (United States)

    Xiao, Yong; Gu, Xiaomin; Yin, Shiyang; Shao, Jingli; Cui, Yali; Zhang, Qiulan; Niu, Yong

    2016-01-01

    Based on geostatistical theory and the ArcGIS geostatistical module, data from 30 groundwater level observation wells were used to estimate the decline of the groundwater level in the Beijing piedmont. Seven different interpolation methods (inverse distance weighted interpolation, global polynomial interpolation, local polynomial interpolation, tension spline interpolation, ordinary Kriging interpolation, simple Kriging interpolation and universal Kriging interpolation) were used for interpolating the groundwater level between 2001 and 2013. Cross-validation, absolute error and the coefficient of determination (R2) were applied to evaluate the accuracy of the different methods. The results show that the simple Kriging method gave the best fit. The analysis of spatial and temporal variability suggests that the nugget effects from 2001 to 2013 were increasing, which means the spatial correlation weakened gradually under the influence of human activities. The spatial variability in the middle areas of the alluvial-proluvial fan is relatively higher than in the top and bottom areas. Owing to changes in land use, the groundwater level also shows temporal variation; the average decline rate of the groundwater level between 2007 and 2013 increased compared with 2001-2006. Urban development and population growth cause over-exploitation in residential and industrial areas. The decline rate of the groundwater level in residential, industrial and river areas is relatively high, while the decrease in farmland area and the development of water-saving irrigation have reduced the quantity of water used by agriculture, so the decline rate of the groundwater level in agricultural areas is not significant.
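    Interpolation methods are typically compared by cross-validation, as in the record; a minimal leave-one-out sketch for inverse distance weighting with different power parameters, on synthetic well data (the Kriging variants would be compared the same way through a geostatistics package):

```python
import numpy as np

def idw(xy, z, q, power):
    """Inverse-distance-weighted estimate at query point q."""
    d = np.linalg.norm(xy - q, axis=1)
    if np.any(d == 0):
        return z[np.argmin(d)]
    w = 1.0 / d ** power
    return (w * z).sum() / w.sum()

rng = np.random.default_rng(8)
xy = rng.uniform(0, 10, (30, 2))                   # 30 observation wells
z = 40 - 1.5 * xy[:, 0] + rng.normal(0, 0.8, 30)   # groundwater level, m

# Leave-one-out cross-validation: predict each well from the other 29.
for power in (1, 2, 3):
    err = [z[i] - idw(np.delete(xy, i, 0), np.delete(z, i), xy[i], power)
           for i in range(len(z))]
    print(f"power={power}: RMSE={np.sqrt(np.mean(np.square(err))):.3f}")
```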

  11. Structure-based design of a potent and selective small peptide inhibitor of Mycobacterium tuberculosis 6-hydroxymethyl-7, 8-dihydropteroate synthase: a computer modelling approach.

    Science.gov (United States)

    Rao, Gita Subba; Kumar, Manoj

    2008-06-01

    In an attempt to design novel anti-TB drugs, the target chosen is the enzyme 6-hydroxymethyl-7,8-dihydropteroate synthase (DHPS), which is an attractive target since it is present in microorganisms but not in humans. The existing drugs for this target are the sulfa drugs, which have been used for about seven decades. However, single mutations in the DHPS gene can cause resistance to sulfa drugs. Therefore, there is a need for the design of novel drugs. Based on the recently determined crystal structure of Mycobacterium tuberculosis (M.tb) DHPS complexed with a known substrate analogue, and on the crystal structures of E. coli DHPS and Staphylococcus aureus DHPS, we have identified a dipeptide inhibitor with the sequence WK. Docking calculations indicate that this peptide has a significantly higher potency than the sulfa drugs. In addition, the potency is 70-90 times higher for M.tb DHPS as compared to that for the pterin and folate-binding sites of key human proteins. Thus, the designed inhibitor is a promising lead compound for the development of novel antimycobcaterial agents.

  12. Tungsten based catalysts for selective deoxygenation

    NARCIS (Netherlands)

    Gosselink, R.W.; Stellwagen, D.R.; Bitter, J.H.

    2013-01-01

    Over the past decades, impending oil shortages combined with petroleum market instability have prompted a search for a new source of both transportation fuels and bulk chemicals. Renewable bio-based feedstocks such as sugars, grains, and seeds are assumed to be capable of contributing to a significa

  13. Quantile hydrologic model selection and model structure deficiency assessment: 2. Applications

    NARCIS (Netherlands)

    Pande, S.

    2013-01-01

    Quantile hydrologic model selection and structure deficiency assessment is applied in three case studies. The performance of quantile model selection problem is rigorously evaluated using a model structure on the French Broad river basin data set. The case study shows that quantile model selection

  14. Model selection on solid ground: Rigorous comparison of nine ways to evaluate Bayesian model evidence.

    Science.gov (United States)

    Schöniger, Anneli; Wöhling, Thomas; Samaniego, Luis; Nowak, Wolfgang

    2014-12-01

    Bayesian model selection or averaging objectively ranks a number of plausible, competing conceptual models based on Bayes' theorem. It implicitly performs an optimal trade-off between performance in fitting available data and minimum model complexity. The procedure requires determining Bayesian model evidence (BME), which is the likelihood of the observed data integrated over each model's parameter space. The computation of this integral is highly challenging because it is as high-dimensional as the number of model parameters. Three classes of techniques to compute BME are available, each with its own challenges and limitations: (1) Exact and fast analytical solutions are limited by strong assumptions. (2) Numerical evaluation quickly becomes unfeasible for expensive models. (3) Approximations known as information criteria (ICs) such as the AIC, BIC, or KIC (Akaike, Bayesian, or Kashyap information criterion, respectively) yield contradicting results with regard to model ranking. Our study features a theory-based intercomparison of these techniques. We further assess their accuracy in a simplistic synthetic example where for some scenarios an exact analytical solution exists. In more challenging scenarios, we use a brute-force Monte Carlo integration method as reference. We continue this analysis with a real-world application of hydrological model selection. This is a first-time benchmarking of the various methods for BME evaluation against true solutions. Results show that BME values from ICs are often heavily biased and that the choice of approximation method substantially influences the accuracy of model ranking. For reliable model selection, bias-free numerical methods should be preferred over ICs whenever computationally feasible.
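    A tiny illustration of the two extremes the record discusses: brute-force Monte Carlo integration of BME versus an information-criterion shortcut, for a one-parameter Gaussian model where both are easy to compute (the data and prior are assumptions):

```python
import numpy as np
from scipy import stats
from scipy.special import logsumexp

rng = np.random.default_rng(9)
data = rng.normal(1.0, 1.0, 20)                 # observed data

def log_lik(theta):
    """Log-likelihood of a Gaussian model with unknown mean, unit variance."""
    return stats.norm.logpdf(data[:, None], theta, 1.0).sum(axis=0)

# Brute-force Monte Carlo BME: average the likelihood over prior samples.
theta_prior = rng.normal(0.0, 2.0, 100_000)     # N(0, 2^2) prior on the mean
log_bme = logsumexp(log_lik(theta_prior)) - np.log(theta_prior.size)

# BIC for the same one-parameter model; -BIC/2 approximates the
# log evidence (up to a constant), hence the biases the record reports.
theta_mle = data.mean()
bic = -2 * log_lik(np.array([theta_mle]))[0] + 1 * np.log(len(data))

print(f"MC log-BME: {log_bme:.2f}  vs  -BIC/2: {-bic / 2:.2f}")
```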

  16. The genealogy of samples in models with selection.

    Science.gov (United States)

    Neuhauser, C; Krone, S M

    1997-02-01

    We introduce the genealogy of a random sample of genes taken from a large haploid population that evolves according to random reproduction with selection and mutation. Without selection, the genealogy is described by Kingman's well-known coalescent process. In the selective case, the genealogy of the sample is embedded in a graph with a coalescing and branching structure. We describe this graph, called the ancestral selection graph, and point out differences and similarities with Kingman's coalescent. We present simulations for a two-allele model with symmetric mutation in which one of the alleles has a selective advantage over the other. We find that when the allele frequencies in the population are already in equilibrium, then the genealogy does not differ much from the neutral case. This is supported by rigorous results. Furthermore, we describe the ancestral selection graph for other selective models with finitely many selection classes, such as the K-allele models, infinitely-many-alleles models, DNA sequence models, and infinitely-many-sites models, and briefly discuss the diploid case.

  17. Efficient nonparametric and asymptotic Bayesian model selection methods for attributed graph clustering

    KAUST Repository

    Xu, Zhiqiang

    2017-02-16

    Attributed graph clustering, also known as community detection on attributed graphs, attracts much interests recently due to the ubiquity of attributed graphs in real life. Many existing algorithms have been proposed for this problem, which are either distance based or model based. However, model selection in attributed graph clustering has not been well addressed, that is, most existing algorithms assume the cluster number to be known a priori. In this paper, we propose two efficient approaches for attributed graph clustering with automatic model selection. The first approach is a popular Bayesian nonparametric method, while the second approach is an asymptotic method based on a recently proposed model selection criterion, factorized information criterion. Experimental results on both synthetic and real datasets demonstrate that our approaches for attributed graph clustering with automatic model selection significantly outperform the state-of-the-art algorithm.

  18. Adapting AIC to conditional model selection

    NARCIS (Netherlands)

    M. van Ommen (Matthijs)

    2012-01-01

    In statistical settings such as regression and time series, we can condition on observed information when predicting the data of interest. For example, a regression model explains the dependent variables $y_1, \ldots, y_n$ in terms of the independent variables $x_1, \ldots, x_n$.

  19. Random effect selection in generalised linear models

    DEFF Research Database (Denmark)

    Denwood, Matt; Houe, Hans; Forkman, Björn;

    We analysed abattoir recordings of meat inspection codes with possible relevance to onfarm animal welfare in cattle. Random effects logistic regression models were used to describe individual-level data obtained from 461,406 cattle slaughtered in Denmark. Our results demonstrate that the largest...

  20. Verification Techniques for Parameter Selection and Bayesian Model Calibration Presented for an HIV Model

    Science.gov (United States)

    Wentworth, Mami Tonoe

    techniques for model calibration. For Bayesian model calibration, we employ adaptive Metropolis algorithms to construct densities for input parameters in the heat model and the HIV model. To quantify the uncertainty in the parameters, we employ two MCMC algorithms: Delayed Rejection Adaptive Metropolis (DRAM) [33] and Differential Evolution Adaptive Metropolis (DREAM) [66, 68]. The densities obtained using these methods are compared to those obtained through the direct numerical evaluation of the Bayes' formula. We also combine uncertainties in input parameters and measurement errors to construct predictive estimates for a model response. A significant emphasis is on the development and illustration of techniques to verify the accuracy of sampling-based Metropolis algorithms. We verify the accuracy of DRAM and DREAM by comparing chains, densities and correlations obtained using DRAM, DREAM and the direct evaluation of Bayes formula. We also perform similar analysis for credible and prediction intervals for responses. Once the parameters are estimated, we employ energy statistics test [63, 64] to compare the densities obtained by different methods for the HIV model. The energy statistics are used to test the equality of distributions. We also consider parameter selection and verification techniques for models having one or more parameters that are noninfluential in the sense that they minimally impact model outputs. We illustrate these techniques for a dynamic HIV model but note that the parameter selection and verification framework is applicable to a wide range of biological and physical models. To accommodate the nonlinear input to output relations, which are typical for such models, we focus on global sensitivity analysis techniques, including those based on partial correlations, Sobol indices based on second-order model representations, and Morris indices, as well as a parameter selection technique based on standard errors. A significant objective is to provide
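    As a baseline for the samplers discussed, a plain random-walk Metropolis sketch on a toy posterior; DRAM and DREAM layer delayed rejection and adaptive or differential-evolution proposals on top of this loop, which is not reproduced here:

```python
import numpy as np

def metropolis(log_post, theta0, n_iter=20_000, step=0.3, seed=0):
    """Plain random-walk Metropolis: propose a Gaussian step, accept with
    probability min(1, posterior ratio)."""
    rng = np.random.default_rng(seed)
    chain = np.empty(n_iter)
    theta, lp = theta0, log_post(theta0)
    for i in range(n_iter):
        prop = theta + step * rng.normal()
        lp_prop = log_post(prop)
        if np.log(rng.uniform()) < lp_prop - lp:
            theta, lp = prop, lp_prop
        chain[i] = theta
    return chain

# Toy posterior: Gaussian likelihood with a flat prior on the mean.
data = np.random.default_rng(1).normal(2.0, 1.0, 50)
chain = metropolis(lambda m: -0.5 * ((data - m) ** 2).sum(), 0.0)
print("posterior mean ~", round(chain[5000:].mean(), 3))   # burn-in discarded
```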

  1. ARTIFICIAL NEURAL NETWORKS BASED GEARS MATERIAL SELECTION HYBRID INTELLIGENT SYSTEM

    Institute of Scientific and Technical Information of China (English)

    X.C. Li; W.X. Zhu; G. Chen; D.S. Mei; J. Zhang; K.M. Chen

    2003-01-01

    An artificial neural networks (ANNs) based gear material selection hybrid intelligent system is established by analyzing the individual advantages and weaknesses of expert systems (ES) and ANNs and their applications in materials selection. The system mainly consists of two parts: an ES and an ANN. By being trained with many data samples, the back propagation (BP) ANN acquires the knowledge of gear material selection and is able to make inferences according to user input. The system realizes the complementarity of ANNs and ES. Using this system, engineers without materials selection experience can conveniently deal with gear material selection.

  2. Model Based Definition

    Science.gov (United States)

    Rowe, Sidney E.

    2010-01-01

    In September 2007, the Engineering Directorate at the Marshall Space Flight Center (MSFC) created the Design System Focus Team (DSFT). MSFC was responsible for the in-house design and development of the Ares 1 Upper Stage and the Engineering Directorate was preparing to deploy a new electronic Configuration Management and Data Management System with the Design Data Management System (DDMS) based upon a Commercial Off The Shelf (COTS) Product Data Management (PDM) System. The DSFT was to establish standardized CAD practices and a new data life cycle for design data. Of special interest here, the design teams were to implement Model Based Definition (MBD) in support of the Upper Stage manufacturing contract. It is noted that this MBD does use partially dimensioned drawings for auxiliary information to the model. The design data lifecycle implemented several new release states to be used prior to formal release that allowed the models to move through a flow of progressive maturity. The DSFT identified some 17 Lessons Learned as outcomes of the standards development, pathfinder deployments and initial application to the Upper Stage design completion. Some of the high value examples are reviewed.

  3. Clonal Selection Based Memetic Algorithm for Job Shop Scheduling Problems

    Institute of Scientific and Technical Information of China (English)

    Jin-hui Yang; Liang Sun; Heow Pueh Lee; Yun Qian; Yan-chun Liang

    2008-01-01

    A clonal selection based memetic algorithm is proposed for solving job shop scheduling problems in this paper. In the proposed algorithm, the clonal selection and the local search mechanism are designed to enhance exploration and exploitation. In the clonal selection mechanism, clonal selection, hypermutation and receptor editing theories are presented to construct an evolutionary searching mechanism which is used for exploration. In the local search mechanism, a simulated annealing local search algorithm based on Nowicki and Smutnicki's neighborhood is presented to exploit local optima. The proposed algorithm is examined using some well-known benchmark problems. Numerical results validate the effectiveness of the proposed algorithm.
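    A minimal clonal-selection loop (clone the better antibodies, hypermutate clones with rank-dependent intensity, receptor-edit the worst), shown on a toy continuous minimization rather than the paper's job shop benchmarks; all parameter values are assumptions:

```python
import numpy as np

def clonal_selection(fitness, dim=10, pop=20, n_clones=5, gens=200, seed=0):
    """Toy clonal selection: sort antibodies by fitness, clone and
    hypermutate the better half, replace the worst (receptor editing)."""
    rng = np.random.default_rng(seed)
    P = rng.normal(0, 2, (pop, dim))
    for _ in range(gens):
        f = np.apply_along_axis(fitness, 1, P)
        order = np.argsort(f)                   # ascending: best first
        P, f = P[order], f[order]
        for rank in range(pop // 2):            # clone the better half
            scale = 0.5 * (rank + 1) / pop      # worse rank -> larger mutation
            clones = P[rank] + rng.normal(0, scale, (n_clones, dim))
            cf = np.apply_along_axis(fitness, 1, clones)
            if cf.min() < f[rank]:
                P[rank] = clones[cf.argmin()]
        P[-2:] = rng.normal(0, 2, (2, dim))     # receptor editing of the worst
    return P[0], fitness(P[0])

best, val = clonal_selection(lambda x: float((x ** 2).sum()))
print("best value:", round(val, 6))
```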

  4. Dissimilarity-Based Sparse Subset Selection.

    Science.gov (United States)

    Elhamifar, Ehsan; Sapiro, Guillermo; Sastry, S Shankar

    2016-11-01

    Finding an informative subset of a large collection of data points or models is at the center of many problems in computer vision, recommender systems, bio/health informatics as well as image and natural language processing. Given pairwise dissimilarities between the elements of a 'source set' and a 'target set,' we consider the problem of finding a subset of the source set, called representatives or exemplars, that can efficiently describe the target set. We formulate the problem as a row-sparsity regularized trace minimization problem. Since the proposed formulation is, in general, NP-hard, we consider a convex relaxation. The solution of our optimization finds representatives and the assignment of each element of the target set to each representative, hence, obtaining a clustering. We analyze the solution of our proposed optimization as a function of the regularization parameter. We show that when the two sets jointly partition into multiple groups, our algorithm finds representatives from all groups and reveals clustering of the sets. In addition, we show that the proposed framework can effectively deal with outliers. Our algorithm works with arbitrary dissimilarities, which can be asymmetric or violate the triangle inequality. To efficiently implement our algorithm, we consider an Alternating Direction Method of Multipliers (ADMM) framework, which results in quadratic complexity in the problem size. We show that the ADMM implementation allows to parallelize the algorithm, hence further reducing the computational time. Finally, by experiments on real-world datasets, we show that our proposed algorithm improves the state of the art on the two problems of scene categorization using representative images and time-series modeling and segmentation using representative models.

  5. Model selection in systems biology depends on experimental design.

    Science.gov (United States)

    Silk, Daniel; Kirk, Paul D W; Barnes, Chris P; Toni, Tina; Stumpf, Michael P H

    2014-06-01

    Experimental design attempts to maximise the information available for modelling tasks. An optimal experiment allows the inferred models or parameters to be chosen with the highest expected degree of confidence. If the true system is faithfully reproduced by one of the models, the merit of this approach is clear - we simply wish to identify it and the true parameters with the most certainty. However, in the more realistic situation where all models are incorrect or incomplete, the interpretation of model selection outcomes and the role of experimental design needs to be examined more carefully. Using a novel experimental design and model selection framework for stochastic state-space models, we perform high-throughput in-silico analyses on families of gene regulatory cascade models, to show that the selected model can depend on the experiment performed. We observe that experimental design thus makes confidence a criterion for model choice, but that this does not necessarily correlate with a model's predictive power or correctness. Finally, in the special case of linear ordinary differential equation (ODE) models, we explore how wrong a model has to be before it influences the conclusions of a model selection analysis.

  6. Partner Selection Analysis and System Development Based on Gray Relation Analysis for an Agile Virtual Enterprise

    Institute of Scientific and Technical Information of China (English)

    2005-01-01

    This paper analyzes the state of the art in partner selection and enumerates the advantages of gray relation analysis over other partner selection algorithms. Furthermore, a partner selection system based on gray relation analysis for an Agile Virtual Enterprise (AVE) is analyzed and designed, starting from the definition and characteristics of the AVE. Following the J2EE model, the architecture of the partner selection system is put forward and the system is developed using JSP, EJB and SQL Server. The paper emphasizes the gray relational mathematical model, the AVE evaluation infrastructure, the core algorithm of partner selection and a multi-layer gray relation selection process.

  7. Modeling HIV-1 drug resistance as episodic directional selection.

    Directory of Open Access Journals (Sweden)

    Ben Murrell

    Full Text Available The evolution of substitutions conferring drug resistance to HIV-1 is both episodic, occurring when patients are on antiretroviral therapy, and strongly directional, with site-specific resistant residues increasing in frequency over time. While methods exist to detect episodic diversifying selection and continuous directional selection, no evolutionary model combining these two properties has been proposed. We present two models of episodic directional selection (MEDS and EDEPS) which allow the a priori specification of lineages expected to have undergone directional selection. The models infer the sites and target residues that were likely subject to directional selection, using either codon or protein sequences. Compared to its null model of episodic diversifying selection, MEDS provides a superior fit to most sites known to be involved in drug resistance, and neither a test for episodic diversifying selection nor one for constant directional selection is able to detect as many true positives as MEDS and EDEPS while maintaining acceptable levels of false positives. This suggests that episodic directional selection is a better description of the process driving the evolution of drug resistance.

  8. Developing a conceptual model for selecting and evaluating online markets

    Directory of Open Access Journals (Sweden)

    Sadegh Feizollahi

    2013-04-01

    Full Text Available There is considerable evidence emphasizing the benefits of using new information and communication technologies in international business, and many believe that e-commerce can help satisfy customers' explicit and implicit requirements. Internet shopping is a concept that developed after the introduction of electronic commerce. Information technology (IT) and its applications, specifically the internet and e-mail, promoted the development of e-commerce in terms of advertising, motivation and information. Moreover, with the development of new technologies, credit and financial exchange facilities were built into websites to facilitate e-commerce. The study sent a total of 200 questionnaires to the target group (teachers, students, professionals and managers of commercial web sites) and collected 130 questionnaires for final evaluation. Cronbach's alpha is used for measuring the reliability of the measurement instruments (questionnaires), and confirmatory factor analysis is employed to assure construct validity. In addition, the research questions are analyzed with the path analysis method to determine market selection models. In the present study, after examining different aspects of e-commerce, we provide a conceptual model for selecting and evaluating online marketing in Iran. These findings provide a consistent, targeted and holistic framework for the development of the Internet market in the country.

  9. A Network Analysis Model for Selecting Sustainable Technology

    Directory of Open Access Journals (Sweden)

    Sangsung Park

    2015-09-01

    Full Text Available Most companies develop technologies to improve their competitiveness in the marketplace. Typically, they then patent these technologies around the world in order to protect their intellectual property. Other companies may use patented technologies to develop new products, but must pay royalties to the patent holders or owners. Should they fail to do so, this can result in legal disputes in the form of patent infringement actions between companies. To avoid such situations, companies attempt to research and develop necessary technologies before their competitors do so. An important part of this process is analyzing existing patent documents in order to identify emerging technologies. In such analyses, extracting sustainable technology from patent data is important, because sustainable technology drives technological competition among companies and, thus, the development of new technologies. In addition, selecting sustainable technologies makes it possible to plan their R&D (research and development) efficiently. In this study, we propose a network model that selects sustainable technologies from patent documents, based on the centrality and degree measures of social network analysis. To verify the performance of the proposed model, we carry out a case study using actual patent data from patent databases.
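
    A minimal sketch of the centrality computation at the heart of such a model, assuming a keyword co-occurrence network built from patent documents (the edges below are hypothetical placeholders, and the paper's network construction may differ):

```python
import networkx as nx

# Hypothetical keyword co-occurrence edges extracted from patent text.
edges = [
    ("battery", "electrode"), ("battery", "electrolyte"),
    ("electrode", "coating"), ("battery", "charging"),
]
G = nx.Graph(edges)

# Degree centrality: keywords that co-occur with many others are the
# structurally central, candidate "sustainable" technologies.
centrality = nx.degree_centrality(G)
for keyword, score in sorted(centrality.items(), key=lambda kv: -kv[1]):
    print(f"{keyword}: {score:.2f}")
```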

  10. Improving Cluster Analysis with Automatic Variable Selection Based on Trees

    Science.gov (United States)

    2014-12-01

    Master's thesis by Anton D. Orr, December 2014 (thesis advisor: Samuel E. Buttrey; second reader listed on the report documentation page). Only fragments of the abstract survive extraction: the work extends an approach from 2006 based on classification and regression trees to address problems with determining dissimilarity, noting that current algorithms do not simultaneously address...

  11. Improving permafrost distribution modelling using feature selection algorithms

    Science.gov (United States)

    Deluigi, Nicola; Lambiel, Christophe; Kanevski, Mikhail

    2016-04-01

    The availability of an increasing number of spatial data on the occurrence of mountain permafrost allows the employment of machine learning (ML) classification algorithms for modelling the distribution of the phenomenon. One of the major problems when dealing with high-dimensional datasets is the number of input features (variables) involved. Applying ML classification algorithms to a large number of variables leads to the risk of overfitting, with the consequence of poor generalization/prediction. For this reason, applying feature selection (FS) techniques helps simplify the set of factors required and improves knowledge of the adopted features and their relation to the studied phenomenon. Moreover, removing irrelevant or redundant variables from the dataset effectively improves the quality of the ML prediction. This research deals with a comparative analysis of permafrost distribution models supported by FS variable importance assessment. The input dataset (dimension = 20-25, 10 m spatial resolution) was constructed using landcover maps, climate data and DEM-derived variables (altitude, aspect, slope, terrain curvature, solar radiation, etc.). It was completed with permafrost evidence (geophysical and thermal data and rock glacier inventories) that serves as permafrost training data. The FS algorithms employed indicated which variables appeared statistically less important for permafrost presence/absence. Three different algorithms were compared: Information Gain (IG), Correlation-based Feature Selection (CFS) and Random Forest (RF). IG is a filter technique that evaluates the worth of a predictor by measuring the information gain with respect to permafrost presence/absence. Conversely, CFS evaluates the worth of a subset of predictors by considering the individual predictive ability of each variable along with the degree of redundancy between them. Finally, RF is a ML algorithm that performs FS as part of its
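
    To make the comparison concrete, the sketch below scores predictors with an information-gain-style filter (mutual information) and with Random Forest importances on synthetic stand-in data; it illustrates the kind of rankings compared above, not the study's actual pipeline.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import mutual_info_classif

# Synthetic stand-in for the permafrost dataset: X holds DEM-derived and
# climate predictors, y is permafrost presence/absence.
X, y = make_classification(n_samples=500, n_features=20, n_informative=6,
                           random_state=0)

# Information-gain-style filter: mutual information between each
# predictor and the presence/absence label.
ig = mutual_info_classif(X, y, random_state=0)

# Random Forest embedded ranking: impurity-based variable importances.
rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

for name, scores in [("IG", ig), ("RF", rf.feature_importances_)]:
    top = np.argsort(scores)[::-1][:5]
    print(name, "top predictors:", top)
```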

  12. Asset pricing model selection: Indonesian Stock Exchange

    OpenAIRE

    Pasaribu, Rowland Bismark Fernando

    2010-01-01

    The Capital Asset Pricing Model (CAPM) has dominated finance theory for over thirty years; it suggests that the market beta alone is sufficient to explain stock returns. However, evidence shows that the cross-section of stock returns cannot be described solely by the one-factor CAPM. Therefore, the idea is to add other factors to complement beta in explaining price movements on the stock exchange. The Arbitrage Pricing Theory (APT) has been proposed as the first multifactor succ...

  13. A mixed model reduction method for preserving selected physical information

    Science.gov (United States)

    Zhang, Jing; Zheng, Gangtie

    2017-03-01

    A new model reduction method in the frequency domain is presented. By combining model reduction techniques from the time domain and the frequency domain, the dynamic model is condensed to selected physical coordinates, and the contribution of slave degrees of freedom is taken as a modification to the model in the form of the effective modal mass of virtually constrained modes. The reduced model preserves the physical information related to the selected physical coordinates, such as physical parameters and the physical space positions of the corresponding structural components. For cases of non-classical damping, the method is extended to model reduction in the state space while still containing only the selected physical coordinates. Numerical results are presented to validate the method and show the effectiveness of the model reduction.

  14. Two-step variable selection in quantile regression models

    Directory of Open Access Journals (Sweden)

    FAN Yali

    2015-06-01

    Full Text Available We propose a two-step variable selection procedure for high-dimensional quantile regression, in which the dimension of the covariates, p_n, is much larger than the sample size n. In the first step, we apply an l1 penalty, and we demonstrate that the first-step penalized estimator with the LASSO penalty can reduce the model from ultra-high dimensional to a model whose size has the same order as that of the true model, and that the selected model can cover the true model. The second step excludes the remaining irrelevant covariates by applying the adaptive LASSO penalty to the reduced model obtained from the first step. Under some regularity conditions, we show that our procedure enjoys model selection consistency. We conduct a simulation study and a real data analysis to evaluate the finite sample performance of the proposed approach.
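
    The sketch below illustrates the two-step idea on synthetic data using scikit-learn's l1-penalized QuantileRegressor; it is our approximation, not the authors' estimator, and the adaptive second step is implemented via the standard trick of rescaling the surviving covariates by their first-step coefficient magnitudes.

```python
import numpy as np
from sklearn.linear_model import QuantileRegressor

rng = np.random.default_rng(0)
n, p = 100, 50
X = rng.normal(size=(n, p))
y = X[:, 0] - 2 * X[:, 1] + rng.standard_t(df=3, size=n)

# Step 1: LASSO-penalized median regression screens the covariates.
step1 = QuantileRegressor(quantile=0.5, alpha=0.1, solver="highs").fit(X, y)
kept = np.flatnonzero(np.abs(step1.coef_) > 1e-8)

# Step 2: adaptive LASSO on the reduced model -- weight each surviving
# covariate by its step-1 coefficient magnitude, then apply a common
# l1 penalty to the rescaled design.
w = np.abs(step1.coef_[kept])
step2 = QuantileRegressor(quantile=0.5, alpha=0.05, solver="highs")
step2.fit(X[:, kept] * w, y)
beta = step2.coef_ * w  # coefficients on the original covariate scale
print("selected covariates:", kept, "estimates:", beta)
```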

  15. Test-Based Admission to Selective Universities:

    DEFF Research Database (Denmark)

    Thomsen, Jens-Peter

    2016-01-01

    This article examines whether the existence of a secondary higher education admission system honouring more qualitative and extra-curricular merits has reduced the social class gap in access to highly sought-after university programmes in Denmark. Drawing mainly on theories in the social closure tradition, I argue that children with highly educated parents will be favoured when qualitative merits are assessed. I use administrative data to examine differences in the social gradient in the primary admission system, admitting students on the basis of their high school grade point average, and in the secondary admission system, admitting university students based on more qualitative assessments. I find that the secondary higher education admission system does not favour first-generation students; further, the system serves as an access route for low-achieving children from the privileged professional classes.

  16. Information Gain Based Dimensionality Selection for Classifying Text Documents

    Energy Technology Data Exchange (ETDEWEB)

    Dumidu Wijayasekara; Milos Manic; Miles McQueen

    2013-06-01

    Selecting the optimal dimensions for various knowledge extraction applications is an essential component of data mining. Dimensionality selection techniques are utilized in classification applications to increase the classification accuracy and reduce the computational complexity. In text classification, where the dimensionality of the dataset is extremely high, dimensionality selection is even more important. This paper presents a novel genetic algorithm based methodology for dimensionality selection in text mining applications that utilizes information gain. The presented methodology uses the information gain of each dimension to dynamically change the mutation probability of chromosomes. Since the information gain is calculated a priori, the computational complexity is not affected. The presented method was tested on a specific text classification problem and compared with conventional genetic algorithm based dimensionality selection. The results show an improvement of 3% in the true positives and 1.6% in the true negatives over conventional dimensionality selection methods.
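
    A hedged sketch of the core mechanism (our naming; the paper's exact mapping from information gain to mutation probability may differ): the information gain of each dimension is computed once, a priori, and then modulates the per-gene mutation rate of a binary chromosome encoding which dimensions are selected.

```python
import numpy as np
from sklearn.feature_selection import mutual_info_classif

def mutation_probabilities(X, y, base_rate=0.05):
    # Information gain (approximated here by mutual information) is
    # computed once, so the GA's per-generation cost is unchanged.
    ig = mutual_info_classif(X, y, random_state=0)
    scaled = ig / (ig.max() + 1e-12)      # normalize IG to [0, 1]
    # In this sketch, higher-IG dimensions get a higher mutation
    # probability, so the search revisits informative dimensions more.
    return base_rate * (0.5 + scaled)

def mutate(chromosome, p_mut, rng):
    flip = rng.random(chromosome.size) < p_mut
    out = chromosome.copy()
    out[flip] = 1 - out[flip]             # toggle selected dimensions
    return out

# Usage: p = mutation_probabilities(X, y); child = mutate(parent, p, rng)
```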

  17. Sensitivity of resource selection and connectivity models to landscape definition

    Science.gov (United States)

    Katherine A. Zeller; Kevin McGarigal; Samuel A. Cushman; Paul Beier; T. Winston Vickers; Walter M. Boyce

    2017-01-01

    Context: The definition of the geospatial landscape is the underlying basis for species-habitat models, yet sensitivity of habitat use inference, predicted probability surfaces, and connectivity models to landscape definition has received little attention. Objectives: We evaluated the sensitivity of resource selection and connectivity models to four landscape...

  18. A Rule-Based Industrial Boiler Selection System

    Science.gov (United States)

    Tan, C. F.; Khalil, S. N.; Karjanto, J.; Tee, B. T.; Wahidin, L. S.; Chen, W.; Rauterberg, G. W. M.; Sivarao, S.; Lim, T. L.

    2015-09-01

    A boiler is a device for generating steam for power generation, process use or heating, and hot water for heating purposes. A steam boiler consists of the containing vessel and convection heating surfaces only, whereas a steam generator covers the whole unit, encompassing water wall tubes, superheaters, air heaters and economizers. Selecting the right boiler is very important for industry to conduct its operations successfully. The selection criteria are based on a rule-based expert system and a multi-criteria weighted average method. The developed system consists of a Knowledge Acquisition Module, Boiler Selection Module, User Interface Module and Help Module. The system is capable of selecting a suitable boiler based on weighted criteria. The main benefit of using the system is the reduced complexity of the decision making involved in selecting the most appropriate boiler for a palm oil process plant.
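
    The multi-criteria weighted average step might look like the following sketch (criteria names, weights and scores are illustrative, not taken from the paper):

```python
# Weights over the selection criteria, e.g. elicited from plant experts.
criteria_weights = {"capacity": 0.35, "pressure": 0.25,
                    "efficiency": 0.25, "cost": 0.15}

# Candidate boilers scored 0-10 per criterion, e.g. by rule-based
# matching against the palm oil plant's requirements.
boilers = {
    "fire-tube":  {"capacity": 6, "pressure": 5, "efficiency": 7, "cost": 9},
    "water-tube": {"capacity": 9, "pressure": 9, "efficiency": 8, "cost": 5},
}

def weighted_score(scores):
    return sum(criteria_weights[c] * s for c, s in scores.items())

best = max(boilers, key=lambda name: weighted_score(boilers[name]))
print(best)  # candidate with the highest weighted average
```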

  19. Comparison of potentials between genotype-based selection and genotypic value-based selection of quantitative traits

    Institute of Scientific and Technical Information of China (English)

    2002-01-01

    According to differences in selection criteria, methods of marker-assisted selection (MAS) of quantitative traits can be divided into genotype-based selection (GS) and genotypic value-based selection (GVS). The potentials of the two methods were compared by means of computer simulation. Results showed that the two methods follow similar basic laws and that their efficiencies are not significantly different, except that GS performed better when the number of QTLs was large and QTL effects were equal. From the application point of view, a combination of GS and GVS should be the development direction of future MAS research.

  20. A Working Model of Natural Selection Illustrated by Table Tennis

    Science.gov (United States)

    Dinc, Muhittin; Kilic, Selda; Aladag, Caner

    2013-01-01

    Natural selection is one of the most important topics in biology and it helps to clarify the variety and complexity of organisms. However, students in almost every stage of education find it difficult to understand the mechanism of natural selection and they can develop misconceptions about it. This article provides an active model of natural…

  1. Elementary Teachers' Selection and Use of Visual Models

    Science.gov (United States)

    Lee, Tammy D.; Gail Jones, M.

    2017-07-01

    As science grows in complexity, science teachers face an increasing challenge of helping students interpret models that represent complex science systems. Little is known about how teachers select and use models when planning lessons. This mixed methods study investigated the pedagogical approaches and visual models used by elementary in-service and preservice teachers in the development of a science lesson about a complex system (e.g., water cycle). Sixty-seven elementary in-service and 69 elementary preservice teachers completed a card sort task designed to document the types of visual models (e.g., images) that teachers choose when planning science instruction. Quantitative and qualitative analyses were conducted to analyze the card sort task. Semistructured interviews were conducted with a subsample of teachers to elicit the rationale for image selection. Results from this study showed that both experienced in-service teachers and novice preservice teachers tended to select similar models and use similar rationales for images to be used in lessons. Teachers tended to select models that were aesthetically pleasing and simple in design and illustrated specific elements of the water cycle. The results also showed that teachers were not likely to select images that represented the less obvious dimensions of the water cycle. Furthermore, teachers selected visual models more as a pedagogical tool to illustrate specific elements of the water cycle and less often as a tool to promote student learning related to complex systems.

  2. Using nonlinear models in fMRI data analysis: model selection and activation detection.

    Science.gov (United States)

    Deneux, Thomas; Faugeras, Olivier

    2006-10-01

    There is an increasing interest in using physiologically plausible models in fMRI analysis. These models do raise new mathematical problems in terms of parameter estimation and interpretation of the measured data. In this paper, we show how to use physiological models to map and analyze brain activity from fMRI data. We describe a maximum likelihood parameter estimation algorithm and a statistical test that allow the following two actions: selecting the most statistically significant hemodynamic model for the measured data and deriving activation maps based on that model. Furthermore, as parameter estimation may leave much uncertainty about the exact values of parameters, model identifiability characterization is a particular focus of our work. We applied these methods to different variations of the Balloon Model (Buxton, R.B., Wang, E.C., and Frank, L.R. 1998. Dynamics of blood flow and oxygenation changes during brain activation: the balloon model. Magn. Reson. Med. 39: 855-864; Buxton, R.B., Uludağ, K., Dubowitz, D.J., and Liu, T.T. 2004. Modelling the hemodynamic response to brain activation. NeuroImage 23: 220-233; Friston, K. J., Mechelli, A., Turner, R., and Price, C. J. 2000. Nonlinear responses in fMRI: the balloon model, volterra kernels, and other hemodynamics. NeuroImage 12: 466-477) in a visual perception checkerboard experiment. Our model selection proved that hemodynamic models better explain the BOLD response than linear convolution, in particular because they are able to capture features like the poststimulus undershoot or nonlinear effects. On the other hand, nonlinear and linear models are comparable when signals get noisier, which explains why activation maps obtained in both frameworks are comparable. The tools we have developed prove that statistical inference methods used in the framework of the General Linear Model can be generalized to nonlinear models.
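
    For orientation, the core of the Balloon Model cited above couples normalized blood volume $v$ and deoxyhemoglobin content $q$ to the inflow $f_{\mathrm{in}}$; one common statement of these ODEs (our transcription; the cited variants differ in details) is

```latex
\dot{v} = \frac{1}{\tau}\left( f_{\mathrm{in}}(t) - v^{1/\alpha} \right),
\qquad
\dot{q} = \frac{1}{\tau}\left( f_{\mathrm{in}}(t)\,\frac{E(f_{\mathrm{in}})}{E_0} - v^{1/\alpha}\,\frac{q}{v} \right),
```

    where $\tau$ is the mean transit time, $\alpha$ the stiffness exponent of the balloon, and $E$ the oxygen extraction fraction with baseline $E_0$; the nonlinearity in $v$ and $q$ is what lets such models capture features like the poststimulus undershoot.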

  3. Fluctuating selection models and McDonald-Kreitman type analyses.

    Directory of Open Access Journals (Sweden)

    Toni I Gossmann

    Full Text Available It is likely that the strength of selection acting upon a mutation varies through time due to changes in the environment. However, most population genetic theory assumes that the strength of selection remains constant. Here we investigate the consequences of fluctuating selection pressures on the quantification of adaptive evolution using McDonald-Kreitman (MK) style approaches. In agreement with previous work, we show that fluctuating selection can generate evidence of adaptive evolution even when the expected strength of selection on a mutation is zero. However, we also find that the mutations which contribute to both polymorphism and divergence tend, on average, to be positively selected during their lifetime under fluctuating selection models. This is because mutations that fluctuate, by chance, to positively selected values tend to reach higher frequencies in the population than those that fluctuate towards negative values. Hence the evidence of positive adaptive evolution detected by MK type approaches under a fluctuating selection model is genuine, since fixed mutations tend to be advantageous on average during their lifetime. Nevertheless, we show that these methods tend to underestimate the rate of adaptive evolution when selection fluctuates.

  4. [Selection of a statistical model for the evaluation of the reliability of the results of toxicological analyses. II. Selection of our statistical model for the evaluation].

    Science.gov (United States)

    Antczak, K; Wilczyńska, U

    1980-01-01

    Part II presents a statistical model devised by the authors for evaluating the results of toxicological analyses. The model includes: 1. Establishment of a reference value, based on our own measurements taken by two independent analytical methods. 2. Selection of laboratories, based on the deviation of the obtained values from the reference ones. 3. Evaluation of subsequent quality controls and particular laboratories by means of analysis of variance, Student's t-test and the test of differences.

  5. Robustness and epistasis in mutation-selection models

    Science.gov (United States)

    Wolff, Andrea; Krug, Joachim

    2009-09-01

    We investigate the fitness advantage associated with the robustness of a phenotype against deleterious mutations using deterministic mutation-selection models of a quasispecies type equipped with a mesa-shaped fitness landscape. We obtain analytic results for the robustness effect which become exact in the limit of infinite sequence length. Thereby, we are able to clarify a seeming contradiction between recent rigorous work and an earlier heuristic treatment based on mapping to a Schrödinger equation. We exploit the quantum mechanical analogy to calculate a correction term for finite sequence lengths and verify our analytic results by numerical studies. In addition, we investigate the occurrence of an error threshold for a general class of epistatic landscapes and show that diminishing epistasis is a necessary but not sufficient condition for error threshold behaviour.

  6. The Optimal Portfolio Selection Model under g-Expectation

    National Research Council Canada - National Science Library

    Li Li

    2014-01-01

      This paper solves the optimal portfolio selection model under the framework of the prospect theory proposed by Kahneman and Tversky in the 1970s, with the decision rule replaced by the g-expectation introduced by Peng...

  7. Robust Decision-making Applied to Model Selection

    Energy Technology Data Exchange (ETDEWEB)

    Hemez, Francois M. [Los Alamos National Laboratory

    2012-08-06

    The scientific and engineering communities are relying more and more on numerical models to simulate increasingly complex phenomena. Selecting a model, from among a family of models that meet the simulation requirements, presents a challenge to modern-day analysts. To address this concern, a framework anchored in info-gap decision theory is adopted. The framework proposes to select models by examining the trade-offs between prediction accuracy and sensitivity to epistemic uncertainty. The framework is demonstrated on two structural engineering applications by asking the following question: Which model, of several numerical models, approximates the behavior of a structure when the parameters that define each of those models are unknown? One observation is that models that are nominally more accurate are not necessarily more robust, and their accuracy can deteriorate greatly depending upon the assumptions made. It is posited that, as reliance on numerical models increases, establishing robustness will become as important as demonstrating accuracy.

  8. A Trust Enhanced Web Services Selection Model Based on Social Network

    Institute of Scientific and Technical Information of China (English)

    朱先远; 尤科本

    2014-01-01

    Aiming at the problems of judging the trustworthiness of massive Web service resources and of selecting high-quality trusted services, a trust-enhanced Web service selection model based on social networks (EMBST) is proposed. The model builds on the idea that different Web services providing the same functionality have different social network properties: it uses retrieved Web service resources to discover and extend the resources' potential social network; it adds trust-level indicators to the relationship network so that Web resources with different trust levels can be selected, taking the credibility of nodes into account; finally, on the basis of this model, a trust-based spectral partitioning operator is proposed. Simulation results show that the selection model provides an effective solution to the problem of trusted Web resource selection.

  9. Research on Motivation Model Based Behaviour Selection of Autonomous Virtual Human

    Institute of Scientific and Technical Information of China (English)

    徐冰; 刘肖健

    2012-01-01

    Research on autonomous virtual humans is a new field that integrates artificial life and computer animation. Because a person's psychological activity is a holistic process in which parameters such as motivation and perception are interdependent and mutually influential, current research remains local and limited. Drawing on Maslow's theory, this paper proposes a selection mechanism for the autonomous behaviour of a virtual human, controlled by a simplified inhibition-and-fatigue model built on a motivation model framework. Experiments show that the method effectively solves the problem of how a virtual human arbitrates and selects among multiple mutually inhibiting behaviours in a dynamic virtual environment with limited resources, and demonstrate that the approach can be applied to intelligent interaction and other fields.

  10. Soybean parent selection based on genetic diversity

    Directory of Open Access Journals (Sweden)

    Valéria Carpentieri-Pípolo

    2000-01-01

    Full Text Available Thirty-four soybean lines were assessed for twelve traits. The genetic distances were estimated using multivariate techniques, to identify parents to be included in breeding programs involving hybridization. Grouping by the Tocher method, from generalized Mahalanobis distances, divided the 34 lines into four groups. The most important agronomic traits (weight of seeds per plot, plant height, height of first pod and days to maturity) were considered when recommending crosses. The following crosses were recommended based on genetic divergence and the key agronomic traits: lines 23, 10, 2, 27 and 25 (group I) with genotype 6 (group II) and with genotype 16 (group III). Thus only ten crosses would be made, representing only 2% of the total crosses possible in the partial diallel among the 34 lines assessed, which would allow up to 561 combinations.

  11. SELECTION MOMENTS AND GENERALIZED METHOD OF MOMENTS FOR HETEROSKEDASTIC MODELS

    Directory of Open Access Journals (Sweden)

    Constantin ANGHELACHE

    2016-06-01

    Full Text Available In this paper, the authors describe moment selection methods and the application of the generalized method of moments (GMM) to heteroskedastic models. The utility of GMM estimators lies in the study of financial market models. The moment selection criteria are applied for efficient GMM estimation in univariate time series with martingale difference errors, similar to those studied so far by Kuersteiner.

  12. Modeling Suspicious Email Detection using Enhanced Feature Selection

    OpenAIRE

    2013-01-01

    The paper presents a suspicious email detection model which incorporates enhanced feature selection. In the paper we propose the use of feature selection strategies along with classification techniques for terrorist email detection. The presented model focuses on the evaluation of machine learning algorithms such as decision tree (ID3), logistic regression, Naïve Bayes (NB), and Support Vector Machine (SVM) for detecting emails containing suspicious content. In the literature, various algo...

  13. RUC at TREC 2014: Select Resources Using Topic Models

    Science.gov (United States)

    2014-11-01

    Qiuyue Wang, Shaochen Shi, Wei Cao (School of Information, Renmin University of China, Beijing). Only fragments of the abstract survive extraction from the report documentation page; the recoverable text concerns inferring topic models over collections and cites resource selection work including a multiple-collection latent topic model for federated search (Baillie, Carmen, and Crestani) and a CIKM 2009 paper (pages 1277-1286).

  14. A new method of sample selection for optimizing phenology models based on remote sensing data

    Institute of Scientific and Technical Information of China (English)

    马勇刚; 张弛; 陈曦

    2015-01-01

    Vegetation phenology models are an important component of ecosystem models, and their accuracy is significant for accurately simulating the exchange of energy and matter between the land surface and the atmosphere. Coupling spatial phenology information retrieved by remote sensing with climate data is an important way to construct phenology models in regions such as the arid zone of Central Asia, where ground phenology observations are scarce. To reduce the impact on model construction of the inherent errors of mixed vegetation pixels and climate data, and of the spatial-scale mismatch between the two, this study proposes a sample selection method that picks "representative vegetation type pixels" satisfying a prescribed rule set around meteorological stations. Based on the remote sensing phenology data of these representative pixels and on station records, and combining classical and improved phenology models, model fitting and evaluation were completed for desert steppe vegetation and deciduous broad-leaved forest with the support of a particle swarm optimization algorithm, using independent fitting and evaluation samples. The optimal model for desert steppe vegetation in arid Central Asia was found to be a temperature-precipitation modified model, and the optimal model for deciduous broad-leaved forest an alternating model. The overall accuracy of the models obtained with this method is about 8-10 d. The results indicate that the method improves the spatial matching between climate data and plant phenology and helps increase phenology model accuracy.

  15. Semiconducting Metal Oxide Based Sensors for Selective Gas Pollutant Detection

    Directory of Open Access Journals (Sweden)

    Marsha C. Kanan

    2009-10-01

    Full Text Available A review of some papers published in the last fifty years that focus on the semiconducting metal oxide (SMO based sensors for the selective and sensitive detection of various environmental pollutants is presented.

  16. DNA regulatory motif selection based on support vector machine ...

    African Journals Online (AJOL)

    DNA regulatory motif selection based on support vector machine (SVM) and its application in microarray ... experiments to explore the underlying relationships between motif types and gene functions.

  17. Model selection and model averaging in phylogenetics: advantages of Akaike information criterion and Bayesian approaches over likelihood ratio tests.

    Science.gov (United States)

    Posada, David; Buckley, Thomas R

    2004-10-01

    Model selection is a topic of special relevance in molecular phylogenetics that affects many, if not all, stages of phylogenetic inference. Here we discuss some fundamental concepts and techniques of model selection in the context of phylogenetics. We start by reviewing different aspects of the selection of substitution models in phylogenetics from a theoretical, philosophical and practical point of view, and summarize this comparison in table format. We argue that the most commonly implemented model selection approach, the hierarchical likelihood ratio test, is not the optimal strategy for model selection in phylogenetics, and that approaches like the Akaike Information Criterion (AIC) and Bayesian methods offer important advantages. In particular, the latter two methods are able to simultaneously compare multiple nested or nonnested models, assess model selection uncertainty, and allow for the estimation of phylogenies and model parameters using all available models (model-averaged inference or multimodel inference). We also describe how the relative importance of the different parameters included in substitution models can be depicted. To illustrate some of these points, we have applied AIC-based model averaging to 37 mitochondrial DNA sequences from the subgenus Ohomopterus (genus Carabus) ground beetles described by Sota and Vogler (2001).
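
    As a concrete illustration of the AIC-based model averaging mentioned above (our sketch, not the authors' code), Akaike weights are computed from the AIC differences $\Delta_i = \mathrm{AIC}_i - \min_j \mathrm{AIC}_j$:

```python
import numpy as np

# Akaike weights: w_i = exp(-0.5 * delta_i) / sum_j exp(-0.5 * delta_j).
# The AIC values below are illustrative, e.g. for three substitution
# models (JC69, HKY85, GTR) fitted to the same alignment.
aic = np.array([1234.5, 1236.1, 1241.8])
delta = aic - aic.min()
weights = np.exp(-0.5 * delta)
weights /= weights.sum()
print(weights)  # relative support used for model-averaged inference
```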

  18. Selection Criteria in Regime Switching Conditional Volatility Models

    Directory of Open Access Journals (Sweden)

    Thomas Chuffart

    2015-05-01

    Full Text Available A large number of nonlinear conditional heteroskedastic models have been proposed in the literature. Model selection is crucial to any statistical data analysis. In this article, we investigate whether the most commonly used selection criteria lead to the choice of the right specification in a regime switching framework. We focus on two types of models: the Logistic Smooth Transition GARCH and the Markov-Switching GARCH models. Simulation experiments reveal that information criteria and loss functions can lead to misspecification; BIC sometimes indicates the wrong regime switching framework. Depending on the Data Generating Process used in the experiments, great care is needed when choosing a criterion.

  19. Optimal Route Selection Method Based on Vague Sets

    Institute of Scientific and Technical Information of China (English)

    GUO Rui; DU Li min; WANG Chun

    2015-01-01

    Optimal route selection is an important function of a vehicle traffic flow guidance system. Its core is to determine the index weights for measuring route merits and the evaluation method for selecting a route. In this paper, a subjective weighting method that relies on driver preference is used to determine the weights, and a multi-criteria weighted decision method based on vague sets is proposed for selecting the optimal route. Examples show that using vague sets to describe route index values can provide more decision-making information for route selection.

  20. A guide to Bayesian model selection for ecologists

    Science.gov (United States)

    Hooten, Mevin B.; Hobbs, N.T.

    2015-01-01

    The steady upward trend in the use of model selection and Bayesian methods in ecological research has made it clear that both approaches to inference are important for modern analysis of models and data. However, in teaching Bayesian methods and in working with our research colleagues, we have noticed a general dissatisfaction with the available literature on Bayesian model selection and multimodel inference. Students and researchers new to Bayesian methods quickly find that the published advice on model selection is often preferential in its treatment of options for analysis, frequently advocating one particular method above others. The recent appearance of many articles and textbooks on Bayesian modeling has provided welcome background on relevant approaches to model selection in the Bayesian framework, but most of these are either very narrowly focused in scope or inaccessible to ecologists. Moreover, the methodological details of Bayesian model selection approaches are spread thinly throughout the literature, appearing in journals from many different fields. Our aim with this guide is to condense the large body of literature on Bayesian approaches to model selection and multimodel inference and present it specifically for quantitative ecologists as neutrally as possible. We also bring to light a few important and fundamental concepts relating directly to model selection that seem to have gone unnoticed in the ecological literature. Throughout, we provide only a minimal discussion of philosophy, preferring instead to examine the breadth of approaches as well as their practical advantages and disadvantages. This guide serves as a reference for ecologists using Bayesian methods, so that they can better understand their options and can make an informed choice that is best aligned with their goals for inference.

  1. A comparison of statistical selection strategies for univariate and bivariate log-linear models.

    Science.gov (United States)

    Moses, Tim; Holland, Paul W

    2010-11-01

    In this study, eight statistical selection strategies were evaluated for selecting the parameterizations of log-linear models used to model the distributions of psychometric tests. The selection strategies included significance tests based on four chi-squared statistics (likelihood ratio, Pearson, Freeman-Tukey, and Cressie-Read) and four additional strategies (Akaike information criterion (AIC), Bayesian information criterion (BIC), consistent Akaike information criterion (CAIC), and a measure attributed to Goodman). The strategies were evaluated in simulations for different log-linear models of univariate and bivariate test-score distributions and two sample sizes. Results showed that all eight selection strategies were most accurate for the largest sample size considered. For univariate distributions, the AIC selection strategy was especially accurate for selecting the correct parameterization of a complex log-linear model and the likelihood ratio chi-squared selection strategy was the most accurate strategy for selecting the correct parameterization of a relatively simple log-linear model. For bivariate distributions, the likelihood ratio chi-squared, Freeman-Tukey chi-squared, BIC, and CAIC selection strategies had similarly high selection accuracies.

  2. The Use of Evolution in a Central Action Selection Model

    Directory of Open Access Journals (Sweden)

    F. Montes-Gonzalez

    2007-01-01

    Full Text Available The use of effective central selection provides flexibility in design by offering modularity and extensibility. In earlier papers we focused on the development of a simple centralized selection mechanism. Our current goal is to integrate evolutionary methods into the design of non-sequential behaviours and the tuning of specific parameters of the selection model. The foraging behaviour of an animal robot (animat) has been modelled in order to integrate the robot's sensory information and to perform selection that is nearly optimized by the use of genetic algorithms. In this paper we present how selection through optimization arranges the pattern of exhibited behaviours for the foraging task. Hence, the execution of specific parts of a behavioural pattern may be ruled out by the tuning of these parameters. Furthermore, the intensive use of colour segmentation from a colour camera for locating a cylinder places a burden on the calculations carried out by the genetic algorithm.

  3. A Hybrid Multiple Criteria Decision Making Model for Supplier Selection

    Directory of Open Access Journals (Sweden)

    Chung-Min Wu

    2013-01-01

    Full Text Available Sustainable supplier selection is a vital part of the management of a sustainable supply chain. In this study, a hybrid multiple criteria decision making (MCDM) model is applied to select the optimal supplier. The fuzzy Delphi method, which can lead to better criteria selection, is used to modify the criteria. Considering the interdependence among the selection criteria, the analytic network process (ANP) is then used to obtain their weights. To avoid the additional calculations and pairwise comparisons of ANP, a technique for order preference by similarity to ideal solution (TOPSIS) is used to rank the alternatives. The combination of the fuzzy Delphi method, ANP, and TOPSIS into an MCDM model for supplier selection, and its application to a real case, are the unique features of this study.
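
    The TOPSIS ranking step can be sketched as follows (illustrative scores and weights, not the case study's data; in the hybrid model the weight vector would come from ANP):

```python
import numpy as np

# Rows: candidate suppliers; columns: criteria scores (higher = better).
X = np.array([[7.0, 9.0, 6.0],
              [8.0, 6.0, 8.0],
              [9.0, 7.0, 5.0]])
w = np.array([0.5, 0.3, 0.2])      # criteria weights, e.g. from ANP

R = X / np.linalg.norm(X, axis=0)  # vector-normalize each criterion
V = R * w                          # weighted normalized decision matrix

ideal, anti = V.max(axis=0), V.min(axis=0)
d_pos = np.linalg.norm(V - ideal, axis=1)  # distance to ideal solution
d_neg = np.linalg.norm(V - anti, axis=1)   # distance to anti-ideal

closeness = d_neg / (d_pos + d_neg)        # relative closeness
print("ranking (best first):", np.argsort(-closeness))
```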

  4. Probabilistic Model-Based Background Subtraction

    DEFF Research Database (Denmark)

    Krüger, Volker; Andersen, Jakob; Prehn, Thomas

    2005-01-01

    [...] is the correlation between pixels. In this paper we introduce a model-based background subtraction approach which facilitates prior knowledge of pixel correlations for clearer and better results. Model knowledge is learned from good training video data and stored for fast access in a hierarchical manner. Bayesian propagation over time is used for proper model selection and tracking during model-based background subtraction. Bayes propagation is attractive in our application as it allows us to deal with uncertainties during tracking. We have tested our approach on suitable outdoor video data.

  5. Hyperopt: a Python library for model selection and hyperparameter optimization

    Science.gov (United States)

    Bergstra, James; Komer, Brent; Eliasmith, Chris; Yamins, Dan; Cox, David D.

    2015-01-01

    Sequential model-based optimization (also known as Bayesian optimization) is one of the most efficient methods (per function evaluation) of function minimization. This efficiency makes it appropriate for optimizing the hyperparameters of machine learning algorithms that are slow to train. The Hyperopt library provides algorithms and parallelization infrastructure for performing hyperparameter optimization (model selection) in Python. This paper presents an introductory tutorial on the usage of the Hyperopt library, including the description of search spaces, minimization (in serial and parallel), and the analysis of the results collected in the course of minimization. This paper also gives an overview of Hyperopt-Sklearn, a software project that provides automatic algorithm configuration of the Scikit-learn machine learning library. Following Auto-Weka, we take the view that the choice of classifier and even the choice of preprocessing module can be taken together to represent a single large hyperparameter optimization problem. We use Hyperopt to define a search space that encompasses many standard components (e.g. SVM, RF, KNN, PCA, TFIDF) and common patterns of composing them together. We demonstrate, using search algorithms in Hyperopt and standard benchmarking data sets (MNIST, 20-newsgroups, convex shapes), that searching this space is practical and effective. In particular, we improve on best-known scores for the model space for both MNIST and convex shapes. The paper closes with some discussion of ongoing and future work.
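
    A minimal usage sketch of the library's core API (`fmin`, `tpe`, `hp` and `Trials` are Hyperopt's actual names; the objective here is a toy function rather than a full model-selection problem):

```python
from hyperopt import Trials, fmin, hp, tpe

# Toy objective: minimize (x - 3)^2 over a continuous search space.
def objective(x):
    return (x - 3.0) ** 2

trials = Trials()
best = fmin(
    fn=objective,
    space=hp.uniform("x", -10, 10),  # search space definition
    algo=tpe.suggest,                # Tree-structured Parzen Estimator
    max_evals=100,
    trials=trials,                   # records every evaluation
)
print(best)  # e.g. {'x': 2.99...}
```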

  6. CBFS: high performance feature selection algorithm based on feature clearness.

    Directory of Open Access Journals (Sweden)

    Minseok Seo

    Full Text Available BACKGROUND: The goal of feature selection is to select useful features and simultaneously exclude garbage features from a given dataset for classification purposes. This is expected to reduce processing time and improve classification accuracy. METHODOLOGY: In this study, we devised a new feature selection algorithm (CBFS) based on the clearness of features. Feature clearness expresses the separability among classes in a feature; highly clear features contribute to high classification accuracy. CScore is a measure of the clearness of each feature, based on the distances of clustered samples to the class centroids within a feature. We also suggest combining CBFS with other algorithms to improve classification accuracy. CONCLUSIONS/SIGNIFICANCE: Our experiments confirm that CBFS outperforms state-of-the-art feature selection algorithms, including FeaLect. CBFS can be applied to microarray gene selection, text categorization, and image classification.
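
    The sketch below shows one plausible "clearness"-style score in the spirit of CScore (the paper's exact definition may differ): a feature is clear when samples sit close to their own class centroid and the class centroids sit far apart.

```python
import numpy as np

def clearness_score(x, y):
    """Separation-to-scatter ratio for a single feature x (1-D array)."""
    classes = np.unique(y)
    centroids = np.array([x[y == c].mean() for c in classes])
    within = np.mean([np.abs(x[y == c] - centroids[i]).mean()
                      for i, c in enumerate(classes)])
    between = np.abs(centroids[:, None] - centroids[None, :]).max()
    return between / (within + 1e-12)   # higher = clearer separation

def rank_features(X, y):
    """Rank the columns of X (n_samples x n_features) by clearness."""
    scores = np.array([clearness_score(X[:, j], y)
                       for j in range(X.shape[1])])
    return np.argsort(-scores)
```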

  7. A Method of Determining Selectivity Coefficients Based on the Practical Slope of Ion Selective Electrodes

    Institute of Scientific and Technical Information of China (English)

    2002-01-01

    A known unsolved problem is that the experimental selectivity coefficients of ion selective electrodes (ISEs) depend on activity. This paper studies a new method of determining selectivity coefficients. A mixed ion response equation, similar to the Nicolsky-Eisenman (N-E) equation recommended by IUPAC, is proposed. The equation includes the practical response slopes of the ISE to the primary ion and the interfering ion. The selectivity coefficient is defined by this equation instead of the N-E equation. The experimental part of the method is similar to that based on the N-E equation. The values of selectivity coefficients obtained with this method do not depend on the activity, whether the electrodes exhibit Nernstian or non-Nernstian response. The feasibility of the new method is illustrated experimentally.
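
    For reference, the Nicolsky-Eisenman equation that the proposed mixed-ion response equation resembles is conventionally written (our transcription, not the paper's notation) as

```latex
E = E^{0} + \frac{RT}{z_A F}\,\ln\!\left( a_A + \sum_{B \neq A} K^{\mathrm{pot}}_{A,B}\; a_B^{\,z_A / z_B} \right),
```

    where $a_A$ and $a_B$ are the activities of the primary and interfering ions, $z_A$ and $z_B$ their charges, and $K^{\mathrm{pot}}_{A,B}$ the selectivity coefficient; as the abstract notes, the paper's modification uses the practical response slopes in place of the theoretical Nernstian prefactor.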

  8. A Model for Selection of Eyespots on Butterfly Wings.

    Directory of Open Access Journals (Sweden)

    Toshio Sekimura

    Full Text Available The development of eyespots on the wing surface of butterflies of the family Nymphalidae is one of the most studied examples of biological pattern formation. However, little is known about the mechanism that determines the number and precise locations of eyespots on the wing. Eyespots develop around signaling centers, called foci, that are located equidistant from wing veins along the midline of a wing cell (an area bounded by veins). A fundamental question that remains unsolved is why a certain wing cell develops an eyespot while other wing cells do not. We illustrate that the key to understanding focus point selection may be in the venation system of the wing disc. Our main hypothesis is that changes in morphogen concentration along the proximal boundary veins of wing cells govern focus point selection. Based on previous studies, we focus on a spatially two-dimensional reaction-diffusion system model posed in the interior of each wing cell that describes the formation of focus points. Using finite element based numerical simulations, we demonstrate that variation in the proximal boundary condition is sufficient to robustly select whether an eyespot focus point forms in otherwise identical wing cells. We also illustrate that this behavior is robust to small perturbations in the parameters and geometry and to moderate levels of noise. Hence, we suggest that an anterior-posterior pattern of morphogen concentration along the proximal vein may be the main determinant of the distribution of focus points on the wing surface. To complete our model, we propose a two-stage reaction-diffusion system model, in which a one-dimensional surface reaction-diffusion system, posed on the proximal vein, generates the morphogen concentrations that act as non-homogeneous Dirichlet (i.e., fixed) boundary conditions for the two-dimensional reaction-diffusion model posed in the wing cells. The two-stage model appears capable of generating focus point distributions
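
    Schematically, a two-component reaction-diffusion system of the kind referred to above takes the form (generic kinetics $f$ and $g$, not the paper's specific choice):

```latex
\frac{\partial u}{\partial t} = D_u \nabla^{2} u + f(u, v),
\qquad
\frac{\partial v}{\partial t} = D_v \nabla^{2} v + g(u, v),
```

    posed in each wing cell, with non-homogeneous Dirichlet data along the proximal boundary vein supplied, in the two-stage model, by the one-dimensional system posed on the vein itself.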

  9. A Selection Model for Applied Tax Law Teaching Methods Based on Association Rule Mining

    Institute of Scientific and Technical Information of China (English)

    刘纯林; 孙睿潇

    2016-01-01

    Current tax law teaching methods fall short of the goal of training practice-oriented, applied technical personnel, and the way independent colleges choose applied tax law teaching methods likewise fails to meet this need. This paper therefore proposes a selection model for applied tax law teaching methods based on association rule mining optimized with fuzzy sets: the model is built on the principles of association rule mining algorithms, with fuzzy sets used to improve its accuracy. Five different classes were taught applied tax law using, respectively, the topic-based method, the case method, the lecture method, the inductive-comparison method, and the "lecture, reading, practice" method. The proposed model was then used to analyze the outcomes, with the strength of the resulting association rules standing in for the relative merits of the teaching methods. Simulation results show that the proposed optimized model is more accurate than the original algorithm.

  10. Selection Bias in Educational Transition Models: Theory and Empirical Evidence

    DEFF Research Database (Denmark)

    Holm, Anders; Jæger, Mads

    Most studies using Mare's (1980, 1981) seminal model of educational transitions find that the effect of family background decreases across transitions. Recently, Cameron and Heckman (1998, 2001) have argued that the "waning coefficients" in the Mare model are driven by selection on unobserved variables. This paper, first, explains theoretically how selection on unobserved variables leads to waning coefficients and, second, illustrates empirically how selection leads to biased estimates of the effect of family background on educational transitions. Our empirical analysis using data from the United States, United Kingdom, Denmark, and the Netherlands shows that when we take selection into account the effect of family background variables on educational transitions is largely constant across transitions. We also discuss several difficulties in estimating educational transition models which

  11. Agent-based modeling of sustainable behaviors

    CERN Document Server

    Sánchez-Maroño, Noelia; Fontenla-Romero, Oscar; Polhill, J; Craig, Tony; Bajo, Javier; Corchado, Juan

    2017-01-01

    Using the O.D.D. (Overview, Design concepts, Detail) protocol, this title explores the role of agent-based modeling in predicting the feasibility of various approaches to sustainability. The chapters incorporated in this volume consist of real case studies to illustrate the utility of agent-based modeling and complexity theory in discovering a path to more efficient and sustainable lifestyles. The topics covered within include: households' attitudes toward recycling, designing decision trees for representing sustainable behaviors, negotiation-based parking allocation, auction-based traffic signal control, and others. This selection of papers will be of interest to social scientists who wish to learn more about agent-based modeling as well as experts in the field of agent-based modeling.

  12. A Robust Service Selection Method Based on Uncertain QoS

    Directory of Open Access Journals (Sweden)

    Yanping Chen

    2016-01-01

    Full Text Available Nowadays, the number of Web services on the Internet is quickly increasing, and different service providers offer numerous services with similar functionality. Quality of Service (QoS) has become an important factor for selecting the most appropriate service for users. The most prominent QoS-based service selection models take only deterministic attributes into account, which is an idealized assumption: in the real world there are a large number of uncertain factors, and at runtime QoS may become very poor or unacceptable. To solve this problem, a global service selection model based on uncertain QoS is proposed, including the corresponding normalization and aggregation functions, and a robust optimization model is then adopted to transform it. Experimental results show that the proposed method can effectively select services with high robustness and optimality.
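
    A sketch of the normalization and aggregation step (our illustration; the paper's exact functions and its robust transformation are not reproduced here):

```python
import numpy as np

# Rows: candidate services; columns: QoS attributes
# (response time, availability, price). "Cost-type" attributes, where
# smaller is better, are inverted so every score increases with quality.
qos = np.array([[120.0, 0.99, 5.0],
                [200.0, 0.95, 3.0],
                [150.0, 0.97, 4.0]])
cost_type = np.array([True, False, True])

lo, hi = qos.min(axis=0), qos.max(axis=0)
norm = np.where(cost_type, (hi - qos) / (hi - lo), (qos - lo) / (hi - lo))

weights = np.array([0.5, 0.3, 0.2])  # user preference over attributes
utility = norm @ weights             # aggregate score per service
print("best service:", int(np.argmax(utility)))
```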

  13. Multicriteria decision group model for the selection of suppliers

    Directory of Open Access Journals (Sweden)

    Luciana Hazin Alencar

    2008-08-01

    Full Text Available Several authors have been studying group decision making over the years, which indicates how relevant it is. This paper presents a multicriteria group decision model based on the ELECTRE IV and VIP Analysis methods, for cases where there is great divergence among the decision makers. The model includes two stages. In the first, the ELECTRE IV method is applied and a collective criteria ranking is obtained. In the second, using the criteria ranking, VIP Analysis is applied and the alternatives are selected. To illustrate the model, a numerical application in the context of supplier selection in project management is used. The suppliers that form part of the project team have a crucial role in project management: they are involved in a network of connected activities that can jeopardize the success of the project if not undertaken in an appropriate way. The question tackled is how to select service suppliers for a project, on behalf of an enterprise, in a way that meets the multiple objectives of the decision-makers.

  14. Unraveling the sub-processes of selective attention: insights from dynamic modeling and continuous behavior.

    Science.gov (United States)

    Frisch, Simon; Dshemuchadse, Maja; Görner, Max; Goschke, Thomas; Scherbaum, Stefan

    2015-11-01

    Selective attention biases information processing toward stimuli that are relevant for achieving our goals. However, the nature of this bias is under debate: Does it solely rely on the amplification of goal-relevant information or is there a need for additional inhibitory processes that selectively suppress currently distracting information? Here, we explored the processes underlying selective attention with a dynamic, modeling-based approach that focuses on the continuous evolution of behavior over time. We present two dynamic neural field models incorporating the diverging theoretical assumptions. Simulations with both models showed that they make similar predictions with regard to response times but differ markedly with regard to their continuous behavior. Human data observed via mouse tracking as a continuous measure of performance revealed evidence for the model solely based on amplification but no indication of persisting selective distracter inhibition.
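
    A toy version of the contrast the models embody (not the paper's neural field equations) can be written with two leaky accumulator units, one for the target and one for the distracter, with or without mutual inhibition; all parameters below are illustrative.

      import math

      def f(u):
          """Sigmoid activation."""
          return 1.0 / (1.0 + math.exp(-u))

      def simulate(amplification, inhibition, steps=200, dt=0.05):
          target, distracter = 0.0, 0.0
          for _ in range(steps):
              # Both units receive baseline input; the goal signal boosts
              # the target, and inhibition (if any) couples the units.
              d_target = -target + 1.0 + amplification - inhibition * f(distracter)
              d_distracter = -distracter + 1.0 - inhibition * f(target)
              target += dt * d_target
              distracter += dt * d_distracter
          return round(target, 3), round(distracter, 3)

      print("amplification only:", simulate(amplification=0.8, inhibition=0.0))
      print("with inhibition:   ", simulate(amplification=0.8, inhibition=0.6))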

  15. Active defense strategy selection based on non-zero-sum attack-defense game model

    Institute of Scientific and Technical Information of China (English)

    陈永强; 付钰; 吴晓平

    2013-01-01

    In order to deal with the problems that defensive measures lag behind attacks and that the payoffs of the attacker and the defender are not equal in real network attack-defense environments, an active defense strategy selection method based on a non-zero-sum game is proposed. Firstly, a network security game graph is presented, combining the actual situation of network security with the game relationship between the attacker and the system. Secondly, on this basis, a network attack-defense game model based on a non-zero-sum game is given. The attack-defense payoff of each single security attribute is calculated by combining host importance with the success rate of defense measures, and the overall attack-defense payoff is then quantified according to the attack-defense intentions. Finally, the optimal active defense strategy is obtained by analyzing the Nash equilibrium of the game model. A representative example illustrates the efficacy and feasibility of the method for attack prediction and active defense strategy selection.
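
    A minimal sketch of the final step, under assumed payoffs: enumerate the pure-strategy Nash equilibria of a small bimatrix attack-defense game and read off the defender's strategy. The payoff numbers are illustrative, not taken from the paper.

      # payoffs[a][d] = (attacker_payoff, defender_payoff)
      # for attack strategy a and defense strategy d.
      payoffs = [
          [( 3, -4), (-1,  1)],
          [( 2, -3), ( 1, -2)],
      ]

      def pure_nash(payoffs):
          """Enumerate pure-strategy Nash equilibria of a bimatrix game."""
          A = len(payoffs)          # number of attacker strategies
          D = len(payoffs[0])       # number of defender strategies
          equilibria = []
          for a in range(A):
              for d in range(D):
                  att, dfn = payoffs[a][d]
                  # (a, d) is an equilibrium iff neither side can gain
                  # by unilaterally deviating.
                  best_att = all(payoffs[a2][d][0] <= att for a2 in range(A))
                  best_dfn = all(payoffs[a][d2][1] <= dfn for d2 in range(D))
                  if best_att and best_dfn:
                      equilibria.append((a, d, att, dfn))
          return equilibria

      for a, d, att, dfn in pure_nash(payoffs):
          print(f"equilibrium: attack {a}, defense {d}; payoffs ({att}, {dfn})")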

  16. On the benefits of location-based relay selection in mobile wireless networks

    DEFF Research Database (Denmark)

    Nielsen, Jimmy Jessen; Madsen, Tatiana Kozlova; Schwefel, Hans-Peter

    2016-01-01

    We consider infrastructure-based mobile networks that are assisted by a single relay transmission where both the downstream destination and relay nodes are mobile. Selecting the optimal transmission path for a destination node requires up-to-date link quality estimates of all relevant links...... with varying information update interval, node mobility, location inaccuracy, and inaccurate propagation model parameters. Our results show that location-based relay selection performs better than SNR-based relay selection at typical levels of location error when medium-scale fading can be neglected...
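
    The core idea can be sketched as follows, under an assumed log-distance path-loss model and invented coordinates: each candidate relay's two hops are scored from node positions alone, and the relay with the best bottleneck link is selected.

      import math

      def path_gain_db(p, q, pl0=-40.0, exponent=3.0, d0=1.0):
          """Predicted channel gain (dB) from the log-distance model;
          pl0 is the reference gain at distance d0."""
          d = max(math.dist(p, q), d0)
          return pl0 - 10.0 * exponent * math.log10(d / d0)

      def select_relay(source, destination, relays):
          """Pick the relay whose weaker hop (source-relay or
          relay-destination) has the best predicted gain."""
          def bottleneck(r):
              return min(path_gain_db(source, r), path_gain_db(r, destination))
          return max(relays, key=bottleneck)

      source, destination = (0.0, 0.0), (100.0, 0.0)
      relays = [(50.0, 5.0), (30.0, 40.0), (70.0, -10.0)]
      print("selected relay:", select_relay(source, destination, relays))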

  17. Periodic Integration: Further Results on Model Selection and Forecasting

    NARCIS (Netherlands)

    Ph.H.B.F. Franses (Philip Hans); R. Paap (Richard)

    1996-01-01

    This paper considers model selection and forecasting issues in two closely related models for nonstationary periodic autoregressive time series [PAR]. Periodically integrated seasonal time series [PIAR] need a periodic differencing filter to remove the stochastic trend. On the other...
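
    For a quarterly series the periodic differencing filter takes the form e_t = y_t - alpha_s * y_{t-1}, where the coefficient alpha_s varies with the season s and the four seasonal alphas multiply to one; the sketch below applies such a filter to invented data.

      # The four seasonal coefficients are chosen so their product is one,
      # as the periodic integration restriction requires.
      alphas = [1.25, 0.8, 1.1, 1.0 / (1.25 * 0.8 * 1.1)]

      def periodic_difference(y, alphas):
          """Return e_t = y_t - alpha_{s(t)} * y_{t-1} for t = 1..len(y)-1,
          where the season s(t) cycles through the list of alphas."""
          s = len(alphas)
          return [y[t] - alphas[t % s] * y[t - 1] for t in range(1, len(y))]

      y = [10.0, 12.5, 10.0, 11.0, 11.2, 14.0, 11.2, 12.3]  # quarterly data
      print(periodic_difference(y, alphas))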

  18. AN EXPERT SYSTEM MODEL FOR THE SELECTION OF TECHNICAL PERSONNEL

    Directory of Open Access Journals (Sweden)

    Emine COŞGUN

    2005-03-01

    Full Text Available In this study, a model has been developed for the selection of technical personnel. In the model, Visual Basic is used for the user interface, Microsoft Access as the database system, and the CLIPS program as the expert system shell. The proposed model was developed using expert system technology. In the personnel selection process, only the pre-evaluation of applicants is considered: rather than replacing the expert, the system is a decision support program that analyzes the data gathered from job application forms and assists the expert in making faster and more accurate decisions.
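
    The flavor of such pre-evaluation rules can be sketched in a few lines (here in plain Python rather than CLIPS); the fields and thresholds below are invented for illustration, not taken from the study.

      # Each rule checks one requirement against the application-form data.
      RULES = [
          ("degree required",      lambda a: a["has_degree"]),
          ("experience >= 2 yrs",  lambda a: a["years_experience"] >= 2),
          ("language certificate", lambda a: a["language_cert"]),
      ]

      def pre_evaluate(applicant):
          """Return (passed, failed_rules) from the form data alone."""
          failed = [name for name, rule in RULES if not rule(applicant)]
          return (not failed, failed)

      applicant = {"has_degree": True, "years_experience": 3, "language_cert": False}
      passed, failed = pre_evaluate(applicant)
      print("recommend interview" if passed else f"rejected: {failed}")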

  19. A recruitment and selection process model: the case of the Department of Justice and Constitutional Development

    OpenAIRE

    Thebe, T. P.; Van der Waldt, Gerrit

    2014-01-01

    The purpose of this article is to report on the findings of an empirical investigation conducted at the Department of Justice and Constitutional Development. The aim of the investigation was to ascertain the status of current practices and the challenges regarding the processes and procedures utilised for recruitment and selection. Based on these findings, the article further outlines the design of a comprehensive process model for human resource recruitment and selection for the Department. The model...

  20. A Molecular Selection Index Method Based on Eigenanalysis

    Science.gov (United States)

    Cerón-Rojas, J. Jesús; Castillo-González, Fernando; Sahagún-Castellanos, Jaime; Santacruz-Varela, Amalio; Benítez-Riquelme, Ignacio; Crossa, José

    2008-01-01

    The traditional molecular selection index (MSI) employed in marker-assisted selection maximizes the selection response by combining information on molecular markers linked to quantitative trait loci (QTL) and phenotypic values of the traits of the individuals of interest. This study proposes an MSI based on an eigenanalysis method (molecular eigen selection index method, MESIM), where the first eigenvector is used as a selection index criterion, and its elements determine the proportion of the trait's contribution to the selection index. This article develops the theoretical framework of MESIM. Simulation results show that the genotypic means and the expected selection response from MESIM for each trait are equal to or greater than those from the traditional MSI. When several traits are simultaneously selected, MESIM performs well for traits with relatively low heritability. The main advantages of MESIM over the traditional molecular selection index are that its statistical sampling properties are known and that it does not require economic weights and thus can be used in practical applications when all or some of the traits need to be improved simultaneously. PMID:18716338
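
    The central computation can be sketched as follows, using random illustrative data rather than the paper's simulations: form the covariance matrix of the trait and marker scores, take its first eigenvector as the vector of index weights, and rank individuals by the resulting index.

      import numpy as np

      rng = np.random.default_rng(0)
      # Rows: individuals; columns: trait phenotypes and marker scores.
      X = rng.normal(size=(200, 5))

      C = np.cov(X, rowvar=False)              # 5 x 5 covariance matrix
      eigvals, eigvecs = np.linalg.eigh(C)     # eigenvalues in ascending order
      w = eigvecs[:, -1]                       # first (largest) eigenvector

      index = X @ w                            # selection index per individual
      selected = np.argsort(index)[-20:]       # keep the top 10% of individuals
      print("index weights:", np.round(w, 3))
      print("selected individuals:", selected)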