Entropic criterion for model selection
Tseng, Chih-Yuan
2006-10-01
Model or variable selection is usually achieved by ranking models in increasing order of preference. One common method is to apply the Kullback-Leibler distance, or relative entropy, as a selection criterion. Yet this raises two questions: why use this criterion, and are there other criteria? Moreover, conventional approaches require a reference prior, which is usually difficult to obtain. Following the logic of inductive inference proposed by Caticha [Relative entropy and inductive inference, in: G. Erickson, Y. Zhai (Eds.), Bayesian Inference and Maximum Entropy Methods in Science and Engineering, AIP Conference Proceedings, vol. 707, 2004 (available from arXiv.org/abs/physics/0311093)], we show relative entropy to be a unique criterion that requires no prior information and can be applied in different fields. We examine this criterion on a physical problem, simple fluids, and the results are promising.
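The ranking-by-relative-entropy idea in this abstract can be sketched numerically. Below is a minimal Python illustration with made-up distributions (not data from the paper): candidate model distributions are ranked by their Kullback-Leibler distance from an empirical distribution, and the smallest distance is preferred.

```python
import numpy as np

def kl_divergence(p, q):
    """Relative entropy D(p || q) between two discrete distributions (natural log)."""
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    mask = p > 0  # terms with p_i = 0 contribute nothing
    return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))

# Empirical distribution and two hypothetical candidate models
p_emp = np.array([0.5, 0.3, 0.2])
models = {
    "A": np.array([0.45, 0.35, 0.20]),
    "B": np.array([0.70, 0.20, 0.10]),
}

# Rank models in increasing order of relative entropy: smaller = preferred
ranked = sorted(models, key=lambda m: kl_divergence(p_emp, models[m]))
print("preference order:", ranked)  # model A is closer to the data in the KL sense
```

Here model A wins because it is closer to the empirical distribution in the KL sense; the distributions and model names are purely illustrative.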
Vrieze, Scott I.
2012-01-01
This article reviews the Akaike information criterion (AIC) and the Bayesian information criterion (BIC) in model selection and the appraisal of psychological theory. The focus is on latent variable models, given their growing use in theory testing and construction. Theoretical statistical results in regression are discussed, and more important…
Marker selection by Akaike information criterion and Bayesian information criterion.
Li, W; Nyholt, D R
2001-01-01
We carried out a discriminant analysis with identity by descent (IBD) at each marker as inputs, and the sib pair type (affected-affected versus affected-unaffected) as the output. Using simple logistic regression for this discriminant analysis, we illustrate the importance of comparing models with different numbers of parameters. Such model comparisons are best carried out using either the Akaike information criterion (AIC) or the Bayesian information criterion (BIC). When AIC (or BIC) stepwise variable selection was applied to the German Asthma data set, a group of markers was selected that provides the best fit to the data (assuming an additive effect). Interestingly, these 25-26 markers were not identical to those with the highest (in magnitude) single-locus lod scores.
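The AIC/BIC comparison described above reduces to two penalized log-likelihood formulas. A minimal sketch with hypothetical log-likelihoods (not the German Asthma data) shows how the two criteria can disagree on model size:

```python
import numpy as np

def aic(loglik, k):
    """Akaike information criterion: 2k - 2 log L."""
    return 2 * k - 2 * loglik

def bic(loglik, k, n):
    """Bayesian information criterion: k log n - 2 log L."""
    return k * np.log(n) - 2 * loglik

n = 200  # hypothetical number of sib pairs
# (maximized log-likelihood, number of parameters) for three nested fits
fits = {"5 markers": (-120.4, 6), "25 markers": (-95.1, 26), "40 markers": (-90.8, 41)}

best_aic = min(fits, key=lambda m: aic(*fits[m]))
best_bic = min(fits, key=lambda m: bic(fits[m][0], fits[m][1], n))
print("AIC choice:", best_aic)  # the 25-marker model
print("BIC choice:", best_bic)  # the 5-marker model: the log(n) penalty favors sparsity
```

With these illustrative numbers AIC favors the 25-marker model while BIC, whose per-parameter penalty log(200) exceeds 2, favors the smallest one; this mirrors BIC's general tendency to select sparser models.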
Posada, David; Buckley, Thomas R
2004-10-01
Model selection is a topic of special relevance in molecular phylogenetics that affects many, if not all, stages of phylogenetic inference. Here we discuss some fundamental concepts and techniques of model selection in the context of phylogenetics. We start by reviewing different aspects of the selection of substitution models in phylogenetics from a theoretical, philosophical and practical point of view, and summarize this comparison in table format. We argue that the most commonly implemented model selection approach, the hierarchical likelihood ratio test, is not the optimal strategy for model selection in phylogenetics, and that approaches like the Akaike Information Criterion (AIC) and Bayesian methods offer important advantages. In particular, the latter two methods are able to simultaneously compare multiple nested or nonnested models, assess model selection uncertainty, and allow for the estimation of phylogenies and model parameters using all available models (model-averaged inference or multimodel inference). We also describe how the relative importance of the different parameters included in substitution models can be depicted. To illustrate some of these points, we have applied AIC-based model averaging to 37 mitochondrial DNA sequences from the subgenus Ohomopterus (genus Carabus) ground beetles described by Sota and Vogler (2001).
Evans, Jason; Sullivan, Jack
2011-01-01
A priori selection of models for use in phylogeny estimation from molecular sequence data is increasingly important as the number and complexity of available models increases. The Bayesian information criterion (BIC) and the derivative decision-theoretic (DT) approaches rely on a conservative approximation to estimate the posterior probability of a given model. Here, we extended the DT method by using reversible jump Markov chain Monte Carlo approaches to directly estimate model probabilities for an extended candidate pool of all 406 special cases of the general time reversible + Γ family. We analyzed 250 diverse data sets in order to evaluate the effectiveness of the BIC approximation for model selection under the BIC and DT approaches. Model choice under DT differed between the BIC approximation and direct estimation methods for 45% of the data sets (113/250), and differing model choice resulted in significantly different sets of trees in the posterior distributions for 26% of the data sets (64/250). The model with the lowest BIC score differed from the model with the highest posterior probability in 30% of the data sets (76/250). When the data indicate a clear model preference, the BIC approximation works well enough to result in the same model selection as with directly estimated model probabilities, but a substantial proportion of biological data sets lack this characteristic, which leads to selection of underparametrized models.
Regularization Parameter Selections via Generalized Information Criterion.
Zhang, Yiyun; Li, Runze; Tsai, Chih-Ling
2010-03-01
We apply the nonconcave penalized likelihood approach to obtain variable selections as well as shrinkage estimators. This approach relies heavily on the choice of regularization parameter, which controls the model complexity. In this paper, we propose employing the generalized information criterion (GIC), encompassing the commonly used Akaike information criterion (AIC) and Bayesian information criterion (BIC), for selecting the regularization parameter. Our proposal makes a connection between the classical variable selection criteria and the regularization parameter selections for the nonconcave penalized likelihood approaches. We show that the BIC-type selector enables identification of the true model consistently, and the resulting estimator possesses the oracle property in the terminology of Fan and Li (2001). In contrast, however, the AIC-type selector tends to overfit with positive probability. We further show that the AIC-type selector is asymptotically loss efficient, while the BIC-type selector is not. Our simulation results confirm these theoretical findings, and an empirical example is presented. Some technical proofs are given in the online supplementary material.
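The GIC described in this abstract can be sketched for a simple lasso path: a one-parameter family of selectors indexed by κ, with κ = 2 giving an AIC-type selector and κ = log n a BIC-type selector. The following Python sketch uses synthetic data, a plain coordinate-descent lasso, and degrees of freedom counted as the number of nonzero coefficients; it illustrates the idea, not the paper's exact estimator.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 100, 8
X = rng.standard_normal((n, p))
beta_true = np.array([2.0, -1.5, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0])
y = X @ beta_true + rng.standard_normal(n)

def lasso_cd(X, y, lam, sweeps=200):
    """Coordinate descent for (1/2)||y - Xb||^2 + n*lam*||b||_1 (illustrative)."""
    n, p = X.shape
    b = np.zeros(p)
    col_ss = (X ** 2).sum(axis=0)
    for _ in range(sweeps):
        for j in range(p):
            r = y - X @ b + X[:, j] * b[j]  # partial residual excluding feature j
            z = X[:, j] @ r
            b[j] = np.sign(z) * max(abs(z) - n * lam, 0.0) / col_ss[j]
    return b

def gic(X, y, b, kappa):
    """GIC(lam) = log(RSS/n) + kappa * df / n, with df = number of nonzeros."""
    n = len(y)
    rss = float(np.sum((y - X @ b) ** 2))
    return np.log(rss / n) + kappa * np.count_nonzero(b) / n

lams = np.logspace(-2, 0, 20)
for kappa, label in [(2.0, "AIC-type"), (np.log(n), "BIC-type")]:
    scores = [gic(X, y, lasso_cd(X, y, lam), kappa) for lam in lams]
    best_lam = lams[int(np.argmin(scores))]
    support = np.nonzero(lasso_cd(X, y, best_lam))[0]
    print(f"{label}: lambda = {best_lam:.3f}, selected variables = {support.tolist()}")
```

The design choice here mirrors the abstract: the same fitted path is scored twice, and only the penalty weight κ changes, so any difference in the selected support is attributable to the AIC-type versus BIC-type penalty.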
Ball, R D
2001-11-01
We describe an approximate method for the analysis of quantitative trait loci (QTL) based on model selection from multiple regression models with trait values regressed on marker genotypes, using a modification of the easily calculated Bayesian information criterion to estimate the posterior probability of models with various subsets of markers as variables. The BIC-delta criterion, with the parameter delta increasing the penalty for additional variables in a model, is further modified to incorporate prior information, and missing values are handled by multiple imputation. Marginal probabilities for model sizes are calculated, and the posterior probability of nonzero model size is interpreted as the posterior probability of existence of a QTL linked to one or more markers. The method is demonstrated on analysis of associations between wood density and markers on two linkage groups in Pinus radiata. Selection bias, which is the bias that results from using the same data to both select the variables in a model and estimate the coefficients, is shown to be a problem for commonly used non-Bayesian methods for QTL mapping, which do not average over alternative possible models that are consistent with the data.
Passos, Valeria Lima; Berger, Martijn P. F.; Tan, Frans E. S.
2008-01-01
During the early stage of computerized adaptive testing (CAT), item selection criteria based on Fisher's information often produce less stable latent trait estimates than the Kullback-Leibler global information criterion. Robustness against early stage instability has been reported for the D-optimality criterion in a polytomous CAT with the…
A focused information criterion for graphical models
Pircalabelu, E.; Claeskens, G.; Waldorp, L.
2015-01-01
A new method for model selection for Gaussian Bayesian networks and Markov networks, with extensions towards ancestral graphs, is constructed to have good mean squared error properties. The method is based on the focused information criterion, and offers the possibility of fitting individually tailored…
Bayesian information criterion for censored survival models.
Volinsky, C T; Raftery, A E
2000-03-01
We investigate the Bayesian Information Criterion (BIC) for variable selection in models for censored survival data. Kass and Wasserman (1995, Journal of the American Statistical Association 90, 928-934) showed that BIC provides a close approximation to the Bayes factor when a unit-information prior on the parameter space is used. We propose a revision of the penalty term in BIC so that it is defined in terms of the number of uncensored events instead of the number of observations. For a simple censored data model, this revision results in a better approximation to the exact Bayes factor based on a conjugate unit-information prior. In the Cox proportional hazards regression model, we propose defining BIC in terms of the maximized partial likelihood. Using the number of deaths rather than the number of individuals in the BIC penalty term corresponds to a more realistic prior on the parameter space and is shown to improve predictive performance for assessing stroke risk in the Cardiovascular Health Study.
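The revision proposed above replaces the sample size n by the number of uncensored events d in the BIC penalty term. A small numeric sketch with hypothetical partial log-likelihoods (not the Cardiovascular Health Study data) shows the effect under heavy censoring:

```python
import numpy as np

def bic_events(partial_loglik, k, n_events):
    """BIC for censored survival models with the penalty based on the
    number of uncensored events (deaths) rather than the sample size."""
    return -2 * partial_loglik + k * np.log(n_events)

n_obs, n_deaths = 500, 60  # heavy censoring: only 60 of 500 subjects had events
# Hypothetical Cox fits: (maximized partial log-likelihood, number of covariates)
fits = {"3 covariates": (-210.2, 3), "8 covariates": (-204.9, 8)}

for name, (pll, k) in fits.items():
    standard = -2 * pll + k * np.log(n_obs)  # conventional penalty, log(n)
    revised = bic_events(pll, k, n_deaths)   # event-based penalty, log(d)
    print(f"{name}: standard BIC = {standard:.1f}, event-based BIC = {revised:.1f}")
```

Because log(d) < log(n) under censoring, the event-based penalty is milder per parameter; with these made-up numbers both versions prefer the smaller model, but the gap between the two penalties grows with the amount of censoring.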
An information criterion for marginal structural models.
Platt, Robert W; Brookhart, M Alan; Cole, Stephen R; Westreich, Daniel; Schisterman, Enrique F
2013-04-15
Marginal structural models were developed as a semiparametric alternative to the G-computation formula to estimate causal effects of exposures. In practice, these models are often specified using parametric regression models. As such, the usual conventions regarding regression model specification apply. This paper outlines strategies for marginal structural model specification and considerations for the functional form of the exposure metric in the final structural model. We propose a quasi-likelihood information criterion adapted from use in generalized estimating equations. We evaluate the properties of our proposed information criterion using a limited simulation study. We illustrate our approach using two empirical examples. In the first example, we use data from a randomized breastfeeding promotion trial to estimate the effect of breastfeeding duration on infant weight at 1 year. In the second example, we use data from two prospective cohort studies to estimate the effect of highly active antiretroviral therapy on CD4 count in an observational cohort of HIV-infected men and women. The marginal structural model specified should reflect the scientific question being addressed but can also assist in exploration of other plausible and closely related questions. In marginal structural models, as in any regression setting, correct inference depends on correct model specification. Our proposed information criterion provides a formal method for comparing model fit for different specifications.
An important reference criterion for the selection of GSSP
Institute of Scientific and Technical Information of China (English)
(no author listed)
2000-01-01
A study on the relationship between biostratigraphy and sequence stratigraphy in several designated global boundary stratotypes shows that the best way may be to take the GSSP at a point coincident with the base of the first widespread Leading Group biozone above the first flooding surface (FFS) of the relevant third-order sequence. It is suggested that the first flooding surface of the sequence should be an important reference criterion for the selection of GSSP. As the base of the first widespread Leading Group biozone chosen for the definition of GSSP could not be lower than the first flooding surface of the referred sequence, the latter surface may be an important criterion for the recognition and correlation of chronostratigraphic boundaries.
Covariance-Based Measurement Selection Criterion for Gaussian-Based Algorithms
Directory of Open Access Journals (Sweden)
Fernando A. Auat Cheein
2013-01-01
Process modeling by means of Gaussian-based algorithms often suffers from redundant information, which usually increases the computational complexity of estimation without significantly improving estimation performance. In this article, a non-arbitrary measurement selection criterion for Gaussian-based algorithms is proposed. The criterion is based on determining the most significant measurement from both an estimation convergence perspective and the covariance matrix associated with the measurement, and is independent of the nature of the measured variable. It is used in conjunction with three Gaussian-based algorithms: the EIF (Extended Information Filter), the EKF (Extended Kalman Filter) and the UKF (Unscented Kalman Filter). Nevertheless, the measurement selection criterion shown herein can also be applied to other Gaussian-based algorithms. Although this work is focused on environment modeling, the results can be applied to other Gaussian-based algorithm implementations. Mathematical descriptions and implementation results that validate the proposal are also included in this work.
Directory of Open Access Journals (Sweden)
Mohammad Reza Marami Milani
2016-07-01
This study focuses on multiple linear regression models relating six climate indices (temperature-humidity index THI, environmental stress index ESI, equivalent temperature index ETI, heat load index HLI, modified HLI (HLInew) and respiratory rate predictor RRP) to three main components of cow's milk (yield, fat, and protein) for cows in Iran. The least absolute shrinkage and selection operator (LASSO) and the Akaike information criterion (AIC) are applied to select the best model for the milk predictands with the smallest number of climate predictors. Uncertainty is estimated by bootstrapping through resampling, and cross-validation is used to avoid over-fitting. Climatic parameters are calculated from the NASA-MERRA global atmospheric reanalysis. Milk data for the months from April to September, 2002 to 2010, are used. The best linear regression models are found in spring, between milk yield as the predictand and THI, ESI, ETI, HLI, and RRP as predictors, with p-values < 0.001 and R² values of 0.50 and 0.49, respectively. In summer, milk yield with the independent variables THI, ETI, and ESI shows the strongest relation (p-value < 0.001, R² = 0.69). For fat and protein the results are only marginal. This method is suggested for studies of the impact of climate variability/change on agriculture and food science when short time series or data with large uncertainty are available.
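The bootstrap uncertainty estimation mentioned in this abstract can be sketched as follows; the data here are synthetic stand-ins for a THI-versus-milk-yield regression, not the Iranian herd data, and the slope of -0.15 is an assumed value for illustration.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 120
thi = rng.normal(70.0, 8.0, n)                      # hypothetical THI values
milk = 30.0 - 0.15 * thi + rng.normal(0.0, 1.5, n)  # yield falls with heat stress

def slope(x, y):
    """Least-squares slope of y regressed on x."""
    return float(np.polyfit(x, y, 1)[0])

# Nonparametric bootstrap: resample (THI, yield) pairs with replacement
boot = []
for _ in range(1000):
    idx = rng.integers(0, n, n)
    boot.append(slope(thi[idx], milk[idx]))

lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"slope estimate {slope(thi, milk):.3f}, 95% bootstrap CI [{lo:.3f}, {hi:.3f}]")
```

Resampling whole (x, y) pairs rather than residuals keeps the method valid when the error variance depends on the predictor, which is the usual reason for preferring the pairs bootstrap in short, noisy series like these.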
A new variant selection criterion for twin variants in titanium alloys. Pt. 2
Energy Technology Data Exchange (ETDEWEB)
Schuman, Christophe; Bao, Lei; Lecomte, Jean Sebastien; Zhang, Yudong; Raulot, Jean Marc; Philippe, Marie Jeanne; Esling, Claude [Laboratoire d'Etude des Microstructures et de Mecanique des Materiaux (LEM3), CNRS 7239, Universite Paul Verlaine - Metz, Ile du Saulcy, Metz (France)
2012-05-15
A new selection criterion to explain the activation of twinning variants is proposed. The criterion is based on the calculation of the deformation energy required to create a primary twin, and takes into account the effect of grain size through a Hall-Petch type relation. It yields very good predictions of both twin family selection and twin variant selection. The calculations are compared with experimental results obtained on T40 (ASTM grade 2) deformed by channel die compression. (Copyright 2012 WILEY-VCH Verlag GmbH and Co. KGaA, Weinheim)
Bayesian Case-deletion Model Complexity and Information Criterion.
Zhu, Hongtu; Ibrahim, Joseph G; Chen, Qingxia
2014-10-01
We establish a connection between Bayesian case influence measures for assessing the influence of individual observations and Bayesian predictive methods for evaluating the predictive performance of a model and comparing different models fitted to the same dataset. Based on such a connection, we formally propose a new set of Bayesian case-deletion model complexity (BCMC) measures for quantifying the effective number of parameters in a given statistical model. Its properties in linear models are explored. Adding some functions of BCMC to a conditional deviance function leads to a Bayesian case-deletion information criterion (BCIC) for comparing models. We systematically investigate some properties of BCIC and its connection with other information criteria, such as the Deviance Information Criterion (DIC). We illustrate the proposed methodology on linear mixed models with simulations and a real data example.
2011-06-09
... the spinal cord and subsequent human physical activity and movement. Discussion: We are establishing... spinal cord and subsequent physical activity and movement, as suggested by the commenter. Changes: None... Final Priorities and Selection Criterion; National Institute on Disability and Rehabilitation...
Wheeler, David C; Hickson, Demarc A; Waller, Lance A
2010-06-01
Many diagnostic tools and goodness-of-fit measures, such as the Akaike information criterion (AIC) and the Bayesian deviance information criterion (DIC), are available to evaluate the overall adequacy of linear regression models. In addition, visually assessing adequacy in models has become an essential part of any regression analysis. In this paper, we focus on a spatial consideration of the local DIC measure for model selection and goodness-of-fit evaluation. We use a partitioning of the DIC into the local DIC, leverage, and deviance residuals to assess local model fit and influence for both individual observations and groups of observations in a Bayesian framework. We use visualization of the local DIC and differences in local DIC between models to assist in model selection and to visualize the global and local impacts of adding covariates or model parameters. We demonstrate the utility of the local DIC in assessing model adequacy using HIV prevalence data from pregnant women in the Butare province of Rwanda during 1989-1993 using a range of linear model specifications, from global effects only to spatially varying coefficient models, and a set of covariates related to sexual behavior. Results of applying the diagnostic visualization approach include more refined model selection and greater understanding of the models as applied to the data.
Interface Pattern Selection Criterion for Cellular Structures in Directional Solidification
Trivedi, R.; Tewari, S. N.; Kurtze, D.
1999-01-01
The aim of this investigation is to establish key scientific concepts that govern the selection of cellular and dendritic patterns during the directional solidification of alloys. We shall first address scientific concepts that are crucial in the selection of interface patterns. Next, the results of ground-based experimental studies in the Al-4.0 wt % Cu system will be described. Both experimental studies and theoretical calculations will be presented to establish the need for microgravity experiments.
Exclusion as a Criterion for Selecting Socially Vulnerable Population Groups
Directory of Open Access Journals (Sweden)
Aleksandra Anatol’evna Shabunova
2016-05-01
The article considers theoretical aspects of the research project "The Mechanisms for Overcoming Mental Barriers of Inclusion of Socially Vulnerable Categories of the Population for the Purpose of Intensifying Modernization in the Regional Community" (RSF grant No. 16-18-00078). The authors analyze the essence of the category of "socially vulnerable groups" from legal, economic and sociological perspectives. The paper shows that the economic approach, which uses the criterion of "the level of income and accumulated assets" to define vulnerable population groups, prevails in public administration practice. The legal field of the category, based on the economic approach, is defined by the concept of "the poor and socially unprotected categories of citizens". Through an analysis of the theoretical and methodological aspects of this issue, the authors show that these criteria are a necessary but not sufficient condition for classifying the population as socially vulnerable. Foreign literature associates the phenomenon of vulnerability with the concept of risk, with the capacity of households to respond to risks, and with the likelihood of losing well-being (poverty theory, research on means of subsistence, etc.). Asset-based approaches relate vulnerability to poverty arising from lack of access to tangible and intangible assets. Sociological theories, represented by the concept of social exclusion, pay much attention to the breakdown of social ties as a source of vulnerability. The essence of social exclusion is the inability of people to participate in important aspects of social life (politics, labor markets, education and healthcare, cultural life, etc.) even though they have every right to do so. The difference between the concepts of exclusion and poverty is manifested in a shift of emphasis from income inequality to limited access to rights. Social exclusion is…
Generalized Empirical Likelihood-Based Focused Information Criterion and Model Averaging
Directory of Open Access Journals (Sweden)
Naoya Sueishi
2013-07-01
This paper develops model selection and averaging methods for moment restriction models. We first propose a focused information criterion based on the generalized empirical likelihood estimator. We address the issue of selecting an optimal model, rather than a correct model, for estimating a specific parameter of interest. Then, this study investigates a generalized empirical likelihood-based model averaging estimator that minimizes the asymptotic mean squared error. A simulation study suggests that our averaging estimator can be a useful alternative to existing post-selection estimators.
Adjustment Criterion and Algorithm in Adjustment Model with Uncertain
Directory of Open Access Journals (Sweden)
SONG Yingchun
2015-02-01
Uncertainty often exists in the process of obtaining measurement data, which affects the reliability of parameter estimation. This paper establishes a new adjustment model in which uncertainty is incorporated into the function model as a parameter. A new adjustment criterion and its iterative algorithm are given, based on the uncertainty propagation law for the residual error, in which the maximum possible uncertainty is minimized. The paper also analyzes, with examples, the different adjustment criteria and the features of the optimal solutions of least-squares adjustment, uncertainty adjustment and total least-squares adjustment. Existing error theory is extended with a new method for processing observational data with uncertainty.
Model Selection for Geostatistical Models
Energy Technology Data Exchange (ETDEWEB)
Hoeting, Jennifer A.; Davis, Richard A.; Merton, Andrew A.; Thompson, Sandra E.
2006-02-01
We consider the problem of model selection for geospatial data. Spatial correlation is typically ignored in the selection of explanatory variables and this can influence model selection results. For example, the inclusion or exclusion of particular explanatory variables may not be apparent when spatial correlation is ignored. To address this problem, we consider the Akaike Information Criterion (AIC) as applied to a geostatistical model. We offer a heuristic derivation of the AIC in this context and provide simulation results that show that using AIC for a geostatistical model is superior to the often used approach of ignoring spatial correlation in the selection of explanatory variables. These ideas are further demonstrated via a model for lizard abundance. We also employ the principle of minimum description length (MDL) to variable selection for the geostatistical model. The effect of sampling design on the selection of explanatory covariates is also explored.
Accuracy of a selection criterion for glass forming ability in the Ni–Nb–Zr system
Energy Technology Data Exchange (ETDEWEB)
Déo, L.P., E-mail: leonardopratavieira@gmail.com; Oliveira, M.F. de, E-mail: falcao@sc.usp.br
2014-12-05
Highlights: • We applied a selection criterion in the Ni–Nb–Zr system to find alloys with high GFA. • We used the thermal parameter γ_m to evaluate the GFA of the alloys. • The correlation between the γ_m parameter and R_c in the studied system is poor. • The effect of oxygen impurity dramatically reduced the GFA of the alloys. • Unknown intermetallic compounds reduced the accuracy of the criterion. Abstract: Several theories have been developed and applied to metallic systems in order to find the stoichiometries with the highest glass forming ability; however, there is no universal theory for predicting glass forming ability in metallic systems. Recently, a selection criterion was applied in the Zr–Ni–Cu system and some correlation was found between experimental and theoretical data. This criterion correlates the critical cooling rate for glass formation with the topological instability of stable crystalline structures, the average work function difference and the average electron density difference among the constituent elements of the alloy. In the present work, this criterion was applied in the Ni–Nb–Zr system, and the influence of factors not considered in the calculation, such as unknown intermetallic compounds and oxygen contamination, on the accuracy of the criterion was investigated. Bulk amorphous specimens were produced by injection casting. Their amorphous nature was analyzed by X-ray diffraction and differential scanning calorimetry; oxygen contamination was quantified by the inert gas fusion method.
Energy Technology Data Exchange (ETDEWEB)
Cho, Su Gil; Jang, Jun Yong; Kim, Ji Hoon; Lee, Tae Hee [Hanyang University, Seoul (Korea, Republic of); Lee, Min Uk [Romax Technology Ltd., Seoul (Korea, Republic of); Choi, Jong Su; Hong, Sup [Korea Research Institute of Ships and Ocean Engineering, Daejeon (Korea, Republic of)
2015-04-15
Sequential surrogate model-based global optimization algorithms, such as super-EGO, have been developed to increase the efficiency of commonly used global optimization techniques and to ensure the accuracy of the optimization. However, earlier approaches have drawbacks: the optimization loop involves three phases and several empirical parameters. We propose a united sampling criterion that simplifies the algorithm and achieves the global optimum of constrained problems without any empirical parameters. It can select points located in a feasible region with high model uncertainty as well as points along the constraint boundary at the lowest objective value. The mean squared error determines which criterion is more dominant, the infill sampling criterion or the boundary sampling criterion. The method also guarantees the accuracy of the surrogate model because, unlike super-EGO, sample points are not concentrated within extremely small regions. The performance of the proposed method, including the solvability of a problem, convergence properties, and efficiency, is validated through nonlinear numerical examples with disconnected feasible regions.
Focused information criterion and model averaging based on weighted composite quantile regression
Xu, Ganggang
2013-08-13
We study the focused information criterion and frequentist model averaging and their application to post-model-selection inference for weighted composite quantile regression (WCQR) in the context of the additive partial linear models. With the non-parametric functions approximated by polynomial splines, we show that, under certain conditions, the asymptotic distribution of the frequentist model averaging WCQR-estimator of a focused parameter is a non-linear mixture of normal distributions. This asymptotic distribution is used to construct confidence intervals that achieve the nominal coverage probability. With properly chosen weights, the focused information criterion based WCQR estimators are not only robust to outliers and non-normal residuals but also can achieve efficiency close to the maximum likelihood estimator, without assuming the true error distribution. Simulation studies and a real data analysis are used to illustrate the effectiveness of the proposed procedure. © 2013 Board of the Foundation of the Scandinavian Journal of Statistics.
Huang, H. E.; Liang, C. P.; Jang, C. S.; Chen, J. S.
2015-12-01
Land subsidence due to groundwater exploitation is an urgent environmental problem in the Choushui river alluvial fan in Taiwan. Aquifer storage and recovery (ASR), in which excess surface water is injected into subsurface aquifers for later recovery, is one promising strategy for managing surplus water and may overcome water shortages. The performance of an ASR scheme is generally evaluated in terms of recovery efficiency, defined as the percentage of water injected into the system at an ASR site that fulfills the targeted water quality criterion. Site selection for an ASR scheme typically faces great challenges due to the spatial variability of groundwater quality and hydrogeological conditions. This study proposes a novel method for ASR site selection based on a drinking water quality criterion. Simplified groundwater flow and contaminant transport models are used to estimate spatial distributions of the recovery efficiency from the groundwater quality, hydrological conditions and ASR operation. The results of this study may provide government administrators with a basis for establishing a reliable ASR scheme.
Wavelength selection in injection-driven Hele-Shaw flows: A maximum amplitude criterion
Dias, Eduardo; Miranda, Jose
2013-11-01
As in most interfacial flow problems, the standard theoretical procedure to establish wavelength selection in the viscous fingering instability is to maximize the linear growth rate. However, there are important discrepancies between previous theoretical predictions and existing experimental data. In this work we perform a linear stability analysis of the radial Hele-Shaw flow system that takes into account the combined action of viscous normal stresses and wetting effects. Most importantly, we introduce an alternative selection criterion for which the selected wavelength is determined by the maximum of the interfacial perturbation amplitude. The effectiveness of such a criterion is substantiated by the significantly improved agreement between theory and experiments. We thank CNPq (Brazilian Sponsor) for financial support.
Partial dynamical symmetry as a selection criterion for many-body interactions
Leviatan, A; Van Isacker, P
2013-01-01
We propose the use of partial dynamical symmetry (PDS) as a selection criterion for higher-order terms in situations when a prescribed symmetry is obeyed by some states and is strongly broken in others. The procedure is demonstrated in a first systematic classification of many-body interactions with SU(3) PDS that can improve the description of deformed nuclei. As an example, the triaxial features of the nucleus 156Gd are analyzed.
The optimization of diffraction structures based on the principle selection of the main criterion
Kravets, O.; Beletskaja, S.; Lvovich, Ya; Lvovich, I.; Choporov, O.; Preobrazhenskiy, A.
2017-02-01
The possibilities of optimizing the characteristics of diffractive structures are analysed. A functional block diagram of a subsystem of diffractive structure optimization is shown. Next, a description of the method for the multicriterion optimization of diffractive structures is given. We then consider an algorithm for selecting the main criterion in the process of optimization. The algorithm efficiency is confirmed by an example of optimization of the diffractive structure.
Lin, Yi-Kuei; Yeh, Cheng-Ta
2013-05-01
From the perspective of supply chain management, the selected carrier plays an important role in freight delivery. This article proposes a new criterion of multi-commodity reliability and optimises the carrier selection based on such a criterion for logistics networks with routes and nodes, over which multiple commodities are delivered. Carrier selection concerns the selection of exactly one carrier to deliver freight on each route. The capacity of each carrier has several available values associated with a probability distribution, since some of a carrier's capacity may be reserved for various orders. Therefore, the logistics network, given any carrier selection, is a multi-commodity multi-state logistics network. Multi-commodity reliability is defined as a probability that the logistics network can satisfy a customer's demand for various commodities, and is a performance indicator for freight delivery. To solve this problem, this study proposes an optimisation algorithm that integrates genetic algorithm, minimal paths and Recursive Sum of Disjoint Products. A practical example in which multi-sized LCD monitors are delivered from China to Germany is considered to illustrate the solution procedure.
Decision models for use with criterion-referenced tests
van der Linden, Willem J.
1980-01-01
The problem of mastery decisions and optimizing cutoff scores on criterion-referenced tests is considered. This problem can be formalized as an (empirical) Bayes problem with decision rules of a monotone shape. Next, the derivation of optimal cutoff scores for threshold, linear, and normal ogive lo
Model Selection Principles in Misspecified Models
Lv, Jinchi
2010-01-01
Model selection is of fundamental importance to high dimensional modeling featured in many contemporary applications. Classical principles of model selection include the Kullback-Leibler divergence principle and the Bayesian principle, which lead to the Akaike information criterion and Bayesian information criterion when models are correctly specified. Yet model misspecification is unavoidable when we have no knowledge of the true model or when we have the correct family of distributions but miss some true predictor. In this paper, we propose a family of semi-Bayesian principles for model selection in misspecified models, which combine the strengths of the two well-known principles. We derive asymptotic expansions of the semi-Bayesian principles in misspecified generalized linear models, which give the new semi-Bayesian information criteria (SIC). A specific form of SIC admits a natural decomposition into the negative maximum quasi-log-likelihood, a penalty on model dimensionality, and a penalty on model miss...
SNP sets selection under mutual information criterion, application to F7/FVII dataset.
Brunel, H; Perera, A; Buil, A; Sabater-Lleal, M; Souto, J C; Fontcuberta, J; Vallverdu, M; Soria, J M; Caminal, P
2008-01-01
One of the main goals of human genetics is to find genetic markers related to complex diseases. In the blood coagulation process, it is known that genetic variability in the F7 gene is the most responsible for observed variations in FVII levels in blood. In this work, we propose a method for selecting sets of Single Nucleotide Polymorphisms (SNPs) significantly correlated with a phenotype (FVII levels). This method employs a feature selection algorithm (a variant of Sequential Forward Selection, SFS) based on a criterion of statistical significance of a mutual information functional. The algorithm is applied to a sample of independent individuals from the GAIT project. The main SNPs found by the algorithm correspond to previous results published using family-based techniques.
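The SFS idea described above can be sketched as a greedy search that repeatedly adds the SNP giving the largest gain in empirical mutual information between the selected set and the phenotype. This is a minimal sketch under stated assumptions: the paper's statistical-significance test on the mutual information functional is omitted, and all names are illustrative.

```python
import numpy as np

def mutual_information(x, y):
    """Empirical mutual information (in nats) between two discrete arrays."""
    mi = 0.0
    for xv in np.unique(x):
        for yv in np.unique(y):
            pxy = np.mean((x == xv) & (y == yv))
            px, py = np.mean(x == xv), np.mean(y == yv)
            if pxy > 0:
                mi += pxy * np.log(pxy / (px * py))
    return mi

def forward_select(genotypes, phenotype, k=3):
    """Greedy SFS: at each step add the SNP maximizing the joint MI of the
    selected set with the phenotype. Joint genotype configurations are
    encoded by hashing row tuples."""
    selected = []
    for _ in range(k):
        best_snp, best_mi = None, -np.inf
        for j in range(genotypes.shape[1]):
            if j in selected:
                continue
            cols = genotypes[:, selected + [j]]
            # encode each individual's joint genotype as a single label
            joint = np.array([hash(tuple(r)) for r in cols])
            mi = mutual_information(joint, phenotype)
            if mi > best_mi:
                best_snp, best_mi = j, mi
        selected.append(best_snp)
    return selected
```

In practice each candidate addition would also be screened for statistical significance before being accepted, as in the paper's variant.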
Evaluation of Regression Models of Balance Calibration Data Using an Empirical Criterion
Ulbrich, Norbert; Volden, Thomas R.
2012-01-01
An empirical criterion for assessing the significance of individual terms of regression models of wind tunnel strain gage balance outputs is evaluated. The criterion is based on the percent contribution of a regression model term. It considers a term to be significant if its percent contribution exceeds the empirical threshold of 0.05%. The criterion has the advantage that it can easily be computed using the regression coefficients of the gage outputs and the load capacities of the balance. First, a definition of the empirical criterion is provided. Then, it is compared with an alternate statistical criterion that is widely used in regression analysis. Finally, calibration data sets from a variety of balances are used to illustrate the connection between the empirical and the statistical criterion. A review of these results indicated that the empirical criterion seems to be suitable for a crude assessment of the significance of a regression model term as the boundary between a significant and an insignificant term cannot be defined very well. Therefore, regression model term reduction should only be performed by using the more universally applicable statistical criterion.
A new elliptic-parabolic yield surface model revised by an adaptive criterion for granular soils
Institute of Scientific and Technical Information of China (English)
Anonymous
2010-01-01
An adaptive criterion for shear yielding as well as shear failure of soils is proposed in this paper to address the fact that most criteria, including the Mohr-Coulomb criterion, the Lade criterion and the Matsuoka-Nakai criterion, cannot agree well with experimental results when the value of the intermediate principal stress parameter is too large. The new criterion can adjust an adaptive parameter based on the experimental results in order to make the theoretical calculations fit the test results more accurately. The original elliptic-parabolic yield surface model can capture both soil contraction and dilation behaviors. However, it normally over-predicts the soil strength due to its application of the Extended Mises criterion. A new elliptic-parabolic yield surface model is presented in this paper, which introduces the adaptive criterion in three-dimensional principal stress space. The new model can represent the stress-strain behavior of soils well under general stress conditions. Compared to the original model, which can only simulate soil behavior under triaxial compression conditions, the new model can simulate soil behavior under both triaxial compression and general stress conditions.
Comparison between cohesive zone models and a coupled criterion for prediction of edge debonding
Vandellos, T.; Martin, E.; Leguillon, D.
2014-01-01
The onset of edge debonding within a bonded specimen subjected to bending is modeled with two numerical approaches: the coupled criterion and the cohesive zone model. Comparison of the results obtained with both approaches shows that (i) the prediction of edge debonding strongly depends on the shape of the cohesive law and (ii) the trapezoidal cohesive law is the most relevant model for predicting edge debonding when compared with the coupled criterion.
Improved similarity criterion for seepage erosion using mesoscopic coupled PFC-CFD model
Institute of Scientific and Technical Information of China (English)
倪小东; 王媛; 陈珂; 赵帅龙
2015-01-01
Conventional model tests and centrifuge tests are frequently used to investigate seepage erosion. However, the centrifugal test method may not be efficient according to the results of hydraulic conductivity tests and piping erosion tests. The reason why seepage deformation in model tests may deviate from similarity was first discussed in this work. Then, the similarity criterion for seepage deformation in porous media was improved based on the extended Darcy-Brinkman-Forchheimer equation. Finally, a coupled particle flow code-computational fluid dynamics (PFC-CFD) model at the mesoscopic level was proposed to verify the derived similarity criterion. The proposed model maximizes its potential to simulate seepage erosion via the discrete element method and satisfies the similarity criterion by adjusting particle size. The numerical simulations yielded results consistent with the prototype, indicating that a PFC-CFD model that satisfies the improved similarity criterion can accurately reproduce the processes of seepage erosion at the mesoscopic level.
Bayesian Model Selection and Statistical Modeling
Ando, Tomohiro
2010-01-01
Bayesian model selection is a fundamental part of the Bayesian statistical modeling process. The quality of these solutions usually depends on the goodness of the constructed Bayesian model. Realizing how crucial this issue is, many researchers and practitioners have been extensively investigating the Bayesian model selection problem. This book provides comprehensive explanations of the concepts and derivations of the Bayesian approach for model selection and related criteria, including the Bayes factor, the Bayesian information criterion (BIC), the generalized BIC, and the pseudo marginal likelihood
A robust circle criterion observer with application to neural mass models
2012-01-01
A robust circle criterion observer is designed and applied to neural mass models. At present, no existing circle criterion observers apply to the considered models, i.e. the required linear matrix inequality is infeasible. Therefore, we generalise available results to derive a suitable estimation algorithm. Additionally, the design also takes into account input uncertainty and measurement noise. We show how to apply the observer to estimate the mean membrane potential ...
Assessing Model Selection Uncertainty Using a Bootstrap Approach: An Update
Lubke, Gitta H.; Campbell, Ian; McArtor, Dan; Miller, Patrick; Luningham, Justin; van den Berg, Stéphanie Martine
2017-01-01
Model comparisons in the behavioral sciences often aim at selecting the model that best describes the structure in the population. Model selection is usually based on fit indexes such as Akaike’s information criterion (AIC) or Bayesian information criterion (BIC), and inference is done based on the
Beretvas, S. Natasha; Murphy, Daniel L.
2013-01-01
The authors assessed correct model identification rates of Akaike's information criterion (AIC), corrected criterion (AICC), consistent AIC (CAIC), Hannon and Quinn's information criterion (HQIC), and Bayesian information criterion (BIC) for selecting among cross-classified random effects models. Performance of default values for the 5…
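The five criteria compared above are simple functions of the maximized log-likelihood, the parameter count k, and the sample size n. A minimal sketch using the textbook forms (some software packages differ from these by an additive constant):

```python
import math

def info_criteria(log_lik, k, n):
    """Standard forms of AIC, AICC, BIC, CAIC and HQIC.

    log_lik: maximized log-likelihood of the fitted model
    k:       number of estimated parameters
    n:       sample size
    """
    aic = -2 * log_lik + 2 * k
    aicc = aic + 2 * k * (k + 1) / (n - k - 1)      # small-sample correction
    bic = -2 * log_lik + k * math.log(n)
    caic = -2 * log_lik + k * (math.log(n) + 1)     # consistent AIC
    hqic = -2 * log_lik + 2 * k * math.log(math.log(n))
    return {"AIC": aic, "AICC": aicc, "BIC": bic, "CAIC": caic, "HQIC": hqic}
```

In a model comparison each candidate model is scored this way and the model with the smallest value of the chosen criterion is selected.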
Toropova, Alla P; Toropov, Andrey A
2017-05-15
A new criterion of the predictive potential of quantitative structure-property/activity relationships (QSPRs/QSARs) is suggested. This criterion is calculated using the correlation coefficient between experimental and calculated endpoint values for the calibration set, taking into account the positive and negative dispersions between experimental and calculated values. Use of this criterion improves the predictive potential of QSAR models of the dermal permeability coefficient, logKp (cm/h).
Bridging AIC and BIC: a new criterion for autoregression
Jie Ding; Vahid Tarokh; Yuhong Yang
2015-01-01
We introduce a new criterion to determine the order of an autoregressive model fitted to time series data. It has the benefits of the two well-known model selection techniques, the Akaike information criterion and the Bayesian information criterion. When the data is generated from a finite order autoregression, the Bayesian information criterion is known to be consistent, and so is the new criterion. When the true order is infinity or suitably high with respect to the sample size, the Akaike ...
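The order-selection setting the new criterion builds on can be sketched as follows: fit AR(p) by least squares for each candidate order and pick the order minimizing each criterion. This is an illustrative sketch of the AIC/BIC baseline only; the bridging criterion itself is not reproduced here.

```python
import numpy as np

def ar_fit_rss(x, p):
    """Least-squares fit of an AR(p) model; returns the residual sum of
    squares and the number of fitted observations."""
    n = len(x)
    # column i holds x lagged by i+1 steps
    X = np.column_stack([x[p - i - 1 : n - i - 1] for i in range(p)])
    y = x[p:]
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss = float(np.sum((y - X @ coef) ** 2))
    return rss, len(y)

def select_order(x, pmax=8):
    """Order minimizing AIC and BIC under a Gaussian likelihood, for which
    -2 log L reduces (up to constants) to n * log(rss / n)."""
    best = {}
    for name, pen in [("AIC", lambda p, n: 2 * p),
                      ("BIC", lambda p, n: p * np.log(n))]:
        scores = [ar_fit_rss(x, p)[0] for p in range(1, pmax + 1)]
        scores = [len(x[p:]) * np.log(scores[p - 1] / len(x[p:])) + pen(p, len(x[p:]))
                  for p in range(1, pmax + 1)]
        best[name] = int(np.argmin(scores)) + 1
    return best
```

With data generated from a finite-order autoregression, the BIC column is the one that recovers the true order consistently as the sample grows, which is the behavior the new criterion aims to preserve.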
Fang, L.; Sun, X. Y.; Liu, Y. W.
2016-12-01
To shed light on the subgrid-scale (SGS) modelling methodology, we analyze and define the concepts of assumption and restriction in the modelling procedure, then show by a generalized derivation that if a model has multiple stationary restrictions, the corresponding assumption function must satisfy a criterion of orthogonality. Numerical tests using the one-dimensional nonlinear advection equation are performed to validate this criterion. This study is expected to inspire future research on general guidance of the SGS modelling methodology.
Directory of Open Access Journals (Sweden)
Acquah Henry de-Graft
2012-01-01
This study addresses the problem of model selection in asymmetric price transmission models by combining the use of bootstrap methods with information-theoretic selection criteria. A parametric bootstrap technique is used to select the best model according to Akaike's Information Criterion (AIC) and the Bayesian Information Criterion (BIC). Bootstrap simulation results indicated that the performance of AIC and BIC is affected by the size of the data, the level of asymmetry and the amount of noise in the model used in the application. This study further establishes that BIC is consistent and outperforms AIC in selecting the correct asymmetric price relationship when the bootstrap sample size is large.
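The bootstrap scheme just described can be sketched for a pair of nested linear models. This is a generic-regression sketch under stated assumptions: the asymmetric price transmission specification of the study is replaced by ordinary least squares, and BIC is used as the selection rule.

```python
import numpy as np

def bic(rss, n, k):
    """Gaussian BIC up to an additive constant."""
    return n * np.log(rss / n) + k * np.log(n)

def fit(X, y):
    """OLS fit; returns coefficients and residual sum of squares."""
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss = float(np.sum((y - X @ coef) ** 2))
    return coef, rss

def bootstrap_selection(X_small, X_big, y, B=200, seed=0):
    """Parametric bootstrap: fit the larger model, simulate B new responses
    from it, refit both candidates, and count how often BIC prefers each."""
    rng = np.random.default_rng(seed)
    n = len(y)
    coef, rss = fit(X_big, y)
    sigma = np.sqrt(rss / n)
    wins = {"small": 0, "big": 0}
    for _ in range(B):
        yb = X_big @ coef + rng.normal(0.0, sigma, n)
        _, rss_s = fit(X_small, yb)
        _, rss_b = fit(X_big, yb)
        small_better = bic(rss_s, n, X_small.shape[1]) < bic(rss_b, n, X_big.shape[1])
        wins["small" if small_better else "big"] += 1
    return wins
```

The tally over bootstrap replicates is what lets the study measure how selection performance varies with sample size, asymmetry and noise.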
Asada, Tetsuhiro
2013-06-01
The plane of symmetric plant cell division tends to be selected so that the new cross-wall halving the cell volume has the least possible area, and several cases of such selection are best represented by a recently formulated model which promotes the view that the strength of the least area tendency is the only criterion for selecting the plane. To test this model, the present study examined the divisions of two types of shape-standardized tobacco BY-2 cell, oblate-spheroidal (os) cells prepared from protoplasts and spheri-cylindrical (sc) cells with unusual double-wall structures prepared from plasmolyzed cells. Measurements of cell shape parameters and division angles revealed that both cell types most frequently divide nearly along their short axes. While os cells did not exhibit any other division angle bias, sc cell division was characterized by another bias which made the frequency of longitudinal divisions secondarily high. The geometry of sc cells barely allows the longitudinal cross-walls to have locally minimum areas. Nevertheless, a comparison of detected and hypothetical standard divisions indicates that the frequency of longitudinal sc cell division can be significantly higher than that predicted when the longitudinal cross-walls are assumed to have locally minimum areas smaller than their original areas. These results suggest that, even in isolated plant cell types, the strength of the least area tendency is not the only criterion for selecting the division plane. The possibility that there is another basic, though often hidden, criterion is discussed.
Model selection and comparison for independent sinusoids
DEFF Research Database (Denmark)
Nielsen, Jesper Kjær; Christensen, Mads Græsbøll; Jensen, Søren Holdt
2014-01-01
In the signal processing literature, many methods have been proposed for estimating the number of sinusoidal basis functions from a noisy data set. The most popular method is the asymptotic MAP criterion, which is sometimes also referred to as the BIC. In this paper, we extend and improve this method by considering the problem in a full Bayesian framework instead of the approximate formulation on which the asymptotic MAP criterion is based. This leads to a new model selection and comparison method, the lp-BIC, whose computational complexity is of the same order as the asymptotic MAP criterion. Through simulations, we demonstrate that the lp-BIC outperforms the asymptotic MAP criterion and other state-of-the-art methods in terms of model selection, de-noising and prediction performance. The simulation code is available online.
A novel criterion for determination of material model parameters
Andrade-Campos, A.; de-Carvalho, R.; Valente, R. A. F.
2011-05-01
Parameter identification problems have emerged due to the increasing demand for precision in the numerical results obtained by Finite Element Method (FEM) software. High result precision can only be obtained with reliable input data and robust numerical techniques. The determination of parameters should always be performed by confronting numerical and experimental results, leading to the minimum difference between them. However, the success of this task depends on the specification of the cost/objective function, defined as the difference between the experimental and the numerical results. Recently, various objective functions have been formulated to assess the errors between the experimental and computed data (Lin et al., 2002; Cao and Lin, 2008; among others). The objective functions should be able to efficiently lead the optimisation process. An ideal objective function should have the following properties: (i) all the experimental data points on the curve and all experimental curves should have equal opportunity to be optimised; and (ii) different units and/or the number of curves in each sub-objective should not affect the overall performance of the fitting. These two criteria should be achieved without manually choosing the weighting factors. However, for some non-analytical specific problems, this is very difficult in practice. Null experimental or numerical values also make the task difficult. In this work, a novel objective function for constitutive model parameter identification is presented. It is a generalization of the work of Cao and Lin, and it is suitable for all kinds of constitutive models and mechanical tests, including cyclic tests and Bauschinger tests with null values.
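Properties (i) and (ii) above can be illustrated with a simple dimensionless objective in which each curve is normalized by its own experimental scale, so every curve contributes equally and units cancel. This is an illustrative sketch, not the Cao-Lin generalization actually proposed in the paper, and the max-|experimental| scaling is an assumed choice.

```python
import numpy as np

def dimensionless_objective(curves):
    """curves: list of (experimental, simulated) array pairs.

    Each curve's squared residuals are normalized by the curve's own
    experimental scale and averaged, then curves are averaged equally,
    so no manual weighting factors are needed."""
    total = 0.0
    for exp_y, sim_y in curves:
        exp_y = np.asarray(exp_y, float)
        sim_y = np.asarray(sim_y, float)
        scale = np.max(np.abs(exp_y))
        if scale == 0.0:          # guard against all-zero experimental data
            scale = 1.0
        total += np.mean(((sim_y - exp_y) / scale) ** 2)
    return total / len(curves)
```

Because each curve is scaled by its own magnitude, a stress-strain curve in MPa and a force-displacement curve in kN contribute comparably to the same optimisation.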
A Bayesian outlier criterion to detect SNPs under selection in large data sets.
Directory of Open Access Journals (Sweden)
Mathieu Gautier
BACKGROUND: The recent advent of high-throughput SNP genotyping technologies has opened new avenues of research for population genetics. In particular, a growing interest in the identification of footprints of selection, based on genome scans for adaptive differentiation, has emerged. METHODOLOGY/PRINCIPAL FINDINGS: The purpose of this study is to develop an efficient model-based approach to perform Bayesian exploratory analyses for adaptive differentiation in very large SNP data sets. The basic idea is to start with a very simple model for neutral loci that is easy to implement under a Bayesian framework and to identify selected loci as outliers via Posterior Predictive P-values (PPP-values). Applications of this strategy are considered using two different statistical models. The first one was initially interpreted in the context of populations evolving under pure genetic drift from a common ancestral population, while the second one relies on populations under migration-drift equilibrium. The robustness and power of the two resulting Bayesian model-based approaches to detect SNPs under selection are further evaluated through extensive simulations. An application to a cattle data set is also provided. CONCLUSIONS/SIGNIFICANCE: The procedure described turns out to be much faster than former Bayesian approaches and also reasonably efficient, especially at detecting loci under positive selection.
Institute of Scientific and Technical Information of China (English)
TENG Hong-Hui; JIANG Zong-Lin
2011-01-01
One-dimensional detonation waves are simulated with the three-step chain-branching reaction model, and the instability criterion is studied. The ratio of the induction zone length to the reaction zone length may be used to determine the instability, and the detonation becomes unstable at high ratios. However, the ratio is not invariable for different heat release values. The critical ratio, corresponding to the transition from stable to unstable detonation, has a negative correlation with the heat release. An empirical relation between the Chapman-Jouguet Mach number and the length ratio is proposed as the instability criterion.
The index of ideality of correlation: A criterion of predictive potential of QSPR/QSAR models?
Toropov, Andrey A; Toropova, Alla P
2017-07-01
The index of ideality of correlation (IIC) is a new criterion of the predictive potential of quantitative structure-property/activity relationships (QSPRs/QSARs). The IIC is calculated using the correlation coefficient between experimental and calculated endpoint values for the calibration set, taking into account the positive and negative dispersions between experimental and calculated values. Mutagenicity is a well-known, ecologically important characteristic of substances, so estimating the IIC for mutagenicity is well motivated. It is confirmed that utilization of this criterion significantly improves the predictive potential of QSAR models of mutagenicity. The new criterion can be used for other endpoints.
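A sketch of the IIC computation, assuming the commonly quoted form IIC = r x min(MAE-, MAE+) / max(MAE-, MAE+), where MAE- and MAE+ are the mean absolute errors over negative and positive residuals (observed minus calculated). This formula is reproduced from the literature as an assumption and should be verified against the original paper before use.

```python
import numpy as np

def iic(obs, calc):
    """Index of ideality of correlation, under the assumed form:
    IIC = r * min(MAE-, MAE+) / max(MAE-, MAE+).
    r is the usual Pearson correlation; the MAE ratio penalizes models whose
    errors are systematically one-sided."""
    obs = np.asarray(obs, float)
    calc = np.asarray(calc, float)
    d = obs - calc
    r = np.corrcoef(obs, calc)[0, 1]
    mae_neg = np.mean(np.abs(d[d < 0])) if np.any(d < 0) else 0.0
    mae_pos = np.mean(np.abs(d[d >= 0])) if np.any(d >= 0) else 0.0
    lo, hi = sorted([mae_neg, mae_pos])
    return r * (lo / hi if hi > 0 else 1.0)
```

When residuals are balanced between over- and under-prediction the ratio is 1 and the IIC equals the correlation coefficient; one-sided errors shrink it toward zero.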
Canopy Temperature Depression as a Potential Selection Criterion for Drought Resistance in Wheat
Institute of Scientific and Technical Information of China (English)
Anonymous
2005-01-01
(R2=0.46-0.58) conditions. These results clearly indicated that grain yield and water stress can be predicted by taking CTD values in the field, which can be used by breeding programs as a potential selection criterion for grain yield and drought resistance in wheat, although a second study year is needed for further confirmation.
The Weierstrass Criterion and the Lemaitre-Tolman-Bondi Models with Cosmological Constant Λ
Bochicchio, Ivana; Laserra, Ettore
2011-01-01
We analyze Lemaitre-Tolman-Bondi models in the presence of the cosmological constant Λ through the classical Weierstrass criterion. Precisely, we show that the Weierstrass approach allows us to classify the dynamics of these inhomogeneous spherically symmetric Universes, taking into account their relationship with the sign of Λ.
A Mean-Variance Criterion for Economic Model Predictive Control of Stochastic Linear Systems
DEFF Research Database (Denmark)
Sokoler, Leo Emil; Dammann, Bernd; Madsen, Henrik;
2014-01-01
Stochastic linear systems arise in a large number of control applications. This paper presents a mean-variance criterion for economic model predictive control (EMPC) of such systems. The system operating cost and its variance are approximated based on a Monte-Carlo approach. Using convex relaxation...
Learning image based surrogate relevance criterion for atlas selection in segmentation
Zhao, Tingting; Ruan, Dan
2016-06-01
Picking geometrically relevant atlases from the whole training set is crucial to multi-atlas based image segmentation, especially with extensive data of heterogeneous quality in the Big Data era. Unfortunately, there is very limited understanding of how currently used image similarity criteria reveal geometric relevance, let alone the optimization of them. This paper aims to develop a good image based surrogate relevance criterion to best reflect the underlying inaccessible geometric relevance in a learning context. We cast this surrogate learning problem into an optimization framework, by encouraging the image based surrogate to behave consistently with geometric relevance during training. In particular, we desire a criterion to be small for image pairs with similar geometry and large for those with significantly different segmentation geometry. Validation experiments on corpus callosum segmentation demonstrate the improved quality of the learned surrogate compared to benchmark surrogate candidates.
Directory of Open Access Journals (Sweden)
Ochrana František
2015-06-01
Through the institute of public procurement, a considerable volume of financial resources is allocated. It is therefore in the interest of contracting entities to seek ways to achieve an efficient allocation of resources. Some public contract-awarding entities, along with some public-administration authorities in the Czech Republic, believe that the use of a single evaluation criterion (the lowest bid price) results in a more efficient tender for a public contract. It was found that contracting entities in the Czech Republic strongly prefer to use the lowest bid price criterion. Within the examined sample, 86.5% of public procurements were evaluated this way. The analysis of the examined sample of public contracts proved that the choice of an evaluation criterion, even the preference for the lowest bid price criterion, does not have any obvious impact on the final cost of a public contract. The study concludes that it is inappropriate to prefer the criterion of the lowest bid price in the evaluation of public contracts that are characterised by their complexity (including public contracts for construction works and public service contracts). The findings of the Supreme Audit Office related to the inspection of public contracts indicate that when using the lowest bid price as an evaluation criterion, a public contract may indeed be tendered with the lowest bid price, but not necessarily the best offer in terms of supplied quality. It is therefore not appropriate to use the lowest bid price evaluation criterion to such an extent for the purpose of evaluating works and services. Any improvement to this situation requires a corresponding amendment to the Law on Public Contracts and, mainly, a radical change in the attitude of the Office for the Protection of Competition towards the proposed changes, as indicated in the conclusions and recommendations of this study.
A termination criterion for parameter estimation in stochastic models in systems biology.
Zimmer, Christoph; Sahle, Sven
2015-11-01
Parameter estimation procedures are a central aspect of modeling approaches in systems biology. They are often computationally expensive, especially when the models take stochasticity into account. Typically parameter estimation involves the iterative optimization of an objective function that describes how well the model fits some measured data with a certain set of parameter values. In order to limit the computational expenses it is therefore important to apply an adequate stopping criterion for the optimization process, so that the optimization continues at least until a reasonable fit is obtained, but not much longer. In the case of stochastic modeling, at least some parameter estimation schemes involve an objective function that is itself a random variable. This means that plain convergence tests are not a priori suitable as stopping criteria. This article suggests a termination criterion suited to optimization problems in parameter estimation arising from stochastic models in systems biology. The termination criterion is developed for optimization algorithms that involve populations of parameter sets, such as particle swarm or evolutionary algorithms. It is based on comparing the variance of the objective function over the whole population of parameter sets with the variance of repeated evaluations of the objective function at the best parameter set. The performance is demonstrated for several different algorithms. To test the termination criterion we choose polynomial test functions as well as systems biology models such as an Immigration-Death model and a bistable genetic toggle switch. The genetic toggle switch is an especially challenging test case as it shows a stochastic switching between two steady states which is qualitatively different from the model behavior in a deterministic model.
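The variance-comparison idea above can be sketched for a population-based optimizer as follows. This is a minimal sketch under stated assumptions: the tolerance factor and repeat count are illustrative choices, not values from the paper.

```python
import numpy as np

def should_terminate(objective, population, best, repeats=20, factor=3.0, rng=None):
    """Stop the optimizer once the spread of objective values across the
    population is no longer distinguishable from the intrinsic noise of the
    stochastic objective at the best parameter set.

    objective: callable (params, rng) -> noisy objective value
    population: current parameter sets of the population-based optimizer
    best: best parameter set found so far
    """
    rng = rng or np.random.default_rng()
    # spread of the objective over the whole population
    pop_var = np.var([objective(p, rng) for p in population])
    # noise floor: repeated evaluations at the single best parameter set
    noise_var = np.var([objective(best, rng) for _ in range(repeats)])
    return bool(pop_var <= factor * noise_var)
</```

This check would be called once per generation of a particle swarm or evolutionary algorithm; while the population is still exploring, its objective spread dwarfs the evaluation noise, and the two variances converge as the population collapses onto a fit region.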
Lactate dehydrogenase as a selection criterion for ipilimumab treatment in metastatic melanoma
DEFF Research Database (Denmark)
Kelderman, Sander; Heemskerk, Bianca; van Tinteren, Harm;
2014-01-01
OS was 7.5 months; OS at 1 year was 37.8% and at 2 years was 22.9%. In a multivariate model, baseline serum lactate dehydrogenase (LDH) was demonstrated to be the strongest predictive factor for OS. These findings were validated in an independent cohort of 64 patients from the UK. In both the NL and UK cohorts, long-term benefit of ipilimumab treatment was unlikely for patients with baseline serum LDH greater than twice the upper limit of normal. In the absence of prospective data, clinicians treating melanoma may wish to consider the data presented here to guide patient selection...
A New Ductile Fracture Criterion for Various Deformation Conditions Based on Microvoid Model
Institute of Scientific and Technical Information of China (English)
HUANG Jian-ke; DONG Xiang-huai
2009-01-01
To accurately predict the occurrence of ductile fracture in metal forming processes, the Gurson-Tvergaard (GT) porous material model with optimized adjustment parameters is adopted to analyze the macroscopic stress-strain response, and a practical void nucleation law with a few material constants is proposed for engineering applications. Mechanical and metallographic analyses of uniaxial tension, torsion and upsetting experiments are performed. According to the character of the metal forming processes, the basic mechanisms of ductile fracture are divided into two modes: a tension-type mode and a shear-type mode. A unified fracture criterion is proposed for a wide applicable range, and the comparison of experimental results with numerical analysis results confirms the validity of the newly proposed ductile fracture criterion based on the GT porous material model.
Robustness: a new SLIP model based criterion for gait transitions in bipedal locomotion
Martinez Salazar, Harold Roberto; Carbajal, Juan Pablo; Ivanenko, Yuri P.
2014-01-01
Bipedal locomotion is a phenomenon that still eludes a fundamental and concise mathematical understanding. Conceptual models that capture some relevant aspects of the process exist, but their full explanatory power is not yet exhausted. In the current study, we introduce the robustness criterion, which defines the conditions for stable locomotion when steps are taken with an imprecise angle of attack. Intuitively, the necessity of higher precision indicates the difficulty of continuing to move with...
A Criterion for Rating the Usability and Accuracy of the One-Diode Models for Photovoltaic Modules
Directory of Open Access Journals (Sweden)
Aldo Orioli
2016-06-01
Full Text Available In selecting a mathematical model for simulating physical behaviours, it is important to reach an acceptable compromise between analytical complexity and achievable precision. With the aim of helping researchers and designers working in the area of photovoltaic systems to make a choice among the numerous diode-based models, a criterion for rating both the usability and accuracy of one-diode models is proposed in this paper. A three-level rating scale, which considers the ease of finding the data used by the analytical procedure, the simplicity of the mathematical tools needed to perform calculations and the accuracy achieved in calculating the current and power, is used. The proposed criterion is tested on some one-diode equivalent circuits whose analytical procedures, hypotheses and equations are minutely reviewed along with the operative steps to calculate the model parameters. To assess the achievable accuracy, the current-voltage (I-V) curves at constant solar irradiance and/or cell temperature obtained from the analysed models are compared to the characteristics issued by photovoltaic (PV) panel manufacturers, and the differences in current and power are calculated. The results of the study highlight that, even if the five-parameter equivalent circuits are suitable tools, different usability ratings and accuracies can be observed.
Directory of Open Access Journals (Sweden)
Xinfeng Ruan
2013-01-01
Full Text Available We study option pricing under a risk-minimization criterion in an incomplete market where the dynamics of the risky underlying asset are governed by a jump diffusion equation. We obtain the Radon-Nikodym derivative in the minimal martingale measure and a partial integro-differential equation (PIDE) for the European call option. In a special case, we get the exact solution for the European call option by Fourier transformation methods. Finally, we employ the pricing kernel to calculate the optimal portfolio selection by martingale methods.
Information criteria for astrophysical model selection
Liddle, A R
2007-01-01
Model selection is the problem of distinguishing competing models, perhaps featuring different numbers of parameters. The statistics literature contains two distinct sets of tools: those based on information theory, such as the Akaike Information Criterion (AIC), and those based on Bayesian inference, such as the Bayesian evidence and the Bayesian Information Criterion (BIC). The Deviance Information Criterion combines ideas from both heritages; it is readily computed from Monte Carlo posterior samples and, unlike the AIC and BIC, allows for parameter degeneracy. I describe the properties of the information criteria, and as an example compute them from WMAP3 data for several cosmological models. I find that at present the information theory and Bayesian approaches give significantly different conclusions from that data.
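The AIC and BIC that recur throughout these abstracts differ only in how they penalize parameter count. A minimal sketch, using hypothetical log-likelihood values that are not taken from any of the studies above:

```python
import math

def aic(log_lik, k):
    """Akaike information criterion: AIC = 2k - 2 ln L."""
    return 2 * k - 2 * log_lik

def bic(log_lik, k, n):
    """Bayesian information criterion: BIC = k ln n - 2 ln L."""
    return k * math.log(n) - 2 * log_lik

# Hypothetical fits to n = 100 points: the larger model (5 parameters)
# barely improves the log-likelihood, so both criteria favour the smaller one.
n = 100
aic_small, bic_small = aic(-120.0, 2), bic(-120.0, 2, n)
aic_large, bic_large = aic(-119.5, 5), bic(-119.5, 5, n)
assert aic_small < aic_large and bic_small < bic_large
```

Because the BIC penalty k ln n grows with sample size while the AIC penalty stays at 2k, BIC favours smaller models more strongly whenever n exceeds about 7.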
Experiments and modeling of ballistic penetration using an energy failure criterion
Directory of Open Access Journals (Sweden)
Dolinski M.
2015-01-01
Full Text Available One of the most intricate problems in terminal ballistics is the physics underlying penetration and perforation. Several penetration modes are well identified, such as petalling, plugging, spall failure and fragmentation (Sedgwick, 1968). In most cases, the final target failure will combine those modes. Some of the failure modes can be due to brittle material behavior, but penetration of ductile targets by blunt projectiles, involving plugging in particular, is caused by excessive localized plasticity, with emphasis on adiabatic shear banding (ASB). Among the theories regarding the onset of ASB, new evidence was recently brought by Rittel et al. (2006), according to whom shear bands initiate as a result of dynamic recrystallization (DRX), a local softening mechanism driven by the stored energy of cold work. As such, ASB formation results from microstructural transformations, rather than from thermal softening. In our previous work (Dolinski et al., 2010), a failure criterion based on plastic strain energy density was presented and applied to model four different classical examples of dynamic failure involving ASB formation. According to this criterion, a material point starts to fail when the total plastic strain energy density reaches a critical value. Thereafter, the strength of the element decreases gradually to zero to mimic the actual material mechanical behavior. The goal of this paper is to present a new combined experimental-numerical study of ballistic penetration and perforation, using the above-mentioned failure criterion. Careful experiments are carried out using a single combination of AISI 4340 FSP projectiles and 25 mm thick RHA steel plates, while the impact velocity, and hence the imparted damage, are systematically varied. We show that our failure model, which includes only one adjustable parameter in this present work, can faithfully reproduce each of the experiments without any further adjustment. Moreover, it is shown that the
A new mixed-mode fracture criterion for large scale lattice models
Directory of Open Access Journals (Sweden)
T. Sachau
2013-08-01
Full Text Available Reasonable fracture criteria are crucial for the modeling of dynamic failure in computational spring lattice models. Successful criteria, based on the stress that a spring experiences, exist for experiments on the micro and meso scales. In this paper we test the applicability of these failure criteria to large scale models, where gravity plays an important role in addition to the externally applied deformation. The resulting brittle structures do not resemble the outcome predicted by fracture mechanics and geological observations. For this reason we derive an elliptical fracture criterion, which is based on the strain energy stored in a spring. Simulations using the new criterion result in realistic structures. Another great advantage of this fracture model is that it can be combined with classic geological material parameters: the tensile strength σ0 and the shear cohesion τ0. While we tested the fracture model only for large scale structures, there is strong reason to believe that the model is equally applicable to lattice simulations on the micro and the meso scale.
A viscoelastic-plastic constitutive model with Mohr-Coulomb yielding criterion for sea ice dynamics
Institute of Scientific and Technical Information of China (English)
Anonymous
2005-01-01
A new viscoelastic-plastic (VEP) constitutive model for sea ice dynamics was developed based on continuum mechanics. This model consists of four components: a Kelvin-Voigt viscoelastic model, the Mohr-Coulomb yielding criterion, an associated normality flow rule for plastic rheology, and hydrostatic pressure. Numerical simulations of ice motion in an idealized rectangular basin were made using the smoothed particle hydrodynamics (SPH) method, and compared with the analytical solution as well as with results based on the modified viscous-plastic (VP) model and static ice jam theory. These simulations show that the new VEP model can simulate ice dynamics accurately. The new constitutive model was further applied to simulate ice dynamics of the Bohai Sea and compared with the traditional VP and modified VP models. The results of the VEP model compare better with the satellite remote sensing images, and the simulated ice conditions in the JZ20-2 oil platform area were more reasonable.
A Focused Bayesian Information Criterion
Directory of Open Access Journals (Sweden)
Georges Nguefack-Tsague
2014-01-01
Full Text Available Myriads of model selection criteria (Bayesian and frequentist have been proposed in the literature aiming at selecting a single model regardless of its intended use. An honorable exception in the frequentist perspective is the “focused information criterion” (FIC aiming at selecting a model based on the parameter of interest (focus. This paper takes the same view in the Bayesian context; that is, a model may be good for one estimand but bad for another. The proposed method exploits the Bayesian model averaging (BMA machinery to obtain a new criterion, the focused Bayesian model averaging (FoBMA, for which the best model is the one whose estimate is closest to the BMA estimate. In particular, for two models, this criterion reduces to the classical Bayesian model selection scheme of choosing the model with the highest posterior probability. The new method is applied in linear regression, logistic regression, and survival analysis. This criterion is specially important in epidemiological studies in which the objective is often to determine a risk factor (focus for a disease, adjusting for potential confounding factors.
Gervais, Olivier; Nirasawa, Keijiro; Vincenot, Christian E; Nagamine, Yoshitaka; Moriya, Kazuyuki
2017-02-01
Although non-destructive deformation is relevant for assessing eggshell strength, few long-term selection experiments are documented which use non-destructive deformation as a selection criterion. This study used restricted maximum likelihood-based methods with a four-trait animal model to analyze the effect of non-destructive deformation on egg production, egg weight and sexual maturity in a two-way selection experiment involving 17 generations of White Leghorns. In the strong shell line, corresponding to the line selected for low non-destructive deformation values, the heritability estimates were 0.496 for non-destructive deformation, 0.253 for egg production, 0.660 for egg weight and 0.446 for sexual maturity. In the weak shell line, corresponding to the line selected for high non-destructive deformation values, the heritabilities were 0.372, 0.162, 0.703 and 0.404, respectively. An asymmetric response to selection was observed for non-destructive deformation, egg production and sexual maturity, whereas egg weight decreased for both lines. Using non-destructive deformation to select for stronger eggshell had a small negative effect on egg production and sexual maturity, suggesting the need for breeding programs to balance selection between eggshell traits and egg production traits. However, the analysis of the genetic correlation between non-destructive deformation and egg weight revealed that large eggs are not associated with poor eggshell quality.
IT2 Fuzzy-Rough Sets and Max Relevance-Max Significance Criterion for Attribute Selection.
Maji, Pradipta; Garai, Partha
2015-08-01
One of the important problems in pattern recognition, machine learning, and data mining is the dimensionality reduction by attribute or feature selection. In this regard, this paper presents a feature selection method, based on interval type-2 (IT2) fuzzy-rough sets, where the features are selected by maximizing both relevance and significance of the features. By introducing the concept of lower and upper fuzzy equivalence partition matrices, the lower and upper relevance and significance of the features are defined for IT2 fuzzy approximation spaces. Different feature evaluation criteria such as dependency, relevance, and significance are presented for attribute selection task using IT2 fuzzy-rough sets. The performance of IT2 fuzzy-rough sets is compared with that of some existing feature evaluation indices including classical rough sets, neighborhood rough sets, and type-1 fuzzy-rough sets. The effectiveness of the proposed IT2 fuzzy-rough set-based attribute selection method, along with a comparison with existing feature selection and extraction methods, is demonstrated on several real-life data.
Phemister, Art W.
The purpose of this study was to evaluate the effectiveness of the Georgia's Choice reading curriculum on third grade science scores on the Georgia Criterion Referenced Competency Test from 2002 to 2008. In assessing the effectiveness of the Georgia's Choice curriculum model this causal comparative study examined the 105 elementary schools that implemented Georgia's Choice and 105 randomly selected elementary schools that did not elect to use Georgia's Choice. The Georgia's Choice reading program used intensified instruction in an effort to increase reading levels for all students. The study used a non-equivalent control group with a pretest and posttest design to determine the effectiveness of the Georgia's Choice curriculum model. Findings indicated that third grade students in Non-Georgia's Choice schools outscored third grade students in Georgia's Choice schools across the span of the study.
Directory of Open Access Journals (Sweden)
Chia-Lee Yang
2015-05-01
Full Text Available Disaster recovery sites are an important mechanism for continuous IT system operations. Such mechanisms can sustain IT availability and reduce business losses during natural or human-made disasters. Concerning the cost and risk aspects, IT disaster-recovery site selection problems are multi-criterion decision making (MCDM) problems in nature. For such problems, the decision aspects include the availability of the service, recovery time requirements, service performance, and more. The importance and complexity of IT disaster recovery sites increase with advances in IT and the categories of possible disasters. The modern IT disaster recovery site selection process therefore requires further investigation. However, to the best of the authors' knowledge, very few researchers have studied related issues in recent years. Thus, this paper aims to derive the aspects and criteria for evaluating and selecting a modern IT disaster recovery site. A hybrid MCDM framework consisting of the Decision Making Trial and Evaluation Laboratory (DEMATEL) and the Analytic Network Process (ANP) is proposed to construct the complex influence relations between aspects as well as criteria and, further, to derive the weight associated with each aspect and criterion. The criteria with higher weights can be used for evaluating and selecting the most suitable IT disaster recovery sites. In the future, the proposed analytic framework can be used for evaluating and selecting a disaster recovery site for data centers by public institutes or private firms.
Zhu, L; Carlin, B P
Bayes and empirical Bayes methods have proven effective in smoothing crude maps of disease risk, eliminating the instability of estimates in low-population areas while maintaining overall geographic trends and patterns. Recent work extends these methods to the analysis of areal data which are spatially misaligned, that is, involving variables (typically counts or rates) which are aggregated over differing sets of regional boundaries. The addition of a temporal aspect complicates matters further, since now the misalignment can arise either within a given time point, or across time points (as when the regional boundaries themselves evolve over time). Hierarchical Bayesian methods (implemented via modern Markov chain Monte Carlo computing methods) enable the fitting of such models, but a formal comparison of their fit is hampered by their large size and often improper prior specifications. In this paper, we accomplish this comparison using the deviance information criterion (DIC), a recently proposed generalization of the Akaike information criterion (AIC) designed for complex hierarchical model settings like ours. We investigate the use of the delta method for obtaining an approximate variance estimate for DIC, in order to attach significance to apparent differences between models. We illustrate our approach using a spatially misaligned data set relating a measure of traffic density to paediatric asthma hospitalizations in San Diego County, California.
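The DIC used above is computed from posterior draws rather than from a maximum-likelihood fit. A minimal sketch for a scalar parameter, with a toy normal likelihood and stand-in random draws in place of real MCMC output (the data values and posterior shape are assumptions for illustration):

```python
import math
import random

def dic(samples, log_lik):
    """Deviance information criterion from posterior draws:
    DIC = Dbar + pD, where deviance D(theta) = -2 log L(theta),
    Dbar is the posterior mean deviance, and pD = Dbar - D(posterior mean)."""
    deviances = [-2.0 * log_lik(th) for th in samples]
    d_bar = sum(deviances) / len(deviances)
    theta_bar = sum(samples) / len(samples)
    p_d = d_bar - (-2.0 * log_lik(theta_bar))
    return d_bar + p_d

# Toy setting: normal data with unit variance, unknown mean theta.
data = [0.1, -0.4, 0.3, 0.2]
def log_lik(theta):
    return sum(-0.5 * math.log(2 * math.pi) - 0.5 * (x - theta) ** 2
               for x in data)

random.seed(1)
draws = [random.gauss(0.05, 0.5) for _ in range(5000)]  # stand-in for MCMC draws
print(round(dic(draws, log_lik), 2))
```

The effective number of parameters pD is what lets DIC, unlike AIC, handle hierarchical models whose nominal parameter count overstates their complexity.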
Directory of Open Access Journals (Sweden)
Rodzkin Aleh I.
2015-01-01
Full Text Available Bioenergy production based on short rotation coppice (SRC) willow plantations is an effective approach for both economic and environmental benefit. The yield of willow wood can amount to 10-15 tons of dry biomass per hectare per year, and the cost of the energy thus obtained is lower in comparison with other energy crops. In order to achieve high yield and profitability, the use of special willow clones is necessary. The species most often used in selection for biomass production are shrub-type willows: Salix viminalis, Salix dasyclados and Salix schwerinii, while the clones tested in this paper were also of the tree species Salix alba. The productivity and some physiological characteristics of the Serbian selection clones of Salix alba (Bačka, Volmianka and Drina) and the Swedish selection clone Jorr (Salix viminalis) were investigated in greenhouses and in field conditions. As a result of this testing, the three clones of Salix alba - Bačka, Volmianka and Drina - which show particular adaptability to different environmental conditions, were included in the State Register of the Republic of Belarus in 2013. It was also satisfactory that in our experiment the specific properties of the willows (intensity of transpiration and photosynthesis, water use efficiency and others) were conserved both in greenhouses and in field conditions. This makes it possible to select promising clones of willows at an early stage of ontogenesis for further testing.
Using the Correlation Criterion to Position and Shape RBF Units for Incremental Modelling
Institute of Scientific and Technical Information of China (English)
Xun-Xian Wang; Sheng Chen; Chris J. Harris
2006-01-01
A novel technique is proposed for the incremental construction of sparse radial basis function (RBF) networks. The correlation between an RBF regressor and the training data is used as the criterion to position and shape the RBF node, and it is shown that this is equivalent to incrementally minimising the modelling mean square error. A guided random search optimisation method, called the repeated weighted boosting search, is adopted to append RBF nodes one by one in an incremental regression modelling procedure. The experimental results obtained using the proposed method demonstrate that it provides a viable alternative to the existing state-of-the-art modelling techniques for constructing parsimonious RBF models that generalise well.
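The correlation criterion described above can be sketched as follows: among candidate centres, append the RBF node whose response over the training inputs correlates most strongly with the current model residual. This is an illustrative reconstruction, not the authors' code; the Gaussian basis, fixed width, and candidate set are all assumptions:

```python
import math

def gaussian_rbf(x, center, width):
    """Gaussian radial basis function response at x."""
    return math.exp(-(x - center) ** 2 / (2.0 * width ** 2))

def correlation(u, v):
    """Pearson correlation of two equal-length sequences."""
    n = len(u)
    mu, mv = sum(u) / n, sum(v) / n
    cov = sum((a - mu) * (b - mv) for a, b in zip(u, v))
    su = math.sqrt(sum((a - mu) ** 2 for a in u))
    sv = math.sqrt(sum((b - mv) ** 2 for b in v))
    return cov / (su * sv) if su > 0 and sv > 0 else 0.0

def select_next_center(xs, residual, candidates, width):
    """Pick the candidate centre whose RBF response correlates most
    strongly (in magnitude) with the residual of the current model."""
    return max(candidates,
               key=lambda c: abs(correlation(
                   [gaussian_rbf(x, c, width) for x in xs], residual)))

# A residual shaped like a bump at 0.5 is best explained by a centre there.
xs = [0.0, 0.25, 0.5, 0.75, 1.0]
residual = [gaussian_rbf(x, 0.5, 0.2) for x in xs]
print(select_next_center(xs, residual, [0.0, 0.5, 1.0], width=0.2))  # -> 0.5
```

In the paper this greedy step is driven by the repeated weighted boosting search rather than a fixed candidate grid, but the selection criterion itself is the same.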
Half-trek criterion for generic identifiability of linear structural equation models
Foygel, Rina; Drton, Mathias
2011-01-01
A linear structural equation model relates random variables of interest and corresponding Gaussian noise terms via a linear equation system. Each such model can be represented by a mixed graph in which directed edges encode the linear equations, and bidirected edges indicate possible correlations among noise terms. We study parameter identifiability in these models, that is, we ask for conditions that ensure that the edge coefficients and correlations appearing in a linear structural equation model can be uniquely recovered from the covariance matrix of the associated normal distribution. We treat the case of generic identifiability, where unique recovery is possible for almost every choice of parameters. We give a new graphical criterion that is sufficient for generic identifiability. It improves criteria from prior work and does not require the directed part of the graph to be acyclic. We also develop a related necessary condition and examine the "gap" between sufficient and necessary conditions through sim...
Directory of Open Access Journals (Sweden)
Renaldas Vilkancas
2016-05-01
Full Text Available When using asymmetric risk-return measures, an important role is played by the selection of the investor's required or threshold rate of return. The scientific literature usually states that every investor should define this rate according to their degree of risk aversion. In this paper, we attempt to look at the problem from a different perspective: empirical research is aimed at determining the impact of the threshold rate of return on the investment portfolio.
Directory of Open Access Journals (Sweden)
Erik Olofsen
2015-07-01
Full Text Available Akaike's information theoretic criterion for model discrimination (AIC) is often stated to "overfit", i.e., it selects models with a higher dimension than the dimension of the model that generated the data. However, with experimental pharmacokinetic data it may not be possible to identify the correct model, because of the complexity of the processes governing drug disposition. Instead of trying to find the correct model, a more useful objective might be to minimize the prediction error of drug concentrations in subjects with unknown disposition characteristics. In that case, the AIC might be the selection criterion of choice. We performed Monte Carlo simulations using a model of pharmacokinetic data (a power function of time) with the property that fits with common multi-exponential models can never be perfect - thus resembling the situation with real data. Prespecified models were fitted to simulated data sets, and AIC and AICc (the criterion with a correction for small sample sizes) values were calculated and averaged. The average predictive performances of the models, quantified using simulated validation sets, were compared to the means of the AICs. The data for fits and validation consisted of 11 concentration measurements each obtained in 5 individuals, with three degrees of interindividual variability in the pharmacokinetic volume of distribution. Mean AICc corresponded very well, and better than mean AIC, with mean predictive performance. With increasing interindividual variability, there was a trend towards larger optimal models, both with respect to lowest AICc and best predictive performance. Furthermore, it was observed that the mean square prediction error itself became less suitable as a validation criterion, and that a predictive performance measure should incorporate interindividual variability. This simulation study showed that, at least in a relatively simple mixed-effects modelling context with a set of prespecified models
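The small-sample correction that distinguishes AICc from AIC in the study above is a closed-form penalty. A sketch with n = 11 observations as in the simulated design; treating a bi-exponential fit as k = 4 parameters and a tri-exponential fit as k = 6 is an illustrative assumption:

```python
def aicc(log_lik, k, n):
    """AIC with small-sample correction: AICc = AIC + 2k(k+1)/(n-k-1)."""
    aic = 2 * k - 2 * log_lik
    return aic + 2.0 * k * (k + 1) / (n - k - 1)

# With only n = 11 points, the correction penalises the larger model far
# more heavily, even for identical log-likelihoods.
n = 11
penalty_small = aicc(0.0, 4, n) - 2 * 4  # extra penalty beyond plain AIC
penalty_large = aicc(0.0, 6, n) - 2 * 6
assert penalty_large > penalty_small
```

As n grows, the correction term 2k(k+1)/(n-k-1) vanishes and AICc converges to AIC, which is why the correction matters chiefly in sparse-sampling designs like the one simulated above.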
Institute of Scientific and Technical Information of China (English)
LUO Jun-jun; ZHENG Jun-jie; SUN Ling; ZHANG Shi-biao
2008-01-01
Proper treatment of weak subgrade soil is very important to building a highway of good quality. We proposed an entropy-based multi-criterion group decision analysis method for a group of experts to evaluate alternatives of weak subgrade treatment, with an aim to select the optimum technique which is technically, economically and socially viable. We used fuzzy theory to analyze multiple experts' evaluation on various factors of each alternative treatment. Different experts' evaluations are integrated by the group eigenvalue method. An entropy weight is introduced to minimize the negative influences of subjective human factors of experts. The optimum alternative is identified with ideal point discriminant analysis to calculate the distance of each alternative to the ideal point and prioritize all alternatives according to their distances. A case study on a section of the Shiman Expressway verified that the proposed method can give a rational decision on the optimum method of weak subgrade treatment.
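The entropy weighting step described above can be sketched with the standard entropy weight method: criteria whose scores vary more across alternatives carry more information and receive larger weights, damping the influence of subjective expert judgment. The score matrix below is hypothetical, not from the Shiman Expressway case study:

```python
import math

def entropy_weights(matrix):
    """Entropy weight method for an m x n decision matrix
    (rows = alternatives, columns = criteria)."""
    m, n = len(matrix), len(matrix[0])
    raw = []
    for j in range(n):
        col = [row[j] for row in matrix]
        total = sum(col)
        p = [x / total for x in col]
        # Shannon entropy of the column, normalised to [0, 1].
        e = -sum(pi * math.log(pi) for pi in p if pi > 0) / math.log(m)
        raw.append(1.0 - e)  # degree of divergence: more spread, more weight
    s = sum(raw)
    return [w / s for w in raw]

# Three treatment alternatives scored on three criteria (hypothetical scores).
scores = [[0.9, 0.5, 0.6],
          [0.7, 0.5, 0.9],
          [0.8, 0.5, 0.3]]
w = entropy_weights(scores)
# The second criterion is identical across alternatives, so its weight is ~0.
```

The resulting weights would then feed the ideal-point distance calculation that ranks the treatment alternatives.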
Directory of Open Access Journals (Sweden)
Antonio José Bermúdez
2009-11-01
Full Text Available Introduction: Surveillance of congenital anomalies gains importance in the worldwide context of eradicating congenital rubella syndrome. Objective: To identify congenital anomalies and to consider low birth weight as a criterion for IgM testing for the TORCH complex. Methodology: Surveillance of congenital anomalies, both specific and non-specific to congenital rubella syndrome (CRS), and of low birth weight in ten hospitals. Every newborn with a congenital anomaly or low weight for gestational age was considered a case. Serum tests for rubella, toxoplasmosis, cytomegalovirus, herpes and parvovirus were performed. For the negative cases a karyotype was performed. Results: A total of 840 cases were selected: 669 for low weight for gestational age, 52 for anomalies not related to CRS, and 105 for anomalies that could be related to CRS. The most frequent anomalies were congenital heart diseases, 5.1%; hepatosplenomegaly, 3.9%; and microcephaly, 1.2%. There were confirmatory IgM titers for rubella in 0.5% of cases; toxoplasmosis, 1.4%; cytomegalovirus, 1.5%; parvovirus, 1.2%; herpes, 0.5%; and a positive test for congenital syphilis, 3.7%. In total, 8.8% of results were positive for some congenital infectious disease. The relative risk of low birth weight given positive rubella IgM was RR = 2.83 (95% CI: 1.26-6.36). Discussion and conclusions: Surveillance for CRS, through monitoring of febrile illness in the first year and the presence of some specific congenital anomalies, could be improved in sensitivity by routine monitoring of congenital anomalies, with the inclusion of low birth weight as a selection criterion for studying the infectious agents.
Zimmermann, Johannes; Böhnke, Jan R; Eschstruth, Rhea; Mathews, Alessa; Wenzel, Kristin; Leising, Daniel
2015-08-01
The alternative model for the classification of personality disorders (PD) in the Diagnostic and Statistical Manual of Mental Disorders (5th ed.; DSM-5) Section III comprises 2 major components: impairments in personality functioning (Criterion A) and maladaptive personality traits (Criterion B). In this study, we investigated the latent structure of Criterion A (a) within subdomains, (b) across subdomains, and (c) in conjunction with the Criterion B trait facets. Data were gathered as part of an online study that collected other-ratings by 515 laypersons and 145 therapists. Laypersons were asked to assess 1 of their personal acquaintances, whereas therapists were asked to assess 1 of their patients, using 135 items that captured features of Criteria A and B. We were able to show that (a) the structure within the Criterion A subdomains can be appropriately modeled using generalized graded unfolding models, with results suggesting that the items are indeed related to common underlying constructs but often deviate from their theoretically expected severity level; (b) the structure across subdomains is broadly in line with a model comprising 2 strongly correlated factors of self- and interpersonal functioning, with some notable deviations from the theoretical model; and (c) the joint structure of the Criterion A subdomains and the Criterion B facets broadly resembles the expected model of 2 plus 5 factors, although the loading pattern suggests that the distinction between Criteria A and B is somewhat blurry. Our findings provide support for several major assumptions of the alternative DSM-5 model for PD but also highlight aspects of the model that need to be further refined. (c) 2015 APA, all rights reserved.
Directory of Open Access Journals (Sweden)
Key Christopher T.
2015-01-01
Full Text Available This study details and demonstrates a strain-based criterion for the prediction of polymer matrix composite material damage and failure under shock loading conditions. Shock loading conditions are characterized by high-speed impacts or explosive events that result in very high pressures in the materials involved. These material pressures can reach hundreds of kbar and often exceed the material strengths by several orders of magnitude. Researchers have shown that under these high pressures, composites exhibit significant increases in stiffness and strength. In this work we summarize modifications of a previous stress-based interactive failure criterion, based on the model initially proposed by Hashin, to include strain dependence. The failure criterion is combined with the multi-constituent composite constitutive model (MCM) within a shock physics hydrocode. The constitutive model allows for decomposition of the composite stress and strain fields into the individual phase-averaged constituent-level stress and strain fields, which are then applied to the failure criterion. Numerical simulations of a metallic sphere impacting carbon/epoxy composite plates at velocities up to 1000 m/s are performed using both the stress- and strain-based criteria. These simulation results are compared to experimental tests to illustrate the advantages of a strain-based criterion in the shock environment.
Liu, Yi-Hung; Huang, Shiuan; Huang, Yi-De
2017-07-03
Motor imagery is based on the volitional modulation of sensorimotor rhythms (SMRs); however, the sensorimotor processes in patients with amyotrophic lateral sclerosis (ALS) are impaired, leading to degenerated motor imagery ability. Thus, motor imagery classification in ALS patients has been considered challenging in the brain-computer interface (BCI) community. In this study, we address this critical issue by introducing the Grassberger-Procaccia and Higuchi's methods to estimate the fractal dimensions (GPFD and HFD, respectively) of the electroencephalography (EEG) signals from ALS patients. Moreover, a Fisher's criterion-based channel selection strategy is proposed to automatically determine the best patient-dependent channel configuration from 30 EEG recording sites. An EEG data collection paradigm is designed to collect the EEG signal of resting state and the imagination of three movements, including right hand grasping (RH), left hand grasping (LH), and left foot stepping (LF). Five late-stage ALS patients without receiving any SMR training participated in this study. Experimental results show that the proposed GPFD feature is not only superior to the previously-used SMR features (mu and beta band powers of EEG from sensorimotor cortex) but also better than HFD. The accuracies achieved by the SMR features are not satisfactory (all lower than 80%) in all binary classification tasks, including RH imagery vs. resting, LH imagery vs. resting, and LF imagery vs. resting. For the discrimination between RH imagery and resting, the average accuracies of GPFD in 30-channel (without channel selection) and top-five-channel configurations are 95.25% and 93.50%, respectively. When using only one channel (the best channel among the 30), a high accuracy of 91.00% can still be achieved by the GPFD feature and a linear discriminant analysis (LDA) classifier. The results also demonstrate that the proposed Fisher's criterion-based channel selection is capable of removing a
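Fisher's criterion-based channel selection, as described in the abstract above, ranks each channel by the ratio of between-class mean separation to within-class variance of its feature. A minimal sketch; the two-class trial data below are hypothetical, not the ALS dataset:

```python
def fisher_score(class_a, class_b):
    """Fisher's criterion for one feature:
    (difference of class means)^2 / (sum of within-class variances)."""
    def mean(xs):
        return sum(xs) / len(xs)
    def var(xs):
        mu = mean(xs)
        return sum((x - mu) ** 2 for x in xs) / len(xs)
    return (mean(class_a) - mean(class_b)) ** 2 / (var(class_a) + var(class_b))

def select_channels(trials_a, trials_b, top_k):
    """Rank channels by Fisher score; trials_* are lists of per-trial
    feature vectors, one value (e.g. a fractal dimension) per channel."""
    n_ch = len(trials_a[0])
    scores = []
    for ch in range(n_ch):
        a = [trial[ch] for trial in trials_a]
        b = [trial[ch] for trial in trials_b]
        scores.append((fisher_score(a, b), ch))
    return [ch for _, ch in sorted(scores, reverse=True)[:top_k]]

# Channel 0 separates the two classes well; channel 1 barely does.
imagery = [[1.0, 0.1], [1.1, 0.2]]
resting = [[2.0, 0.15], [2.1, 0.25]]
print(select_channels(imagery, resting, top_k=1))  # -> [0]
```

In the study this ranking is applied per patient over 30 recording sites, which is how a single best channel can be identified for each individual.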
Tolerance for ambiguity: an ethics-based criterion for medical student selection.
Geller, Gail
2013-05-01
Planned changes to the MCAT exam and the premedical course requirements are intended to enable the assessment of humanistic characteristics and, thus, to select students who are more likely to become physicians who can communicate and relate with patients and engage in ethical decision making. Identifying students who possess humanistic and communication skills is an important goal, but the changes being implemented may not be sufficient to evaluate key personality traits that characterize well-rounded, thoughtful, empathic, and respectful physicians. The author argues that consideration should be given to assessing prospective students' tolerance for ambiguity as part of the admission process. Several strategies are proposed for implementing and evaluating such an assessment. Also included in this paper is an overview of the conceptual and empirical literature on tolerance for ambiguity among physicians and medical students, its impact on patient care, and the attention it is given in medical education. This evidence suggests that if medical schools admitted students who possess a high tolerance for ambiguity, quality of care in ambiguous conditions might improve, imbalances in physician supply and practice patterns might be reduced, the humility necessary for moral character formation might be enhanced, and the increasing ambiguity in medical practice might be better acknowledged and accepted.
Shutova, Maria V; Surdina, Anastasia V; Ischenko, Dmitry S; Naumov, Vladimir A; Bogomazova, Alexandra N; Vassina, Ekaterina M; Alekseev, Dmitry G; Lagarkova, Maria A; Kiselev, Sergey L
2016-01-01
The pluripotency of newly developed human induced pluripotent stem cells (iPSCs) is usually characterized by physiological parameters; i.e., by their ability to maintain the undifferentiated state and to differentiate into derivatives of the 3 germ layers. Nevertheless, a molecular comparison of physiologically normal iPSCs to the "gold standard" of pluripotency, embryonic stem cells (ESCs), often reveals a set of genes with different expression and/or methylation patterns in iPSCs and ESCs. To evaluate the contribution of the reprogramming process, parental cell type, and fortuity in the signature of human iPSCs, we developed a complete isogenic reprogramming system. We performed a genome-wide comparison of the transcriptome and the methylome of human isogenic ESCs, 3 types of ESC-derived somatic cells (fibroblasts, retinal pigment epithelium and neural cells), and 3 pairs of iPSC lines derived from these somatic cells. Our analysis revealed a high input of stochasticity in the iPSC signature that does not retain specific traces of the parental cell type and reprogramming process. We showed that 5 iPSC clones are sufficient to find with 95% confidence at least one iPSC clone indistinguishable from their hypothetical isogenic ESC line. Additionally, on the basis of a small set of genes that are characteristic of all iPSC lines and isogenic ESCs, we formulated an approach of "the best iPSC line" selection and confirmed it on an independent dataset.
Westgate, Philip M
2014-05-01
Generalized estimating equations (GEE) are commonly used for the marginal analysis of correlated data, although the quadratic inference function (QIF) approach is an alternative that is increasing in popularity. This method optimally combines distinct sets of unbiased estimating equations that are based upon a working correlation structure, therefore asymptotically increasing or maintaining estimation efficiency relative to GEE. However, in finite samples, additional estimation variability arises when combining these sets of estimating equations, and therefore the QIF approach is not guaranteed to work as well as GEE. Furthermore, estimation efficiency can be improved for both analysis methods by accurate modeling of the correlation structure. Our goal is to improve parameter estimation, relative to existing methods, by simultaneously selecting a working correlation structure and choosing between GEE and two versions of the QIF approach. To do this, we propose the use of a criterion based upon the trace of the empirical covariance matrix (TECM). To make GEE and both QIF versions directly comparable for any given working correlation structure, the proposed TECM utilizes a penalty to account for the finite-sample variance inflation that can occur with either version of the QIF approach. Via a simulation study and in application to a longitudinal study, we show that penalizing the variance inflation that occurs with the QIF approach is necessary and that the proposed criterion works very well. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Probability Criterion for a Dynamic Financial Model with Short-Selling Allowed
Institute of Scientific and Technical Information of China (English)
韩其恒; 唐万生; 李光泉
2003-01-01
The probability criterion has practical significance: its investment decision-making is determined by the expected discounted wealth. In a complete, standard financial market with short-selling allowed, this paper investigates investment decision-making under the probability criterion. The upper limit of the criterion function is obtained, and the corresponding discounted wealth process and hedging portfolio process are provided. Finally, an illustrative example of a one-dimensional constant-coefficient financial market is given.
Shi, Ming; Shen, Weiming; Wang, Hong-Qiang; Chong, Yanwen
2016-12-01
Inferring gene regulatory networks (GRNs) from microarray expression data is an important but challenging issue in systems biology. In this study, the authors propose a Bayesian information criterion (BIC)-guided sparse regression approach for GRN reconstruction. This approach can adaptively model GRNs by optimising the l1-norm regularisation of sparse regression based on a modified version of the BIC. The regularisation strategy ensures that the inferred GRNs are as sparse as they naturally are, while the modified BIC allows incorporating prior knowledge on expression regulation and thus avoids the usual overestimation of expression regulators. In particular, the proposed method provides a clear interpretation of combinatorial regulation of gene expression by optimally extracting regulation coordination for a given target gene. Experimental results on both simulation data and real-world microarray data demonstrate the competent performance of discovering regulatory relationships in GRN reconstruction.
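The paper couples l1-regularised regression with a modified BIC; as a much simplified illustration of the same idea (plain BIC, exhaustive search over small regulator subsets, synthetic data; this is not the authors' algorithm, and all names are hypothetical):

```python
import numpy as np
from itertools import combinations

def bic_linear(y, X):
    """BIC of an OLS fit: n*log(RSS/n) + k*log(n), assuming Gaussian errors."""
    n, k = X.shape
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss = np.sum((y - X @ beta) ** 2)
    return n * np.log(rss / n) + k * np.log(n)

def select_regulators(y, X, max_size=3):
    """Score every small candidate-regulator subset by BIC; return the best."""
    p = X.shape[1]
    best = (np.inf, ())
    for size in range(1, max_size + 1):
        for subset in combinations(range(p), size):
            best = min(best, (bic_linear(y, X[:, subset]), subset))
    return best[1]

# Synthetic target gene driven by regulators 0 and 3 out of 6 candidates:
rng = np.random.default_rng(1)
X = rng.standard_normal((200, 6))
y = 1.5 * X[:, 0] - 2.0 * X[:, 3] + 0.1 * rng.standard_normal(200)
```

Here the BIC penalty `k*log(n)` is what keeps the selected regulator set sparse; an l1 path plus the modified BIC replaces the exhaustive search in the actual method.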
Quan, Tingwei; Zhu, Hongyu; Liu, Xiaomao; Liu, Yongfeng; Ding, Jiuping; Zeng, Shaoqun; Huang, Zhen-Li
2011-08-29
Localization-based super-resolution microscopy (also called localization microscopy) relies on repeated imaging and localization of active molecules, and the spatial resolution enhancement of localization microscopy is built upon the sacrifice of its temporal resolution. Developing algorithms for high-density localization of active molecules is a promising approach to increase the speed of localization microscopy. Here we present a new algorithm, called SSM_BIC, for this purpose. SSM_BIC combines the advantages of the Structured Sparse Model (SSM) and the Bayesian Information Criterion (BIC). Through simulation and experimental studies, we systematically evaluate the performance of SSM_BIC against the conventional Sparse algorithm in high-density localization of active molecules. We show that SSM_BIC is superior in processing single molecule images with weak signal embedded in strong background.
MODEL SELECTION FOR LOG-LINEAR MODELS OF CONTINGENCY TABLES
Institute of Scientific and Technical Information of China (English)
ZHAO Lincheng; ZHANG Hong
2003-01-01
In this paper, we propose an information-theoretic-criterion-based model selection procedure for log-linear model of contingency tables under multinomial sampling, and establish the strong consistency of the method under some mild conditions. An exponential bound of miss detection probability is also obtained. The selection procedure is modified so that it can be used in practice. Simulation shows that the modified method is valid. To avoid selecting the penalty coefficient in the information criteria, an alternative selection procedure is given.
Appropriate model selection methods for nonstationary generalized extreme value models
Kim, Hanbeen; Kim, Sooyoung; Shin, Hongjoon; Heo, Jun-Haeng
2017-04-01
Considerable evidence that hydrologic data series are nonstationary in nature has been found to date. This has resulted in many studies in the area of nonstationary frequency analysis. Nonstationary probability distribution models involve parameters that vary over time; therefore, it is not straightforward to apply conventional goodness-of-fit tests to the selection of an appropriate nonstationary probability distribution model. Tests generally recommended for such a selection include the Akaike information criterion (AIC), the corrected Akaike information criterion (AICc), the Bayesian information criterion (BIC), and the likelihood ratio test (LRT). In this study, Monte Carlo simulation was performed to compare the performances of these four tests with regard to nonstationary as well as stationary generalized extreme value (GEV) distributions. Proper model selection ratios and sample sizes were taken into account to evaluate the performances of all four tests. The BIC demonstrated the best performance with regard to stationary GEV models. In the case of nonstationary GEV models, the AIC proved to be better than the other three methods when relatively small sample sizes were considered. With larger sample sizes, the AIC, BIC, and LRT presented the best performances for GEV models with nonstationary location and/or scale parameters. Simulation results were then evaluated by applying all four tests to annual maximum rainfall data of selected sites, as observed by the Korea Meteorological Administration.
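The three information criteria compared in the study are simple functions of the maximized log-likelihood ln L, the parameter count k, and the sample size n: AIC = 2k − 2 ln L, AICc = AIC + 2k(k+1)/(n−k−1), and BIC = k ln n − 2 ln L. A minimal sketch, using a Gaussian model with a linear trend in the location parameter as a stand-in for the GEV likelihood (all names are illustrative):

```python
import numpy as np

def criteria(loglik, k, n):
    """Return (AIC, AICc, BIC) for a fitted model."""
    aic = 2 * k - 2 * loglik
    aicc = aic + 2 * k * (k + 1) / (n - k - 1)   # small-sample correction
    bic = k * np.log(n) - 2 * loglik
    return aic, aicc, bic

def gauss_loglik(resid, sigma):
    n = len(resid)
    return -0.5 * n * np.log(2 * np.pi * sigma**2) - np.sum(resid**2) / (2 * sigma**2)

rng = np.random.default_rng(0)
n = 80
t = np.arange(n)
y = 10 + 0.05 * t + rng.standard_normal(n)   # data with a genuine trend

# Model 0: stationary location (k = 2: mu, sigma)
r0 = y - y.mean()
ll0 = gauss_loglik(r0, r0.std())
# Model 1: linear trend mu(t) = a + b*t (k = 3: a, b, sigma)
b, a = np.polyfit(t, y, 1)
r1 = y - (a + b * t)
ll1 = gauss_loglik(r1, r1.std())

aic0, aicc0, bic0 = criteria(ll0, 2, n)
aic1, aicc1, bic1 = criteria(ll1, 3, n)
```

With a real trend in the data, all three criteria prefer the nonstationary model despite its extra parameter.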
Primary safe criterion of earth-brushing flight for flying vehicle over digital surface model
Institute of Scientific and Technical Information of China (English)
赵敏; 林行刚; 赵乃国
2004-01-01
In modern terrain-following guidance, the ability of a flight vehicle to cruise safely and normally is an important index. On the basis of a construction method for a digital surface model (DSM), the definition, classification, and scale analysis of isolated obstacles threatening the flight safety of terrain-following guidance are given. When the interval of vertical and cross-sections on the DSM is 12.5 m, the proportion of isolated obstacles to the amount of DSM data to be loaded is optimal. The main factors influencing the lowest flying height in terrain-following guidance are analyzed, and a primary safety criterion for the lowest flying height over a DSM is proposed. According to their test errors, the lowest flying height over a 1:10 000 DSM can reach 40.5 m to 45.0 m in terrain-following guidance. The simulation results for a typical urban district show that the proposed models and methods are reasonable and feasible.
Chirikov criterion of resonance overlapping for the model of molecular dynamics
Guzev, M A
2012-01-01
The chaotic dynamics in a cell of a chain of particles interacting via the Lennard-Jones potential is considered. The Chirikov criterion of resonance overlapping is used as the condition for chaos. An asymptotic representation of the function corresponding to the criterion is obtained at low and high energies.
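In its simplest form, the Chirikov criterion states that chaos sets in when the sum of the half-widths of two neighboring resonances exceeds their separation in action (or frequency). A minimal sketch of the overlap parameter follows; conventions vary by constant factors between references, so the threshold K ≥ 1 should be read as an order-of-magnitude estimate:

```python
def chirikov_overlap(width1, width2, spacing):
    """Chirikov overlap parameter for two resonances.

    width1, width2 : full widths of the two resonance islands
    spacing        : distance between the resonance centers
    Returns K = (width1/2 + width2/2) / |spacing|; K >= 1 suggests
    overlapping resonances, i.e. the onset of global chaos.
    """
    return 0.5 * (width1 + width2) / abs(spacing)

# Two resonances with full widths 0.3 and 0.5, separated by 0.35 in action:
K = chirikov_overlap(0.3, 0.5, 0.35)   # (0.15 + 0.25) / 0.35 > 1: overlap
```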
Directory of Open Access Journals (Sweden)
Faris M. AL-Oqla
2014-11-01
A systematic evaluation tool for natural fibers' capabilities based on a moisture content criterion (MCC) was developed and introduced as a new evaluation method. This MCC evaluation tool is designed to predict the behavior of the available natural fibers regarding distinctive desirable characteristics under the effect of the moisture absorption phenomenon. Here, the capabilities of different natural fiber types commonly used in industry, in addition to date palm fibers, were systematically investigated based on the MCC. The results demonstrated that the MCC is capable of predicting the relative reduction of fiber performance regarding a particular beneficial property due to the effect of moisture absorption. The strong agreement between the predicted values of the MCC and results reported in the literature verifies its usefulness as an evaluation tool and demonstrates its added value in predicting the relative behavior of fibers with a minimal range of errors compared with experimental measurements. Therefore, the MCC is capable of better evaluating natural fibers regarding distinctive criteria in a systematic manner, leading to more realistic decisions about their capabilities and therefore enhancing the selection process for both better sustainable design possibilities and industrial product development.
Henry de-Graft Acquah; Joseph Acquah
2013-01-01
Alternative formulations of the Bayesian Information Criterion provide a basis for choosing between competing methods for detecting price asymmetry. However, very little is understood about their performance in the asymmetric price transmission modelling framework. In addressing this issue, this paper introduces and applies parametric bootstrap techniques to evaluate the ability of the Bayesian Information Criterion (BIC) and Draper's Information Criterion (DIC) to discriminate between alternative...
Directory of Open Access Journals (Sweden)
Qingwen Li
2015-01-01
In tunnel and underground space engineering, the blasting wave attenuates from a shock wave to a stress wave to an elastic seismic wave in the host rock. The host rock forms a crushed zone, a fractured zone, and an elastic seismic zone under the blasting loading and waves. In this paper, an accurate mathematical dynamic loading model was built, and the crushed and fractured zones were considered as the blasting vibration source, thereby deducting the portion of energy consumed in cutting the host rock. This complicated dynamic problem of segmented differential blasting was thus regarded as an equivalent elastic boundary problem by taking advantage of Saint-Venant's Theorem. Finally, a 3D model in the finite element software FLAC3D used the constitutive parameters, uniformly distributed mutative loading, and the cylindrical attenuation law to predict the velocity curves and effective tensile curves for calculating safety criterion formulas of the surrounding rock and tunnel liner; the predictions agree well with the in situ monitoring data.
Directory of Open Access Journals (Sweden)
Lauri Kalle Tapio Uotinen
2017-01-01
An in situ concrete spalling experiment will be carried out in the ONKALO rock characterization facility. The purpose is to establish the failure strength of a thin concrete liner on a prestressed rock surface when the stress states in both rock and concrete are increased by heating. A cylindrical hole 1.5 m in diameter and 7.2 m in depth is reinforced with a 40 mm thin concrete liner from level −3 m down. Eight 6 m long 4 kW electrical heaters are installed around the hole 1 m away. The experiment setup is described and results from predictive numerical modelling are shown. Elastoplastic modelling using the Ottosen failure criterion predicts damage initiation in week 5, and the concrete ultimate strain limit of 0.0035 is exceeded in week 10. The support pressure generated by the liner is 3.2 MPa, and the tangential stress of the rock is reduced by 33%. In 2D fracture mechanical simulations, the support pressure is 3 MPa; small localized damage occurs after week 3, and the damage process slowly continues during week 9 of the heating period. In conclusion, external heating is a potent way of inducing damage, and a thin concrete liner significantly reduces the amount of damage.
APPLYING THE EFQM EXCELLENCE MODEL AT THE GERMAN STUDY LINE WITH FOCUS ON THE CRITERION
Directory of Open Access Journals (Sweden)
ILIES LIVIU
2013-07-01
This article presents a stage of the implementation process of the EFQM Model in a higher education institution, namely the German study line within the Faculty of Economics and Business Administration, "Babeș-Bolyai" University, Cluj-Napoca. Designing this model for the higher education sector means laying the basis for the implementation of a Total Quality Management model, seen as a holistic dimension of the perception of quality in an organization. By means of the EFQM method, the authors try to identify the performance degree of the criterion "Customer Results", related to the students' satisfaction level. Students are the primary customers of the higher education sector and have an essential role in defining the quality dimensions. On the one hand, the customers of the higher education sector can surface the status quo of quality in the institution; on the other hand, they can improve it. The continuous improvement of quality is closely linked to performance. From this point of view, the European Foundation for Quality Management model is a practical tool to support the analysis of opportunities within higher education institutions. This model offers a customer-focused approach, because many higher education institutions consider students to be the heart of teaching and research. Further, the fundamental concepts are defined and the focus is directed toward the customer approach, which highlights the idea that excellence means creating added value for customers. Anticipating and identifying the current and future needs of students by developing a balanced range of relevant dimensions and indicators means taking appropriate action based on a holistic view of quality in an organization. Focusing on and understanding students' and other customers' requirements, needs, and expectations follows the idea that performance can
Distribution of the LR criterion Up,m,n as a marginal distribution of a generalized Dirichlet model
Directory of Open Access Journals (Sweden)
Seemon Thomas
2013-05-01
The density of the likelihood ratio criterion Up,m,n is expressed in terms of a marginal density of a generalized Dirichlet model having a specific set of parameters. The exact distribution of the likelihood ratio criterion so obtained has a very simple and general format for every p. It provides an easy and direct method of computation of the exact p-value of Up,m,n. Various types of properties and relations involving hypergeometric series are also established.
Bayesian model selection for incomplete data using the posterior predictive distribution.
Daniels, Michael J; Chatterjee, Arkendu S; Wang, Chenguang
2012-12-01
We explore the use of a posterior predictive loss criterion for model selection for incomplete longitudinal data. We begin by identifying a property that most model selection criteria for incomplete data should consider. We then show that a straightforward extension of the Gelfand and Ghosh (1998, Biometrika, 85, 1-11) criterion to incomplete data has two problems. First, it introduces an extra term (in addition to the goodness of fit and penalty terms) that compromises the criterion. Second, it does not satisfy the aforementioned property. We propose an alternative and explore its properties via simulations and on a real dataset and compare it to the deviance information criterion (DIC). In general, the DIC outperforms the posterior predictive criterion, but the latter criterion appears to work well overall and is very easy to compute unlike the DIC in certain classes of models for missing data.
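For complete data, the Gelfand and Ghosh (1998) criterion that the authors extend decomposes, in its squared-error form, into a goodness-of-fit term G (squared deviations of the observations from the posterior predictive means) plus a penalty P (summed posterior predictive variances). A minimal sketch, computing D = G + P from a matrix of posterior predictive draws (names and the toy data are illustrative):

```python
import numpy as np

def gelfand_ghosh(y_obs, y_rep):
    """Gelfand-Ghosh posterior predictive loss, squared-error version.

    y_obs : (n,) observed data
    y_rep : (S, n) posterior predictive draws, one row per posterior sample
    Returns (G, P, D) with G the fit term, P the penalty, D = G + P.
    """
    mu = y_rep.mean(axis=0)           # posterior predictive means
    G = np.sum((y_obs - mu) ** 2)     # goodness of fit
    P = np.sum(y_rep.var(axis=0))     # penalty: predictive variances
    return G, P, G + P

# Toy check: an overdispersed predictive distribution pays a larger penalty.
rng = np.random.default_rng(0)
y = rng.standard_normal(50)
draws_tight = y + 0.1 * rng.standard_normal((2000, 50))
draws_wide = y + 2.0 * rng.standard_normal((2000, 50))
_, P_tight, D_tight = gelfand_ghosh(y, draws_tight)
_, P_wide, D_wide = gelfand_ghosh(y, draws_wide)
```

The extra term the authors identify in the naive extension to incomplete data is precisely what this complete-data decomposition does not have.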
Livingstone, Holly A.; Day, Arla L.
2005-01-01
Despite the popularity of the concept of emotional intelligence (EI), there is much controversy around its definition, measurement, and validity. Therefore, the authors examined the construct and criterion-related validity of an ability-based EI measure (Mayer-Salovey-Caruso Emotional Intelligence Test [MSCEIT]) and a mixed-model EI measure…
Directory of Open Access Journals (Sweden)
Luo Arong
2010-08-01
Background: Explicit evolutionary models are required in maximum-likelihood and Bayesian inference, the two methods that are overwhelmingly used in phylogenetic studies of DNA sequence data. Appropriate selection of nucleotide substitution models is important because the use of incorrect models can mislead phylogenetic inference. To better understand the performance of different model-selection criteria, we used 33,600 simulated data sets to analyse the accuracy, precision, dissimilarity, and biases of the hierarchical likelihood-ratio test, Akaike information criterion, Bayesian information criterion, and decision theory. Results: We demonstrate that the Bayesian information criterion and decision theory are the most appropriate model-selection criteria because of their high accuracy and precision. Our results also indicate that in some situations different models are selected by different criteria for the same dataset. Such dissimilarity was highest between the hierarchical likelihood-ratio test and the Akaike information criterion, and lowest between the Bayesian information criterion and decision theory. The hierarchical likelihood-ratio test performed poorly when the true model included a proportion of invariable sites, while the Bayesian information criterion and decision theory generally exhibited similar performance to each other. Conclusions: Our results indicate that the Bayesian information criterion and decision theory should be preferred for model selection. Together with model-adequacy tests, accurate model selection will serve to improve the reliability of phylogenetic inference and related analyses.
2006-05-01
interests include feature selection, statistical learning, multivariate statistics, market research, and classification. He may be contacted at...current youth market, and reducing barriers to Army enlistment. Part of the Army Recruiting Initiatives was the creation of a recruiter selection model developed by the Operations Research Center of the Systems Engineering Department, United States Military Academy, West Point.
Sveiczer Akos; Buchwald Peter
2006-01-01
Background: There is considerable controversy concerning the exact growth profile of size parameters during the cell cycle. Linear, exponential and bilinear models are commonly considered, and the same model may not apply to all species. Selection of the most adequate model to describe a given data-set requires the use of quantitative model selection criteria, such as the partial (sequential) F-test, the Akaike information criterion and the Schwarz Bayesian information criterion, whi...
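The partial (sequential) F-test mentioned above compares nested least-squares fits: the larger model is retained only if its reduction in residual sum of squares is large relative to its extra parameters. A minimal sketch, assuming NumPy, with a quadratic model standing in for the bilinear growth model (whose breakpoint estimation is omitted); the data are synthetic:

```python
import numpy as np

def partial_f_test(rss_small, p_small, rss_big, p_big, n):
    """Partial F statistic for nested least-squares models.

    F = ((RSS_small - RSS_big) / (p_big - p_small)) / (RSS_big / (n - p_big))
    A large F favors the bigger model.
    """
    num = (rss_small - rss_big) / (p_big - p_small)
    den = rss_big / (n - p_big)
    return num / den

def poly_rss(t, y, deg):
    """Residual sum of squares of a polynomial fit of given degree."""
    coef = np.polyfit(t, y, deg)
    return np.sum((y - np.polyval(coef, t)) ** 2)

# Synthetic cell-size data with genuine curvature (exponential-like growth):
rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 60)
size = np.exp(0.8 * t) + 0.01 * rng.standard_normal(60)

rss_lin = poly_rss(t, size, 1)    # linear model: 2 parameters
rss_quad = poly_rss(t, size, 2)   # quadratic model: 3 parameters
F = partial_f_test(rss_lin, 2, rss_quad, 3, len(t))
```

For curved data like this, F greatly exceeds the usual critical values (roughly 4 at the 5% level for 1 and 57 degrees of freedom), so the linear model is rejected.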
Parametric or nonparametric? A parametricness index for model selection
Liu, Wei; 10.1214/11-AOS899
2012-01-01
In the model selection literature, two classes of criteria perform well asymptotically in different situations: the Bayesian information criterion (BIC) (as a representative) is consistent in selection when the true model is finite dimensional (parametric scenario); Akaike's information criterion (AIC) is asymptotically efficient when the true model is infinite dimensional (nonparametric scenario). But there is little work that addresses whether it is possible, and how, to detect which situation a specific model selection problem is in. In this work, we differentiate the two scenarios theoretically under some conditions. We develop a measure, the parametricness index (PI), to assess whether a model selected by a potentially consistent procedure can be practically treated as the true model, which also hints at whether AIC or BIC is better suited to the data for the goal of estimating the regression function. A consequence is that by switching between AIC and BIC based on the PI, the resulting regression estimator is si...
DEFF Research Database (Denmark)
2014-01-01
A method of operating an electronic device includes providing a plurality of antenna elements, evaluating a wireless communication performance criterion to obtain a performance evaluation, and assigning a first one of the plurality of antenna elements to a main wireless signal reception...
Parada, N. D. J. (Principal Investigator); Dutra, L. V.; Mascarenhas, N. D. A.; Mitsuo, Fernando Augusta, II
1984-01-01
A study area near Ribeirao Preto in Sao Paulo state was selected, with a predominance of sugar cane. Eight features were extracted from the 4 original bands of the LANDSAT image, using low-pass and high-pass filtering to obtain spatial features. There were 5 training sites used to acquire the necessary parameters. Two groups of four channels were selected from the 12 channels using the JM-distance and entropy criteria. The number of selected channels was defined by physical restrictions of the image analyzer and computational costs. The evaluation was performed by extracting the confusion matrix for training and test areas, with a maximum likelihood classifier, and by defining performance indexes based on those matrices for each group of channels. Results show that for spatial features and supervised classification, the entropy criterion is better in the sense that it allows a more accurate and generalized definition of class signatures. On the other hand, the JM-distance criterion strongly reduces the misclassification within training areas.
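The Jeffries-Matusita (JM) distance used for channel selection measures the separability of two Gaussian class signatures through the Bhattacharyya distance B: JM = 2(1 − e^(−B)), saturating at 2 for perfectly separable classes. A sketch, assuming NumPy and Gaussian class statistics (means and covariances) estimated from the training sites:

```python
import numpy as np

def jm_distance(mu1, cov1, mu2, cov2):
    """Jeffries-Matusita distance between two Gaussian class signatures.

    JM = 2 * (1 - exp(-B)), where B is the Bhattacharyya distance.
    Values near 2 indicate near-perfect class separability.
    """
    dm = mu1 - mu2
    cov = 0.5 * (cov1 + cov2)
    # Bhattacharyya distance: mean term + covariance term
    b = (dm @ np.linalg.solve(cov, dm) / 8.0
         + 0.5 * np.log(np.linalg.det(cov)
                        / np.sqrt(np.linalg.det(cov1) * np.linalg.det(cov2))))
    return 2.0 * (1.0 - np.exp(-b))

# Identical signatures give JM = 0; well-separated ones approach 2.
mu = np.zeros(2)
cov = np.eye(2)
jm_same = jm_distance(mu, cov, mu, cov)
jm_far = jm_distance(mu, cov, mu + 10.0, cov)
```

Channel subsets are then ranked by the (average or minimum) pairwise JM distance between the classes they must separate.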
Modeling Portfolio Optimization Problem by Probability-Credibility Equilibrium Risk Criterion
Directory of Open Access Journals (Sweden)
Ye Wang
2016-01-01
This paper studies the portfolio selection problem in hybrid uncertain decision systems. First, the return rates are characterized by random fuzzy variables. The objective is to maximize the total expected return rate. For a random fuzzy variable, this paper defines a new equilibrium risk value (ERV) with credibility level beta and probability level alpha. As a result, our portfolio problem is built as a new random fuzzy expected value (EV) model subject to an ERV constraint, which is referred to as the EV-ERV model. Under mild assumptions, the proposed EV-ERV model is a convex programming problem. Furthermore, when the possibility distributions are triangular, trapezoidal, and normal, the EV-ERV model can be transformed into equivalent deterministic convex programming models, which can be solved by general purpose optimization software. To demonstrate the effectiveness of the proposed equilibrium optimization method, some numerical experiments are conducted. The computational results and comparison study demonstrate that the developed equilibrium optimization method is effective for modelling portfolio selection optimization problems with twofold uncertain return rates.
Information-theoretic model selection applied to supernovae data
Biesiada, M
2007-01-01
There are several different theoretical ideas invoked to explain dark energy, with relatively little guidance as to which one of them might be right. Therefore the emphasis of ongoing and forthcoming research in this field shifts from estimating specific parameters of a cosmological model to model selection. In this paper we apply an information-theoretic model selection approach based on the Akaike criterion as an estimator of Kullback-Leibler entropy. In particular, we present the proper way of ranking the competing models based on Akaike weights (in Bayesian language, posterior probabilities of the models). Out of many particular models of dark energy we focus on four: quintessence, quintessence with time-varying equation of state, brane-world, and the generalized Chaplygin gas model, and test them on Riess' Gold sample. As a result we obtain that the best model, in terms of the Akaike criterion, is the quintessence model. The odds suggest that although there exist differences in the support given to specific scenario...
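Akaike weights normalize the AIC differences Δᵢ = AICᵢ − AIC_min into relative model probabilities: wᵢ = exp(−Δᵢ/2) / Σⱼ exp(−Δⱼ/2). A minimal sketch, assuming NumPy; the AIC scores are hypothetical stand-ins for the four dark-energy candidates:

```python
import numpy as np

def akaike_weights(aic_values):
    """Akaike weights: relative support for each model given its AIC.

    w_i = exp(-Delta_i / 2) / sum_j exp(-Delta_j / 2),
    with Delta_i = AIC_i - min(AIC). Subtracting the minimum first
    keeps the exponentials numerically stable.
    """
    aic = np.asarray(aic_values, dtype=float)
    delta = aic - aic.min()
    w = np.exp(-0.5 * delta)
    return w / w.sum()

# Four candidate models with illustrative AIC scores:
w = akaike_weights([180.0, 182.0, 185.0, 190.0])
# The lowest-AIC model receives the largest weight; ratios of weights
# give the "odds" language used in the abstract.
```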
A simple application of FIC to model selection
Wiggins, Paul A
2015-01-01
We have recently proposed a new information-based approach to model selection, the Frequentist Information Criterion (FIC), that reconciles information-based and frequentist inference. The purpose of this current paper is to provide a simple example of the application of this criterion and a demonstration of the natural emergence of model complexities with both AIC-like ($N^0$) and BIC-like ($\\log N$) scaling with observation number $N$. The application developed is deliberately simplified to make the analysis analytically tractable.
On Model Selection Criteria in Multimodel Analysis
Energy Technology Data Exchange (ETDEWEB)
Ye, Ming; Meyer, Philip D.; Neuman, Shlomo P.
2008-03-21
Hydrologic systems are open and complex, rendering them prone to multiple conceptualizations and mathematical descriptions. There has been a growing tendency to postulate several alternative hydrologic models for a site and use model selection criteria to (a) rank these models, (b) eliminate some of them and/or (c) weigh and average predictions and statistics generated by multiple models. This has led to some debate among hydrogeologists about the merits and demerits of common model selection (also known as model discrimination or information) criteria such as AIC [Akaike, 1974], AICc [Hurvich and Tsai, 1989], BIC [Schwarz, 1978] and KIC [Kashyap, 1982] and some lack of clarity about the proper interpretation and mathematical representation of each criterion. In particular, whereas we [Neuman, 2003; Ye et al., 2004, 2005; Meyer et al., 2007] have based our approach to multimodel hydrologic ranking and inference on the Bayesian criterion KIC (which reduces asymptotically to BIC), Poeter and Anderson [2005] and Poeter and Hill [2007] have voiced a preference for the information-theoretic criterion AICc (which reduces asymptotically to AIC). Their preference stems in part from a perception that KIC and BIC require a "true" or "quasi-true" model to be in the set of alternatives while AIC and AICc are free of such an unreasonable requirement. We examine the model selection literature to find that (a) all published rigorous derivations of AIC and AICc require that the (true) model having generated the observational data be in the set of candidate models; (b) though BIC and KIC were originally derived by assuming that such a model is in the set, BIC has been rederived by Cavanaugh and Neath [1999] without the need for such an assumption; (c) KIC reduces to BIC as the number of observations becomes large relative to the number of adjustable model parameters, implying that it likewise does not require the existence of a true model in the set of alternatives; (d) if a true
Model selection for the extraction of movement primitives.
Endres, Dominik M; Chiovetto, Enrico; Giese, Martin A
2013-01-01
A wide range of blind source separation methods have been used in motor control research for the extraction of movement primitives from EMG and kinematic data. Popular examples are principal component analysis (PCA), independent component analysis (ICA), anechoic demixing, and the time-varying synergy model (d'Avella and Tresch, 2002). However, choosing the parameters of these models, or indeed choosing the type of model, is often done in a heuristic fashion, driven by result expectations as much as by the data. We propose an objective criterion which allows selection of the model type, the number of primitives and the temporal smoothness prior. Our approach is based on a Laplace approximation to the posterior distribution of the parameters of a given blind source separation model, re-formulated as a Bayesian generative model. We first validate our criterion on ground truth data, showing that it performs at least as well as traditional model selection criteria [Bayesian information criterion, BIC (Schwarz, 1978) and the Akaike Information Criterion (AIC) (Akaike, 1974)]. Then, we analyze human gait data, finding that an anechoic mixture model with a temporal smoothness constraint on the sources can best account for the data.
A new class of indicators for the model selection of scaling laws in nuclear fusion
Lupelli, I; Gaudio, P; Gelfusa, M; Mazon, D; Vega, J
2013-01-01
The development of computationally efficient model selection strategies represents an important problem in the analysis of nuclear fusion experimental data, in particular in the field of scaling laws for the extrapolation to future machines, and in image processing. In this paper, a new model selection indicator, named the Model Falsification Criterion (MFC), is presented and applied to the problem of choosing the most generalizable scaling laws for the power threshold to access the H-mode of confinement in tokamaks. The proposed indicator is based on the properties of the model residuals, their entropy and an implementation of the data falsification principle. The model selection ability of the proposed criterion is demonstrated in comparison with the most widely used frequentist (Akaike information criterion) and Bayesian (Bayesian information criterion) indicators.
Mendoza, Héctor; Carmona, Laura; Assunção, Patricia; Freijanes, Karen; de la Jara, Adelina; Portillo, Eduardo; Torres, Alicia
2015-12-01
The lipid extractability of 14 microalgae species and strains was assessed using organic solvents (methanol and chloroform). The high variability detected indicated the potential for applying this parameter as an additional criterion for microalgae screening in industrial processes such as biofuel production from microalgae. Species without cell walls presented higher extractability than species with cell walls. Analysis of cell integrity by flow cytometry and staining with propidium iodide showed a significant correlation between higher resistance to the physical treatments of cell rupture by sonication and the lipid extractability of the microalgae. The results highlight the cell wall as a determining factor in the inter- and intraspecific variability in lipid extraction treatments. Copyright © 2015. Published by Elsevier Ltd.
Bosin, V Iu; Murvanidze, D D; Sturua, D G; Nabokov, A K; Soloshenko, V N
1989-01-01
The anatomic parameters of the kidneys and the rate of glomerular filtration were measured in 77 children with unilateral hydronephrosis and in 27 children with nonobstructive diseases of the urinary tract, according to the clearance of an opaque medium during excretory urography. Alterations in the anatomic parameters of the kidneys in obstructive disease did not reflect the severity of the functional disorders. It was established that a separate assessment of the filtration function of the hydronephrotic and contralateral kidneys is possible. A new diagnostic criterion is offered, namely an index of relative clearance, which enables one to measure the degree of compensatory phenomena in the preserved glomeruli and the extent of the sclerotic process. It was demonstrated that accurate measurement of the functional parameters of the affected kidney should underlie the treatment choice in children with unilateral hydronephrosis.
Lea, J.; Mair, D.; Rea, B.; Nick, F.; Schofield, E.
2012-04-01
The ability to successfully model the behaviour of Greenland tidewater glaciers is pivotal to understanding the controls on their dynamics and potential impact on global sea level. However, to have confidence in the results of numerical models in this setting, the evidence required for robust verification must extend well beyond the existing instrumental record. Perhaps uniquely for a major Greenland outlet glacier, both the advance and retreat dynamics of Kangiata Nunata Sermia (KNS), Nuuk Fjord, SW Greenland over the last ~1000 years can be reasonably constrained through a combination of geomorphological, sedimentological and archaeological evidence. It is therefore an ideal location to test the ability of the latest generation of calving criterion based tidewater models to explain millennial scale dynamics. This poster presents geomorphological evidence recording the post-Little Ice Age maximum dynamics of KNS, derived from high-resolution satellite imagery. This includes evidence of annual retreat moraine complexes suggesting controlled rather than catastrophic retreat between pinning points, in addition to a series of ice dammed lake shorelines, allowing detailed interpretation of the dynamics of the glacier as it thinned and retreated. Pending ground truthing, this evidence will contribute towards the calibration of results obtained from a calving criterion numerical model (Nick et al, 2010), driven by an air temperature reconstruction for the KNS region determined from ice core data.
DEFF Research Database (Denmark)
Thordarson, Fannar Ørn; Breinholt, Anders; Møller, Jan Kloppenborg
2012-01-01
The model equations include a drift term and a diffusion term, accounting for the deterministic and stochastic parts of the models, respectively. Furthermore, a distinction is made between the process noise and the observation noise. We compare the predictive performance of five model candidates that differ solely with respect to the description of the diffusion term, up to a 4 h prediction horizon, adopting the prediction performance measures reliability, sharpness and skill score to pinpoint the preferred model. The prediction performance of a model is reliable if the observed coverage of the prediction intervals corresponds to the nominal coverage of the prediction intervals, i.e. the bias between these coverages should ideally be zero. The sharpness is a measure of the distance between the lower and upper prediction limits, and the skill score criterion makes it possible to pinpoint the preferred model by taking into account both reliability and sharpness.
Post-model selection inference and model averaging
Directory of Open Access Journals (Sweden)
Georges Nguefack-Tsague
2011-07-01
Full Text Available Although model selection is routinely used in practice nowadays, little is known about its precise effects on any subsequent inference that is carried out. The same goes for the effects induced by the closely related technique of model averaging. This paper is concerned with the use of the same data first to select a model and then to carry out inference, in particular point estimation and point prediction. The properties of the resulting estimator, called a post-model-selection estimator (PMSE), are hard to derive. Using selection criteria such as hypothesis testing, AIC, BIC, HQ and Cp, we illustrate that, in terms of risk function, no single PMSE dominates the others. The same conclusion holds more generally for any penalised likelihood information criterion. We also compare various model averaging schemes and show that no single one dominates the others in terms of risk function. Since PMSEs can be regarded as a special case of model averaging, with 0-1 random weights, we propose a connection between the two theories, in the frequentist approach, by taking account of the selection procedure when performing model averaging. We illustrate the point by simulating a simple linear regression model.
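A post-model-selection estimator of the kind discussed above can be simulated directly. The sketch below is an assumed toy setup (a weak slope, AIC-based selection between a null and a full linear model), not the authors' exact simulation:

```python
import numpy as np

rng = np.random.default_rng(1)
n, b1, reps = 50, 0.25, 2000        # weak true slope: the regime where selection bites
x = rng.normal(size=n)
X = np.column_stack([np.ones(n), x])

est_full, est_pms = [], []
for _ in range(reps):
    y = b1 * x + rng.normal(size=n)
    beta = np.linalg.lstsq(X, y, rcond=None)[0]     # OLS for y = b0 + b1*x + e
    rss_full = float(np.sum((y - X @ beta) ** 2))
    rss_null = float(np.sum((y - y.mean()) ** 2))
    aic_full = n * np.log(rss_full / n) + 2 * 2     # AIC = n*log(RSS/n) + 2k
    aic_null = n * np.log(rss_null / n) + 2 * 1
    est_full.append(beta[1])
    # the PMSE reports slope 0 whenever the null model is selected
    est_pms.append(beta[1] if aic_full < aic_null else 0.0)

risk_full = float(np.mean((np.array(est_full) - b1) ** 2))
risk_pms = float(np.mean((np.array(est_pms) - b1) ** 2))
```

With a weak true slope the selection step sometimes zeroes out the estimate, which is exactly what makes the PMSE's risk function behave differently from the full-model estimator's.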
Efficiency of model selection criteria in flood frequency analysis
Calenda, G.; Volpi, E.
2009-04-01
The estimation of high flood quantiles requires the extrapolation of the probability distributions far beyond the usual sample length, involving high estimation uncertainties. The choice of the probability law, traditionally based on hypothesis testing, is critical on this point. In this study the efficiency of different model selection criteria, seldom applied in flood frequency analysis, is investigated. The efficiency of each criterion in identifying the probability distribution of the hydrological extremes is evaluated by numerical simulations for different parent distributions, coefficients of variation and skewness, and sample sizes. The compared model selection procedures are the Akaike Information Criterion (AIC), the Bayesian Information Criterion (BIC), the Anderson-Darling Criterion (ADC) recently discussed by Di Baldassarre et al. (2008), and the Sample Quantile Criterion (SQC) recently proposed by the authors (Calenda et al., 2009). The SQC is based on the principle of maximising the probability density of the elements of the sample that are considered relevant to the problem, and takes into account both the accuracy and the uncertainty of the estimate. Since the stress is mainly on extreme events, the SQC involves upper-tail probabilities, where the effect of the model assumption is more critical. The proposed index is equal to the sum of the logarithms of the inverse of the sample probability density of the observed quantiles. The definition of this index is based on the principle that the more centred the sample value is with respect to its density distribution (accuracy of the estimate) and the less spread this distribution is (uncertainty of the estimate), the greater the probability density of the sample quantile. Thus, lower values of the index indicate a better performance of the distribution law. This criterion can operate the selection of the optimum distribution among competing probability models that are estimated using different samples.
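An SQC-style index (the sum of logarithms of the inverse sample density over the upper-tail quantiles; lower is better) can be sketched as follows. This is a simplified illustration on synthetic maxima with crude moment-based parameter fits, not the authors' estimation procedure:

```python
import numpy as np

def gumbel_pdf(x, mu, beta):
    z = (x - mu) / beta
    return np.exp(-(z + np.exp(-z))) / beta

def normal_pdf(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

rng = np.random.default_rng(2)
sample = rng.gumbel(loc=10.0, scale=2.0, size=2000)   # synthetic annual maxima
upper = np.sort(sample)[-200:]                        # the upper-tail quantiles SQC stresses

# Moment-based parameter fits (an assumption, not the authors' method)
beta_g = np.sqrt(6.0) * np.std(sample) / np.pi
mu_g = np.mean(sample) - 0.5772 * beta_g

# Index = sum of log(1 / f(x_i)) over the relevant quantiles; lower is better
idx_gumbel = float(np.sum(np.log(1.0 / gumbel_pdf(upper, mu_g, beta_g))))
idx_normal = float(np.sum(np.log(1.0 / normal_pdf(upper, np.mean(sample), np.std(sample)))))
```

On Gumbel-distributed maxima the normal law underweights the upper tail, so its index should come out larger (worse) than the Gumbel one.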
Improving randomness characterization through Bayesian model selection
R., Rafael Díaz-H; Martínez, Alí M Angulo; U'Ren, Alfred B; Hirsch, Jorge G; Marsili, Matteo; Castillo, Isaac Pérez
2016-01-01
Nowadays random number generation plays an essential role in technology, with important applications in areas ranging from cryptography, which lies at the core of current communication protocols, to Monte Carlo methods and other probabilistic algorithms. In this context, a crucial scientific endeavour is to develop effective methods that allow the characterization of random number generators. However, commonly employed methods either lack formality (e.g. the NIST test suite) or are inapplicable in principle (e.g. the characterization derived from the Algorithmic Theory of Information, ATI). In this letter we present a novel method based on Bayesian model selection, which is both rigorous and effective, for characterizing randomness in a bit sequence. We derive analytic expressions for a model's likelihood, which are then used to compute its posterior probability distribution. Our method proves to be more rigorous than NIST's suite and the Borel-Normality criterion, and its implementation is straightforward. We…
Selection Criteria in Regime Switching Conditional Volatility Models
Directory of Open Access Journals (Sweden)
Thomas Chuffart
2015-05-01
Full Text Available A large number of nonlinear conditional heteroskedastic models have been proposed in the literature. Model selection is crucial to any statistical data analysis. In this article, we investigate whether the most commonly used selection criteria lead to the choice of the right specification in a regime switching framework. We focus on two types of models: the Logistic Smooth Transition GARCH and the Markov-Switching GARCH models. Simulation experiments reveal that information criteria and loss functions can lead to misspecification; BIC sometimes indicates the wrong regime switching framework. Depending on the Data Generating Process used in the experiments, great care is needed when choosing a criterion.
Complexity regularized hydrological model selection
Pande, S.; Arkesteijn, L.; Bastidas, L.A.
2014-01-01
This paper uses a recently proposed measure of hydrological model complexity in a model selection exercise. It demonstrates that a robust hydrological model is selected by penalizing model complexity while maximizing a model performance measure. This especially holds when limited data is available.
Individual Influence on Model Selection
Sterba, Sonya K.; Pek, Jolynn
2012-01-01
Researchers in psychology are increasingly using model selection strategies to decide among competing models, rather than evaluating the fit of a given model in isolation. However, such interest in model selection outpaces an awareness that one or a few cases can have disproportionate impact on the model ranking. Though case influence on the fit…
The Optimal Selection for Restricted Linear Models with Average Estimator
Directory of Open Access Journals (Sweden)
Qichang Xie
2014-01-01
Full Text Available The essential task of risk investment is to select an optimal tracking portfolio among various portfolios. Statistically, this process can be achieved by choosing an optimal restricted linear model. This paper develops a statistical procedure to do this, based on selecting appropriate weights for averaging approximately restricted models. The method of weighted average least squares is adopted to estimate the approximately restricted models under a dependent error setting. The optimal weights are selected by minimizing a k-class generalized information criterion (k-GIC), which is an estimate of the average squared error from the model average fit. This model selection procedure is shown to be asymptotically optimal in the sense of obtaining the lowest possible average squared error. Monte Carlo simulations illustrate that the suggested method has comparable efficiency to some alternative model selection techniques.
A new class of indicators for the model selection of scaling laws in nuclear fusion
Energy Technology Data Exchange (ETDEWEB)
Lupelli, I., E-mail: Ivan.Lupelli@ccfe.ac.uk [Associazione EURATOM-ENEA – University of Rome “Tor Vergata”, Roma (Italy); EURATOM/CCFE Fusion Association, Culham Science Centre, Abingdon OX14 3DB (United Kingdom); Murari, A. [Consorzio RFX-Associazione EURATOM ENEA per la Fusione, I-35127 Padova (Italy); Gaudio, P.; Gelfusa, M. [Associazione EURATOM-ENEA – University of Rome “Tor Vergata”, Roma (Italy); Mazon, D. [Association EURATOM-CEA, CEA Cadarache DSM/IRFM, 13108 Saint-Paul-lez-Durance (France); Vega, J. [Asociación EURATOM-CIEMAT para Fusión, CIEMAT, Madrid (Spain)
2013-10-15
Highlights: • A new model selection indicator, based on the Model Falsification Criterion, has been applied to the problem of choosing the scaling laws for power threshold scaling to access the H-mode in tokamaks. • The indicator has at least the same selection power as the classic indicators for databases of low dimensionality. • For the high-dimensionality dataset the indicator outperforms the traditional criteria. • The indicator preserves its advantages up to a noise level of 20% of the signal. -- Abstract: The development of computationally efficient model selection strategies represents an important problem in the analysis of nuclear fusion experimental data, in particular in the field of scaling laws for the extrapolation to future machines, and in image processing. In this paper, a new model selection indicator, named the Model Falsification Criterion (MFC), is presented and applied to the problem of choosing the most generalizable scaling laws for the power threshold (P_Thresh) to access the H-mode of confinement in tokamaks. The proposed indicator is based on the properties of the model residuals, their entropy and an implementation of the data falsification principle. The model selection ability of the proposed criterion is demonstrated in comparison with the most widely used frequentist (Akaike information criterion) and Bayesian (Bayesian information criterion) indicators.
Servo Motor Selection Criterion for Mechatronic Applications
Institute of Scientific and Technical Information of China (English)
王彤
2001-01-01
The selection criterion presented in this paper separates the motor characteristics from the load characteristics, and its graphical representation facilitates the feasibility check of a given drive and the comparison between different systems. In addition, it yields the range of possible transmission ratios.
DEFF Research Database (Denmark)
Kock, Anders Bredahl
2015-01-01
…selecting the tuning parameter by the Bayesian Information Criterion (BIC) results in consistent model selection. However, it is also shown that the adaptive Lasso has no power against shrinking alternatives of the form c/T if it is tuned to perform consistent model selection. We show that if the adaptive Lasso is tuned…
Pietrabissa, Antonio
2011-12-01
The admission control problem can be modelled as a Markov decision process (MDP) under the average cost criterion and formulated as a linear programming (LP) problem. The LP formulation is attractive in present and future communication networks, which support an increasing number of classes of service, since it can be used to explicitly control class-level requirements, such as class blocking probabilities. On the other hand, the LP formulation suffers from scalability problems as the number C of classes increases. This article proposes a new LP formulation which, although it introduces no approximation, is much more scalable: the problem size reduction with respect to the standard LP formulation is O((C + 1)^2/2^C). Theoretical and numerical simulation results prove the effectiveness of the proposed approach.
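The LP formulation of an average-cost MDP can be made concrete on a tiny example. The sketch below solves the standard occupation-measure LP for a hypothetical 2-state, 2-action chain (variables x(s, a) are stationary state-action frequencies); it illustrates the standard formulation only, not the paper's reduced one:

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical 2-state, 2-action MDP: action 0 stays, action 1 switches state.
# Staying in state 1 is free, so the optimal long-run average cost is 0.
costs = np.array([[2.0, 1.0],   # state 0: stay costs 2, switch costs 1
                  [0.0, 3.0]])  # state 1: stay costs 0, switch costs 3

# Variable x[2*s + a] = long-run frequency of taking action a in state s.
c = costs.flatten()
# Flow balance at state 0 reduces to x01 - x11 = 0 (state 1's balance is the
# redundant mirror constraint); frequencies must also sum to one.
A_eq = [[0.0, 1.0, 0.0, -1.0],   # x01 - x11 = 0
        [1.0, 1.0, 1.0, 1.0]]    # sum of frequencies = 1
b_eq = [0.0, 1.0]
res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=(0, None))
```

The optimum puts all mass on (state 1, stay), i.e. the policy "switch into state 1, then stay", with average cost 0.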
An Introduction to Model Selection: Tools and Algorithms
Directory of Open Access Journals (Sweden)
Sébastien Hélie
2006-03-01
Full Text Available Model selection is a complicated matter in science, and psychology is no exception. In particular, the high variance in the object of study (i.e., humans) prevents the use of Popper's falsification principle (which is the norm in other sciences). Therefore, the desirability of quantitative psychological models must be assessed by measuring the capacity of the model to fit empirical data. In the present paper, an error measure (likelihood), as well as five methods to compare model fits (the likelihood ratio test, Akaike's information criterion, the Bayesian information criterion, bootstrapping and cross-validation), are presented. The use of each method is illustrated by an example, and the advantages and weaknesses of each method are also discussed.
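Three of the tools listed above (the likelihood ratio test, AIC and BIC) can be sketched for a pair of nested Gaussian linear models. The setup below is a hypothetical illustration, not taken from the paper:

```python
import math
import numpy as np

rng = np.random.default_rng(3)
n = 80
x = rng.uniform(-1, 1, n)
y = 1.0 + 2.0 * x + rng.normal(0, 0.5, n)   # the slope really is nonzero here

def rss(X, y):
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    return float(np.sum((y - X @ beta) ** 2))

rss0 = rss(np.ones((n, 1)), y)                      # null: intercept only
rss1 = rss(np.column_stack([np.ones(n), x]), y)     # alternative: intercept + slope

# Likelihood-ratio statistic for nested Gaussian linear models (1 df here);
# the chi-square(1) survival function is erfc(sqrt(stat/2)).
lr = n * math.log(rss0 / rss1)
p_value = math.erfc(math.sqrt(lr / 2))

aic0 = n * math.log(rss0 / n) + 2 * 1
aic1 = n * math.log(rss1 / n) + 2 * 2
bic0 = n * math.log(rss0 / n) + 1 * math.log(n)
bic1 = n * math.log(rss1 / n) + 2 * math.log(n)
```

All three tools agree here: the LR test rejects the null, and both AIC and BIC prefer the two-parameter model.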
DEFF Research Database (Denmark)
Christensen, Jesper Bjerg; Blaabjerg, Karoline; Poulsen, Hanne Damgaard
The hypothesis is that cereal proteases in liquid feed degrade and convert water-insoluble storage protein into water-soluble protein, which may improve the digestibility of protein in pigs compared with dry feeding. Protein utilization is increased by matching the amino acid (AA) content… Protein concentration was analysed in the supernatant after centrifugation. After 15 min, approximately 16% of the total protein was soluble, and up to 8 hours an increase of 5 percentage points was observed. However, from 8 to 48 hours it increased by 10 percentage points for some cultivars. Based on these analyses, cultivars were selected…
Regularity criterion to some liquid crystal models and the Landau-Lifshitz equations in R3
Institute of Scientific and Technical Information of China (English)
FAN JiShan; GUO BoLing
2008-01-01
We consider the regularity problem under the critical condition for some liquid crystal models and the Landau-Lifshitz equations. Serrin-type regularity criteria are obtained in terms of Besov spaces.
Haddag, Badis; ABED-MERAIM, Farid; BALAN, Tudor
2007-01-01
The aim of this work is to study strain localization during the plastic deformation of sheet metals. This phenomenon is a precursor to the fracture of deep-drawn parts, so its prediction using advanced behavior models is important in order to obtain safe final parts. Most often, an accurate prediction of localization during the forming process requires damage to be included in the simulation. For this purpose, an advanced, anisotropic elastoplastic model, combining isotropic and kinematic hardening…
Sparse Modeling of Landmark and Texture Variability using the Orthomax Criterion
DEFF Research Database (Denmark)
Stegmann, Mikkel Bille; Sjöstrand, Karl; Larsen, Rasmus
2006-01-01
In the past decade, statistical shape modeling has been widely popularized in the medical image analysis community. Predominantly, principal component analysis (PCA) has been employed to model biological shape variability. Here, a reparameterization with orthogonal basis vectors is obtained… and disease characterization. This paper explores the orthomax class of statistical methods for transforming variable loadings into a 'simple structure', which is more easily interpreted by favoring sparsity. Further, we introduce these transformations into a particular framework…
McCarthy, Julie M; Van Iddekinge, Chad H; Lievens, Filip; Kung, Mei-Chuan; Sinar, Evan F; Campion, Michael A
2013-09-01
Considerable evidence suggests that how candidates react to selection procedures can affect their test performance and their attitudes toward the hiring organization (e.g., recommending the firm to others). However, very few studies of candidate reactions have examined one of the outcomes organizations care most about: job performance. We attempt to address this gap by developing and testing a conceptual framework that delineates whether and how candidate reactions might influence job performance. We accomplish this objective using data from 4 studies (total N = 6,480), 6 selection procedures (personality tests, job knowledge tests, cognitive ability tests, work samples, situational judgment tests, and a selection inventory), 5 key candidate reactions (anxiety, motivation, belief in tests, self-efficacy, and procedural justice), 2 contexts (industry and education), 3 continents (North America, South America, and Europe), 2 study designs (predictive and concurrent), and 4 occupational areas (medical, sales, customer service, and technological). Consistent with previous research, candidate reactions were related to test scores, and test scores were related to job performance. Further, there was some evidence that reactions affected performance indirectly through their influence on test scores. Finally, in no cases did candidate reactions affect the prediction of job performance by increasing or decreasing the criterion-related validity of test scores. Implications of these findings and avenues for future research are discussed.
Directory of Open Access Journals (Sweden)
RODRIGUES G.S.
1996-01-01
Full Text Available Climatic similarity has been the primary parameter considered in the selection of sites for the collection and release of natural enemies in classical biological control programs. However, acknowledging the relevance of the composition of biological communities can be essential for improving the record of successful biocontrol projects, in relation to the proper selection of collection sites. We present in this paper an analysis of the plant and mite assemblages in cassava fields of northeastern Brazil. Such analysis is suggested as an additional criterion for the selection of collection sites of mite predators of the cassava green mite, Mononychellus tanajoa (Bondar), in an international biological control program. Contingency tables were built using Dice's index as an indicator of significant associations between pairs of species. This analysis enabled the identification of plant and mite species typically found together, indicating interspecific interactions or similar ecological requirements. Finally, a cluster analysis was used to group sites containing similar assemblages. These sites exhibit comparable chances of harboring a given species. Applied at the species-group level, the analysis may assist in better defining sites for the collection of natural enemies to be released in a given region, improving the chances of establishment.
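Dice's index, used above to flag associated species pairs, is simple to compute from co-occurrence data. A minimal sketch with hypothetical site-occurrence lists:

```python
def dice_index(a, b):
    """Dice's coefficient between two species' sets of occurrence sites:
    2|A ∩ B| / (|A| + |B|), ranging from 0 (never together) to 1 (always)."""
    a, b = set(a), set(b)
    if not a and not b:
        return 0.0
    return 2 * len(a & b) / (len(a) + len(b))

# Hypothetical occurrence lists for two mite species across survey sites
sites_sp1 = ["s1", "s2", "s3", "s5"]
sites_sp2 = ["s2", "s3", "s4"]
d = dice_index(sites_sp1, sites_sp2)   # 2*2/(4+3) = 4/7
```

Pairs whose index exceeds a significance threshold in the contingency-table analysis are the ones read as "typically found together".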
Energy Technology Data Exchange (ETDEWEB)
Yamada, Minoru [Faculty of Electrical and Computer Engineering, Institute of Science and Engineering Kanazawa University, Kakuma-machi, Kanazawa 920-1192 (Japan); Fares, Hesham, E-mail: fares_fares4@yahoo.com [Faculty of Electrical and Computer Engineering, Institute of Science and Engineering Kanazawa University, Kakuma-machi, Kanazawa 920-1192 (Japan); Department of Physics, Faculty of Science, Assiut University, Assiut 71516 (Egypt)
2013-05-01
A generalized theoretical analysis of the amplification mechanism in the planar-type Cherenkov laser is given. An electron is represented as a material wave having temporally and spatially varying phases with a finite spreading length. The interaction between the electrons and the electromagnetic (EM) wave is analyzed by taking the quantum statistical properties into account. The interaction mechanism is classified into the Velocity and Density Modulation (VDM) model and the Energy Level Transition (ELT) model, based on the relation between the wavelength of the EM wave and the electron spreading length. The VDM model is applicable when the wavelength of the EM wave is longer than the electron spreading length, as in the microwave region. The dynamic equation of the electron, which is popularly used in classical Newtonian mechanics, has been derived from the quantum mechanical Schrödinger equation. The amplification of the EM wave can be explained by the bunching effect of the electron density in the electron beam. The amplification gain and its dispersion relation with respect to the electron velocity are given in this paper. On the other hand, the ELT model is applicable when the wavelength of the EM wave is shorter than the electron spreading length, as in the optical region. The dynamics of the electron are explained as being caused by electron transitions between different energy levels. The amplification gain and its dispersion relation with respect to the electron acceleration voltage were derived on the basis of the quantum mechanical density matrix.
Multi-criterion model ensemble of CMIP5 surface air temperature over China
Yang, Tiantian; Tao, Yumeng; Li, Jingjing; Zhu, Qian; Su, Lu; He, Xiaojia; Zhang, Xiaoming
2017-05-01
The global circulation models (GCMs) are useful tools for simulating climate change, projecting future temperature changes, and therefore supporting the preparation of national climate adaptation plans. However, different GCMs are not always in agreement with each other over various regions, because GCMs' configurations, module characteristics, and dynamic forcings vary from one model to another. Model ensemble techniques are extensively used to post-process the outputs from GCMs and improve the variability of model outputs. Root-mean-square error (RMSE), the correlation coefficient (CC, or R) and uncertainty are commonly used statistics for evaluating the performance of GCMs. However, the simultaneous achievement of satisfactory values of all statistics cannot be guaranteed by many model ensemble techniques. In this paper, we propose a multi-model ensemble framework, using a state-of-the-art evolutionary multi-objective optimization algorithm (termed MOSPD), to evaluate different characteristics of ensemble candidates and to provide comprehensive trade-off information for different model ensemble solutions. A case study optimizing the surface air temperature (SAT) ensemble solutions over different geographical regions of China is carried out. The data cover the period from 1900 to 2100, and the projections of SAT are analyzed with regard to three statistical indices (RMSE, CC, and uncertainty). Among the derived ensemble solutions, the trade-off information is further analyzed with a robust Pareto front with respect to the different statistics. The comparison over the historical period (1900-2005) shows that the optimized solutions are superior to those obtained by simple model averaging, as well as to any single GCM output. The improvements in the statistics vary across the climatic regions of China. Future projection (2006-2100) with the proposed ensemble method identifies that the largest (smallest) temperature changes will happen in the…
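The evaluation statistics named above (RMSE and CC) and a candidate ensemble weighting can be sketched on synthetic series. The data and weights below are assumptions for illustration only, not CMIP5 output or the MOSPD algorithm:

```python
import numpy as np

def rmse(sim, obs):
    return float(np.sqrt(np.mean((sim - obs) ** 2)))

def corr(sim, obs):
    return float(np.corrcoef(sim, obs)[0, 1])

rng = np.random.default_rng(4)
obs = np.sin(np.linspace(0, 6, 120)) + 15.0          # stand-in "observed" temperature
gcm1 = obs + rng.normal(0, 0.3, obs.size)            # hypothetical GCM: unbiased, low noise
gcm2 = obs + 0.8 + rng.normal(0, 0.6, obs.size)      # hypothetical GCM: biased, noisier
ens = 0.7 * gcm1 + 0.3 * gcm2                        # one candidate weighting

scores = {name: (rmse(sim, obs), corr(sim, obs))
          for name, sim in [("gcm1", gcm1), ("gcm2", gcm2), ("ens", ens)]}
```

A multi-objective search over the weights would then trade RMSE against CC (and uncertainty) across candidate weightings rather than fixing them by hand.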
Sahoo, Debasis; Deck, Caroline; Yoganandan, Narayan; Willinger, Rémy
2016-04-01
The objective of this study was to enhance an existing finite element (FE) head model with composite modeling and a new constitutive law for the skull. The response of the state-of-the-art FE head model was validated in the time domain using data from 15 temporo-parietal impact experiments conducted with postmortem human surrogates. The new model predicted the skull fractures observed in these tests. Further, 70 well-documented head trauma cases were reconstructed. The 15 experiments and 70 real-world head trauma cases were combined to derive skull fracture injury risk curves. The skull internal energy was found to be the best candidate for predicting skull failure, based on an in-depth statistical analysis of different mechanical parameters (force, skull internal energy), a head kinematics-based parameter (the head injury criterion, HIC), and the skull fracture correlate (SFC). The proposed tolerance limit for a 50% risk of skull fracture was associated with 453 mJ of internal energy. Statistical analyses were extended to individual impact locations (frontal, occipital and temporo-parietal) and separate injury risk curves were obtained. The 50% risk of skull fracture for each location was: frontal, 481 mJ; occipital, 457 mJ; temporo-parietal, 456 mJ of skull internal energy.
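An injury risk curve of the kind derived here can be sketched as a logistic function of skull internal energy. The coefficients below are hypothetical, chosen only so that the 50% risk point matches the reported 453 mJ:

```python
import math

def fracture_risk(energy_mj, a, b):
    """Logistic injury-risk curve: P(fracture) as a function of internal energy (mJ)."""
    return 1.0 / (1.0 + math.exp(-(a + b * energy_mj)))

# Hypothetical slope; intercept chosen so that risk = 50% at exactly 453 mJ
b = 0.01
a = -b * 453.0
r50 = fracture_risk(453.0, a, b)
```

By construction the curve crosses 50% at the reported tolerance limit, with risk rising above it for higher energies and falling below it for lower ones.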
Applying a Hybrid MCDM Model for Six Sigma Project Selection
Directory of Open Access Journals (Sweden)
Fu-Kwun Wang
2014-01-01
Full Text Available Six Sigma is a project-driven methodology; the projects that provide the maximum financial benefits and other impacts to the organization must be prioritized. Project selection (PS) is a type of multiple criteria decision making (MCDM) problem. In this study, we present a hybrid MCDM model combining the decision-making trial and evaluation laboratory (DEMATEL) technique, the analytic network process (ANP), and the VIKOR method to evaluate and improve Six Sigma projects, reducing performance gaps in each criterion and dimension. We consider the film printing industry of Taiwan as an empirical case. The results show that our model not only selects the best project, but can also be used to analyze the gaps between existing performance values and aspiration levels in each dimension and criterion, based on the influential network relation map.
Schmidt-Eisenlohr, F.; Puñal, O.; Klagges, K.; Kirsche, M.
Apart from the general issue of modeling the channel, the PHY and the MAC of wireless networks, there are specific modeling assumptions that are considered for different systems. In this chapter we consider three specific wireless standards and highlight modeling options for them: IEEE 802.11 (as an example of wireless local area networks), IEEE 802.16 (as an example of wireless metropolitan area networks) and IEEE 802.15 (as an example of body area networks). Each section on these three systems also concludes with a set of model implementations that are available today.
Study on criterion for selecting Acacia confusa superior trees
Institute of Scientific and Technical Information of China (English)
肖泽鑫; 柳泽鑫; 邹桂逢; 彭剑华; 罗超; 陈翠蓉; 詹潮安
2015-01-01
Based on quantity indexes (DBH, tree height and individual volume), in combination with quality indexes (under-branch height, stem straightness, tapering grade, canopy density and branch size), a selection criterion for Acacia confusa superior trees was established using the five-dominant-trees comparison method. Investigations on A. confusa plus-tree selection were carried out using the designed criteria at nine test sites in Guangdong province, such as Shantou Xihuan Mountain, Chaozhou Gubi Mountain and Xianchun Village, and Huizhou Xihu Park Baota Mountain. In total, 43 candidate trees and 215 comparison trees were filtered out for statistical analysis. According to the results, a set of technical standards for A. confusa superior-tree selection in Guangdong province was summarized as follows: the selected tree's height should be greater than the mean height of the 5 dominant trees; its DBH should be greater than or equal to 1.17 times the mean DBH of the 5 dominant trees; its single-tree timber volume should be greater than or equal to 1.52 times the mean volume of the 5 dominant trees; and the tree form quality index 0.33A (tapering grade score) + 0.32B (stem straightness score) + 0.35C (canopy density score) should be greater than 2. Finally, nineteen superior trees were selected from the 43 candidates according to this criterion, a selection ratio of 44.2%.
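The form-quality threshold stated in this abstract is a simple weighted sum, sketched below (A, B and C are the tapering, straightness and canopy-density grades; the example scores are hypothetical):

```python
def tree_form_quality_index(tapering, straightness, canopy):
    """Weighted form-quality index from the abstract: 0.33A + 0.32B + 0.35C.
    A candidate qualifies on form only if the index exceeds 2."""
    return 0.33 * tapering + 0.32 * straightness + 0.35 * canopy

q_borderline = tree_form_quality_index(2, 2, 2)       # sits right at the threshold of 2
qualifies = tree_form_quality_index(3, 2, 3) > 2      # a stronger candidate clears it
```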
Latent Class Analysis of Incomplete Data via an Entropy-Based Criterion.
Larose, Chantal; Harel, Ofer; Kordas, Katarzyna; Dey, Dipak K
2016-09-01
Latent class analysis is used to group categorical data into classes via a probability model. Model selection criteria then judge how well the model fits the data. When addressing incomplete data, the current methodology restricts the imputation to a single, pre-specified number of classes. We seek to develop an entropy-based model selection criterion that does not restrict the imputation to one number of clusters. Simulations show the new criterion performing well against the current standards of AIC and BIC, while a family studies application demonstrates how the criterion provides more detailed and useful results than AIC and BIC.
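The abstract does not spell out the authors' exact criterion, but entropy-based criteria for latent class models are generally built on the entropy of the posterior class-membership probabilities: crisper partitions have lower entropy. A minimal numpy sketch of that generic quantity (not the paper's criterion):

```python
import numpy as np

def classification_entropy(post):
    """Entropy of posterior class-membership probabilities.

    post: (n_observations, n_classes) array whose rows sum to 1.
    Lower entropy indicates a cleaner partition into classes.
    """
    p = np.clip(post, 1e-12, 1.0)          # avoid log(0)
    return -np.sum(p * np.log(p))

# Two 3-class solutions for the same 4 observations:
crisp = np.array([[0.98, 0.01, 0.01]] * 4)   # near-certain assignments
fuzzy = np.full((4, 3), 1 / 3)               # maximally ambiguous
print(classification_entropy(crisp) < classification_entropy(fuzzy))  # True
```

The maximally ambiguous solution attains the upper bound n·log(K), here 4·log 3.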
Grellety, Thomas; Cousin, Sophie; Letinier, Louis; Bosco-Lévy, Pauline; Hoppe, Stéphanie; Joly, Damien; Penel, Nicolas; Mathoulin-Pelissier, Simone; Italiano, Antoine
2016-10-04
Optimizing patient selection is a necessary step in designing better clinical trials. 'Life expectancy' is a frequent inclusion criterion in phase II trial protocols, a measure that is subjective and often difficult to estimate. The aim of this study was to identify factors associated with early death in patients included in phase II studies. We retrospectively collected medical records of patients with advanced solid tumors included in phase II trials in two French Comprehensive Cancer Centers (Bordeaux, Center 1 set; Lille, Center 2 set). We analyzed patients' baseline characteristics. Predictive factors associated with early death (mortality at 3 months) were identified by logistic regression. We built a model (PREDIT, PRognostic factor of Early Death In phase II Trials) based on prognostic factors isolated from the final multivariate model. The Center 1 and Center 2 sets included 303 and 227 patients, respectively. Patients from the two sets differed in tumor site (urological: 26% vs 15%; gastrointestinal: 18% vs 28%) and in lung metastasis incidence (10% vs 49%). Overall survival (OS) at 3 months was 88% (95% CI [83.5; 91.0], Center 1 set) and 91% (95% CI [86.7; 94.2], Center 2 set). Presence of a 'life expectancy' inclusion criterion did not improve the 3-month OS (HR 0.6, 95% CI [0.2; 1.2], p = 0.2325). Independent factors of early death were an ECOG score of 2 (OR 13.3, 95% CI [4.1; 43.4]), hyperleukocytosis (OR 5.5, 95% CI [1.9; 16.3]) and anemia (OR 2.8, 95% CI [1.1; 7.1]). The same predictive factors, with different association levels, were found in the Center 2 set. Using the Center 1 set, ROC analysis showed good discrimination in predicting early death (AUC: 0.89 at 3 months and 0.86 at 6 months). Risk modeling in two independent cancer populations based on simple clinical parameters showed that baseline ECOG of 2, hyperleukocytosis and anemia are strong early-death predictive factors. This model allows identifying patients who may
Launch vehicle selection model
Montoya, Alex J.
1990-01-01
Over the next 50 years, humans will be heading for the Moon and Mars to build scientific bases to gain further knowledge about the universe and to develop rewarding space activities. These large-scale projects will last many years and will require large amounts of mass to be delivered to Low Earth Orbit (LEO). It will take a great deal of planning to complete these missions in an efficient manner. The planning of a future Heavy Lift Launch Vehicle (HLLV) will significantly impact the overall multi-year launching cost for the vehicle fleet, depending upon when the HLLV will be ready for use. It is desirable to develop a model in which many trade studies can be performed. In one sample multi-year space program analysis, the total launch vehicle cost of implementing the program was reduced from 50 percent to 25 percent. This indicates how critical it is to reduce space logistics costs. A linear programming model has been developed to answer such questions. The model is now in its second phase of development, and this paper will address the capabilities of the model and its intended uses. The main emphasis over the past year was to make the model user friendly and to incorporate additional realistic constraints that are difficult to represent mathematically. We have developed a methodology in which the user has to be knowledgeable about the mission model and the requirements of the payloads. We have found a representation that will cut down the solution space of the problem by inserting some preliminary tests to eliminate some infeasible vehicle solutions. The paper will address the handling of these additional constraints and the methodology for incorporating new costing information utilizing learning curve theory. The paper will review several test cases that will explore the preferred vehicle characteristics and the preferred period of construction, i.e., within the next decade, or in the first decade of the next century. Finally, the paper will explore the interaction
Bayesian model evidence for order selection and correlation testing.
Johnston, Leigh A; Mareels, Iven M Y; Egan, Gary F
2011-01-01
Model selection is a critical component of data analysis procedures, and is particularly difficult for small numbers of observations such as is typical of functional MRI datasets. In this paper we derive two Bayesian evidence-based model selection procedures that exploit the existence of an analytic form for the linear Gaussian model class. Firstly, an evidence information criterion is proposed as a model order selection procedure for auto-regressive models, outperforming the commonly employed Akaike and Bayesian information criteria in simulated data. Secondly, an evidence-based method for testing change in linear correlation between datasets is proposed, which is demonstrated to outperform both the traditional statistical test of the null hypothesis of no correlation change and the likelihood ratio test.
Marchenko, Yulia V.
2012-03-01
Sample selection arises often in practice as a result of the partial observability of the outcome of interest in a study. In the presence of sample selection, the observed data do not represent a random sample from the population, even after controlling for explanatory variables. That is, data are missing not at random. Thus, standard analysis using only complete cases will lead to biased results. Heckman introduced a sample selection model to analyze such data and proposed a full maximum likelihood estimation method under the assumption of normality. The method was criticized in the literature because of its sensitivity to the normality assumption. In practice, data, such as income or expenditure data, often violate the normality assumption because of heavier tails. We first establish a new link between sample selection models and recently studied families of extended skew-elliptical distributions. This allows us to introduce a selection-t (SLt) model, which models the error distribution using a Student's t distribution. We study its properties and investigate the finite-sample performance of the maximum likelihood estimators for this model. We compare the performance of the SLt model to the conventional Heckman selection-normal (SLN) model and apply it to analyze ambulatory expenditures. Unlike the SLN model, our analysis using the SLt model provides statistical evidence for the existence of sample selection bias in these data. We also investigate the performance of the test for sample selection bias based on the SLt model and compare it with the performances of several tests used with the SLN model. Our findings indicate that the latter tests can be misleading in the presence of heavy-tailed data. © 2012 American Statistical Association.
Rank-based model selection for multiple ions quantum tomography
Guţă, Mădălin; Kypraios, Theodore; Dryden, Ian
2012-10-01
The statistical analysis of measurement data has become a key component of many quantum engineering experiments. As standard full state tomography becomes unfeasible for large dimensional quantum systems, one needs to exploit prior information and the ‘sparsity’ properties of the experimental state in order to reduce the dimensionality of the estimation problem. In this paper we propose model selection as a general principle for finding the simplest, or most parsimonious, explanation of the data, by fitting different models and choosing the estimator with the best trade-off between likelihood fit and model complexity. We apply two well established model selection methods—the Akaike information criterion (AIC) and the Bayesian information criterion (BIC)—to models consisting of states of fixed rank, using datasets such as are currently produced in multiple-ion experiments. We test the performance of AIC and BIC on randomly chosen low-rank states of four ions, and study the dependence of the selected rank on the number of measurement repetitions for single-ion states. We then apply the methods to real data from a four-ion experiment aimed at creating a Smolin state of rank 4. By applying the two methods together with the Pearson χ2 test we conclude that the data can be suitably described with a model whose rank is between 7 and 9. Additionally we find that the mean square error of the maximum likelihood estimator for pure states is close to that of the optimal over all possible measurements.
Linear regression model selection using p-values when the model dimension grows
Pokarowski, Piotr; Teisseyre, Paweł
2012-01-01
We consider a new criterion-based approach to model selection in linear regression. Properties of selection criteria based on p-values of a likelihood ratio statistic are studied for families of linear regression models. We prove that such procedures are consistent, i.e., the minimal true model is chosen with probability tending to 1, even when the number of models under consideration increases slowly with the sample size. A simulation study indicates that the introduced methods perform promisingly when compared with the Akaike and Bayesian Information Criteria.
Baudry, Jean-Patrick
2012-01-01
The Integrated Completed Likelihood (ICL) criterion was proposed by Biernacki et al. (2000) in the model-based clustering framework to select a relevant number of classes, and has been used by statisticians in various application areas. A theoretical study of this criterion is proposed here. A contrast related to the clustering objective is introduced: the conditional classification likelihood. This yields an estimator and a class of model selection criteria. The properties of these new procedures are studied, and ICL is proved to be an approximation of one of these criteria. We contrast these results with the currently prevailing view of ICL, namely that it is not consistent. Moreover, these results give insight into the class notion underlying ICL and feed a reflection on the notion of a class in clustering. General results on penalized minimum contrast criteria and on mixture models are derived, which are of interest in their own right.
Introduction. Modelling natural action selection.
Prescott, Tony J; Bryson, Joanna J; Seth, Anil K
2007-09-29
Action selection is the task of resolving conflicts between competing behavioural alternatives. This theme issue is dedicated to advancing our understanding of the behavioural patterns and neural substrates supporting action selection in animals, including humans. The scope of problems investigated includes: (i) whether biological action selection is optimal (and, if so, what is optimized), (ii) the neural substrates for action selection in the vertebrate brain, (iii) the role of perceptual selection in decision-making, and (iv) the interaction of group and individual action selection. A second aim of this issue is to advance methodological practice with respect to modelling natural action selection. A wide variety of computational modelling techniques are therefore employed, ranging from formal mathematical approaches through computational neuroscience, connectionism, and agent-based modelling. The research described has broad implications for both natural and artificial sciences. One example, highlighted here, is its application to medical science, where models of the neural substrates for action selection are contributing to the understanding of brain disorders such as Parkinson's disease, schizophrenia and attention deficit/hyperactivity disorder.
How many separable sources? Model selection in independent components analysis.
Woods, Roger P; Hansen, Lars Kai; Strother, Stephen
2015-01-01
Unlike mixtures consisting solely of non-Gaussian sources, mixtures including two or more Gaussian components cannot be separated using standard independent components analysis methods that are based on higher order statistics and independent observations. The mixed Independent Components Analysis/Principal Components Analysis (mixed ICA/PCA) model described here accommodates one or more Gaussian components in the independent components analysis model and uses principal components analysis to characterize contributions from this inseparable Gaussian subspace. Information theory can then be used to select from among potential model categories with differing numbers of Gaussian components. Based on simulation studies, the assumptions and approximations underlying the Akaike Information Criterion do not hold in this setting, even with a very large number of observations. Cross-validation is a suitable, though computationally intensive alternative for model selection. Application of the algorithm is illustrated using Fisher's iris data set and Howells' craniometric data set. Mixed ICA/PCA is of potential interest in any field of scientific investigation where the authenticity of blindly separated non-Gaussian sources might otherwise be questionable. Failure of the Akaike Information Criterion in model selection also has relevance in traditional independent components analysis where all sources are assumed non-Gaussian.
A comparison of statistical selection strategies for univariate and bivariate log-linear models.
Moses, Tim; Holland, Paul W
2010-11-01
In this study, eight statistical selection strategies were evaluated for selecting the parameterizations of log-linear models used to model the distributions of psychometric tests. The selection strategies included significance tests based on four chi-squared statistics (likelihood ratio, Pearson, Freeman-Tukey, and Cressie-Read) and four additional strategies (the Akaike information criterion (AIC), the Bayesian information criterion (BIC), the consistent Akaike information criterion (CAIC), and a measure attributed to Goodman). The strategies were evaluated in simulations for different log-linear models of univariate and bivariate test-score distributions and two sample sizes. Results showed that all eight selection strategies were most accurate for the largest sample size considered. For univariate distributions, the AIC selection strategy was especially accurate for selecting the correct parameterization of a complex log-linear model, and the likelihood ratio chi-squared selection strategy was the most accurate for selecting the correct parameterization of a relatively simple log-linear model. For bivariate distributions, the likelihood ratio chi-squared, Freeman-Tukey chi-squared, BIC, and CAIC selection strategies had similarly high selection accuracies.
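Two of the chi-squared statistics compared above have simple closed forms over observed counts O and model-fitted counts E. A minimal numpy sketch (the count vectors below are made up for illustration):

```python
import numpy as np

def lr_chisq(observed, expected):
    """Likelihood-ratio statistic G^2 = 2 * sum O * log(O / E)."""
    o, e = np.asarray(observed, float), np.asarray(expected, float)
    mask = o > 0                     # cells with O = 0 contribute 0
    return 2.0 * np.sum(o[mask] * np.log(o[mask] / e[mask]))

def pearson_chisq(observed, expected):
    """Pearson statistic X^2 = sum (O - E)^2 / E."""
    o, e = np.asarray(observed, float), np.asarray(expected, float)
    return np.sum((o - e) ** 2 / e)

obs = [12, 25, 38, 25]               # observed cell counts (invented)
exp = [15, 25, 35, 25]               # fitted counts from some model (invented)
print(lr_chisq(obs, exp), pearson_chisq(obs, exp))
```

Both statistics are compared to a chi-squared reference distribution whose degrees of freedom equal the number of cells minus the number of fitted parameters; when the model fits perfectly (O = E in every cell) both are zero.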
Institute of Scientific and Technical Information of China (English)
Shijian YUAN; Dazhi XIAO; Zhubin HE
2004-01-01
A generalized yield criterion is proposed based on metal plastic deformation mechanics and the fundamental formulae of the theory of plasticity. Using the generalized yield criterion, it is explained why the Mises and Tresca yield criteria do not completely match experimental data. It is shown that the yield criteria of ductile metals depend not only on the quadratic invariant of the deviatoric stress tensor, J2, but also on the cubic invariant of the deviatoric stress tensor, J3, and on the ratio of the yield stress in pure shear to the yield stress in uniaxial tension, k/σs. The reason the Mises and Tresca yield criteria are not in good agreement with the experimental data is that the effect of J3 and k/σs is neglected.
Model selection in systems biology depends on experimental design.
Silk, Daniel; Kirk, Paul D W; Barnes, Chris P; Toni, Tina; Stumpf, Michael P H
2014-06-01
Experimental design attempts to maximise the information available for modelling tasks. An optimal experiment allows the inferred models or parameters to be chosen with the highest expected degree of confidence. If the true system is faithfully reproduced by one of the models, the merit of this approach is clear - we simply wish to identify it and the true parameters with the most certainty. However, in the more realistic situation where all models are incorrect or incomplete, the interpretation of model selection outcomes and the role of experimental design needs to be examined more carefully. Using a novel experimental design and model selection framework for stochastic state-space models, we perform high-throughput in-silico analyses on families of gene regulatory cascade models, to show that the selected model can depend on the experiment performed. We observe that experimental design thus makes confidence a criterion for model choice, but that this does not necessarily correlate with a model's predictive power or correctness. Finally, in the special case of linear ordinary differential equation (ODE) models, we explore how wrong a model has to be before it influences the conclusions of a model selection analysis.
Xu, Zhiqiang
2017-02-16
Attributed graph clustering, also known as community detection on attributed graphs, has attracted much interest recently due to the ubiquity of attributed graphs in real life. Many existing algorithms have been proposed for this problem, which are either distance based or model based. However, model selection in attributed graph clustering has not been well addressed: most existing algorithms assume the cluster number to be known a priori. In this paper, we propose two efficient approaches for attributed graph clustering with automatic model selection. The first approach is a popular Bayesian nonparametric method, while the second is an asymptotic method based on a recently proposed model selection criterion, the factorized information criterion. Experimental results on both synthetic and real datasets demonstrate that our approaches significantly outperform the state-of-the-art algorithm.
Eigen, D. J.; Davida, G. I.; Northouse, R. A.
1973-01-01
A criterion for characterizing an iteratively trained classifier is presented. The criterion is based on an information-theoretic measure that is developed by modeling classifier training iterations as a set of cascaded channels. The criterion is formulated as a figure of merit and as a performance index, to check the appropriateness of applying the characterized classifier to an unknown database and to implement classifier updates and data selection, respectively.
Model selection and inference a practical information-theoretic approach
Burnham, Kenneth P
1998-01-01
This book is unique in that it covers the philosophy of model-based data analysis and an omnibus strategy for the analysis of empirical data. The book introduces information-theoretic approaches and focuses critical attention on a priori modeling and the selection of a good approximating model that best represents the inference supported by the data. Kullback-Leibler information represents a fundamental quantity in science and is Hirotugu Akaike's basis for model selection. The maximized log-likelihood function can be bias-corrected to provide an estimate of expected, relative Kullback-Leibler information. This leads to Akaike's Information Criterion (AIC) and various extensions; these are relatively simple and easy to use in practice, but little taught in statistics classes and far less understood in the applied sciences than should be the case. The information-theoretic approaches provide a unified and rigorous theory, an extension of likelihood theory, an important application of information theory, and are ...
Bayesian Evidence and Model Selection
Knuth, Kevin H; Malakar, Nabin K; Mubeen, Asim M; Placek, Ben
2014-01-01
In this paper we review the concept of Bayesian evidence and its application to model selection. The theory is presented along with a discussion of analytic, approximate, and numerical techniques. Applications to several practical examples within the context of signal processing are discussed.
Model selection by LASSO methods in a change-point model
Ciuperca, Gabriela
2011-01-01
The paper considers a linear regression model with multiple change-points occurring at unknown times. The LASSO technique is very interesting since it allows the parametric estimation, including the change-points, and automatic variable selection simultaneously. The asymptotic properties of the LASSO-type (which has as particular case the LASSO estimator) and of the adaptive LASSO estimators are studied. For this last estimator the oracle properties are proved. In both cases, a model selection criterion is proposed. Numerical examples are provided showing the performances of the adaptive LASSO estimator compared to the LS estimator.
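The paper's adaptive LASSO machinery is beyond an abstract-sized sketch, but the soft-thresholding operator at the core of all LASSO methods can be illustrated as a toy change-point detector (the window size, threshold, and simulated signal below are invented for illustration and are not the paper's procedure):

```python
import numpy as np

def soft_threshold(x, lam):
    """The shrinkage operator characteristic of LASSO estimation."""
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

# Piecewise-constant signal: one change-point at index 30, jump of height 5.
rng = np.random.default_rng(1)
y = np.concatenate([np.zeros(30), np.full(30, 5.0)]) + rng.normal(0, 0.1, 60)

# Smooth with a 5-point moving average, then shrink the first differences;
# entries surviving the threshold flag candidate change-points, while
# noise-driven differences are shrunk exactly to zero.
smooth = np.convolve(y, np.ones(5) / 5, mode="valid")
jumps = soft_threshold(np.diff(smooth), lam=0.5)
candidates = np.flatnonzero(jumps) + 2   # +2 recenters the 5-point window
print(candidates)                        # indices clustered around 30
```

The shrink-to-exactly-zero behaviour is what lets LASSO-type estimators perform estimation and selection simultaneously, as the abstract describes.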
How Many Separable Sources? Model Selection In Independent Components Analysis
DEFF Research Database (Denmark)
Woods, Roger P.; Hansen, Lars Kai; Strother, Stephen
2015-01-01
Unlike mixtures consisting solely of non-Gaussian sources, mixtures including two or more Gaussian components cannot be separated using standard independent components analysis methods that are based on higher order statistics and independent observations. The mixed Independent Components Analysis/Principal Components Analysis (mixed ICA/PCA) model described here accommodates one or more Gaussian components in the independent components analysis model and uses principal components analysis to characterize contributions from this inseparable Gaussian subspace. Information theory can then be used to select from among potential model categories with differing numbers of Gaussian components. Based on simulation studies, the assumptions and approximations underlying the Akaike Information Criterion do not hold in this setting, even with a very large number of observations. Cross-validation is a suitable, though computationally intensive, alternative for model selection. Application of the algorithm is illustrated using Fisher's iris data set and Howells' craniometric data set. Mixed ICA/PCA is of potential interest in any field of scientific investigation where the authenticity of blindly separated non-Gaussian sources might otherwise be questionable. Failure of the Akaike Information Criterion in model selection also has relevance in traditional independent components analysis where all sources are assumed non-Gaussian.
Model Selection for Pion Photoproduction
Landay, J; Fernández-Ramírez, C; Hu, B; Molina, R
2016-01-01
Partial-wave analysis of meson- and photon-induced reactions is needed to enable the comparison of many theoretical approaches with data. In both energy-dependent and energy-independent parametrizations of partial waves, the selection of the model amplitude is crucial. Principles of the $S$-matrix are implemented to different degrees in different approaches, but an often overlooked aspect concerns the selection of undetermined coefficients and functional forms for fitting, leading to a minimal yet sufficient parametrization. We present an analysis of low-energy neutral pion photoproduction using the Least Absolute Shrinkage and Selection Operator (LASSO) in combination with criteria from information theory and $K$-fold cross validation. These methods are not yet widely known in the analysis of excited hadrons but will become relevant in the era of precision spectroscopy. The principle is first illustrated with synthetic data; then, its feasibility for real data is demonstrated by analyzing the latest available measu...
Cong, Haoxi; Li, Qingmin; Xing, Jinyuan; Li, Jinsong; Chen, Qiang
2015-06-01
The prompt extinction of the secondary arc is critical to the single-phase reclosing of AC transmission lines, including half-wavelength power transmission lines. In this paper, a low-voltage physical experimental platform was established and the motion process of the secondary arc was recorded by a high-speed camera. It was found that the arcing time of the secondary arc exhibited a close relationship with its arc length. Through an input and output power energy analysis of the secondary arc, a new critical length criterion for the arcing time was proposed. The arc chain model was then adopted to calculate the arcing time with both the traditional and the proposed critical length criteria, and the simulation results were compared with the experimental data. The study showed that the arcing time calculated from the new critical length criterion gave more accurate results, which can provide a reliable criterion in terms of arcing time for modeling and simulation of the secondary arc in power transmission lines. Supported by the National Natural Science Foundation of China (Nos. 51277061 and 51420105011).
A Selective Review of Group Selection in High Dimensional Models
Huang, Jian; Ma, Shuangge
2012-01-01
Grouping structures arise naturally in many statistical modeling problems. Several methods have been proposed for variable selection that respect grouping structure in variables. Examples include the group LASSO and several concave group selection methods. In this article, we give a selective review of group selection concerning methodological developments, theoretical properties, and computational algorithms. We pay particular attention to group selection methods involving concave penalties. We address both group selection and bi-level selection methods. We describe several applications of these methods in nonparametric additive models, semiparametric regression, seemingly unrelated regressions, genomic data analysis and genome wide association studies. We also highlight some issues that require further study.
Bayesian information criterion for longitudinal and clustered data.
Jones, Richard H
2011-11-10
When a number of models are fit to the same data set, one method of choosing the 'best' model is to select the model for which Akaike's information criterion (AIC) is lowest. AIC applies when maximum likelihood is used to estimate the unknown parameters in the model. The value of -2 log likelihood for each model fit is penalized by adding twice the number of estimated parameters. The number of estimated parameters includes both the linear parameters and parameters in the covariance structure. Another criterion for model selection is the Bayesian information criterion (BIC). BIC penalizes -2 log likelihood by adding the number of estimated parameters multiplied by the log of the sample size. For large sample sizes, BIC penalizes -2 log likelihood much more than AIC making it harder to enter new parameters into the model. An assumption in BIC is that the observations are independent. In mixed models, the observations are not independent. This paper develops a method for calculating the 'effective sample size' for mixed models based on Fisher's information. The effective sample size replaces the sample size in BIC and can vary from the number of subjects to the number of observations. A number of error models are considered based on a general mixed model including unstructured, compound symmetry.
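The penalties described above are easy to state in code. A minimal sketch (the log-likelihood values below are made up; the idea of substituting an effective sample size for n in BIC follows the abstract):

```python
import math

def aic(loglik, k):
    """Akaike's information criterion: -2 log L penalized by 2k."""
    return -2.0 * loglik + 2.0 * k

def bic(loglik, k, n):
    """Bayesian information criterion: -2 log L penalized by k * log(n).

    For mixed models, n can be replaced by an 'effective sample size'
    lying between the number of subjects and the number of observations,
    as the paper above proposes.
    """
    return -2.0 * loglik + k * math.log(n)

# Two candidate models fitted to the same 200 observations (invented values):
print(aic(-450.2, 5), bic(-450.2, 5, 200))    # richer model, 5 parameters
print(aic(-454.0, 3), bic(-454.0, 3, 200))    # simpler model, 3 parameters
```

With these numbers the two criteria disagree: AIC prefers the richer model (910.4 vs 914.0), while BIC's heavier log(n) penalty favors the simpler one (923.9 vs 926.9), illustrating the abstract's point that BIC makes it harder to enter new parameters.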
Selected soil thermal conductivity models
Directory of Open Access Journals (Sweden)
Rerak Monika
2017-01-01
Full Text Available. The paper presents models of soil thermal conductivity collected from the literature. This is a very important parameter, which allows one to assess how much heat can be transferred from underground power cables through the soil. The models are presented in tabular form; thus, when the properties of the soil are given, it is possible to select the most accurate method of calculating its thermal conductivity. Precise determination of this parameter allows the cable line to be designed in such a way that cable overheating does not occur.
Xiao, Xia; Qi, Haiyang; Sui, Xiaole; Kikkawa, Takamaro
2017-03-01
The cohesive zone model (CZM) is introduced in the surface acoustic wave (SAW) technique to characterize the interfacial adhesion property of the low-k thin film deposited on the Silicon substrate. The ratio of the two parameters in the CZM, the maximum normal traction and normal interface characteristic length, is derived to evaluate the interfacial adhesion properties quantitatively. In this study, the adhesion criterion to judge the adhesion property is newly proposed by the CZM-SAW technique. The criterion determination processes of two kinds of film, dense and porous Black Diamond with different film thicknesses, are presented in this paper. The interfacial adhesion properties of the dense and porous Black Diamond films with different thicknesses are evaluated by the CZM-SAW technique quantitatively and nondestructively. The quantitative adhesion properties are obtained by fitting the experimental dispersion curves with maximum frequency up to 220 MHz with the theoretical ones. Results of the nondestructive CZM-SAW technique and the destructive nanoscratch exhibit the same trend in adhesion properties, which means that the CZM-SAW technique is a promising method for determining the interfacial adhesion. Meanwhile, the adhesion properties of the detected samples are judged by the determined criterion. The test results show that different test film materials with different film thicknesses ranging from 300 nm to 1000 nm are in different adhered conditions. This paper exhibits the advantage of the CZM-SAW technique which can be a universal method to characterize the film adhesion.
Link, William; Sauer, John R.
2016-01-01
The analysis of ecological data has changed in two important ways over the last 15 years. The development and easy availability of Bayesian computational methods has allowed and encouraged the fitting of complex hierarchical models. At the same time, there has been increasing emphasis on acknowledging and accounting for model uncertainty. Unfortunately, the ability to fit complex models has outstripped the development of tools for model selection and model evaluation: familiar model selection tools such as Akaike's information criterion and the deviance information criterion are widely known to be inadequate for hierarchical models. In addition, little attention has been paid to the evaluation of model adequacy in context of hierarchical modeling, i.e., to the evaluation of fit for a single model. In this paper, we describe Bayesian cross-validation, which provides tools for model selection and evaluation. We describe the Bayesian predictive information criterion and a Bayesian approximation to the BPIC known as the Watanabe-Akaike information criterion. We illustrate the use of these tools for model selection, and the use of Bayesian cross-validation as a tool for model evaluation, using three large data sets from the North American Breeding Bird Survey.
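The Watanabe-Akaike information criterion mentioned above has a standard sample-based form, WAIC = -2(lppd - p_WAIC), computed from pointwise log-likelihoods over posterior draws. A numpy sketch with simulated draws standing in for real MCMC output (the array shape and values are invented):

```python
import numpy as np

def waic(loglik_samples):
    """Watanabe-Akaike information criterion from an (S, n) array of
    pointwise log-likelihoods over S posterior draws and n data points.

    WAIC = -2 * (lppd - p_waic); lower values indicate a better model.
    """
    # log pointwise predictive density: log of the posterior-mean likelihood
    lppd = np.sum(np.log(np.mean(np.exp(loglik_samples), axis=0)))
    # effective number of parameters: posterior variance of the log-likelihood
    p_waic = np.sum(np.var(loglik_samples, axis=0, ddof=1))
    return -2.0 * (lppd - p_waic)

rng = np.random.default_rng(0)
fake_draws = rng.normal(-1.0, 0.1, size=(1000, 50))  # stand-in for MCMC output
print(waic(fake_draws))
```

In practice the (S, n) array comes from the fitted hierarchical model; as the abstract notes, WAIC approximates the Bayesian predictive information criterion while remaining cheap to compute from posterior samples.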
Abdoulaye Hama, Nadjibou; Ouahbi, Tariq; Taibi, Said; Souli, Hanène; Fleureau, Jean-Marie; Pantet, Anne
2017-06-01
Non-cohesive soils subjected to a flow may exhibit a behavior in which fine particles migrate through the interstices of the solid skeleton formed by the large particles. This phenomenon, termed internal instability, internal erosion or suffusion, can occur both in natural soil deposits and in geotechnical structures such as dams, dikes or barrages. Internal instability of a granular material is its inability to prevent the loss of its fine particles under the effect of flow. It is geometrically possible if the fine particles can migrate through the pores of the coarse soil matrix, and it results in a change in the material's mechanical properties. In this work, we use the three-dimensional Particle Flow Code (PFC3D/DEM) to study the stability/instability of granular materials and their mechanical behavior. The Kenney and Lau criterion sets a safe boundary for engineering design; however, it tends to identify stable soils as unstable. The effects of instability and erosion, simulated by clipping fine particles from the grading distribution, on the mechanical behaviour of glass ball samples were analysed. The analysis of the mechanical properties of the eroded samples offers a new approach to internal stability. A new internal stability criterion is proposed; it is deduced from the relations between mechanical behaviour and internal stability, including material contractance.
Condensation of saturated vapours on isentropic compression: a simple criterion
Energy Technology Data Exchange (ETDEWEB)
Patwardhan, V.S.
1987-01-01
A criterion is derived and tested for determining whether the isentropic compression of saturated vapours leads to superheat or condensation. This criterion needs only values of the critical temperature, the acentric factor and the liquid specific heat. The application of the criterion for selection of a working fluid both for heat pumps and heat engines is discussed.
An Empirical Kaiser Criterion.
Braeken, Johan; van Assen, Marcel A L M
2016-03-31
In exploratory factor analysis (EFA), the most popular methods for dimensionality assessment, such as the screeplot, the Kaiser criterion, or, the current gold standard, parallel analysis, are based on eigenvalues of the correlation matrix. To further the understanding and development of factor retention methods, results on population and sample eigenvalue distributions are introduced based on random matrix theory and Monte Carlo simulations. These results are used to develop a new factor retention method, the Empirical Kaiser Criterion. The performance of the Empirical Kaiser Criterion and parallel analysis is examined in typical research settings, with multiple scales that are desired to be relatively short but still reliable. Theoretical and simulation results illustrate that the new Empirical Kaiser Criterion performs as well as parallel analysis in typical research settings with uncorrelated scales, but much better when scales are both correlated and short. We conclude that the Empirical Kaiser Criterion is a powerful and promising factor retention method, because it is based on distribution theory of eigenvalues, shows good performance, is easily visualized and computed, and is useful for power analysis and sample size planning for EFA.
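The classical Kaiser rule underlying this work is a one-liner on the correlation-matrix eigenvalues. The sketch below implements that classical rule only; the paper's Empirical Kaiser Criterion replaces the fixed reference value 1 with sample-based reference eigenvalues, which are not reproduced here:

```python
import numpy as np

def kaiser_retained(data):
    """Classical Kaiser criterion: retain as many factors as there are
    correlation-matrix eigenvalues exceeding 1."""
    corr = np.corrcoef(data, rowvar=False)
    eigvals = np.linalg.eigvalsh(corr)[::-1]  # sort descending
    return int(np.sum(eigvals > 1.0)), eigvals

# toy one-factor design: 6 indicators loading 0.7 on a single common factor
rng = np.random.default_rng(1)
n, p = 500, 6
f = rng.normal(size=(n, 1))
x = f @ np.ones((1, p)) * 0.7 + rng.normal(size=(n, p))
k, ev = kaiser_retained(x)
print(k)  # one eigenvalue should clearly dominate here
```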
Model selection for pion photoproduction
Landay, J.; Döring, M.; Fernández-Ramírez, C.; Hu, B.; Molina, R.
2017-01-01
Partial-wave analysis of meson and photon-induced reactions is needed to enable the comparison of many theoretical approaches to data. In both energy-dependent and energy-independent parametrizations of partial waves, the selection of the model amplitude is crucial. Principles of the S matrix are implemented to a different degree in different approaches, but an often overlooked aspect concerns the selection of undetermined coefficients and functional forms for fitting, leading to a minimal yet sufficient parametrization. We present an analysis of low-energy neutral pion photoproduction using the least absolute shrinkage and selection operator (LASSO) in combination with criteria from information theory and K-fold cross validation. These methods are not yet widely known in the analysis of excited hadrons but will become relevant in the era of precision spectroscopy. The principle is first illustrated with synthetic data; then, its feasibility for real data is demonstrated by analyzing the latest available measurements of differential cross sections (dσ/dΩ), photon-beam asymmetries (Σ), and target asymmetry differential cross sections (Σ_T ≡ T dσ/dΩ) in the low-energy regime.
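The LASSO-plus-information-criterion recipe can be sketched in a few lines for an orthonormal design, where the LASSO solution is simply soft-thresholded least squares. This is a generic regression toy, not the photoproduction amplitude fit:

```python
import numpy as np

def soft_threshold(z, lam):
    """LASSO solution per coefficient under an orthonormal design."""
    return np.sign(z) * np.maximum(np.abs(z) - lam, 0.0)

rng = np.random.default_rng(2)
n, p = 100, 8
X, _ = np.linalg.qr(rng.normal(size=(n, p)))      # orthonormal columns
beta_true = np.array([3.0, -2.0, 1.5, 0, 0, 0, 0, 0])
y = X @ beta_true + rng.normal(scale=0.5, size=n)

# scan the penalty; score each sparse fit by an AIC with df = nonzero count
ols = X.T @ y
best = None
for lam in np.linspace(0.0, 3.0, 61):
    b = soft_threshold(ols, lam)
    rss = np.sum((y - X @ b) ** 2)
    df = np.count_nonzero(b)
    aic = n * np.log(rss / n) + 2 * df
    if best is None or aic < best[0]:
        best = (aic, lam, b)
print(sorted(np.nonzero(best[2])[0]))             # coefficients kept by AIC
```

The same scan could use BIC or K-fold cross-validation as the score, which is the comparison the abstract is concerned with.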
Consistency in Estimation and Model Selection of Dynamic Panel Data Models with Fixed Effects
Directory of Open Access Journals (Sweden)
Guangjie Li
2015-07-01
We examine the relationship between consistent parameter estimation and model selection for autoregressive panel data models with fixed effects. We find that the transformation of fixed effects proposed by Lancaster (2002) does not necessarily lead to consistent estimation of common parameters when some true exogenous regressors are excluded. We propose a data-dependent way to specify the prior of the autoregressive coefficient and argue for comparing different model specifications before parameter estimation. Model selection properties of Bayes factors and the Bayesian information criterion (BIC) are investigated. When model uncertainty is substantial, we recommend the use of Bayesian model averaging to obtain point estimators with lower root mean squared errors (RMSE). We also study the implications of different levels of inclusion probabilities by simulations.
Bayesian model selection applied to artificial neural networks used for water resources modeling
Kingston, Greer B.; Maier, Holger R.; Lambert, Martin F.
2008-04-01
Artificial neural networks (ANNs) have proven to be extremely valuable tools in the field of water resources engineering. However, one of the most difficult tasks in developing an ANN is determining the optimum level of complexity required to model a given problem, as there is no formal systematic model selection method. This paper presents a Bayesian model selection (BMS) method for ANNs that provides an objective approach for comparing models of varying complexity in order to select the most appropriate ANN structure. The approach uses Markov Chain Monte Carlo posterior simulations to estimate the evidence in favor of competing models and, in this study, three known methods for doing this are compared in terms of their suitability for being incorporated into the proposed BMS framework for ANNs. However, it is acknowledged that it can be particularly difficult to accurately estimate the evidence of ANN models. Therefore, the proposed BMS approach for ANNs incorporates a further check of the evidence results by inspecting the marginal posterior distributions of the hidden-to-output layer weights, which unambiguously indicate any redundancies in the hidden layer nodes. The fact that this check is available is one of the greatest advantages of the proposed approach over conventional model selection methods, which do not provide such a test and instead rely on the modeler's subjective choice of selection criterion. The advantages of a total Bayesian approach to ANN development, including training and model selection, are demonstrated on two synthetic and one real world water resources case study.
Accurate model selection of relaxed molecular clocks in bayesian phylogenetics.
Baele, Guy; Li, Wai Lok Sibon; Drummond, Alexei J; Suchard, Marc A; Lemey, Philippe
2013-02-01
Recent implementations of path sampling (PS) and stepping-stone sampling (SS) have been shown to outperform the harmonic mean estimator (HME) and a posterior simulation-based analog of Akaike's information criterion through Markov chain Monte Carlo (AICM), in bayesian model selection of demographic and molecular clock models. Almost simultaneously, a bayesian model averaging approach was developed that avoids conditioning on a single model but averages over a set of relaxed clock models. This approach returns estimates of the posterior probability of each clock model through which one can estimate the Bayes factor in favor of the maximum a posteriori (MAP) clock model; however, this Bayes factor estimate may suffer when the posterior probability of the MAP model approaches 1. Here, we compare these two recent developments with the HME, stabilized/smoothed HME (sHME), and AICM, using both synthetic and empirical data. Our comparison shows reassuringly that MAP identification and its Bayes factor provide similar performance to PS and SS and that these approaches considerably outperform HME, sHME, and AICM in selecting the correct underlying clock model. We also illustrate the importance of using proper priors on a large set of empirical data sets.
Determining threshold default risk criterion for trade credit granting
Institute of Scientific and Technical Information of China (English)
[Anonymous]
2008-01-01
To solve the problem of setting a threshold default risk criterion to select retailers eligible for trade credit granting, a novel method of solving simultaneous equations is proposed. This method is based on the bilevel programming modeling of trade credit decisions as an interaction between supplier and retailer. First, the bilevel programming is set up, where the supplier decides on credit terms at the top level considering a retailer's default risk, and the retailer determines the order quantity at the lower…
Selective Maintenance Model Considering Time Uncertainty
Le Chen; Zhengping Shu; Yuan Li; Xuezhi Lv
2012-01-01
This study proposes a selective maintenance model for a weapon system during the mission interval. First, it gives relevant definitions and the operational process of the material support system. Then, it introduces current research on selective maintenance modeling. Finally, it establishes a numerical model for selecting corrective and preventive maintenance tasks, considering time uncertainty brought by the unpredictability of the maintenance procedure, indetermination of downtime for spares and difference of skill…
Shen, Chung-Wei; Chen, Yi-Hau
2015-10-01
Missing observations and covariate measurement error commonly arise in longitudinal data. However, existing methods for model selection in marginal regression analysis of longitudinal data fail to address the potential bias resulting from these issues. To tackle this problem, we propose a new model selection criterion, the Generalized Longitudinal Information Criterion, which is based on an approximately unbiased estimator for the expected quadratic error of a considered marginal model accounting for both data missingness and covariate measurement error. The simulation results reveal that the proposed method performs quite well in the presence of missing data and covariate measurement error. On the contrary, the naive procedures without taking care of such complexity in data may perform quite poorly. The proposed method is applied to data from the Taiwan Longitudinal Study on Aging to assess the relationship of depression with health and social status in the elderly, accommodating measurement error in the covariate as well as missing observations.
Bayesian Variable Selection and Computation for Generalized Linear Models with Conjugate Priors.
Chen, Ming-Hui; Huang, Lan; Ibrahim, Joseph G; Kim, Sungduk
2008-07-01
In this paper, we consider theoretical and computational connections between six popular methods for variable subset selection in generalized linear models (GLMs). Under the conjugate priors developed by Chen and Ibrahim (2003) for the generalized linear model, we obtain closed form analytic relationships between the Bayes factor (posterior model probability), the Conditional Predictive Ordinate (CPO), the L measure, the Deviance Information Criterion (DIC), the Akaike Information Criterion (AIC), and the Bayesian Information Criterion (BIC) in the case of the linear model. Moreover, we examine computational relationships in the model space for these Bayesian methods for an arbitrary GLM under conjugate priors as well as examine the performance of the conjugate priors of Chen and Ibrahim (2003) in Bayesian variable selection. Specifically, we show that once Markov chain Monte Carlo (MCMC) samples are obtained from the full model, the four Bayesian criteria can be simultaneously computed for all possible subset models in the model space. We illustrate our new methodology with a simulation study and a real dataset.
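For the Gaussian linear model, the AIC and BIC referenced here have closed forms in the residual sum of squares. A minimal sketch on toy data (k counts the regression coefficients plus the error variance; not the conjugate-prior machinery of the paper):

```python
import numpy as np

def aic_bic_linear(X, y):
    """AIC and BIC for a Gaussian linear model, using the profiled
    maximum likelihood with sigma^2 = RSS / n."""
    n, p = X.shape
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    rss = np.sum((y - X @ beta) ** 2)
    k = p + 1                                   # coefficients + variance
    ll = -0.5 * n * (np.log(2 * np.pi * rss / n) + 1)
    return -2 * ll + 2 * k, -2 * ll + k * np.log(n)

rng = np.random.default_rng(3)
n = 200
x = rng.normal(size=n)
y = 1.0 + 2.0 * x + rng.normal(size=n)
X1 = np.column_stack([np.ones(n), x])                 # true model
X2 = np.column_stack([X1, rng.normal(size=(n, 3))])   # + 3 junk regressors
print(aic_bic_linear(X1, y)[1] < aic_bic_linear(X2, y)[1])  # BIC favors X1
```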
Energy Technology Data Exchange (ETDEWEB)
Mudawar, I.; Galloway, J.E.; Gersey, C.O. [Purdue Univ., West Lafayette, IN (United States)] [and others]
1995-12-31
Pool boiling and flow boiling were examined for near-saturated bulk conditions in order to determine the critical heat flux (CHF) trigger mechanism for each. Photographic studies of the wall region revealed features common to both situations. At fluxes below CHF, the vapor coalesces into a wavy layer which permits wetting only in wetting fronts, the portions of the liquid-vapor interface which contact the wall as a result of the interfacial waviness. Close examination of the interfacial features revealed the waves are generated from the lower edge of the heater in pool boiling and the heater's upstream region in flow boiling. Wavelengths follow predictions based upon the Kelvin-Helmholtz instability criterion. Critical heat flux in both cases occurs when the pressure force exerted upon the interface due to interfacial curvature, which tends to preserve interfacial contact with the wall prior to CHF, is overcome by the momentum of vapor at the site of the first wetting front, causing the interface to lift away from the wall. It is shown this interfacial lift-off criterion facilitates accurate theoretical modeling of CHF in pool boiling and in flow boiling in both straight and curved channels.
Use of the AIC with the EM algorithm: A demonstration of a probability model selection technique
Energy Technology Data Exchange (ETDEWEB)
Glosup, J.G.; Axelrod M.C. [Lawrence Livermore National Lab., CA (United States)
1994-11-15
The problem of discriminating between two potential probability models, a Gaussian distribution and a mixture of Gaussian distributions, is considered. The focus of our interest is a case where the models are potentially non-nested and the parameters of the mixture model are estimated through the EM algorithm. The AIC, which is frequently used as a criterion for discriminating between non-nested models, is modified to work with the EM algorithm and is shown to provide a model selection tool for this situation. A particular problem involving an infinite mixture distribution known as Middleton's Class A model is used to demonstrate the effectiveness and limitations of this method.
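A compact version of the Gaussian-versus-mixture comparison can be written with a hand-rolled EM and AIC. This is a generic two-component sketch on clearly bimodal toy data, not Middleton's Class A model:

```python
import numpy as np

def norm_pdf(x, mu, s):
    return np.exp(-0.5 * ((x - mu) / s) ** 2) / (s * np.sqrt(2 * np.pi))

def em_two_gaussians(x, iters=200):
    """Minimal EM for a two-component Gaussian mixture; returns the
    maximized log-likelihood."""
    w, mu1, mu2, s1, s2 = 0.5, x.min(), x.max(), x.std(), x.std()
    for _ in range(iters):
        d1 = w * norm_pdf(x, mu1, s1)
        d2 = (1 - w) * norm_pdf(x, mu2, s2)
        r = d1 / (d1 + d2)                       # E-step: responsibilities
        w = r.mean()                             # M-step: update parameters
        mu1 = np.sum(r * x) / r.sum()
        mu2 = np.sum((1 - r) * x) / (1 - r).sum()
        s1 = np.sqrt(np.sum(r * (x - mu1) ** 2) / r.sum())
        s2 = np.sqrt(np.sum((1 - r) * (x - mu2) ** 2) / (1 - r).sum())
    return np.sum(np.log(w * norm_pdf(x, mu1, s1) + (1 - w) * norm_pdf(x, mu2, s2)))

rng = np.random.default_rng(4)
x = np.concatenate([rng.normal(-2, 1, 300), rng.normal(2, 1, 300)])
ll1 = np.sum(np.log(norm_pdf(x, x.mean(), x.std())))  # single-Gaussian MLE
ll2 = em_two_gaussians(x)
aic1, aic2 = -2 * ll1 + 2 * 2, -2 * ll2 + 2 * 5       # 2 vs 5 parameters
print(aic2 < aic1)  # AIC should favor the mixture on bimodal data
```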
On the Modified Barkhausen Criterion
DEFF Research Database (Denmark)
Lindberg, Erik; Murali, K.
2016-01-01
Oscillators are normally designed according to the Modified Barkhausen Criterion, i.e. the complex pole pair is moved out into the RHP so that the linear circuit becomes unstable. By means of the Mancini Phaseshift Oscillator it is demonstrated that the distortion of the oscillator may be minimized by introducing a nonlinear "Hewlett Resistor" so that the complex pole pair is in the RHP for small signals and in the LHP for large signals, i.e. the complex pole pair of the instantaneously linearized small-signal model moves around the imaginary axis in the complex frequency plane.
Bayesian Constrained-Model Selection for Factor Analytic Modeling
Peeters, Carel F.W.
2016-01-01
My dissertation revolves around Bayesian approaches towards constrained statistical inference in the factor analysis (FA) model. Two interconnected types of restricted-model selection are considered. These types have a natural connection to selection problems in the exploratory FA (EFA) and confirmatory FA (CFA) model and are termed Type I and Type II model selection. Type I constrained-model selection is taken to mean the determination of the appropriate dimensionality of a model. This type ...
Directory of Open Access Journals (Sweden)
Ingo W Nader
Parameters of the two-parameter logistic model are generally estimated via the expectation-maximization algorithm, which improves initial values for all parameters iteratively until convergence is reached. Effects of initial values are rarely discussed in item response theory (IRT), but initial values were recently found to affect item parameters when estimating the latent distribution with full non-parametric maximum likelihood. However, this method is rarely used in practice. Hence, the present study investigated effects of initial values on item parameter bias and on recovery of item characteristic curves in BILOG-MG 3, a widely used IRT software package. Results showed notable effects of initial values on item parameters. For tighter convergence criteria, effects of initial values decreased, but item parameter bias increased and the recovery of the latent distribution worsened. For practical application, it is advised to use the BILOG default convergence criterion with appropriate initial values when estimating the latent distribution from data.
Directory of Open Access Journals (Sweden)
Yanhui Li
2014-01-01
This paper investigates the Hankel norm filter design problem for stochastic time-delay systems, which are represented by a Takagi-Sugeno (T-S) fuzzy model. Motivated by the parallel distributed compensation (PDC) technique, a novel filtering error system is established. The objective is to design a suitable filter that guarantees the corresponding filtering error system to be mean-square asymptotically stable and to have a specified Hankel norm performance level γ. Based on Lyapunov stability theory and the Itô differential rule, the Hankel norm criterion is first established by adopting the integral inequality method, which can help reduce conservativeness. The Hankel norm filtering problem is cast into a convex optimization problem with a convex linearization approach, which expresses all the conditions for the existence of an admissible Hankel norm filter as standard linear matrix inequalities (LMIs). The effectiveness of the proposed method is demonstrated via a numerical example.
Institute of Scientific and Technical Information of China (English)
[Anonymous]
2010-01-01
In this paper, algebraic criteria are established to determine whether or not a real coefficient polynomial has one or two pairs of conjugate complex roots whose moduli are equal to 1 while the other roots have moduli less than 1, directly from its coefficients. The form and the function of the criteria are similar to those of the Jury criterion, which can be used to determine whether or not all the moduli of the roots of a real coefficient polynomial are less than 1.
Model selection bias and Freedman's paradox
Lukacs, P.M.; Burnham, K.P.; Anderson, D.R.
2010-01-01
In situations where limited knowledge of a system exists and the ratio of data points to variables is small, variable selection methods can often be misleading. Freedman (Am Stat 37:152-155, 1983) demonstrated how common it is to select completely unrelated variables as highly "significant" when the number of data points is similar in magnitude to the number of variables. A new type of model averaging estimator based on model selection with Akaike's AIC is used with linear regression to investigate the problems of likely inclusion of spurious effects and model selection bias, the bias introduced while using the data to select a single seemingly "best" model from a (often large) set of models employing many predictor variables. The new model averaging estimator helps reduce these problems and provides confidence interval coverage at the nominal level while traditional stepwise selection has poor inferential properties. © The Institute of Statistical Mathematics, Tokyo 2009.
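Freedman's paradox is easy to reproduce: with nearly as many pure-noise regressors as observations, some coefficients typically look "significant". A numpy-only sketch, using |t| > 2 as a rough significance cutoff to avoid extra dependencies:

```python
import numpy as np

# 60 observations, 40 pure-noise regressors; y is unrelated to every column
rng = np.random.default_rng(5)
n, p = 60, 40
X = rng.normal(size=(n, p))
y = rng.normal(size=n)

# OLS fit and per-coefficient t statistics
beta = np.linalg.lstsq(X, y, rcond=None)[0]
resid = y - X @ beta
s2 = resid @ resid / (n - p)
se = np.sqrt(s2 * np.diag(np.linalg.inv(X.T @ X)))
t = beta / se
print(int(np.sum(np.abs(t) > 2.0)))  # spuriously "significant" regressors
```

Selecting only these apparently significant variables and refitting would produce an impressively misleading model, which is exactly the bias the abstract's model averaging estimator targets.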
Selected Logistics Models and Techniques.
1984-09-01
ACCESS PROCEDURE: On-Line System (OLS), UNINET. RCA maintains proprietary control of this model, and the model is available only through a lease arrangement. • SPONSOR: ASD/ACCC
The Impact of Various Class-Distinction Features on Model Selection in the Mixture Rasch Model
Choi, In-Hee; Paek, Insu; Cho, Sun-Joo
2017-01-01
The purpose of the current study is to examine the performance of four information criteria (Akaike's information criterion [AIC], corrected AIC [AICC], Bayesian information criterion [BIC], sample-size adjusted BIC [SABIC]) for detecting the correct number of latent classes in the mixture Rasch model through simulations. The simulation study…
Empirical evaluation of scoring functions for Bayesian network model selection.
Liu, Zhifa; Malone, Brandon; Yuan, Changhe
2012-01-01
In this work, we empirically evaluate the capability of various scoring functions of Bayesian networks for recovering true underlying structures. Similar investigations have been carried out before, but they typically relied on approximate learning algorithms to learn the network structures. The suboptimal structures found by the approximation methods have unknown quality and may affect the reliability of their conclusions. Our study uses an optimal algorithm to learn Bayesian network structures from datasets generated from a set of gold standard Bayesian networks. Because all optimal algorithms always learn equivalent networks, this ensures that only the choice of scoring function affects the learned networks. Another shortcoming of the previous studies stems from their use of random synthetic networks as test cases. There is no guarantee that these networks reflect real-world data. We use real-world data to generate our gold-standard structures, so our experimental design more closely approximates real-world situations. A major finding of our study suggests that, in contrast to results reported by several prior works, the Minimum Description Length (MDL) (or equivalently, Bayesian information criterion (BIC)) consistently outperforms other scoring functions such as Akaike's information criterion (AIC), Bayesian Dirichlet equivalence score (BDeu), and factorized normalized maximum likelihood (fNML) in recovering the underlying Bayesian network structures. We believe this finding is a result of using both datasets generated from real-world applications rather than from random processes used in previous studies and learning algorithms to select high-scoring structures rather than selecting random models. Other findings of our study support existing work, e.g., large sample sizes result in learning structures closer to the true underlying structure; the BDeu score is sensitive to the parameter settings; and the fNML performs pretty well on small datasets. We also
Deviance Information Criterion (DIC) in Bayesian Multiple QTL Mapping.
Shriner, Daniel; Yi, Nengjun
2009-03-15
Mapping multiple quantitative trait loci (QTL) is commonly viewed as a problem of model selection. Various model selection criteria have been proposed, primarily in the non-Bayesian framework. The deviance information criterion (DIC) is the most popular criterion for Bayesian model selection and model comparison but has not been applied to Bayesian multiple QTL mapping. A derivation of the DIC is presented for multiple interacting QTL models and calculation of the DIC is demonstrated using posterior samples generated by Markov chain Monte Carlo (MCMC) algorithms. The DIC measures posterior predictive error by penalizing the fit of a model (deviance) by its complexity, determined by the effective number of parameters. The effective number of parameters simultaneously accounts for the sample size, the cross design, the number and lengths of chromosomes, covariates, the number of QTL, the type of QTL effects, and QTL effect sizes. The DIC provides a computationally efficient way to perform sensitivity analysis and can be used to quantitatively evaluate if including environmental effects, gene-gene interactions, and/or gene-environment interactions in the prior specification is worth the extra parameterization. The DIC has been implemented in the freely available package R/qtlbim, which greatly facilitates the general usage of Bayesian methodology for genome-wide interacting QTL analysis.
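The DIC described here needs only posterior draws of the deviance plus the deviance at a posterior point estimate. A sketch on a toy normal-mean model with known variance (an illustrative stand-in for the QTL models), where the effective number of parameters p_D should come out near 1:

```python
import numpy as np

def dic(deviance_draws, deviance_at_mean):
    """DIC = D(theta_bar) + 2 * p_D, with p_D = posterior mean deviance
    minus the deviance at the posterior mean."""
    d_bar = np.mean(deviance_draws)
    p_d = d_bar - deviance_at_mean
    return deviance_at_mean + 2 * p_d, p_d

# toy posterior for a normal mean (sigma = 1, flat prior)
rng = np.random.default_rng(6)
y = rng.normal(1.0, 1.0, size=50)
draws = rng.normal(y.mean(), 1 / np.sqrt(len(y)), size=4000)
dev = np.array([np.sum((y - m) ** 2) for m in draws])  # -2 log-lik + const
dev_at_mean = np.sum((y - draws.mean()) ** 2)
dic_val, p_d = dic(dev, dev_at_mean)
print(round(p_d, 2))  # effective number of parameters, close to 1
```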
A Focused Bayesian Information Criterion
Georges Nguefack-Tsague; Ingo Bulla
2014-01-01
Myriads of model selection criteria (Bayesian and frequentist) have been proposed in the literature aiming at selecting a single model regardless of its intended use. An honorable exception in the frequentist perspective is the “focused information criterion” (FIC) aiming at selecting a model based on the parameter of interest (focus). This paper takes the same view in the Bayesian context; that is, a model may be good for one estimand but bad for another. The proposed method exploits the Bay...
Model selection for the North American Breeding Bird Survey: A comparison of methods
Link, William; Sauer, John; Niven, Daniel
2017-01-01
The North American Breeding Bird Survey (BBS) provides data for >420 bird species at multiple geographic scales over 5 decades. Modern computational methods have facilitated the fitting of complex hierarchical models to these data. It is easy to propose and fit new models, but little attention has been given to model selection. Here, we discuss and illustrate model selection using leave-one-out cross validation, and the Bayesian Predictive Information Criterion (BPIC). Cross-validation is enormously computationally intensive; we thus evaluate the performance of the Watanabe-Akaike Information Criterion (WAIC) as a computationally efficient approximation to the BPIC. Our evaluation is based on analyses of 4 models as applied to 20 species covered by the BBS. Model selection based on BPIC provided no strong evidence of one model being consistently superior to the others; for 14/20 species, none of the models emerged as superior. For the remaining 6 species, a first-difference model of population trajectory was always among the best fitting. Our results show that WAIC is not reliable as a surrogate for BPIC. Development of appropriate model sets and their evaluation using BPIC is an important innovation for the analysis of BBS data.
MODEL SELECTION FOR SPECTROPOLARIMETRIC INVERSIONS
Energy Technology Data Exchange (ETDEWEB)
Asensio Ramos, A.; Manso Sainz, R.; Martinez Gonzalez, M. J.; Socas-Navarro, H. [Instituto de Astrofisica de Canarias, E-38205, La Laguna, Tenerife (Spain); Viticchie, B. [ESA/ESTEC RSSD, Keplerlaan 1, 2200 AG Noordwijk (Netherlands); Orozco Suarez, D., E-mail: aasensio@iac.es [National Astronomical Observatory of Japan, Mitaka, Tokyo 181-8588 (Japan)
2012-04-01
Inferring magnetic and thermodynamic information from spectropolarimetric observations relies on the assumption of a parameterized model atmosphere whose parameters are tuned by comparison with observations. Often, the choice of the underlying atmospheric model is based on subjective reasons. In other cases, complex models are chosen based on objective reasons (for instance, the necessity to explain asymmetries in the Stokes profiles) but it is not clear what degree of complexity is needed. The lack of an objective way of comparing models has, sometimes, led to opposing views of the solar magnetism because the inferred physical scenarios are essentially different. We present the first quantitative model comparison based on the computation of the Bayesian evidence ratios for spectropolarimetric observations. Our results show that there is not a single model appropriate for all profiles simultaneously. Data with moderate signal-to-noise ratios (S/Ns) favor models without gradients along the line of sight. If the observations show clear circular and linear polarization signals above the noise level, models with gradients along the line are preferred. As a general rule, observations with large S/Ns favor more complex models. We demonstrate that the evidence ratios correlate well with simple proxies. Therefore, we propose to calculate these proxies when carrying out standard least-squares inversions to allow for model comparison in the future.
Model selection in time series studies of influenza-associated mortality.
Directory of Open Access Journals (Sweden)
Xi-Ling Wang
BACKGROUND: Poisson regression modeling has been widely used to estimate influenza-associated disease burden, as it has the advantage of adjusting for multiple seasonal confounders. However, few studies have discussed how to judge the adequacy of confounding adjustment. This study aims to compare the performance of commonly adopted model selection criteria in terms of providing a reliable and valid estimate for the health impact of influenza. METHODS: We assessed four model selection criteria: quasi-Akaike information criterion (QAIC), quasi-Bayesian information criterion (QBIC), partial autocorrelation functions of residuals (PACF), and generalized cross-validation (GCV), by separately applying them to select the Poisson model best fitted to the mortality datasets that were simulated under different assumptions of seasonal confounding. The performance of these criteria was evaluated by the bias and root-mean-square error (RMSE) of estimates from the pre-determined coefficients of the influenza proxy variable. These four criteria were subsequently applied to an empirical hospitalization dataset to confirm the findings of the simulation study. RESULTS: GCV consistently provided smaller biases and RMSEs for the influenza coefficient estimates than QAIC, QBIC and PACF under the different simulation scenarios. Sensitivity analysis of different pre-determined influenza coefficients, study periods and lag weeks showed that GCV consistently outperformed the other criteria. Similar results were found in applying these selection criteria to estimate influenza-associated hospitalization. CONCLUSIONS: The GCV criterion is recommended for the selection of Poisson models to estimate influenza-associated mortality and morbidity burden with proper adjustment for confounding. These findings shall help standardize the Poisson modeling approach for influenza disease burden studies.
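GCV applies to any linear smoother: it rescales the in-sample residual sum of squares by the trace of the hat matrix. The sketch below uses ridge regression as the smoother on toy data, as an illustrative stand-in for the Poisson seasonal models in the study:

```python
import numpy as np

def gcv_ridge(X, y, lams):
    """Pick the ridge penalty by generalized cross-validation:
    GCV(lam) = (RSS / n) / (1 - trace(H) / n)^2."""
    n = len(y)
    best = None
    for lam in lams:
        # hat matrix of the ridge smoother at this penalty
        H = X @ np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T)
        resid = y - H @ y
        score = (resid @ resid / n) / (1 - np.trace(H) / n) ** 2
        if best is None or score < best[0]:
            best = (score, lam)
    return best[1]

rng = np.random.default_rng(7)
n = 120
X = rng.normal(size=(n, 10))
y = X[:, 0] - X[:, 1] + rng.normal(scale=2.0, size=n)
lam = gcv_ridge(X, y, np.logspace(-2, 3, 30))
print(round(float(lam), 4))
```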
Directory of Open Access Journals (Sweden)
Anić S.
2007-01-01
Modeling of any complex reaction system is a difficult task. If the system under examination can be in various oscillatory dynamic states, the apparent activation energies corresponding to different pathways may be of crucial importance for this purpose. In that case the activation energies can be determined by means of the main characteristics of an oscillatory process, such as the pre-oscillatory period, the duration of the oscillatory period, the period from the beginning of the process to the end of the last oscillation, the number of oscillations, and others. All of this is illustrated on the Bray-Liebhafsky oscillatory reaction.
Genetic search feature selection for affective modeling
DEFF Research Database (Denmark)
Martínez, Héctor P.; Yannakakis, Georgios N.
2010-01-01
Automatic feature selection is a critical step towards the generation of successful computational models of affect. This paper presents a genetic search-based feature selection method which is developed as a global-search algorithm for improving the accuracy of the affective models built...
Liu, Siwei; Rovine, Michael J; Molenaar, Peter C M
2012-03-01
With increasing popularity, growth curve modeling is more and more often considered as the first choice for analyzing longitudinal data. Although the growth curve approach is often a good choice, other modeling strategies may more directly answer questions of interest. It is common to see researchers fit growth curve models without considering alternative modeling strategies. In this article we compare 3 approaches for analyzing longitudinal data: repeated measures analysis of variance, covariance pattern models, and growth curve models. As all are members of the general linear mixed model family, they represent somewhat different assumptions about the way individuals change. These assumptions result in different patterns of covariation among the residuals around the fixed effects. In this article, we first indicate the kinds of data that are appropriately modeled by each and use real data examples to demonstrate possible problems associated with the blanket selection of the growth curve model. We then present a simulation that indicates the utility of Akaike information criterion and Bayesian information criterion in the selection of a proper residual covariance structure. The results cast doubt on the popular practice of automatically using growth curve modeling for longitudinal data without comparing the fit of different models. Finally, we provide some practical advice for assessing mean changes in the presence of correlated data.
Model selection for amplitude analysis
Guegan, Baptiste; Stevens, Justin; Williams, Mike
2015-01-01
Model complexity in amplitude analyses is often a priori under-constrained since the underlying theory permits a large number of amplitudes to contribute to most physical processes. The use of an overly complex model results in reduced predictive power and worse resolution on unknown parameters of interest. Therefore, it is common to reduce the complexity by removing from consideration some subset of the allowed amplitudes. This paper studies a data-driven method for limiting model complexity through regularization during regression in the context of a multivariate (Dalitz-plot) analysis. The regularization technique applied greatly improves the performance. A method is also proposed for obtaining the significance of a resonance in a multivariate amplitude analysis.
Roehe, Rainer; Dewhurst, Richard J; Duthie, Carol-Anne; Rooke, John A; McKain, Nest; Ross, Dave W; Hyslop, Jimmy J; Waterhouse, Anthony; Freeman, Tom C; Watson, Mick; Wallace, R John
2016-02-01
Methane produced by methanogenic archaea in ruminants contributes significantly to anthropogenic greenhouse gas emissions. The host genetic link controlling microbial methane production is unknown, and appropriate genetic selection strategies have not been developed. We used sire progeny group differences to estimate the host genetic influence on rumen microbial methane production in a factorial experiment consisting of crossbred breed types and diets. Rumen metagenomic profiling was undertaken to investigate links between microbial genes and methane emissions or feed conversion efficiency. Sire progeny groups differed significantly in their methane emissions measured in respiration chambers. Ranking of the sire progeny groups based on methane emissions or relative archaeal abundance was consistent overall and within diet, suggesting that archaeal abundance in ruminal digesta is under host genetic control and can be used to genetically select animals without measuring methane directly. In the metagenomic analysis of rumen contents, we identified 3970 microbial genes, of which 20 and 49 were significantly associated with methane emissions and feed conversion efficiency, respectively. These explained 81% and 86% of the respective variation and were clustered in distinct functional gene networks. Methanogenesis genes (e.g. mcrA and fmdB) were associated with methane emissions, whilst host-microbiome cross-talk genes (e.g. TSTA3 and FucI) were associated with feed conversion efficiency. These results strengthen the idea that the host animal controls its own microbiota to a significant extent and open up the implementation of effective breeding strategies using rumen microbial gene abundance as a predictor for difficult-to-measure traits on a large number of hosts. Generally, the results provide a proof of principle for using the relative abundance of microbial genes in the gastrointestinal tract of different species to predict their influence on traits, e.g. human metabolism.
Model selection for quantitative trait loci mapping in a full-sib family
Directory of Open Access Journals (Sweden)
Chunfa Tong
2012-01-01
Statistical methods for mapping quantitative trait loci (QTLs) in full-sib forest trees, in which the number of alleles and the linkage phase can vary from locus to locus, are still not well established. Previous studies assumed that the QTL segregation pattern was fixed throughout the genome in a full-sib family, despite the fact that this pattern can vary among regions of the genome. In this paper, we propose a method for selecting the appropriate model for QTL mapping based on the segregation of different types of markers and QTLs in a full-sib family. The QTL segregation patterns were classified into three types: test cross (1:1 segregation), F2 cross (1:2:1 segregation) and full cross (1:1:1:1 segregation). Akaike's information criterion (AIC), the Bayesian information criterion (BIC) and the Laplace-empirical criterion (LEC) were used to select the most likely QTL segregation pattern. Simulations were used to evaluate the power of these criteria and the precision of parameter estimates. Windows-based software was developed to run the selected QTL mapping method. A real example is presented to illustrate QTL mapping in forest trees based on an integrated linkage map with various segregation markers. The implications of this method for accurate QTL mapping in outbred species are discussed.
The Ouroboros Model, selected facets.
Thomsen, Knud
2011-01-01
The Ouroboros Model features a biologically inspired cognitive architecture. At its core lies a self-referential recursive process with alternating phases of data acquisition and evaluation. Memory entries are organized in schemata. Activation of part of a schema at any one time biases the whole structure and, in particular, its missing features, thus triggering expectations. An iterative recursive monitor process termed 'consumption analysis' then checks how well such expectations fit with successive activations. Mismatches between anticipations based on previous experience and actual current data are highlighted and used for controlling the allocation of attention. A measure of the goodness of fit provides feedback as a (self-)monitoring signal. The basic algorithm works for goal-directed movements and memory search as well as during abstract reasoning. It is sketched how the Ouroboros Model can shed light on characteristics of human behavior including attention, emotions, priming, masking, learning, sleep and consciousness.
Ferraro, Vittorio; Marinelli, Valerio; Mele, Marilena
2013-04-01
It is known that the best predictions of sky luminances are obtained with the CIE 15 standard skies model, but predictions with this model require knowledge of the measured luminance distributions themselves, since a criterion for selecting the type of sky from irradiance values had not been found until now. The authors propose a new, simple method of applying the CIE model based on the use of the sky index Si. A comparison between calculated luminance data and data measured in Arcavacata di Rende (Italy), Lyon (France) and Pamplona (Spain) shows the good performance of this method in comparison with other methods of luminance calculation in the literature.
Convergent, discriminant, and criterion validity of DSM-5 traits.
Yalch, Matthew M; Hopwood, Christopher J
2016-10-01
Section III of the Diagnostic and Statistical Manual of Mental Disorders (5th ed.; DSM-5; American Psychiatric Association, 2013) contains a system for diagnosing personality disorder based in part on assessing 25 maladaptive traits. Initial research suggests that this aspect of the system improves the validity and clinical utility of the Section II model. The Computer Adaptive Test of Personality Disorder (CAT-PD; Simms et al., 2011) contains many traits similar to those of the DSM-5, as well as several additional traits seemingly not covered in the DSM-5. In this study we evaluate the convergent and discriminant validity between the DSM-5 traits, as assessed by the Personality Inventory for DSM-5 (PID-5; Krueger et al., 2012), and the CAT-PD in an undergraduate sample, and test whether traits included in the CAT-PD but not the DSM-5 provide incremental validity in association with clinically relevant criterion variables. Results supported the convergent and discriminant validity of the PID-5 and CAT-PD scales in their assessment of 23 out of 25 DSM-5 traits. DSM-5 traits were consistently associated with 11 criterion variables, despite our having intentionally selected clinically relevant criterion constructs not directly assessed by DSM-5 traits. However, the additional CAT-PD traits provided incremental information above and beyond the DSM-5 traits for all criterion variables examined. These findings support the validity of pathological trait models in general and the DSM-5 and CAT-PD models in particular, while also suggesting that the CAT-PD may include additional traits for consideration in future iterations of the DSM-5 system.
Fields, David A; Allison, David B
2012-08-01
The objective of this study was to determine the accuracy, precision, bias, and reliability of percent fat (%fat) determined by air-displacement plethysmography (ADP) with the pediatric option against the four-compartment model in 31 children (4.1 ± 1.2 years, 103.3 ± 10.2 cm, 17.5 ± 3.4 kg). %Fat was determined by ADP (BOD POD Body Composition System; COSMED USA, Concord, CA) with the pediatric option. Total body water (TBW) was determined by isotope dilution (²H₂O; 0.2 g/kg), while bone mineral was determined by dual-energy X-ray absorptiometry (DXA) (Lunar iDXA v13.31; GE, Fairfield, CT, analyzed using enCore 2010 software). The four-compartment model by Lohman was used as the criterion measure of %fat. The regression for %fat by ADP vs. %fat by the four-compartment model did not deviate from the line of identity, where y = 0.849(x) + 4.291. ADP explained 75.2% of the variance in %fat by the four-compartment model, while the standard error of the estimate (SEE) was 2.09 %fat. The Bland-Altman analysis showed %fat by ADP did not exhibit any bias across the range of fatness (r = 0.04; P = 0.81). The reliability of ADP was assessed by the coefficient of variation (CV), within-subject SD, and Cronbach's α. The CV was 3.5%, the within-subject SD was 0.9%, and Cronbach's α was 0.95. In conclusion, ADP with the pediatric option is accurate, precise, reliable, and without bias in estimating %fat in children 2-6 years old.
Random Effect and Latent Variable Model Selection
Dunson, David B
2008-01-01
Presents various methods for accommodating model uncertainty in random effects and latent variable models. This book focuses on frequentist likelihood ratio and score tests for zero variance components, as well as on Bayesian methods for random effects selection in linear mixed effects and generalized linear mixed models.
Comparison of six statistical approaches in the selection of appropriate fish growth models
Institute of Scientific and Technical Information of China (English)
ZHU Lixin; LI Lifang; LIANG Zhenlin
2009-01-01
The performance of six statistical approaches, which can be used for selection of the best model to describe the growth of individual fish, was analyzed using simulated and real length-at-age data. The six approaches include the coefficient of determination (R2), the adjusted coefficient of determination (adj.-R2), the root mean squared error (RMSE), Akaike's information criterion (AIC), the bias-corrected AIC (AICc) and the Bayesian information criterion (BIC). The simulation data were generated by five growth models with different numbers of parameters. Four sets of real data were taken from the literature. The parameters in each of the five growth models were estimated using the maximum likelihood method under the assumption of an additive error structure for the data. The model best supported by the data was identified using each of the six approaches. The results show that R2 and RMSE have the same properties and perform worst. The sample size has an effect on the performance of adj.-R2, AIC, AICc and BIC. Adj.-R2 does better in small samples than in large samples. AIC is not suitable for use in small samples and tends to select more complex models when the sample size becomes large. AICc and BIC have the best performance in the small- and large-sample cases, respectively. Use of AICc or BIC is recommended for selection of fish growth models according to the size of the length-at-age data set.
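The six statistics compared above can all be computed from one fitted model's residuals under the additive Gaussian-error assumption. Below is a minimal pure-Python sketch; the length-at-age data and the von Bertalanffy-style fitted values are invented for illustration, and additive constants are dropped from AIC/AICc/BIC as usual.

```python
import math

def selection_criteria(y_obs, y_fit, k):
    """Compute the six model-selection statistics for one fitted model,
    assuming additive Gaussian errors (constants dropped from AIC/BIC)."""
    n = len(y_obs)
    resid = [o - f for o, f in zip(y_obs, y_fit)]
    rss = sum(r * r for r in resid)
    ybar = sum(y_obs) / n
    tss = sum((o - ybar) ** 2 for o in y_obs)
    r2 = 1.0 - rss / tss
    adj_r2 = 1.0 - (1.0 - r2) * (n - 1) / (n - k - 1)
    rmse = math.sqrt(rss / n)
    aic = n * math.log(rss / n) + 2 * k
    aicc = aic + 2 * k * (k + 1) / (n - k - 1)
    bic = n * math.log(rss / n) + k * math.log(n)
    return {"R2": r2, "adjR2": adj_r2, "RMSE": rmse,
            "AIC": aic, "AICc": aicc, "BIC": bic}

# hypothetical length-at-age data and a 3-parameter von Bertalanffy-style fit
ages = [1, 2, 3, 4, 5, 6, 7, 8]
lengths = [12.1, 19.8, 25.9, 30.7, 34.5, 37.4, 39.7, 41.5]
fit3 = [44.0 * (1 - math.exp(-0.32 * (a + 0.1))) for a in ages]
crit = selection_criteria(lengths, fit3, k=3)
```

Note how the small-sample correction makes AICc strictly larger than AIC, which is exactly why AICc is preferred for short length-at-age series.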
Directory of Open Access Journals (Sweden)
A. V. Kovalevsky
2007-01-01
The paper considers a principle for determining the type of short circuit, which is used in the mathematical model of adaptive microprocessor protection with the purpose of improving sensitivity. As a result of a computational experiment, dependences ΔI(t) for various short-circuit types (three- and two-phase short circuits) have been obtained at a number of points of the investigated power network. These dependences make it possible to determine a numerical value of the ΔI coefficient. A comparative analysis has been made of the operation of adaptive and non-adaptive microprocessor protections in the case of asymmetric faults of the investigated power network at those same points.
Review and selection of unsaturated flow models
Energy Technology Data Exchange (ETDEWEB)
Reeves, M.; Baker, N.A.; Duguid, J.O. [INTERA, Inc., Las Vegas, NV (United States)
1994-04-04
Since the 1960s, ground-water flow models have been used for analysis of water resources problems. In the 1970s, emphasis began to shift to analysis of waste management problems. This shift in emphasis was largely brought about by site selection activities for geologic repositories for disposal of high-level radioactive wastes. Model development during the 1970s and well into the 1980s focused primarily on saturated ground-water flow because geologic repositories in salt, basalt, granite, shale, and tuff were envisioned to be below the water table. Selection of the unsaturated zone at Yucca Mountain, Nevada, for potential disposal of waste began to shift model development toward unsaturated flow models. Under the US Department of Energy (DOE), the Civilian Radioactive Waste Management System Management and Operating Contractor (CRWMS M&O) has the responsibility to review, evaluate, and document existing computer models; to conduct performance assessments; and to develop performance assessment models, where necessary. This document describes the CRWMS M&O approach to model review and evaluation (Chapter 2), and the requirements for unsaturated flow models which are the bases for selection from among the current models (Chapter 3). Chapter 4 identifies existing models, and their characteristics. Through a detailed examination of characteristics, Chapter 5 presents the selection of models for testing. Chapter 6 discusses the testing and verification of selected models. Chapters 7 and 8 give conclusions and make recommendations, respectively. Chapter 9 records the major references for each of the models reviewed. Appendix A, a collection of technical reviews for each model, contains a more complete list of references. Finally, Appendix B characterizes the problems used for model testing.
A Less Conservative Circle Criterion
2008-01-01
A weak form of the Circle Criterion for Lur'e systems is stated. The result makes it possible to prove global boundedness of all system solutions. Moreover, such a result can be employed to enlarge the set of nonlinearities for which the standard Circle Criterion can guarantee absolute stability.
Force criterion of different electrolytes in microchannel
Institute of Scientific and Technical Information of China (English)
Ren Yu-Kun; Yan Hui; Jiang Hong-Yuan; Gu Jian-Zhong; Antonio Ramos
2009-01-01
The control and handling of fluids is central to many applications of the lab-on-a-chip. This paper analyzes the basic theory of manipulating different electrolytes and establishes a two-dimensional model. The Coulomb force and the dielectric force, which make up the body force on different electrolytes in the microchannel, were analyzed. A force criterion at the interface was derived and verified with a specific example. Three basic equations were analyzed and applied to simulate the phenomenon. The force criterion was shown to be correct on the basis of the simulation results.
Genetic search feature selection for affective modeling
DEFF Research Database (Denmark)
Martínez, Héctor P.; Yannakakis, Georgios N.
2010-01-01
Automatic feature selection is a critical step towards the generation of successful computational models of affect. This paper presents a genetic search-based feature selection method which is developed as a global-search algorithm for improving the accuracy of the affective models built. [...] The method is tested and compared against sequential forward feature selection and random search in a dataset derived from a game survey experiment which contains bimodal input features (physiological and gameplay) and expressed pairwise preferences of affect. Results suggest that the proposed method [...]
A scale invariance criterion for LES parametrizations
Directory of Open Access Journals (Sweden)
Urs Schaefer-Rolffs
2015-01-01
Turbulent kinetic energy cascades in fluid dynamical systems are usually characterized by scale invariance. However, representations of subgrid scales in large eddy simulations do not necessarily fulfill this constraint. So far, scale invariance has been considered in the context of isotropic, incompressible, and three-dimensional turbulence. In the present paper, the theory is extended to compressible flows that obey the hydrostatic approximation, as well as to corresponding subgrid-scale parametrizations. A criterion is presented to check whether the symmetries of the governing equations are correctly translated into the equations used in numerical models. By applying scaling transformations to the model equations, relations between the scaling factors are obtained by demanding that the mathematical structure of the equations does not change. The criterion is validated by recovering the breakdown of scale invariance in the classical Smagorinsky model and confirming scale invariance for the Dynamic Smagorinsky Model. The criterion also shows that the compressible continuity equation is intrinsically scale-invariant. The criterion further proves that a scale-invariant turbulent kinetic energy equation or a scale-invariant equation of motion for a passive tracer is obtained only with a dynamic mixing length. For large-scale atmospheric flows governed by the hydrostatic balance, the energy cascade is due to horizontal advection and the vertical length scale exhibits a scaling behaviour that is different from that derived for horizontal length scales.
Directory of Open Access Journals (Sweden)
Sveiczer Akos
2006-03-01
Background: There is considerable controversy concerning the exact growth profile of size parameters during the cell cycle. Linear, exponential and bilinear models are commonly considered, and the same model may not apply to all species. Selection of the most adequate model to describe a given data-set requires the use of quantitative model selection criteria, such as the partial (sequential) F-test, the Akaike information criterion and the Schwarz Bayesian information criterion, which are suitable for comparing differently parameterized models in terms of the quality and robustness of the fit but have not yet been used in cell growth-profile studies. Results: Length increase data from representative individual fission yeast (Schizosaccharomyces pombe) cells measured on time-lapse films have been reanalyzed using these model selection criteria. To fit the data, an extended version of a recently introduced linearized biexponential (LinBiExp) model was developed, which makes possible a smooth, continuously differentiable transition between two linear segments and, hence, allows fully parametrized bilinear fittings. Despite relatively small differences, essentially all the quantitative selection criteria considered here indicated that the bilinear model was somewhat more adequate than the exponential model for fitting these fission yeast data. Conclusion: A general quantitative framework was introduced to judge the adequacy of bilinear versus exponential models in the description of growth time-profiles. For single cell growth, because of the relatively limited data range, the statistical evidence is not strong enough to favor one model clearly over the other and to settle the bilinear versus exponential dispute. Nevertheless, for the present individual cell growth data for fission yeast, the bilinear model seems more adequate according to all metrics, especially in the case of wee1Δ cells.
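The bilinear-versus-exponential comparison above can be sketched with ordinary least squares and AIC. The single-cell length data below are invented, and for simplicity the bilinear model is fit as two independent linear segments with a grid-searched breakpoint rather than the paper's continuously differentiable LinBiExp parametrization.

```python
import math

def ols(x, y):
    # ordinary least squares for y = a + b*x; returns (a, b, rss)
    n = len(x)
    xbar, ybar = sum(x) / n, sum(y) / n
    sxx = sum((xi - xbar) ** 2 for xi in x)
    b = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y)) / sxx
    a = ybar - b * xbar
    rss = sum((yi - (a + b * xi)) ** 2 for xi, yi in zip(x, y))
    return a, b, rss

def aic(rss, n, k):
    # AIC for Gaussian errors, additive constants dropped
    return n * math.log(rss / n) + 2 * k

# hypothetical single-cell length course (minutes, micrometres) with a
# growth-rate change partway through the cycle
t = [0, 10, 20, 30, 40, 50, 60, 70, 80, 90]
L = [7.02, 7.58, 8.21, 8.79, 9.43, 10.01, 10.88, 11.82, 12.69, 13.61]
n = len(t)

# exponential model (2 parameters): fit ln(L) linearly in t, score in L-space
a_e, b_e, _ = ols(t, [math.log(v) for v in L])
rss_exp = sum((v - math.exp(a_e + b_e * ti)) ** 2 for ti, v in zip(t, L))

# bilinear model (4 parameters): two linear segments, breakpoint by grid search
rss_bil = min(ols(t[:j], L[:j])[2] + ols(t[j:], L[j:])[2]
              for j in range(2, n - 1))

aic_exp = aic(rss_exp, n, 2)
aic_bil = aic(rss_bil, n, 4)
```

For this toy series the bilinear fit wins on AIC despite its two extra parameters; with real single-cell data the margin is much smaller, which is the paper's point.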
On identifying the optimal number of population clusters via the deviance information criterion.
Gao, Hong; Bryc, Katarzyna; Bustamante, Carlos D
2011-01-01
Inferring population structure using Bayesian clustering programs often requires a priori specification of the number of subpopulations, K, from which the sample has been drawn. Here, we explore the utility of a common Bayesian model selection criterion, the Deviance Information Criterion (DIC), for estimating K. We evaluate the accuracy of DIC, as well as other popular approaches, on datasets generated by coalescent simulations under various demographic scenarios. We find that DIC outperforms competing methods in many genetic contexts, validating its application in assessing population structure.
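Once posterior draws are available, computing DIC is mechanical: the mean deviance plus an effective-parameter penalty. The toy sketch below uses a one-parameter normal-mean model with draws standing in for MCMC output; it illustrates the criterion only, not the paper's clustering setting.

```python
import math
import random

def deviance(theta, data, sigma=1.0):
    # deviance = -2 * log-likelihood of a Normal(theta, sigma) model
    return sum((x - theta) ** 2 / sigma ** 2 + math.log(2 * math.pi * sigma ** 2)
               for x in data)

def dic(posterior_draws, data):
    # DIC = Dbar + pD, where pD = Dbar - D(posterior mean) counts effective parameters
    d_bar = sum(deviance(t, data) for t in posterior_draws) / len(posterior_draws)
    theta_bar = sum(posterior_draws) / len(posterior_draws)
    p_d = d_bar - deviance(theta_bar, data)
    return d_bar + p_d, p_d

random.seed(0)
data = [random.gauss(2.0, 1.0) for _ in range(50)]
# stand-in for MCMC output: draws from the known conjugate posterior of the mean
draws = [random.gauss(sum(data) / len(data), 1.0 / math.sqrt(len(data)))
         for _ in range(2000)]
score, p_eff = dic(draws, data)
```

For this one-parameter model the effective number of parameters p_eff comes out close to 1, as expected; in the K-selection setting one would compute such a score per candidate K and pick the minimum.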
Schöniger, Anneli; Wöhling, Thomas; Samaniego, Luis; Nowak, Wolfgang
2014-12-01
Bayesian model selection or averaging objectively ranks a number of plausible, competing conceptual models based on Bayes' theorem. It implicitly performs an optimal trade-off between performance in fitting available data and minimum model complexity. The procedure requires determining Bayesian model evidence (BME), which is the likelihood of the observed data integrated over each model's parameter space. The computation of this integral is highly challenging because it is as high-dimensional as the number of model parameters. Three classes of techniques to compute BME are available, each with its own challenges and limitations: (1) Exact and fast analytical solutions are limited by strong assumptions. (2) Numerical evaluation quickly becomes unfeasible for expensive models. (3) Approximations known as information criteria (ICs) such as the AIC, BIC, or KIC (Akaike, Bayesian, or Kashyap information criterion, respectively) yield contradicting results with regard to model ranking. Our study features a theory-based intercomparison of these techniques. We further assess their accuracy in a simplistic synthetic example where for some scenarios an exact analytical solution exists. In more challenging scenarios, we use a brute-force Monte Carlo integration method as reference. We continue this analysis with a real-world application of hydrological model selection. This is a first-time benchmarking of the various methods for BME evaluation against true solutions. Results show that BME values from ICs are often heavily biased and that the choice of approximation method substantially influences the accuracy of model ranking. For reliable model selection, bias-free numerical methods should be preferred over ICs whenever computationally feasible.
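The brute-force Monte Carlo reference estimate of BME used above is simply the prior-averaged likelihood, stabilized with a log-sum-exp. The sketch below compares two "conceptual models" that differ only in their priors; the data and priors are invented for illustration.

```python
import math
import random

def log_likelihood(theta, data, sigma=1.0):
    # Gaussian log-likelihood of the data given a mean parameter theta
    return sum(-0.5 * ((x - theta) / sigma) ** 2
               - 0.5 * math.log(2 * math.pi * sigma ** 2) for x in data)

def log_bme_monte_carlo(data, prior_sampler, n_draws=20000):
    """Brute-force Monte Carlo estimate of log Bayesian model evidence:
    BME = E_prior[ p(data | theta) ], averaged over draws from the prior."""
    lls = [log_likelihood(prior_sampler(), data) for _ in range(n_draws)]
    m = max(lls)  # log-sum-exp trick for numerical stability
    return m + math.log(sum(math.exp(ll - m) for ll in lls) / n_draws)

random.seed(1)
data = [random.gauss(0.5, 1.0) for _ in range(20)]
# two competing "models" differing only in their priors (toy setup)
log_bme_wide = log_bme_monte_carlo(data, lambda: random.uniform(-10, 10))
log_bme_tight = log_bme_monte_carlo(data, lambda: random.uniform(-1, 1))
```

The tighter prior that still covers the data-generating value wins on evidence, illustrating the automatic penalty that BME places on unnecessarily diffuse (complex) models.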
Model selection for Gaussian kernel PCA denoising
DEFF Research Database (Denmark)
Jørgensen, Kasper Winther; Hansen, Lars Kai
2012-01-01
We propose kernel Parallel Analysis (kPA) for automatic kernel scale and model order selection in Gaussian kernel PCA. Parallel Analysis [1] is based on a permutation test for covariance and has previously been applied for model order selection in linear PCA; we here augment the procedure to also tune the Gaussian kernel scale of radial basis function based kernel PCA. We evaluate kPA for denoising of simulated data and the US Postal data set of handwritten digits. We find that kPA outperforms other heuristics to choose the model order and kernel scale in terms of signal-to-noise ratio (SNR) [...]
Melody Track Selection Using Discriminative Language Model
Wu, Xiao; Li, Ming; Suo, Hongbin; Yan, Yonghong
In this letter we focus on the task of selecting the melody track from a polyphonic MIDI file. Based on the intuition that music and language are similar in many aspects, we solve the selection problem by introducing an n-gram language model to learn melody co-occurrence patterns in a statistical manner and determine the melodic degree of a given MIDI track. Furthermore, we propose the idea of using a background model and posterior probability criteria to make the modeling more discriminative. In the evaluation, the achieved 81.6% correct rate indicates the feasibility of our approach.
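The discriminative idea (melody language model scored against a background model) can be sketched with add-one-smoothed bigrams over pitch sequences. The training contours below are invented toy data, not the letter's corpus, and the smoothing choice is an assumption.

```python
import math
from collections import Counter

def bigram_logprob(seq, bigram_counts, unigram_counts, vocab_size):
    # add-one smoothed bigram log-probability of a note sequence
    lp = 0.0
    for prev, cur in zip(seq, seq[1:]):
        lp += math.log((bigram_counts[(prev, cur)] + 1) /
                       (unigram_counts[prev] + vocab_size))
    return lp

def train(seqs):
    # collect bigram and context (unigram) counts from training sequences
    bg, ug = Counter(), Counter()
    for s in seqs:
        ug.update(s[:-1])
        bg.update(zip(s, s[1:]))
    return bg, ug

# toy pitch contours for "melody" vs. "accompaniment" tracks (hypothetical)
melody_train = [[60, 62, 64, 65, 67, 65, 64, 62, 60], [67, 65, 64, 62, 60, 62, 64]]
backgr_train = [[48, 48, 55, 48, 48, 55], [36, 43, 36, 43, 36, 43]]
mel_bg, mel_ug = train(melody_train)
bak_bg, bak_ug = train(backgr_train)
V = 128  # MIDI pitch range

def melodic_score(track):
    # discriminative score: melody-LM log-prob minus background-LM log-prob
    return (bigram_logprob(track, mel_bg, mel_ug, V) -
            bigram_logprob(track, bak_bg, bak_ug, V))

candidate = [60, 62, 64, 65, 67]  # stepwise contour, melody-like
bass_line = [36, 43, 36, 43]      # alternating low pitches, accompaniment-like
```

Ranking tracks by this log-ratio rather than the raw melody-LM probability is what makes the modeling discriminative: a track must look more like melody than like background to score highly.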
Institute of Scientific and Technical Information of China (English)
XU Xianghua; HE lin
2006-01-01
In phonetic decision tree based state tying, decision trees with varying numbers of leaf nodes denote models of different complexity. By studying the influence of model complexity on system performance and speaker adaptation, a decision tree dynamic pruning method based on the Minimum Description Length (MDL) criterion is presented. In this method, a well-trained, large-sized phonetic decision tree is selected as the initial model set, and model complexity is computed by adding a penalty parameter which varies according to the amount of adaptation data. Largely owing to the reasonable selection of initial models and the integration of the stochastic and asymptotic properties of the MDL criterion, the proposed method achieves high performance when combined with speaker adaptation.
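The data-dependent penalty idea can be sketched with a generic two-part MDL score (data cost plus penalized model cost). All numbers below are illustrative, and the score form is a standard MDL-style approximation, not necessarily the paper's exact formula.

```python
import math

def mdl_score(log_likelihood, n_leaves, params_per_leaf, n_data, penalty=1.0):
    """Two-part MDL-style score: data cost plus a penalized model cost.
    `penalty` plays the role of the paper's adaptation-data-dependent weight."""
    model_cost = penalty * 0.5 * n_leaves * params_per_leaf * math.log(n_data)
    return -log_likelihood + model_cost

# hypothetical likelihoods of adaptation data under trees pruned to various sizes
ll_by_leaves = {1: -5000.0, 2: -4600.0, 4: -4400.0, 8: -4300.0, 16: -4280.0}

def best_size(penalty):
    # pick the tree size minimizing the MDL score under a given penalty weight
    return min(ll_by_leaves,
               key=lambda m: mdl_score(ll_by_leaves[m], m, 10, 1000, penalty))

size_much_data = best_size(1.0)    # weak penalty: keep a larger tree
size_little_data = best_size(3.0)  # strong penalty: prune harder
```

Raising the penalty (mimicking scarce adaptation data) shifts the optimum toward a smaller tree, which is the behavior the dynamic pruning method exploits.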
Stability Criterion for Humanoid Running
Institute of Scientific and Technical Information of China (English)
LI Zhao-Hui; HUANG Qiang; LI Ke-Jie
2005-01-01
A humanoid robot has high mobility but also a risk of tipping over. Until now, a main topic in humanoid robotics has been walking stability; the issue of running stability has rarely been investigated. Running differs from walking, and its dynamic stability is more difficult to maintain. The objective of this paper is to study a stability criterion for humanoid running based on the whole dynamics. First, the cycle and the dynamics of running are analyzed. Then, the stability criterion for humanoid running is presented. Finally, the effectiveness of the proposed stability criterion is illustrated by a dynamic simulation example using a dynamic analysis and design system (DADS).
Warren, Dan L; Seifert, Stephanie N
2011-03-01
Maxent, one of the most commonly used methods for inferring species distributions and environmental tolerances from occurrence data, allows users to fit models of arbitrary complexity. Model complexity is typically constrained via a process known as L1 regularization, but at present little guidance is available for setting the appropriate level of regularization, and the effects of inappropriately complex or simple models are largely unknown. In this study, we demonstrate the use of information criterion approaches to setting regularization in Maxent, and we compare models selected using information criteria to models selected using other criteria that are common in the literature. We evaluate model performance using occurrence data generated from a known "true" initial Maxent model, using several different metrics for model quality and transferability. We demonstrate that models that are inappropriately complex or inappropriately simple show reduced ability to infer habitat quality, reduced ability to infer the relative importance of variables in constraining species' distributions, and reduced transferability to other time periods. We also demonstrate that information criteria may offer significant advantages over the methods commonly used in the literature.
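Information-criterion selection of the regularization level reduces to scoring each candidate fit by AICc, with k taken as the number of retained (nonzero-coefficient) features, as is common practice for Maxent. The fits below are hypothetical: stronger regularization yields fewer features and a lower log-likelihood.

```python
import math

def aicc(ll, k, n):
    # small-sample corrected AIC; n is the number of occurrence records
    return -2 * ll + 2 * k + 2 * k * (k + 1) / (n - k - 1)

# hypothetical Maxent fits over a grid of L1 regularization multipliers:
# multiplier -> (maximized log-likelihood, number of nonzero features)
fits = {0.5: (-812.0, 24), 1.0: (-818.5, 15), 2.0: (-824.0, 9), 4.0: (-840.0, 5)}
n_occurrences = 120

best_beta = min(fits, key=lambda b: aicc(*fits[b], n_occurrences))
```

With these invented numbers the criterion settles on an intermediate multiplier, penalizing both the overfit 24-feature model and the underfit 5-feature one, which mirrors the study's finding that inappropriately complex or simple models transfer poorly.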
Baldassarre, Luca; Pontil, Massimiliano; Mourão-Miranda, Janaina
2017-01-01
Structured sparse methods have received significant attention in neuroimaging. These methods allow the incorporation of domain knowledge through additional spatial and temporal constraints in the predictive model and carry the promise of being more interpretable than non-structured sparse methods, such as LASSO or Elastic Net methods. However, although sparsity has often been advocated as leading to more interpretable models, it can also lead to unstable models under subsampling or slight changes of the experimental conditions. In the present work we investigate the impact of using stability/reproducibility as an additional model selection criterion on several different sparse (and structured sparse) methods that have been recently applied for fMRI brain decoding. We compare three different model selection criteria: (i) classification accuracy alone; (ii) classification accuracy and overlap between the solutions; (iii) classification accuracy and correlation between the solutions. The methods we consider include LASSO, Elastic Net, Total Variation, sparse Total Variation, Laplacian and Graph Laplacian Elastic Net (GraphNET). Our results show that explicitly accounting for stability/reproducibility during the model optimization can mitigate some of the instability inherent in sparse methods. In particular, using accuracy and overlap between the solutions as a joint optimization criterion can lead to solutions that are more similar in terms of accuracy, sparsity levels and coefficient maps, even when different sparsity methods are considered.
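Criterion (ii), accuracy plus overlap between solutions, can be sketched as a single joint score. The Jaccard measure over nonzero-coefficient supports and the trade-off weight `lam` are assumptions for illustration, not the paper's exact construction.

```python
from itertools import combinations

def support_overlap(w1, w2, tol=1e-8):
    # Jaccard overlap between the sets of nonzero coefficients of two solutions
    s1 = {i for i, v in enumerate(w1) if abs(v) > tol}
    s2 = {i for i, v in enumerate(w2) if abs(v) > tol}
    union = s1 | s2
    return len(s1 & s2) / len(union) if union else 1.0

def joint_criterion(fold_accuracies, fold_weights, lam=0.5):
    """Mean decoding accuracy minus a penalty for disagreement between the
    sparsity patterns found across folds (trade-off weight lam is assumed)."""
    pairs = list(combinations(fold_weights, 2))
    mean_overlap = sum(support_overlap(a, b) for a, b in pairs) / len(pairs)
    return sum(fold_accuracies) / len(fold_accuracies) - lam * (1.0 - mean_overlap)

# two hypothetical sparse decoders: B is slightly less accurate but far more stable
acc_A = [0.80, 0.78, 0.82]
w_A = [[1.2, 0, 0, 0.4, 0], [0, 0.9, 0, 0, 0.3], [0, 0, 1.1, 0, 0.2]]
acc_B = [0.79, 0.77, 0.80]
w_B = [[1.0, 0, 0, 0.5, 0], [1.1, 0, 0, 0.4, 0], [0.9, 0, 0, 0.6, 0]]
```

Under accuracy alone, decoder A wins; under the joint criterion, the reproducible decoder B is preferred, which is the trade-off the study advocates making explicit.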
Expert System Model for Educational Personnel Selection
Directory of Open Access Journals (Sweden)
Héctor A. Tabares-Ospina
2013-06-01
Staff selection is a difficult task due to the subjectivity involved in the evaluation. This process can be complemented by a decision-support system. This paper presents the implementation of an expert system to systematize the selection process of professors. The management of the software development is divided into four parts: requirements, design, implementation and commissioning. The proposed system models specific knowledge through relationships between evidence variables and objective variables.
Tascon, Marcos; Benavente, Fernando; Castells, Cecilia B; Gagliardi, Leonardo G
2016-08-19
In capillary electrophoresis (CE), resolution (Rs) and selectivity (α) are criteria often used in practice to optimize separations. Nevertheless, when these and other proposed parameters are considered as an elementary criterion for optimization by mathematical maximization, certain issues and inconsistencies appear. In the present work we analyzed the pros and cons of using these parameters as elementary criteria for mathematical optimization of capillary electrophoretic separations. We characterized the requirements of an ideal criterion to qualify separations within the framework of mathematical optimizations and, accordingly, propose: (1) a new elementary criterion (t') and (2) a method to extend this elementary criterion to compose a global function that simultaneously qualifies many different aspects, also called a multicriteria optimization function (MCOF). In order to demonstrate this new concept, we employed a group of six alkaloids with closely related structures (harmine, harmaline, harmol, harmalol, harmane and norharmane). On the basis of this system, we present a critical comparison between the new optimization criterion t' and the former elementary criteria. Finally, aimed at validating the proposed methods, we composed an MCOF in which the capillary-electrophoretic separation of the six model compounds is mathematically optimized as a function of pH as the unique variable. Experimental results subsequently confirmed the accuracy of the model.
A one-class kernel fisher criterion for outlier detection.
Dufrenois, Franck
2015-05-01
Recently, Dufrenois and Noyer proposed a one-class Fisher's linear discriminant to isolate normal data from outliers. In this paper, a kernelized version of their criterion is presented. Their criterion was originally based on an iterative optimization process alternating between subspace selection and clustering; I show here that it has an upper bound that makes these two problems independent. In particular, the estimation of the label vector is formulated as an unconstrained binary linear problem (UBLP) which can be solved using an iterative perturbation method. Once the label vector is estimated, an optimal projection subspace is obtained by solving a generalized eigenvalue problem. Like many other kernel methods, the performance of the proposed approach depends on the choice of the kernel. When constructed with a Gaussian kernel, the proposed contrast measure is shown to be an efficient indicator for selecting an optimal kernel width. This property simplifies the model selection problem, which is typically solved by costly (generalized) cross-validation procedures. Initialization, convergence analysis, and computational complexity are also discussed. Lastly, the proposed algorithm is compared with recent novelty detectors on synthetic and real data sets.
DEFF Research Database (Denmark)
Mowlaee, Pejman; Christensen, Mads Græsbøll; Tan, Zheng-Hua
2010-01-01
The problem of detecting the number of speakers for a particular segment occurs in many different speech applications. In single channel speech separation, for example, this information is often used to simplify the separation process, as the signal has to be treated differently depending on the number of speakers. Inspired by the asymptotic maximum a posteriori rule proposed for model selection, we pose the problem as a model selection problem. More specifically, we derive a multiple hypotheses test for determining the number of speakers at a frame level in an observed signal based on underlying parametric speaker models, trained a priori. The experimental results indicate that the suggested method improves the quality of the separated signals in a single-channel speech separation scenario at different signal-to-signal ratio levels.
Bayesian variable selection for latent class models.
Ghosh, Joyee; Herring, Amy H; Siega-Riz, Anna Maria
2011-09-01
In this article, we develop a latent class model with class probabilities that depend on subject-specific covariates. One of our major goals is to identify important predictors of latent classes. We consider methodology that allows estimation of latent classes while allowing for variable selection uncertainty. We propose a Bayesian variable selection approach and implement a stochastic search Gibbs sampler for posterior computation to obtain model-averaged estimates of quantities of interest such as marginal inclusion probabilities of predictors. Our methods are illustrated through simulation studies and application to data on weight gain during pregnancy, where it is of interest to identify important predictors of latent weight gain classes.
Directory of Open Access Journals (Sweden)
Xiaohong Chen
2017-05-01
The upper tail of a flood frequency distribution is of particular concern for flood control. However, different model selection criteria often give different optimal distributions when the focus is on the upper tail of the distribution. With emphasis on upper-tail behavior, five distribution selection criteria, including two hypothesis tests and three information-based criteria, are evaluated in selecting the best-fitted distribution from eight widely used distributions, using datasets from the Thames River, Wabash River, Beijiang River and Huai River. The performance of the five selection criteria is verified by using a composite criterion with focus on upper-tail events. This paper demonstrates an approach for optimally selecting suitable flood frequency distributions. Results illustrate that (1) the hypothesis-test and information-based approaches select different frequency distributions in the four rivers. Hypothesis tests are more likely to choose complex parametric models, and information-based criteria prefer simple, effective models; neither type of criterion shows a particular tendency toward the tail of the distribution. (2) The information-based criteria perform better than hypothesis tests in most cases when the focus is on the goodness of predictions of extreme upper-tail events. The distributions selected by information-based criteria are more likely to be close to true values in the upper tail of the frequency curve than those selected by hypothesis-test methods. (3) The proposed composite criterion not only selects the optimal distribution, but also evaluates the error of the estimated value, which often plays an important role in risk assessment and engineering design. To decide on a particular distribution to fit the high flow, it is better to use the composite criterion.
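An information-based criterion for choosing among candidate flood frequency distributions can be sketched as follows. This is a minimal stdlib-only illustration with synthetic data, comparing only two families (Gumbel and normal) by AIC using method-of-moments fits; the generating parameters are hypothetical and this is not the paper's eight-distribution procedure.

```python
import math
import random

EULER = 0.5772156649  # Euler-Mascheroni constant

def fit_normal(x):
    """Moment fit of a normal distribution; returns (log-likelihood, n_params)."""
    n = len(x)
    mu = sum(x) / n
    sd = math.sqrt(sum((v - mu) ** 2 for v in x) / n)
    ll = sum(-0.5 * math.log(2 * math.pi * sd ** 2)
             - (v - mu) ** 2 / (2 * sd ** 2) for v in x)
    return ll, 2

def fit_gumbel(x):
    """Moment fit of a Gumbel (max) distribution; returns (log-likelihood, n_params)."""
    n = len(x)
    xbar = sum(x) / n
    s = math.sqrt(sum((v - xbar) ** 2 for v in x) / n)
    beta = s * math.sqrt(6) / math.pi      # scale from the variance
    mu = xbar - EULER * beta               # location from the mean
    ll = 0.0
    for v in x:
        z = (v - mu) / beta
        ll += -math.log(beta) - z - math.exp(-z)
    return ll, 2

# Synthetic "annual maxima" drawn from Gumbel(location=10, scale=2) by inverse CDF
random.seed(1)
data = [10.0 - 2.0 * math.log(-math.log(random.random())) for _ in range(500)]

fits = {"normal": fit_normal(data), "gumbel": fit_gumbel(data)}
aic = {name: 2 * k - 2 * ll for name, (ll, k) in fits.items()}
best = min(aic, key=aic.get)
print(best)
```

With skewed extreme-value data, AIC correctly prefers the Gumbel family over the symmetric normal even though both have two parameters.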
Estimating seabed scattering mechanisms via Bayesian model selection.
Steininger, Gavin; Dosso, Stan E; Holland, Charles W; Dettmer, Jan
2014-10-01
A quantitative inversion procedure is developed and applied to determine the dominant scattering mechanism (surface roughness and/or volume scattering) from seabed scattering-strength data. The classification system is based on trans-dimensional Bayesian inversion with the deviance information criterion used to select the dominant scattering mechanism. Scattering is modeled using first-order perturbation theory as due to one of three mechanisms: Interface scattering from a rough seafloor, volume scattering from a heterogeneous sediment layer, or mixed scattering combining both interface and volume scattering. The classification system is applied to six simulated test cases where it correctly identifies the true dominant scattering mechanism as having greater support from the data in five cases; the remaining case is indecisive. The approach is also applied to measured backscatter-strength data where volume scattering is determined as the dominant scattering mechanism. Comparison of inversion results with core data indicates the method yields both a reasonable volume heterogeneity size distribution and a good estimate of the sub-bottom depths at which scatterers occur.
Pattern selection in a boundary-layer model of dendritic growth in the presence of impurities
Karma, A.; Kotliar, B. G.
1985-01-01
Presently analyzed, in the context of a boundary-layer model, is the problem of pattern selection in dendritic growth in a situation where impurities are present in the undercooled liquid. It is found that the tip-velocity selection criterion that has been proposed recently for the geometrical model and the boundary-layer model of a pure substance can be extended, in a nontrivial way, to this more complex situation where two coupled diffusion fields (temperature and solute) determine the interface dynamics. This model predicts a sharp enhancement of tip velocity in good qualitative agreement with experiment. This agreement is consistent with the conjecture that a solvability condition can be used to determine the operating point of the dendrite in the full nonlocal problem.
Suboptimal Criterion Learning in Static and Dynamic Environments.
Norton, Elyse H; Fleming, Stephen M; Daw, Nathaniel D; Landy, Michael S
2017-01-01
Humans often make decisions based on uncertain sensory information. Signal detection theory (SDT) describes detection and discrimination decisions as a comparison of stimulus "strength" to a fixed decision criterion. However, recent research suggests that current responses depend on the recent history of stimuli and previous responses, suggesting that the decision criterion is updated trial-by-trial. The mechanisms underpinning criterion setting remain unknown. Here, we examine how observers learn to set a decision criterion in an orientation-discrimination task under both static and dynamic conditions. To investigate mechanisms underlying trial-by-trial criterion placement, we introduce a novel task in which participants explicitly set the criterion, and compare it to a more traditional discrimination task, allowing us to model this explicit indication of criterion dynamics. In each task, stimuli were ellipses with principal orientations drawn from two categories: Gaussian distributions with different means and equal variance. In the covert-criterion task, observers categorized a displayed ellipse. In the overt-criterion task, observers adjusted the orientation of a line that served as the discrimination criterion for a subsequently presented ellipse. We compared performance to the ideal Bayesian learner and several suboptimal models that varied in both computational and memory demands. Under static and dynamic conditions, we found that, in both tasks, observers used suboptimal learning rules. In most conditions, a model in which the recent history of past samples determines a belief about category means fit the data best for most observers and on average. Our results reveal dynamic adjustment of discrimination criterion, even after prolonged training, and indicate how decision criteria are updated over time.
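For the equal-variance Gaussian categories used in the tasks above, the ideal observer's criterion has a closed form: with equal priors it sits at the midpoint of the category means. The sketch below computes that criterion and the ideal observer's proportion correct; the means and standard deviation are hypothetical, not the study's stimulus parameters.

```python
import math

def phi(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

# Hypothetical orientation categories: A ~ N(-4, 10^2), B ~ N(4, 10^2) (degrees)
mu_a, mu_b, sigma = -4.0, 4.0, 10.0

# Equal priors, equal variance: the ideal criterion is the midpoint of the means
c_opt = (mu_a + mu_b) / 2
d_prime = (mu_b - mu_a) / sigma

# Ideal observer's proportion correct for this discrimination
p_correct = phi(d_prime / 2)
print(c_opt, round(p_correct, 3))
```

A suboptimal learner that tracks only a short window of recent samples, as the abstract describes, would place a noisy, drifting estimate around this `c_opt` rather than the fixed optimum.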
Adverse selection model regarding tobacco consumption
Directory of Open Access Journals (Sweden)
Dumitru MARIN
2006-01-01
The impact of introducing a tax on tobacco consumption can be studied through an adverse selection model. The objective of the model presented in the following is to characterize the optimal contractual relationship between the governmental authorities and two types of employees, smokers and non-smokers, taking into account that the consumers' decision to smoke or not represents an element of risk and uncertainty. Two scenarios are run using the General Algebraic Modeling System software: one without taxes on tobacco consumption and another with such taxes, both based on the adverse selection model described previously. The results of the two scenarios are compared at the end of the paper: the wage earnings levels and the social welfare in the case of a smoking agent and of a non-smoking agent.
Adaptive Covariance Estimation with model selection
Biscay, Rolando; Loubes, Jean-Michel
2012-01-01
We provide in this paper a fully adaptive penalized procedure to select a covariance among a collection of models, given i.i.d. replications of the process observed at fixed observation points. For this we generalize previous results of Bigot et al. and propose to use a data-driven penalty to obtain an oracle inequality for the estimator. We prove that this method is an extension to the matricial regression model of the work by Baraud.
A Theoretical Model for Selective Exposure Research.
Roloff, Michael E.; Noland, Mark
This study tests the basic assumptions underlying Fishbein's Model of Attitudes by correlating an individual's selective exposure to types of television programs (situation comedies, family drama, and action/adventure) with the attitudinal similarity between individual attitudes and attitudes characterized on the programs. Twenty-three college…
A Difference Criterion for Dimensionality Reduction
Aved, A. J.; Blasch, E.; Peng, J.
2015-12-01
A dynamic data-driven geoscience application includes hyperspectral scene classification which has shown promising potential in many remote-sensing applications. A hyperspectral image of a scene spectral radiance is typically measured by hundreds of contiguous spectral bands or features, ranging from visible/near-infrared (VNIR) to shortwave infrared (SWIR). Spectral-reflectance measurements provide rich information for object detection and classification. On the other hand, they generate a large number of features, resulting in a high dimensional measurement space. However, a large number of features often poses challenges and can result in poor classification performance. This is due to the curse of dimensionality which requires model reduction, uncertainty quantification and optimization for real-world applications. In such situations, feature extraction or selection methods play an important role by significantly reducing the number of features for building classifiers. In this work, we focus on efficient feature extraction using the dynamic data-driven applications systems (DDDAS) paradigm. Many dimension reduction techniques have been proposed in the literature. A well-known technique is Fisher's linear discriminant analysis (LDA). LDA finds the projection matrix that simultaneously maximizes a within class scatter matrix and minimizes a between class scatter matrix. However, LDA requires matrix inverse which can be a major issue when the within matrix is singular. We propose a difference criterion for dimension reduction that does not require a matrix inverse for software implementation. We show how to solve the optimization problem with semi-definite programming. In addition, we establish an error bound for the proposed algorithm. We demonstrate the connection between relief feature selection and a two class formulation of multi-class problems, thereby providing a sound basis for observed benefits associated with this formulation. Finally, we provide
Directory of Open Access Journals (Sweden)
Wells Martin T
2008-05-01
Background: Identifying quantitative trait loci (QTL) for both additive and epistatic effects raises the statistical issue of selecting variables from a large number of candidates using a small number of observations. Missing trait and/or marker values prevent one from directly applying the classical model selection criteria such as Akaike's information criterion (AIC) and the Bayesian information criterion (BIC). Results: We propose a two-step Bayesian variable selection method which deals with the sparse parameter space and the small sample size issues. The regression coefficient priors are flexible enough to incorporate the characteristics of "large p small n" data. Specifically, sparseness and possible asymmetry of the significant coefficients are dealt with by developing a Gibbs sampling algorithm to stochastically search through low-dimensional subspaces for significant variables. The superior performance of the approach is demonstrated via simulation study. We also applied it to real QTL mapping datasets. Conclusion: The two-step procedure coupled with Bayesian classification offers flexibility in modeling "large p small n" data, especially for the sparse and asymmetric parameter space. This approach can be extended to other settings characterized by high dimension and low sample size.
Model selection for radiochromic film dosimetry
Méndez, Ignasi
2015-01-01
The purpose of this study was to find the most accurate model for radiochromic film dosimetry by comparing different channel independent perturbation models. A model selection approach based on (algorithmic) information theory was followed, and the results were validated using gamma-index analysis on a set of benchmark test cases. Several questions were addressed: (a) whether incorporating the information of the non-irradiated film, by scanning prior to irradiation, improves the results; (b) whether lateral corrections are necessary when using multichannel models; (c) whether multichannel dosimetry produces better results than single-channel dosimetry; (d) which multichannel perturbation model provides more accurate film doses. It was found that scanning prior to irradiation and applying lateral corrections improved the accuracy of the results. For some perturbation models, increasing the number of color channels did not result in more accurate film doses. Employing Truncated Normal perturbations was found to...
Portfolio Selection Model with Derivative Securities
Institute of Scientific and Technical Information of China (English)
王春峰; 杨建林; 蒋祥林
2003-01-01
Traditional portfolio theory assumes that portfolio returns are normally distributed. However, this assumption does not hold when derivative assets are incorporated. In this paper a portfolio selection model is developed based on a utility function which can capture asymmetries in random variable distributions. Other realistic conditions are also considered, such as liabilities and integer decision variables. Since the resulting model is a complex mixed-integer nonlinear programming problem, a simulated annealing algorithm is applied for its solution. A numerical example is given and sensitivity analysis is conducted for the model.
Aerosol model selection and uncertainty modelling by adaptive MCMC technique
Directory of Open Access Journals (Sweden)
M. Laine
2008-12-01
We present a new technique for the model selection problem in atmospheric remote sensing. The technique is based on Monte Carlo sampling and it allows model selection, calculation of model posterior probabilities and model averaging in a Bayesian way.
The algorithm developed here is called the Adaptive Automatic Reversible Jump Markov chain Monte Carlo method (AARJ). It uses the Markov chain Monte Carlo (MCMC) technique and its extension called Reversible Jump MCMC. Both of these techniques have been used extensively in statistical parameter estimation problems in a wide range of applications since the late 1990s. The novel feature of our algorithm is that it is fully automatic and easy to use.
We show how the AARJ algorithm can be implemented and used for model selection and averaging, and to directly incorporate the model uncertainty. We demonstrate the technique by applying it to the statistical inversion problem of gas profile retrieval of GOMOS instrument on board the ENVISAT satellite. Four simple models are used simultaneously to describe the dependence of the aerosol cross-sections on wavelength. During the AARJ estimation all the models are used and we obtain a probability distribution characterizing how probable each model is. By using model averaging, the uncertainty related to selecting the aerosol model can be taken into account in assessing the uncertainty of the estimates.
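The model-averaging step described above reduces, in its simplest form, to weighting each candidate model by its posterior probability. The sketch below computes posterior model probabilities from log marginal likelihoods under equal model priors and forms a model-averaged parameter estimate; the four model names, log marginal likelihoods, and per-model estimates are hypothetical stand-ins, not GOMOS retrieval outputs.

```python
import math

# Hypothetical log marginal likelihoods for four aerosol wavelength-dependence models
log_ml = {"constant": -104.2, "linear": -101.8, "quadratic": -102.5, "power-law": -101.2}

# Equal model priors: posterior probability is the normalized marginal likelihood.
# Subtract the max before exponentiating for numerical stability.
m = max(log_ml.values())
w = {name: math.exp(v - m) for name, v in log_ml.items()}
z = sum(w.values())
post = {name: v / z for name, v in w.items()}

# Bayesian model averaging of a hypothetical per-model parameter estimate
theta = {"constant": 0.80, "linear": 0.95, "quadratic": 0.90, "power-law": 1.05}
theta_bma = sum(post[name] * theta[name] for name in post)
print({name: round(p, 3) for name, p in post.items()}, round(theta_bma, 3))
```

Because no single model dominates the posterior, the averaged estimate spreads the model-choice uncertainty across all four candidates instead of committing to one.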
Industry Software Trustworthiness Criterion Research Based on Business Trustworthiness
Zhang, Jin; Liu, Jun-fei; Jiao, Hai-xing; Shen, Yi; Liu, Shu-yuan
To address the trustworthiness problem of industry software, we propose constructing an industry software trustworthiness criterion oriented to business. Based on the triangle model of trustworthy grade definition, trustworthy evidence model, and trustworthy evaluation, business trustworthiness is reflected in the different aspects of the trustworthy triangle model for a specific industry software system, the power producing management system (PPMS). Business trustworthiness is the center of the constructed industry trustworthy software criterion. By fusing international standards and industry rules, the constructed criterion strengthens operability and reliability. A quantitative evaluation method makes the evaluation results intuitive and comparable.
A Neurodynamical Model for Selective Visual Attention
Institute of Scientific and Technical Information of China (English)
QU Jing-Yi; WANG Ru-Bin; ZHANG Yuan; DU Ying
2011-01-01
A neurodynamical model for selective visual attention considering orientation preference is proposed. Since orientation preference is one of the most important properties of neurons in the primary visual cortex, it should be fully considered besides external stimuli intensity. By tuning the parameter of orientation preference, the regimes of synchronous dynamics associated with the development of the attention focus are studied. The attention focus is represented by those peripheral neurons that generate spikes synchronously with the central neuron while the activity of other peripheral neurons is suppressed. Such dynamics correspond to the partial synchronization mode. Simulation results show that the model can sequentially select objects with different orientation preferences and has a reliable shift of attention from one object to another, which are consistent with the experimental results that neurons with different orientation preferences are laid out in pinwheel patterns.
A Planarity Criterion for Graphs
Dosen, Kosta
2012-01-01
It is proven that a connected graph is planar if and only if all its cocycles with at least four edges are "grounded" in the graph. The notion of grounding of this planarity criterion, which is purely combinatorial, stems from the intuitive idea that with planarity there should be a linear ordering of the edges of a cocycle such that in the two subgraphs remaining after the removal of these edges there can be no crossing of disjoint paths that join the vertices of these edges. The proof given in the paper of the right-to-left direction of the equivalence is based on Kuratowski's Theorem for planarity involving $K_{3,3}$ and $K_5$, but the criterion itself does not mention $K_{3,3}$ and $K_5$. Some other variants of the criterion are also shown necessary and sufficient for planarity.
Model structure selection in convolutive mixtures
DEFF Research Database (Denmark)
Dyrholm, Mads; Makeig, Scott; Hansen, Lars Kai
2006-01-01
The CICAAR algorithm (convolutive independent component analysis with an auto-regressive inverse model) allows separation of white (i.i.d.) source signals from convolutive mixtures. We introduce a source color model as a simple extension to the CICAAR which allows for a more parsimonious representation in many practical mixtures. The new filter-CICAAR allows Bayesian model selection and can help answer questions like: 'Are we actually dealing with a convolutive mixture?'. We try to answer this question for EEG data.
Skewed factor models using selection mechanisms
Kim, Hyoung-Moon
2015-12-21
Traditional factor models explicitly or implicitly assume that the factors follow a multivariate normal distribution; that is, only moments up to order two are involved. However, it may happen in real data problems that the first two moments cannot explain the factors. Based on this motivation, here we devise three new skewed factor models, the skew-normal, the skew-t, and the generalized skew-normal factor models, depending on a selection mechanism on the factors. The ECME algorithms are adopted to estimate related parameters for statistical inference. Monte Carlo simulations validate our new models and we demonstrate the need for skewed factor models using the classic open/closed book exam scores dataset.
A Failure Criterion for Concrete
DEFF Research Database (Denmark)
Ottosen, N. S.
1977-01-01
A four-parameter failure criterion containing all three stress invariants explicitly is proposed for short-time loading of concrete. It corresponds to a smooth convex failure surface with curved meridians, which open in the negative direction of the hydrostatic axis, and a trace in the deviatoric plane that changes from nearly triangular to more circular with increasing hydrostatic pressure.
Hidalgo, Homero, Jr.
2000-01-01
An innovative methodology for structural target mode selection based on a specific criterion is presented. An effective approach to single out modes which interact with specific locations on a structure has been developed for the X-33 Launch Vehicle Finite Element Model (FEM). We present a Root-Sum-Square (RSS) displacement method, which computes the resultant modal displacement for each mode at selected degrees of freedom (DOF) and sorts the results to locate the modes with the highest values. This method was used to determine the modes which most influenced specific locations/points on the X-33 flight vehicle, such as avionics control components, aero-surface control actuators, propellant valves and engine points, for use in flight control stability analysis and in flight POGO stability analysis. Additionally, the modal RSS method allows primary or global target vehicle modes to be identified in an accurate and efficient manner.
Behavioral optimization models for multicriteria portfolio selection
Directory of Open Access Journals (Sweden)
Mehlawat Mukesh Kumar
2013-01-01
In this paper, the behavioral construct of suitability is used to develop a multicriteria decision-making framework for portfolio selection. To achieve this purpose, we rely on multiple methodologies. The analytic hierarchy process technique is used to model the suitability considerations with a view to obtaining a suitability performance score for each asset. A fuzzy multiple criteria decision-making method is used to obtain the financial quality score of each asset based upon the investor's ratings on the financial criteria. Two optimization models are developed for optimal asset allocation, considering financial and suitability criteria simultaneously. An empirical study is conducted on randomly selected assets from the National Stock Exchange, Mumbai, India to demonstrate the effectiveness of the proposed methodology.
Multi-dimensional model order selection
Directory of Open Access Journals (Sweden)
Roemer Florian
2011-01-01
Multi-dimensional model order selection (MOS) techniques achieve improved accuracy, reliability, and robustness, since they consider all dimensions jointly during the estimation of parameters. Additionally, from fundamental identifiability results of multi-dimensional decompositions, it is known that the number of main components can be larger when compared to matrix-based decompositions. In this article, we show how to use tensor calculus to extend matrix-based MOS schemes, and we also present our proposed multi-dimensional model order selection scheme based on the closed-form PARAFAC algorithm, which is only applicable to multi-dimensional data. In general, as shown by means of simulations, the probability of correct detection (PoD) of our proposed multi-dimensional MOS schemes is much better than the PoD of matrix-based schemes.
Bayesian phylogenetic model selection using reversible jump Markov chain Monte Carlo.
Huelsenbeck, John P; Larget, Bret; Alfaro, Michael E
2004-06-01
A common problem in molecular phylogenetics is choosing a model of DNA substitution that does a good job of explaining the DNA sequence alignment without introducing superfluous parameters. A number of methods have been used to choose among a small set of candidate substitution models, such as the likelihood ratio test, the Akaike Information Criterion (AIC), the Bayesian Information Criterion (BIC), and Bayes factors. Current implementations of any of these criteria suffer from the limitation that only a small set of models are examined, or that the test does not allow easy comparison of non-nested models. In this article, we expand the pool of candidate substitution models to include all possible time-reversible models. This set includes seven models that have already been described. We show how Bayes factors can be calculated for these models using reversible jump Markov chain Monte Carlo, and apply the method to 16 DNA sequence alignments. For each data set, we compare the model with the best Bayes factor to the best models chosen using AIC and BIC. We find that the best model under any of these criteria is not necessarily the most complicated one; models with an intermediate number of substitution types typically do best. Moreover, almost all of the models that are chosen as best do not constrain a transition rate to be the same as a transversion rate, suggesting that it is the transition/transversion rate bias that plays the largest role in determining which models are selected. Importantly, the reversible jump Markov chain Monte Carlo algorithm described here allows estimation of phylogeny (and other phylogenetic model parameters) to be performed while accounting for uncertainty in the model of DNA substitution.
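The BIC comparison mentioned above, and its connection to Bayes factors, can be sketched numerically: the difference in BIC between two models approximates twice the log Bayes factor. The substitution-model names are real, but the log-likelihoods, the alignment length, and the free-parameter counts below are illustrative placeholders, not fits from the paper's 16 alignments.

```python
import math

def bic(log_likelihood, k, n):
    """Bayesian information criterion (lower is better)."""
    return k * math.log(n) - 2 * log_likelihood

# Hypothetical fits to an alignment of n = 1000 sites:
# model name -> (log-likelihood, illustrative count of extra free parameters)
n = 1000
models = {"JC69": (-5321.0, 0), "HKY85": (-5290.5, 4), "GTR": (-5288.9, 8)}

scores = {name: bic(ll, k, n) for name, (ll, k) in models.items()}
best = min(scores, key=scores.get)

# The BIC difference approximates 2 * ln(Bayes factor) between two models
two_log_bf = scores["JC69"] - scores["HKY85"]
print(best, round(two_log_bf, 1))
```

With these numbers the intermediate model wins: GTR's extra rate parameters do not buy enough likelihood to offset the penalty, mirroring the abstract's observation that models of intermediate complexity typically do best.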
Tracking Models for Optioned Portfolio Selection
Liang, Jianfeng
In this paper we study a target tracking problem for the portfolio selection involving options. In particular, the portfolio in question contains a stock index and some European style options on the index. A refined tracking-error-variance methodology is adopted to formulate this problem as a multi-stage optimization model. We derive the optimal solutions based on stochastic programming and optimality conditions. Attention is paid to the structure of the optimal payoff function, which is shown to possess rich properties.
New insights in portfolio selection modeling
Zareei, Abalfazl
2016-01-01
Recent advancements in the field of network theory have opened a new line of development in portfolio selection techniques, grounded in perceiving the financial market as a network with assets as nodes and links accounting for various types of relationships among financial assets. In the first chapter, we model the shock propagation mechanism among assets via network theory and provide an approach to construct well-diversified portfolios that are resilient to shock propagation and c...
Construction of 3D Seabed Terrain Model based on the Standard Deviation Criterion
Institute of Scientific and Technical Information of China (English)
韩富江; 潘胜玲; 王德刚; 来向华
2011-01-01
At present, existing triangulation methods must operate in the projection plane, which causes the loss of attribute information during the LOP (Local Optimization Procedure). In this paper, a new triangulation criterion based on the standard deviation, one that takes the water-depth attribute into account, is used. The definition of the standard deviation, its calculation, and the description of the standard deviation criterion are investigated. A construction algorithm for the 3D seabed terrain model is then presented according to the standard deviation criterion. Experimental results show that this method improves the rationality of the triangulation, that the detail and precision of the seabed terrain model are better than those of other methods, and that it handles special terrain better than the algorithm based on the empty circumcircle criterion.
Robust inference in sample selection models
Zhelonkin, Mikhail
2015-11-20
The problem of non-random sample selectivity often occurs in practice in many fields. The classical estimators introduced by Heckman are the backbone of the standard statistical analysis of these models. However, these estimators are very sensitive to small deviations from the distributional assumptions, which are often not satisfied in practice. We develop a general framework to study the robustness properties of estimators and tests in sample selection models. We derive the influence function and the change-of-variance function of Heckman's two-stage estimator, and we demonstrate the non-robustness of this estimator and its estimated variance to small deviations from the assumed model. We propose a procedure for robustifying the estimator, prove its asymptotic normality and give its asymptotic variance. Both cases, with and without an exclusion restriction, are covered. This allows us to construct a simple robust alternative to the sample selection bias test. We illustrate the use of our new methodology in an analysis of ambulatory expenditures and we compare the performance of the classical and robust methods in a Monte Carlo simulation study.
Bayesian model selection in Gaussian regression
Abramovich, Felix
2009-01-01
We consider a Bayesian approach to model selection in Gaussian linear regression, where the number of predictors might be much larger than the number of observations. From a frequentist view, the proposed procedure results in the penalized least squares estimation with a complexity penalty associated with a prior on the model size. We investigate the optimality properties of the resulting estimator. We establish the oracle inequality and specify conditions on the prior that imply its asymptotic minimaxity within a wide range of sparse and dense settings for "nearly-orthogonal" and "multicollinear" designs.
Model Selection in Data Analysis Competitions
DEFF Research Database (Denmark)
Wind, David Kofoed; Winther, Ole
2014-01-01
The use of data analysis competitions for selecting the most appropriate model for a problem is a recent innovation in the field of predictive machine learning. Two of the most well-known examples of this trend were the Netflix Competition and, more recently, the competitions hosted on the online platform Kaggle. In this paper, we state and try to verify a set of qualitative hypotheses about predictive modelling, both in general and in the scope of data analysis competitions. To verify our hypotheses we look at previous competitions and their outcomes, and use qualitative interviews with top...
Soufan, Othman
2012-09-01
Feature selection is the first task of any learning approach applied in major fields such as biomedicine, bioinformatics, robotics, natural language processing and social networking. In the feature subset selection problem, a search methodology with a proper criterion seeks to find the best subset of features describing the data (relevance) and achieving better performance (optimality). Wrapper approaches are feature selection methods that are wrapped around a classification algorithm and use a performance measure to select the best subset of features. We analyze the proper design of the objective function for the wrapper approach and highlight an objective based on several classification algorithms. We compare the wrapper approaches to different feature selection methods based on distance and information-based criteria. Significant improvements in performance, computational time, and selection of minimally sized feature subsets are achieved by combining different objectives for the wrapper model. In addition, considering various classification methods in the feature selection process could lead to a global solution with desirable characteristics.
The applicability of fair selection models in the South African context
Directory of Open Access Journals (Sweden)
G. K. Huysamen
1995-06-01
This article reviews several models aimed at achieving fair selection in situations in which underrepresented groups tend to obtain lower scores on selection tests. Whereas predictive bias is a statistical concept that refers to systematic errors in the prediction of individuals' criterion scores, selection fairness pertains to the extent to which selection results meet certain socio-political demands. The regression and equal-risk models adjust for differences in the criterion-on-test regression lines of different groups. The constant-ratio, conditional-probability and equal-probability models manipulate the test cutoff scores of different groups so that certain ratios formed between the different selection outcomes (correct acceptances, correct rejections, incorrect acceptances, incorrect rejections) are the same for those groups. The decision-theoretic approach requires that utilities be attached to these different outcomes for different groups. These procedures are not only eminently suited to accommodate calls for affirmative action, but they also serve the cause of transparency.
Energy Criterion for the Spectral Stability of Discrete Breathers
Kevrekidis, Panayotis G.; Cuevas-Maraver, Jesús; Pelinovsky, Dmitry E.
2016-08-01
Discrete breathers are ubiquitous structures in nonlinear anharmonic models ranging from the prototypical example of the Fermi-Pasta-Ulam model to Klein-Gordon nonlinear lattices, among many others. We propose a general criterion for the emergence of instabilities of discrete breathers analogous to the well-established Vakhitov-Kolokolov criterion for solitary waves. The criterion involves the change of monotonicity of the discrete breather's energy as a function of the breather frequency. Our analysis suggests and numerical results corroborate that breathers with increasing (decreasing) energy-frequency dependence are generically unstable in soft (hard) nonlinear potentials.
Criterion Reading Instructional Project (CRIP).
Linden Board of Education, NJ.
This booklet describes the Linden Title I Program between the years 1971-1974, with a focus on the Criterion Reading Instructional Project (CRIP). The program (in Linden, New Jersey) evolved from a supplemental reading and mathematics program to a structured developmental program of language arts designed to meet the needs of primary grade…
Li, Xingfeng; Coyle, Damien; Maguire, Liam; McGinnity, Thomas M; Benali, Habib
2011-07-01
In this paper a model selection algorithm for a nonlinear system identification method is proposed to study functional magnetic resonance imaging (fMRI) effective connectivity. Unlike most other methods, this method does not need a pre-defined structure/model for effective connectivity analysis. Instead, it relies on selecting significant nonlinear or linear covariates for the differential equations to describe the mapping relationship between brain output (fMRI response) and input (experiment design). These covariates, as well as their coefficients, are estimated based on a least angle regression (LARS) method. In the implementation of the LARS method, Akaike's information criterion corrected (AICc) algorithm and the leave-one-out (LOO) cross-validation method were employed and compared for model selection. Simulation comparison between the dynamic causal model (DCM), nonlinear identification method, and model selection method for modelling the single-input-single-output (SISO) and multiple-input multiple-output (MIMO) systems were conducted. Results show that the LARS model selection method is faster than DCM and achieves a compact and economic nonlinear model simultaneously. To verify the efficacy of the proposed approach, an analysis of the dorsal and ventral visual pathway networks was carried out based on three real datasets. The results show that LARS can be used for model selection in an fMRI effective connectivity study with phase-encoded, standard block, and random block designs. It is also shown that the LOO cross-validation method for nonlinear model selection has less residual sum squares than the AICc algorithm for the study.
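The AICc used above corrects the AIC for small samples, AICc = AIC + 2p(p+1)/(n-p-1). A hedged sketch (not the authors' LARS implementation; the data and covariate names are synthetic) that scores candidate covariate subsets of an ordinary least-squares model by AICc:

```python
import numpy as np

def aicc_ls(X, y):
    """AICc for an ordinary least-squares fit with Gaussian errors."""
    n, k = X.shape
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss = float(np.sum((y - X @ beta) ** 2))
    p = k + 1  # +1 counts the estimated noise variance as a parameter
    aic = n * np.log(rss / n) + 2 * p  # Gaussian profile log-likelihood form
    return aic + 2 * p * (p + 1) / (n - p - 1)

rng = np.random.default_rng(0)
n = 60
x1, x2, x3 = rng.normal(size=(3, n))
y = 2.0 * x1 + 0.5 * x2 + rng.normal(scale=0.3, size=n)  # x3 is irrelevant

candidates = {
    "x1":       np.column_stack([x1]),
    "x1,x2":    np.column_stack([x1, x2]),
    "x1,x2,x3": np.column_stack([x1, x2, x3]),
}
scores = {name: aicc_ls(X, y) for name, X in candidates.items()}
best = min(scores, key=scores.get)
print(best)
```

Dropping the truly active covariate x2 inflates the residual sum of squares far more than the parameter penalty saves, so the subset containing x1 and x2 scores better than x1 alone.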
Inflation model selection meets dark radiation
Tram, Thomas; Vallance, Robert; Vennin, Vincent
2017-01-01
We investigate how inflation model selection is affected by the presence of additional free-streaming relativistic degrees of freedom, i.e. dark radiation. We perform a full Bayesian analysis of both inflation parameters and cosmological parameters taking reheating into account self-consistently. We compute the Bayesian evidence for a few representative inflation scenarios in both the standard ΛCDM model and an extension including dark radiation parametrised by its effective number of relativistic species Neff. Using a minimal dataset (Planck low-l polarisation, temperature power spectrum and lensing reconstruction), we find that the observational status of most inflationary models is unchanged. The exceptions are potentials such as power-law inflation that predict large values for the scalar spectral index that can only be realised when Neff is allowed to vary. Adding baryon acoustic oscillations data and the B-mode data from BICEP2/Keck makes power-law inflation disfavoured, while adding local measurements of the Hubble constant H0 makes power-law inflation slightly favoured compared to the best single-field plateau potentials. This illustrates how the dark radiation solution to the H0 tension would have deep consequences for inflation model selection.
Selecting among competing models of electro-optic, infrared camera system range performance
Nichols, Jonathan M.; Hines, James E.; Nichols, James D.
2013-01-01
Range performance is often the key requirement around which electro-optical and infrared camera systems are designed. This work presents an objective framework for evaluating competing range performance models. Model selection based on the Akaike Information Criterion (AIC) is presented for the type of data collected during a typical human observer and target identification experiment. These methods are then demonstrated on observer responses to both visible and infrared imagery in which one of three maritime targets was placed at various ranges. We compare the performance of a number of different models, including those appearing previously in the literature. We conclude that our model-based approach offers substantial improvements over the traditional approach to inference, including increased precision and the ability to make predictions for distances other than the specific set at which experimental trials were conducted.
Efficiently adapting graphical models for selectivity estimation
DEFF Research Database (Denmark)
Tzoumas, Kostas; Deshpande, Amol; Jensen, Christian S.
2013-01-01
Query optimizers rely on statistical models that succinctly describe the underlying data. Models are used to derive cardinality estimates for intermediate relations, which in turn guide the optimizer to choose the best query execution plan. The quality of the resulting plan is highly dependent … of the selectivities of the constituent predicates. However, this independence assumption is more often than not wrong, and is considered to be the most common cause of sub-optimal query execution plans chosen by modern query optimizers. We take a step towards a principled and practical approach to performing … cardinality estimation without making the independence assumption. By carefully using concepts from the field of graphical models, we are able to factor the joint probability distribution over all the attributes in the database into small, usually two-dimensional distributions, without a significant loss …
The Markowitz model for portfolio selection
Directory of Open Access Journals (Sweden)
MARIAN ZUBIA ZUBIAURRE
2002-06-01
Since its first appearance, the Markowitz model for portfolio selection has been a basic theoretical reference, opening several new lines of development. In practice, however, it has hardly been used by portfolio managers and investment analysts in spite of its success in the theoretical field. With our paper we would like to show how the Markowitz model may be of great help in real stock markets. Through an empirical study we verify the capability of Markowitz's model to produce portfolios with higher profitability and lower risk than the portfolios represented by the IBEX-35 and IGBM indexes. Furthermore, we test the suggested efficiency of these indexes as representatives of the market theoretical portfolio.
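One concrete instance of the Markowitz framework is the global minimum-variance portfolio, whose weights have the closed form w = Σ⁻¹1 / (1ᵀΣ⁻¹1). A small sketch with a made-up covariance matrix (not data from the study):

```python
import numpy as np

def min_variance_weights(cov):
    """Global minimum-variance portfolio: w ∝ Σ^{-1} 1, normalised to sum to 1."""
    ones = np.ones(cov.shape[0])
    w = np.linalg.solve(cov, ones)
    return w / w.sum()

# Illustrative covariance matrix for three assets (invented numbers).
cov = np.array([
    [0.10, 0.02, 0.04],
    [0.02, 0.08, 0.02],
    [0.04, 0.02, 0.09],
])
w = min_variance_weights(cov)
print(np.round(w, 3), float(w @ cov @ w))
```

By construction the resulting portfolio variance is no larger than that of holding any single asset, which is the sense in which diversification "pays" in the mean-variance framework.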
Model selection for Poisson processes with covariates
Sart, Mathieu
2011-01-01
We observe $n$ inhomogeneous Poisson processes with covariates and aim at estimating their intensities. To handle this problem, we assume that the intensity of each Poisson process is of the form $s(\cdot, x)$ where $x$ is the covariate and where $s$ is an unknown function. We propose a model selection approach where the models are used to approximate the multivariate function $s$. We show that our estimator satisfies an oracle-type inequality under very weak assumptions both on the intensities and the models. By using a Hellinger-type loss, we establish non-asymptotic risk bounds and specify them under various kinds of assumptions on the target function $s$, such as being smooth or composite. Besides, we show that our estimation procedure is robust with respect to these assumptions.
Entropic Priors and Bayesian Model Selection
Brewer, Brendon J
2009-01-01
We demonstrate that the principle of maximum relative entropy (ME), used judiciously, can ease the specification of priors in model selection problems. The resulting effect is that models that make sharp predictions are disfavoured, weakening the usual Bayesian "Occam's Razor". This is illustrated with a simple example involving what Jaynes called a "sure thing" hypothesis. Jaynes' resolution of the situation involved introducing a large number of alternative "sure thing" hypotheses that were possible before we observed the data. However, in more complex situations, it may not be possible to explicitly enumerate large numbers of alternatives. The entropic priors formalism produces the desired result without modifying the hypothesis space or requiring explicit enumeration of alternatives; all that is required is a good model for the prior predictive distribution for the data. This idea is illustrated with a simple rigged-lottery example, and we outline how this idea may help to resolve a recent debate amongst ...
Lin, Zhiyue; Kahrilas, P J; Roman, S; Boris, L; Carlson, D; Pandolfino, J E
2012-08-01
The Integrated Relaxation Pressure (IRP) is the esophageal pressure topography (EPT) metric used for assessing the adequacy of esophagogastric junction (EGJ) relaxation in the Chicago Classification of motility disorders. However, because the IRP value is also influenced by distal esophageal contractility, we hypothesized that its normal limits should vary with different patterns of contractility. Five hundred and twenty two selected EPT studies were used to compare the accuracy of alternative analysis paradigms to that of a motility expert (the 'gold standard'). Chicago Classification metrics were scored manually and used as inputs for MATLAB™ programs that utilized either strict algorithm-based interpretation (fixed abnormal IRP threshold of 15 mmHg) or a classification and regression tree (CART) model that selected variable IRP thresholds depending on the associated esophageal contractility. The sensitivity of the CART model for achalasia (93%) was better than that of the algorithm-based approach (85%) on account of using variable IRP thresholds that ranged from a low value of >10 mmHg to distinguish type I achalasia from absent peristalsis to a high value of >17 mmHg to distinguish type III achalasia from distal esophageal spasm. Additionally, type II achalasia was diagnosed solely by panesophageal pressurization without the IRP entering the algorithm. Automated interpretation of EPT studies more closely mimics that of a motility expert when IRP thresholds for impaired EGJ relaxation are adjusted depending on the pattern of associated esophageal contractility. The range of IRP cutoffs suggested by the CART model ranged from 10 to 17 mmHg. © 2012 Blackwell Publishing Ltd.
The Bohm criterion and sheath formation
Energy Technology Data Exchange (ETDEWEB)
Riemann, K.U. (Bochum Univ. (Germany). Inst. fuer Theoretische Physik 1)
1990-11-01
In the limit of a small Debye length (λ_D → 0), the analysis of the plasma boundary layer leads to a two-scale problem of a collision-free sheath and a quasineutral presheath. Bohm's criterion expresses a necessary condition for the formation of a stationary sheath in front of a negative absorbing wall. The basic features of the plasma-sheath transition and their relation to the Bohm criterion are discussed and illustrated using a simple cold-ion fluid model. A rigorous kinetic analysis of the vicinity of the sheath edge allows us to generalize Bohm's criterion, accounting not only for arbitrary ion and electron distributions, but also for general boundary conditions at the wall. It is shown that the generalized sheath condition is (apart from special exceptions) fulfilled marginally and is related to a sheath-edge field singularity. Due to this singularity, a smooth matching of the presheath and sheath solutions requires an additional transition layer. Previous investigations concerning special problems of the plasma-sheath transition are reviewed in the light of these general relations.
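In the cold-ion fluid picture, Bohm's criterion requires ions to enter the sheath at or above the ion acoustic speed, u ≥ c_s = sqrt(k_B T_e / m_i). A minimal numerical sketch (illustrative values, not from the paper):

```python
import math

K_B = 1.380649e-23    # Boltzmann constant, J/K
M_P = 1.67262192e-27  # proton mass, kg

def bohm_speed(T_e_eV, ion_mass_amu=1.0):
    """Ion acoustic (Bohm) speed c_s = sqrt(k_B T_e / m_i), cold-ion limit."""
    T_e_K = T_e_eV * 11604.5  # 1 eV ≈ 11604.5 K
    return math.sqrt(K_B * T_e_K / (ion_mass_amu * M_P))

def satisfies_bohm(u, T_e_eV, ion_mass_amu=1.0):
    """Marginal or supersonic entry into the sheath: u >= c_s."""
    return u >= bohm_speed(T_e_eV, ion_mass_amu)

# A 3 eV hydrogen plasma: c_s is of order 1.7e4 m/s.
cs = bohm_speed(3.0)
print(round(cs))
```

The "fulfilled marginally" statement in the abstract corresponds to the equality case u = c_s at the sheath edge.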
Ancestral process and diffusion model with selection
Mano, Shuhei
2008-01-01
The ancestral selection graph in population genetics introduced by Krone and Neuhauser (1997) is an analogue of the coalescent genealogy. The number of ancestral particles, backward in time, of a sample of genes is an ancestral process, which is a birth and death process with quadratic death and linear birth rates. In this paper an explicit form of the number of ancestral particles is obtained, by using the density of the allele frequency in the corresponding diffusion model obtained by Kimura (1955). It is shown that fixation corresponds to convergence of the ancestral process to the stationary measure. The time to fixation of an allele is studied in terms of the ancestral process.
Discriminant Validity Assessment: Use of Fornell & Larcker criterion versus HTMT Criterion
Hamid, M. R. Ab; Sami, W.; Mohmad Sidek, M. H.
2017-09-01
Assessment of discriminant validity is a must in any research that involves latent variables, for the prevention of multicollinearity issues. The Fornell and Larcker criterion is the most widely used method for this purpose. However, a new method has emerged for establishing discriminant validity: the heterotrait-monotrait (HTMT) ratio of correlations. This article therefore presents the results of discriminant validity assessment using both methods. Data from a previous study were used, involving 429 respondents, for empirical validation of a value-based excellence model in higher education institutions (HEI) in Malaysia. From the analysis, convergent, divergent and discriminant validity were established and admissible under the Fornell and Larcker criterion. However, discriminant validity was an issue when employing the HTMT criterion. This shows that the latent variables under study faced multicollinearity and should be examined in further detail. It also implies that the HTMT criterion is a stringent measure that can detect a possible lack of discriminant validity among the latent variables. In conclusion, the instrument, which consisted of six latent variables, was still lacking in terms of discriminant validity and should be explored further.
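For two constructs, the HTMT ratio is the mean heterotrait-heteromethod correlation divided by the geometric mean of the average monotrait-heteromethod correlations. A small sketch with an invented item correlation matrix (not the study's data):

```python
import numpy as np

def htmt(R, idx_a, idx_b):
    """Heterotrait-monotrait ratio for two constructs, given an item
    correlation matrix R and the indicator indices of each construct."""
    hetero = np.mean([R[i, j] for i in idx_a for j in idx_b])
    mono_a = np.mean([R[i, j] for i in idx_a for j in idx_a if i < j])
    mono_b = np.mean([R[i, j] for i in idx_b for j in idx_b if i < j])
    return hetero / np.sqrt(mono_a * mono_b)

# Toy correlation matrix: items 0-1 load on construct A, items 2-3 on B.
R = np.array([
    [1.0, 0.7, 0.3, 0.2],
    [0.7, 1.0, 0.2, 0.3],
    [0.3, 0.2, 1.0, 0.6],
    [0.2, 0.3, 0.6, 1.0],
])
print(round(htmt(R, [0, 1], [2, 3]), 3))  # well below the common 0.85 cutoff
```

A commonly cited decision rule is that HTMT values above roughly 0.85 (or 0.90, depending on the guideline) signal a discriminant validity problem; the toy matrix above passes easily.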
Inflation Model Selection meets Dark Radiation
Tram, Thomas; Vennin, Vincent
2016-01-01
We investigate how inflation model selection is affected by the presence of additional free-streaming relativistic degrees of freedom, i.e. dark radiation. We perform a full Bayesian analysis of both inflation parameters and cosmological parameters taking reheating into account self-consistently. We compute the Bayesian evidence for a few representative inflation scenarios in both the standard $\\Lambda\\mathrm{CDM}$ model and an extension including dark radiation parametrised by its effective number of relativistic species $N_\\mathrm{eff}$. We find that the observational status of most inflationary models is unchanged, with the exception of potentials such as power-law inflation that predict a value for the scalar spectral index that is too large in $\\Lambda\\mathrm{CDM}$ but which can be accommodated when $N_\\mathrm{eff}$ is allowed to vary. In this case, cosmic microwave background data indicate that power-law inflation is one of the best models together with plateau potentials. However, contrary to plateau p...
Failure Criterion for Brick Masonry: A Micro-Mechanics Approach
Directory of Open Access Journals (Sweden)
Kawa Marek
2015-02-01
The paper deals with the formulation of a failure criterion for in-plane loaded masonry. Using a micro-mechanics approach, the strength estimate for a masonry microstructure with constituents obeying the Drucker-Prager criterion is determined numerically. The procedure invokes lower-bound analysis: for assumed stress fields constructed within the masonry periodic cell, the critical load is obtained as the solution of a constrained optimization problem. The analysis is carried out for many different loading conditions at different orientations of the bed joints. The performance of the approach is verified against solutions obtained for the corresponding layered and block microstructures, which provide the upper and lower strength bounds for the masonry microstructure, respectively. Subsequently, a phenomenological anisotropic strength criterion for the masonry microstructure is proposed. The criterion has the form of a conjunction of the Jaeger critical-plane condition and the Tsai-Wu criterion. The proposed model is identified by fitting the numerical results obtained from the microstructural analysis. The identified criterion is then verified against results obtained for different loading orientations. It appears that the strength of the masonry microstructure can be satisfactorily described by the proposed criterion.
High-dimensional model estimation and model selection
CERN. Geneva
2015-01-01
I will review concepts and algorithms from high-dimensional statistics for linear model estimation and model selection. I will particularly focus on the so-called p>>n setting where the number of variables p is much larger than the number of samples n. I will focus mostly on regularized statistical estimators that produce sparse models. Important examples include the LASSO and its matrix extension, the Graphical LASSO, and more recent non-convex methods such as the TREX. I will show the applicability of these estimators in a diverse range of scientific applications, such as sparse interaction graph recovery and high-dimensional classification and regression problems in genomics.
Fuzzy modelling for selecting headgear types.
Akçam, M Okan; Takada, Kenji
2002-02-01
The purpose of this study was to develop a computer-assisted inference model for selecting appropriate types of headgear appliance for orthodontic patients and to investigate its clinical versatility as a decision-making aid for inexperienced clinicians. Fuzzy rule bases were created for degrees of overjet, overbite, and mandibular plane angle variables, respectively, according to subjective criteria based on the clinical experience and knowledge of the authors. The rules were then transformed into membership functions and geometric mean aggregation was performed to develop the inference model. The resultant fuzzy logic was then tested on 85 cases in which the patients had been diagnosed as requiring headgear appliances. Eight experienced orthodontists judged each of the cases and decided whether they 'agreed', 'accepted', or 'disagreed' with the recommendations of the computer system. Intra-examiner agreement was investigated using repeated judgements of a set of 30 orthodontic cases and the kappa statistic. All of the examiners exceeded a kappa score of 0.7, allowing them to participate in the test run of the validity of the proposed inference model. The examiners' agreement with the system's recommendations was evaluated statistically. The average satisfaction rate of the examiners was 95.6 per cent, and in 83 out of the 85 cases (97.6 per cent) the majority of the examiners (i.e. six or more out of the eight) were satisfied with the recommendations of the system. Thus, the usefulness of the proposed inference logic was confirmed.
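The combination of membership functions with geometric-mean aggregation can be sketched as follows; the triangular membership breakpoints and input values below are invented for illustration and are not the authors' calibrated rule bases:

```python
def tri(x, a, b, c):
    """Triangular membership function: 0 outside [a, c], peaking at 1 at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def geometric_mean(vals):
    """Geometric mean used to aggregate per-variable membership degrees."""
    prod = 1.0
    for v in vals:
        prod *= v
    return prod ** (1.0 / len(vals))

# Hypothetical membership degrees for one headgear type, given three
# cephalometric inputs (overjet, overbite, mandibular plane angle).
overjet, overbite, mp_angle = 6.0, 4.0, 30.0
degrees = [
    tri(overjet, 3.0, 7.0, 11.0),
    tri(overbite, 1.0, 4.0, 7.0),
    tri(mp_angle, 20.0, 28.0, 40.0),
]
score = geometric_mean(degrees)
print(round(score, 3))
```

Because the geometric mean is zero whenever any single membership degree is zero, this aggregation naturally vetoes an appliance type that is clearly contraindicated on even one variable.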
SLAM: A Connectionist Model for Attention in Visual Selection Tasks.
Phaf, R. Hans; And Others
1990-01-01
The SeLective Attention Model (SLAM) performs visual selective attention tasks and demonstrates that object selection and attribute selection are both necessary and sufficient for visual selection. The SLAM is described, particularly with regard to its ability to represent an individual subject performing filtering tasks. (TJH)
Estimation of a multivariate mean under model selection uncertainty
Directory of Open Access Journals (Sweden)
Georges Nguefack-Tsague
2014-05-01
Model selection uncertainty occurs when we select a model based on one data set and subsequently apply it for statistical inference, because the "correct" model would not be selected with certainty. When the selection and inference are based on the same data set, additional problems arise due to the correlation of the two stages (selection and inference). In this paper model selection uncertainty is considered and model averaging is proposed. The proposal is related to the theory of James and Stein for estimating more than three parameters from independent normal observations. We suggest that a model averaging scheme taking the selection procedure into account could be more appropriate than model selection alone. Some properties of this model averaging estimator are investigated; in particular, we show using Stein's results that it is a minimax estimator and can outperform Stein-type estimators.
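The James-Stein result underlying the proposal can be illustrated directly: for p ≥ 3 normal means, shrinking the raw observation toward the origin reduces total squared error. A minimal sketch (positive-part variant, synthetic data; not the paper's model averaging estimator):

```python
import numpy as np

def james_stein(x, sigma2=1.0):
    """Positive-part James-Stein estimator of a multivariate normal mean
    (p >= 3): shrinks the raw observation toward the origin."""
    p = x.size
    shrink = max(0.0, 1.0 - (p - 2) * sigma2 / float(x @ x))
    return shrink * x

rng = np.random.default_rng(1)
theta = np.zeros(10)               # true mean (all zeros makes shrinkage pay off)
x = theta + rng.normal(size=10)    # one noisy observation per coordinate
mle_err = float(np.sum((x - theta) ** 2))
js_err = float(np.sum((james_stein(x) - theta) ** 2))
print(round(mle_err, 3), round(js_err, 3))
```

With the true mean at the origin, any shrinkage factor below 1 strictly reduces the squared error relative to the raw observations; the dominance result says the risk improvement holds on average for every true mean when p ≥ 3.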
Liddle, A R; Kunz, M; Mukherjee, P; Parkinson, D; Trotta, R; Liddle, Andrew R; Corasaniti, Pier Stefano; Kunz, Martin; Mukherjee, Pia; Parkinson, David; Trotta, Roberto
2007-01-01
In astro-ph/0702542v2, Linder and Miquel seek to criticize the use of Bayesian model selection for data analysis and for survey forecasting and design. Their discussion is based on three serious misunderstandings of the conceptual underpinnings and application of model-level Bayesian inference, which invalidate all their main conclusions. Their paper includes numerous further inaccuracies, including an erroneous calculation of the Bayesian Information Criterion. Here we seek to set the record straight.
A Criterion for Regular Sequences
Indian Academy of Sciences (India)
D P Patil; U Storch; J Stückrad
2004-05-01
Let $R$ be a commutative noetherian ring and $f_1,\ldots,f_r \in R$. In this article we give (cf. the Theorem in §2) a criterion for $f_1,\ldots,f_r$ to be a regular sequence for a finitely generated module $M$ over $R$, which strengthens and generalises a result in [2]. As an immediate consequence we deduce that if $V(g_1,\ldots,g_r) \subseteq V(f_1,\ldots,f_r)$ in Spec $R$ and if $f_1,\ldots,f_r$ is a regular sequence in $M$, then $g_1,\ldots,g_r$ is also a regular sequence in $M$.
Formation Criterion for Synthetic Jets
2005-10-01
Formation data for the axisymmetric case were published over 50 years ago by Ingard and Labate [10]. More recent studies [33, 34] suggest that L0/d > 1 for... with the axisymmetric data from Ingard and Labate [10] and Smith et al. [33] are compared in Fig. 7. It is found that the available data are consistent with the jet formation criterion with an empirically determined constant K equal to approximately 0.16. The deviation of Ingard and Labate's data at their...
Uncertainty Relation and Inseparability Criterion
Goswami, Ashutosh K.; Panigrahi, Prasanta K.
2016-11-01
We investigate the Peres-Horodecki positive partial transpose criterion in the context of conserved quantities and derive a condition of inseparability for a composite bipartite system that depends only on the dimensions of its subsystems, which leads to a bi-linear entanglement witness for the two-qubit system. A separability inequality using the generalized Schrödinger-Robertson uncertainty relation with suitable operators has been derived, which proves to be stronger than the bi-linear entanglement witness operator. In the case of mixed density matrices, it identically distinguishes the separable and non-separable Werner states.
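The Peres-Horodecki criterion itself is easy to check numerically for a two-qubit state: take the partial transpose and test for negative eigenvalues. A sketch (not the paper's witness construction) applied to Werner states, which in the parametrisation below are entangled exactly for p > 1/3:

```python
import numpy as np

def partial_transpose(rho):
    """Partial transpose over the second qubit of a 4x4 density matrix."""
    r = rho.reshape(2, 2, 2, 2)                   # indices (a, b, a', b')
    return r.transpose(0, 3, 2, 1).reshape(4, 4)  # swap b <-> b'

def ppt_separable(rho, tol=1e-12):
    """Peres-Horodecki test: for 2x2 systems PPT is necessary AND sufficient."""
    eigvals = np.linalg.eigvalsh(partial_transpose(rho))
    return bool(eigvals.min() >= -tol)

def werner(p):
    """Werner state: p |psi^-><psi^-| + (1-p) I/4."""
    psi = np.array([0.0, 1.0, -1.0, 0.0]) / np.sqrt(2)  # singlet
    return p * np.outer(psi, psi) + (1 - p) * np.eye(4) / 4

print(ppt_separable(werner(0.2)), ppt_separable(werner(0.8)))  # True False
```

The smallest eigenvalue of the partial transpose here is (1 - 3p)/4, which crosses zero at p = 1/3, reproducing the known entanglement threshold for Werner states.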
Hidden Markov Model for Stock Selection
Directory of Open Access Journals (Sweden)
Nguyet Nguyen
2015-10-01
Full Text Available The hidden Markov model (HMM) is typically used to predict the hidden regimes of observation data. Therefore, this model finds applications in many different areas, such as speech recognition systems, computational molecular biology and financial market predictions. In this paper, we use HMM for stock selection. We first use HMM to make monthly regime predictions for the four macroeconomic variables: inflation (consumer price index, CPI), industrial production index (INDPRO), stock market index (S&P 500) and market volatility (VIX). At the end of each month, we calibrate HMM’s parameters for each of these economic variables and predict its regimes for the next month. We then look back into historical data to find the time periods for which the four variables had similar regimes with the forecasted regimes. Within those similar periods, we analyze all of the S&P 500 stocks to identify which stock characteristics have been well rewarded during the time periods and assign scores and corresponding weights for each of the stock characteristics. A composite score of each stock is calculated based on the scores and weights of its features. Based on this algorithm, we choose the 50 top ranking stocks to buy. We compare the performances of the portfolio with the benchmark index, S&P 500. With an initial investment of $100 in December 1999, over 15 years, in December 2014, our portfolio had an average gain per annum of 14.9% versus 2.3% for the S&P 500.
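The regime-prediction step described above can be sketched with a two-state HMM filtered by the forward algorithm. This is a minimal illustration under invented assumptions, not the authors' calibration: the two regimes, the transition matrix and the Gaussian emission parameters are all made up for the example.

```python
# Minimal sketch of HMM regime inference via the forward algorithm.
# Two hypothetical regimes ("calm" and "volatile") with Gaussian emissions;
# all parameters are illustrative, not calibrated to real market data.
import math

def gauss_pdf(x, mu, sigma):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def forward_filter(obs, pi, A, emis):
    """Return filtered regime probabilities P(state_t | obs_1..t)."""
    alpha = [pi[s] * emis[s](obs[0]) for s in range(len(pi))]
    norm = sum(alpha)
    alpha = [a / norm for a in alpha]
    for x in obs[1:]:
        alpha = [sum(alpha[r] * A[r][s] for r in range(len(pi))) * emis[s](x)
                 for s in range(len(pi))]
        norm = sum(alpha)
        alpha = [a / norm for a in alpha]
    return alpha

# Illustrative parameters: sticky regimes, distinct emission spreads.
pi = [0.5, 0.5]
A = [[0.95, 0.05], [0.10, 0.90]]
emis = [lambda x: gauss_pdf(x, 0.0, 1.0),   # regime 0: calm
        lambda x: gauss_pdf(x, 0.0, 3.0)]   # regime 1: volatile

probs = forward_filter([0.1, -0.2, 4.0, 5.5, -4.8], pi, A, emis)
print(probs)  # probability mass shifts toward the volatile regime
```

In the paper's setting this filtering step would be repeated monthly for each macroeconomic series after recalibrating the HMM parameters.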
Directory of Open Access Journals (Sweden)
Sanzo Miyazawa
Full Text Available BACKGROUND: Empirical substitution matrices represent the average tendencies of substitutions over various protein families by sacrificing gene-level resolution. We develop a codon-based model, in which mutational tendencies of codon, a genetic code, and the strength of selective constraints against amino acid replacements can be tailored to a given gene. First, selective constraints averaged over proteins are estimated by maximizing the likelihood of each 1-PAM matrix of empirical amino acid (JTT, WAG, and LG and codon (KHG substitution matrices. Then, selective constraints specific to given proteins are approximated as a linear function of those estimated from the empirical substitution matrices. RESULTS: Akaike information criterion (AIC values indicate that a model allowing multiple nucleotide changes fits the empirical substitution matrices significantly better. Also, the ML estimates of transition-transversion bias obtained from these empirical matrices are not so large as previously estimated. The selective constraints are characteristic of proteins rather than species. However, their relative strengths among amino acid pairs can be approximated not to depend very much on protein families but amino acid pairs, because the present model, in which selective constraints are approximated to be a linear function of those estimated from the JTT/WAG/LG/KHG matrices, can provide a good fit to other empirical substitution matrices including cpREV for chloroplast proteins and mtREV for vertebrate mitochondrial proteins. CONCLUSIONS/SIGNIFICANCE: The present codon-based model with the ML estimates of selective constraints and with adjustable mutation rates of nucleotide would be useful as a simple substitution model in ML and Bayesian inferences of molecular phylogenetic trees, and enables us to obtain biologically meaningful information at both nucleotide and amino acid levels from codon and protein sequences.
A convenient accuracy criterion for time domain FE-calculations
DEFF Research Database (Denmark)
Jensen, Morten Skaarup
1997-01-01
An accuracy criterion that is well suited to time domain finite element (FE) calculations is presented. It is then used to develop a method for selecting time steps and element meshes that produce accurate results without significantly overburdening the computer. Use of this method is illustrated...
Metropolis Criterion Based Fuzzy Q-Learning Energy Management for Smart Grids
Directory of Open Access Journals (Sweden)
Haibin Yu
2012-12-01
Full Text Available For the energy management problems of demand response in the electricity grid, a Metropolis-criterion-based fuzzy Q-learning consumer energy management controller (CEMC) is proposed. Because of uncertainties and highly time-varying behavior, it is not easy to accurately obtain complete information about consumers in the electricity grid. In this case Q-learning, which is independent of a mathematical model and prior knowledge, performs well. Fuzzy inference and the Metropolis criterion are introduced in order to facilitate generalization in large state spaces and to balance exploration and exploitation in action selection in Q-learning, respectively. Simulation results show that the proposed controller can learn to take the best action to regulate consumer behavior, with low average end-user financial costs and high consumer satisfaction.
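The Metropolis criterion used here for action selection can be sketched as follows: a randomly drawn action is accepted over the greedy one with probability exp(ΔQ/T), so a high temperature T favors exploration and annealing T shifts the balance toward exploitation. The function name, Q-values and temperatures below are illustrative assumptions, not the paper's implementation.

```python
# Sketch of Metropolis-criterion action selection for Q-learning:
# a random candidate action is accepted over the greedy one with
# probability exp((Q[candidate] - Q[greedy]) / T).
import math
import random

def metropolis_action(q_values, temperature, rng=random):
    greedy = max(range(len(q_values)), key=lambda a: q_values[a])
    candidate = rng.randrange(len(q_values))
    delta = q_values[candidate] - q_values[greedy]
    if delta >= 0 or rng.random() < math.exp(delta / temperature):
        return candidate          # explore (always accepted if no worse)
    return greedy                 # exploit

random.seed(0)
q = [1.0, 0.2, 0.8]               # hypothetical Q-values, action 0 is greedy
hot  = sum(metropolis_action(q, 10.0) != 0 for _ in range(1000))
cold = sum(metropolis_action(q, 0.01) != 0 for _ in range(1000))
print(hot, cold)  # non-greedy choices become rare as T is annealed down
```

The same acceptance rule generalizes directly to the fuzzy-inference setting, where Q-values are produced by the fuzzy rule base rather than a table.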
An error criterion for determining sampling rates in closed-loop control systems
Brecher, S. M.
1972-01-01
The determination of an error criterion which will give a sampling rate for adequate performance of linear, time-invariant closed-loop, discrete-data control systems was studied. The proper modelling of the closed-loop control system for characterization of the error behavior, and the determination of an absolute error definition for performance of the two commonly used holding devices are discussed. The definition of an adequate relative error criterion as a function of the sampling rate and the parameters characterizing the system is established along with the determination of sampling rates. The validity of the expressions for the sampling interval was confirmed by computer simulations. Their application solves the problem of making a first choice in the selection of sampling rates.
The detection of observations possibly influential for model selection
Ph.H.B.F. Franses (Philip Hans)
1991-01-01
Model selection can involve several variables and selection criteria. A simple method to detect observations possibly influential for model selection is proposed. The potentials of this method are illustrated with three examples, each of which is taken from related studies.
Cabras, Stefano; Castellanos, Maria Eugenia; Perra, Silvia
2014-11-20
This paper considers the problem of selecting a set of regressors when the response variable is distributed according to a specified parametric model and observations are censored. Under a Bayesian perspective, the most widely used tools are Bayes factors (BFs), which are undefined when improper priors are used. In order to overcome this issue, fractional (FBF) and intrinsic (IBF) BFs have become common tools for model selection. Both depend on the size, Nt, of a minimal training sample (MTS), while the IBF also depends on the specific MTS used. In the case of regression with censored data, the definition of an MTS is problematic because only uncensored data allow one to turn the improper prior into a proper posterior, and because full exploration of the space of MTSs, which also includes censored observations, is needed to avoid bias in model selection. To address this concern, a sequential MTS was proposed, but it has the drawback of an increased number of possible MTSs, as Nt becomes random. For this reason, we explore the behaviour of the FBF, contextualizing its definition to censored data. We show that the resulting FBFs are consistent, and we provide the corresponding fractional prior. Finally, a large simulation study and an application to real data are used to compare the IBF, the FBF and the well-known Bayesian information criterion.
Multi-Observation Continuous Density Hidden Markov Models for Anomaly Detection in Full Motion Video
2012-06-01
Akaike Information Criterion (AIC) and Bayesian Information Criterion (BIC) are used to... distribution analysis for scoring log-likelihood • Automatically select the MOCDHMM model via the Akaike Information Criterion (AIC) or Bayesian Information Criterion... limited to any particular graphical structure. For example, Xiang uses the Bayesian Information Criterion (BICr) and the Completed Likelihood Akaike's Information
Extensions and Applications of the Bohm Criterion
Baalrud, Scott D; Yee, Benjamin; Hopkins, Matthew; Barnat, Edward
2014-01-01
The generalized Bohm criterion is revisited in the context of incorporating kinetic effects of the electron and ion distribution functions into the theory. The underlying assumptions and results of two different approaches are compared: The conventional `kinetic Bohm criterion' and a fluid-moment hierarchy approach. The former is based on the asymptotic limit of an infinitely thin sheath ($\lambda_D/l = 0$), whereas the latter is based on a perturbative expansion of a sheath that is thin compared to the plasma ($\lambda_D/l \ll 1$). Here $\lambda_D$ is the Debye length, which characterizes the sheath length scale, and $l$ is a measure of the plasma or presheath length scale. The consequences of these assumptions are discussed in terms of how they restrict the class of distribution functions to which the resulting criteria can be applied. Two examples are considered to provide concrete comparisons between the two approaches. The first is a Tonks-Langmuir model including a warm ion source [Robertson 2009 \textit...
Selective experimental review of the Standard Model
Energy Technology Data Exchange (ETDEWEB)
Bloom, E.D.
1985-02-01
Before discussing experimental comparisons with the Standard Model (S-M), it is probably wise to define more completely what is commonly meant by this popular term. This model is a gauge theory of SU(3)_f x SU(2)_L x U(1) with 18 parameters. The parameters are α_s, α_qed, θ_W, M_W (M_Z = M_W/cos θ_W, and thus is not an independent parameter), M_Higgs; the lepton masses M_e, M_μ, M_τ; the quark masses M_d, M_s, M_b, and M_u, M_c, M_t; and finally, the quark mixing angles θ_1, θ_2, θ_3, and the CP-violating phase δ. The latter four parameters appear in the quark mixing matrix for the Kobayashi-Maskawa and Maiani forms. Clearly, the present S-M covers an enormous range of physics topics, and the author can only lightly cover a few such topics in this report. The measurement of R_hadron is fundamental as a test of the running coupling constant α_s in QCD. The author will discuss a selection of recent precision measurements of R_hadron, as well as some other techniques for measuring α_s. QCD also requires the self-interaction of gluons. The search for the three-gluon vertex may be practically realized in the clear identification of gluonic mesons. The author will present a limited review of recent progress in the attempt to untangle such mesons from the plethora of q q̄ states of the same quantum numbers which exist in the same mass range. The electroweak interactions provide some of the strongest evidence supporting the S-M that exists. Given the recent progress in this subfield, and particularly with the discovery of the W and Z bosons at CERN, many recent reviews obviate the need for further discussion in this report. In attempting to validate a theory, one frequently searches for new phenomena which would clearly invalidate it. 49 references, 28 figures.
Formulation of cross-anisotropic failure criterion for soils
Directory of Open Access Journals (Sweden)
Yi-fei SUN
2013-10-01
Full Text Available Inherently anisotropic soil fabric has a considerable influence on soil strength. To model this kind of inherent anisotropy, a three-dimensional anisotropic failure criterion was proposed, employing a scalar-valued anisotropic variable and a modified general three-dimensional isotropic failure criterion. The scalar-valued anisotropic variable in all sectors of the deviatoric plane was defined by correlating a normalized stress tensor with a normalized fabric tensor. A detailed comparison between the available experimental data and the corresponding model predictions in the deviatoric plane was conducted. The proposed failure criterion was shown to predict the failure behavior well in all sectors, especially in sector II with the Lode angle ranging between 60° and 120°, where the prediction was almost in accordance with test data. However, it was also observed that the proposed criterion overestimated the strength of dense Santa Monica Beach sand in sector III, where the intermediate principal stress ratio b varied from approximately 0.2 to 0.8, and slightly underestimated the strength when b was between approximately 0.8 and 1. The difference between the model predictions and experimental data was due to the occurrence of shear banding, which might reduce the measured strength. Therefore, the proposed anisotropic failure criterion has a strong ability to characterize the failure behavior of various soils and potentially allows a better description of the influence of the loading direction with respect to the soil fabric.
Valente, Bruno D.; Morota, Gota; Peñagaricano, Francisco; Gianola, Daniel; Weigel, Kent; Rosa, Guilherme J. M.
2015-01-01
The term “effect” in additive genetic effect suggests a causal meaning. However, inferences of such quantities for selection purposes are typically viewed and conducted as a prediction task. Predictive ability as tested by cross-validation is currently the most acceptable criterion for comparing models and evaluating new methodologies. Nevertheless, it does not directly indicate if predictors reflect causal effects. Such evaluations would require causal inference methods that are not typical in genomic prediction for selection. This suggests that the usual approach to infer genetic effects contradicts the label of the quantity inferred. Here we investigate if genomic predictors for selection should be treated as standard predictors or if they must reflect a causal effect to be useful, requiring causal inference methods. Conducting the analysis as a prediction or as a causal inference task affects, for example, how covariates of the regression model are chosen, which may heavily affect the magnitude of genomic predictors and therefore selection decisions. We demonstrate that selection requires learning causal genetic effects. However, genomic predictors from some models might capture noncausal signal, providing good predictive ability but poorly representing true genetic effects. Simulated examples are used to show that aiming for predictive ability may lead to poor modeling decisions, while causal inference approaches may guide the construction of regression models that better infer the target genetic effect even when they underperform in cross-validation tests. In conclusion, genomic selection models should be constructed to aim primarily for identifiability of causal genetic effects, not for predictive ability. PMID:25908318
An integrated model for supplier selection process
Institute of Scientific and Technical Information of China (English)
无
2003-01-01
In today's highly competitive manufacturing environment, the supplier selection process has become one of the crucial activities in supply chain management. In order to select the best supplier(s), it is necessary not only to continuously track and benchmark the performance of suppliers but also to make a trade-off between tangible and intangible factors, some of which may conflict. In this paper an integration of case-based reasoning (CBR), analytical network process (ANP) and linear programming (LP) is proposed to solve the supplier selection problem.
Dealing with selection bias in educational transition models
DEFF Research Database (Denmark)
Holm, Anders; Jæger, Mads Meier
2011-01-01
This paper proposes the bivariate probit selection model (BPSM) as an alternative to the traditional Mare model for analyzing educational transitions. The BPSM accounts for selection on unobserved variables by allowing unobserved variables which affect the probability of making educational transitions to be correlated across transitions. We use simulated and real data to illustrate how the BPSM improves on the traditional Mare model in terms of correcting for selection bias and providing credible estimates of the effect of family background on educational success. We conclude that models which account for selection on unobserved variables and high-quality data are both required in order to estimate credible educational transition models.
Model for personal computer system selection.
Blide, L
1987-12-01
Successful computer software and hardware selection is best accomplished by following an organized approach such as the one described in this article. The first step is to decide what you want to be able to do with the computer. Secondly, select software that is user friendly, well documented, bug free, and that does what you want done. Next, you select the computer, printer and other needed equipment from the group of machines on which the software will run. Key factors here are reliability and compatibility with other microcomputers in your facility. Lastly, you select a reliable vendor who will provide good, dependable service in a reasonable time. The ability to correctly select computer software and hardware is a key skill needed by medical record professionals today and in the future. Professionals can make quality computer decisions by selecting software and systems that are compatible with other computers in their facility and allow for future networking, ease of use, and adaptability for expansion as new applications are identified. The key to success is to not only provide for your present needs, but to be prepared for future rapid expansion and change in your computer usage as technology and your skills grow.
ADDED VALUE AS EFFICIENCY CRITERION FOR INDUSTRIAL PRODUCTION PROCESS
Directory of Open Access Journals (Sweden)
L. M. Korotkevich
2016-01-01
Full Text Available A literature analysis has shown that the majority of researchers use classical efficiency criteria for constructing an optimization model of a production process: profit maximization; cost minimization; maximization of commercial product output; minimization of backlog in product demand; minimization of total time consumption due to production change. The paper proposes to use an index of added value as an efficiency criterion because it combines the economic and social interests of all the main interested subjects of the business activity: national government, property owners, employees, investors. The following types of added value have been considered in the paper: joint-stock, market, monetary, economic, notional (gross, net, real). The paper suggests using an index of real value added as an efficiency criterion. Such an approach permits to bring notional added value into a comparable form, because added value can be increased not only through efficiency improvement of enterprise activity but also through environmental factors – an excess of the rate of export price increases over the rate of import growth. An analysis of methods for calculation of real value added has been made on a country-by-country basis (extrapolation, simple and double deflation). A method of double deflation has been selected on the basis of the executed analysis, and it is computed according to the Laspeyres, Paasche and Fisher indices. A conclusion has been made that the expressions used do not fully take into account the economic peculiarities of the Republic of Belarus: they are considered inappropriate in the case when product cost is differentiated according to marketing outlets, and they do not take account of differences in the rates of several currencies, which is reflected in the export price of a released product and the import price of raw materials, supplies and component parts. Taking this into consideration, expressions for the calculation of real value added have been specified.
Peer selecting model based on FCM for wireless distributed P2P files sharing systems
Institute of Scientific and Technical Information of China (English)
LI Xi; JI Hong; ZHENG Rui-ming
2010-01-01
In order to improve the performance of wireless distributed peer-to-peer (P2P) file sharing systems, a general system architecture and a novel peer selecting model based on fuzzy cognitive maps (FCM) are proposed in this paper. The new model provides an effective approach to choosing an optimal peer from several resource discovering results for the best file transfer. Compared with the traditional min-hops scheme that uses hops as the only selection criterion, the proposed model uses FCM to investigate the complex relationships among various relevant factors in wireless environments and gives an overall evaluation score for the candidate. It also has strong scalability, being independent of specific P2P resource discovering protocols. Furthermore, a complete implementation is explained in concrete modules. The simulation results show that the proposed model is effective and feasible compared with the min-hops scheme, with the success transfer rate increased by at least 20% and transfer time improved by as much as 34%.
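The FCM evaluation idea, iterating concept activations through a signed weight matrix and reading off an overall score, can be sketched as follows. The concepts (hop count, link quality) and the weights are hypothetical stand-ins; the paper's actual factors and calibration are not reproduced here.

```python
# Sketch of a fuzzy cognitive map (FCM) evaluation for peer selection.
# Input concepts keep their measured activations ("clamped"); the output
# concept accumulates their weighted, sigmoid-squashed influence.
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def fcm_score(initial, weights, out_idx, clamped=(0, 1), iters=50):
    """Iterate the FCM and return the activation of the output concept."""
    state = list(initial)
    for _ in range(iters):
        new = [sigmoid(sum(weights[j][i] * state[j] for j in range(len(state))))
               for i in range(len(state))]
        for c in clamped:              # measured inputs are held fixed
            new[c] = state[c]
        state = new
    return state[out_idx]

# Hypothetical concepts: 0 = hop count (penalty), 1 = link quality (benefit),
# 2 = overall peer score. Weights are illustrative, not from the paper.
W = [[0.0, 0.0, -0.7],
     [0.0, 0.0,  0.9],
     [0.0, 0.0,  0.0]]

good_peer = fcm_score([0.2, 0.9, 0.5], W, out_idx=2)
bad_peer  = fcm_score([0.9, 0.2, 0.5], W, out_idx=2)
print(good_peer, bad_peer)  # the low-hop, high-quality peer scores higher
```

A real deployment would add concepts for signal strength, battery level and load, with weights learned or tuned for the wireless environment.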
LaMont, Colin H
2015-01-01
The failure of the information-based Akaike Information Criterion (AIC) in the context of singular models can be rectified by the definition of a Frequentist Information Criterion (FIC). FIC applies a frequentist approximation to the computation of the model complexity, which can be estimated analytically in many contexts. Like AIC, FIC can be understood as an unbiased estimator of the model predictive performance and is therefore identical to AIC for regular models in the large-observation-number limit ($N\rightarrow \infty$). In the presence of unidentifiable parameters, the complexity exhibits a more general, non-AIC-like scaling ($\gg N^0$). For instance, both BIC-like ($\propto\log N$) and Hannan-Quinn-like ($\propto \log \log N$) scaling with observation number $N$ are observed. Unlike the Bayesian model selection approach, FIC is free from {\it ad hoc} prior probability distributions and appears to be widely applicable to model selection problems. Finally, we demonstrate that FIC (information-based inf...
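As a baseline for the criterion that FIC generalizes: for least-squares fits with Gaussian errors, AIC reduces to n·ln(RSS/n) + 2k up to an additive constant, where k counts fitted parameters (including the noise variance). The data and the two competing models below are synthetic assumptions for illustration only.

```python
# Illustration of the standard AIC comparison that FIC generalizes:
# compare a constant-mean model against a straight-line model on data
# with a genuine linear trend, using AIC = n*ln(RSS/n) + 2k.
import math

def ols_line(xs, ys):
    """Closed-form simple linear regression (intercept, slope)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return my - b * mx, b

def aic(rss, n, k):
    return n * math.log(rss / n) + 2 * k

xs = list(range(20))
ys = [2.0 + 0.5 * x + ((-1) ** x) * 0.3 for x in xs]  # trend + small wiggle

# Model 0: constant mean (k = 2: mean and noise variance).
mean_y = sum(ys) / len(ys)
rss0 = sum((y - mean_y) ** 2 for y in ys)

# Model 1: straight line (k = 3: intercept, slope, noise variance).
a, b = ols_line(xs, ys)
rss1 = sum((y - (a + b * x)) ** 2 for x, y in zip(xs, ys))

aic0, aic1 = aic(rss0, len(ys), 2), aic(rss1, len(ys), 3)
print(aic0, aic1)  # the (true) linear model attains the lower AIC
```

FIC replaces the fixed penalty 2k with a frequentist estimate of the complexity, which matters precisely when parameters are unidentifiable and this simple counting breaks down.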
Generalized cost-criterion-based learning algorithm for diagonal recurrent neural networks
Wang, Yongji; Wang, Hong
2000-05-01
A new generalized-cost-criterion-based learning algorithm for diagonal recurrent neural networks is presented, which takes the form of a recursive prediction error (RPE) algorithm and has second-order convergence. A guideline for the choice of the optimal learning rate is derived from convergence analysis. The application of this method to dynamic modeling of typical chemical processes shows that the generalized cost criterion RPE (GRPE) has higher modeling precision than a BP-trained MLP and the quadratic cost criterion trained RPE (QRPE).
Quality Quandaries- Time Series Model Selection and Parsimony
DEFF Research Database (Denmark)
Bisgaard, Søren; Kulahci, Murat
2009-01-01
Some of the issues involved in selecting adequate models for time series data are discussed using an example concerning the number of users of an Internet server. The process of selecting an appropriate model is subjective and requires experience and judgment. The authors believe an important consideration in model selection should be parameter parsimony. They favor the use of parsimonious mixed ARMA models, noting that research has shown that a model building strategy that considers only autoregressive representations will lead to non-parsimonious models and to loss of forecasting accuracy.
Rutherford, William J.; Corbin, Charles B.
1994-01-01
This study established criterion-referenced standards for selected tests of arm and shoulder girdle strength and endurance in college females. Tests of trained and untrained students using the contrasting groups method yielded criterion cutoff scores that classified subjects as trained or untrained based on upper arm and shoulder girdle resistance…
Extended equal areas criterion: foundations and applications
Energy Technology Data Exchange (ETDEWEB)
Yusheng, Xue [Nanjing Automation Research Institute, Nanjing (China)
1994-12-31
The extended equal area criterion (EEAC) provides analytical expressions for ultra-fast transient stability assessment, flexible sensitivity analysis, and means for preventive and emergency controls. Its outstanding performance has been demonstrated by thousands upon thousands of simulations on more than 50 real power systems and by on-line operation records in an EMS environment of the Northeast China Power System since September 1992. However, this work has mainly been based on heuristics and simulations. This paper lays a theoretical foundation for EEAC and brings to light the mechanism of transient stability. It proves that the dynamic EEAC furnishes a necessary and sufficient condition for stability of multimachine systems with arbitrarily detailed models, in the sense of the integration accuracy. This establishes a new platform for further advancing EEAC and a better understanding of the problems. An overview of EEAC applications in China is also given in this paper. (author) 30 refs.
Cardinality constrained portfolio selection via factor models
Monge, Juan Francisco
2017-01-01
In this paper we propose and discuss different 0-1 linear models for solving the cardinality-constrained portfolio problem using factor models. Factor models are used to build portfolios that track indexes, among other objectives, and also require fewer parameters to estimate than the classical Markowitz model. The addition of cardinality constraints limits the number of securities in the portfolio. Restricting the number of securities in the portfolio allows us to o...
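The cardinality constraint can be made concrete with a toy brute-force version of the tracking problem: pick at most K of N securities to minimize squared tracking error against an index. Equal weights, the tiny return table and the choice of K are deliberate simplifications for illustration; the paper's 0-1 linear models solve this at realistic scale.

```python
# Toy cardinality-constrained index tracking by exhaustive search:
# choose K of 4 securities (equal-weighted, a simplification) so the
# portfolio return tracks the index return as closely as possible.
from itertools import combinations

def tracking_error(returns, picks):
    te = 0.0
    for period in returns:
        port = sum(period[i] for i in picks) / len(picks)
        te += (port - period[-1]) ** 2   # last column is the index return
    return te

# Rows: per-period returns of securities 0-3 followed by the index.
R = [
    [0.01, 0.03, -0.02, 0.02, 0.015],
    [0.00, 0.02,  0.01, 0.03, 0.020],
    [0.02, 0.01,  0.00, 0.01, 0.010],
]
K = 2
best = min(combinations(range(4), K), key=lambda c: tracking_error(R, c))
print(best)  # the pair of securities with the smallest tracking error
```

Exhaustive search is exponential in N; the binary-variable formulations discussed in the paper exist precisely to make this selection tractable for real universes.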
Xu, G; Hughes-Oliver, J M; Brooks, J D; Yeatts, J L; Baynes, R E
2013-01-01
Quantitative structure-activity relationship (QSAR) models are being used increasingly in skin permeation studies. The main idea of QSAR modelling is to quantify the relationship between biological activities and chemical properties, and thus to predict the activity of chemical solutes. As a key step, the selection of a representative and structurally diverse training set is critical to the prediction power of a QSAR model. Early QSAR models selected training sets in a subjective way and solutes in the training set were relatively homogenous. More recently, statistical methods such as D-optimal design or space-filling design have been applied but such methods are not always ideal. This paper describes a comprehensive procedure to select training sets from a large candidate set of 4534 solutes. A newly proposed 'Baynes' rule', which is a modification of Lipinski's 'rule of five', was used to screen out solutes that were not qualified for the study. U-optimality was used as the selection criterion. A principal component analysis showed that the selected training set was representative of the chemical space. Gas chromatograph amenability was verified. A model built using the training set was shown to have greater predictive power than a model built using a previous dataset [1].
Evidence accumulation as a model for lexical selection
Anders, R.; Riès, S.; van Maanen, L.; Alario, F.-X.
2015-01-01
We propose and demonstrate evidence accumulation as a plausible theoretical and/or empirical model for the lexical selection process of lexical retrieval. A number of current psycholinguistic theories consider lexical selection as a process related to selecting a lexical target from a number of
Comparison of Two Gas Selection Methodologies: An Application of Bayesian Model Averaging
Energy Technology Data Exchange (ETDEWEB)
Renholds, Andrea S.; Thompson, Sandra E.; Anderson, Kevin K.; Chilton, Lawrence K.
2006-03-31
One goal of hyperspectral imagery analysis is the detection and characterization of plumes. Characterization includes identifying the gases in the plumes, which is a model selection problem. Two gas selection methods compared in this report are Bayesian model averaging (BMA) and minimum Akaike information criterion (AIC) stepwise regression (SR). Simulated spectral data from a three-layer radiance transfer model were used to compare the two methods. Test gases were chosen to span the types of spectra observed, which exhibit peaks ranging from broad to sharp. The size and complexity of the search libraries were varied. Background materials were chosen to either replicate a remote area of eastern Washington or feature many common background materials. For many cases, BMA and SR performed the detection task comparably in terms of the receiver operating characteristic curves. For some gases, BMA performed better than SR when the size and complexity of the search library increased. This is encouraging because we expect improved BMA performance upon incorporation of prior information on background materials and gases.
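The two methods compared above can be connected through information-criterion weights, a standard approximation in which each model's weight is proportional to exp(-Δ/2), where Δ is its AIC (or BIC) minus the minimum over the candidate set. This is a sketch of that generic weighting scheme, not the report's BMA implementation, and the criterion values are invented for illustration.

```python
# Information-criterion model weights, often used as an approximation to
# full Bayesian model averaging: w_i ∝ exp(-Δ_i / 2), with Δ_i the AIC
# (or BIC) difference from the best model. Values below are made up.
import math

def ic_weights(ic_values):
    best = min(ic_values)
    raw = [math.exp(-(ic - best) / 2.0) for ic in ic_values]
    total = sum(raw)
    return [r / total for r in raw]

weights = ic_weights([100.0, 102.0, 110.0])
print(weights)  # the lowest-criterion model dominates; weights sum to 1
```

Averaging gas-presence indicators with such weights, rather than committing to the single AIC-best regression, is what distinguishes the BMA approach from stepwise selection.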
Electromagnetic Selection Rules for ^{12}C in a 3α Cluster Model
Fortunato, L.; Stellin, G.; Vitturi, A.
2017-01-01
The recent successful application of the Algebraic Cluster Model to the energy spectrum of ^{12}C has brought a new impetus to the spectroscopy of this and other α-conjugate nuclei. In fact, known spectral properties have been reexamined on the basis of vibrations and rotations of three α particles at the vertices of an equilateral triangle, and new excited states have been measured that fit into this scheme. The analysis of this system entails the application of molecular models for rotational-vibrational spectra to the nuclear context and requires deep knowledge of the underlying group-theoretical properties, based on the D_{3h} symmetry, similarly to what is done in quantum chemistry. We have recently analyzed the symmetries of the model and the quantum numbers in great depth, reproducing the all-important results of Wheeler, and we have derived electromagnetic selection rules for the system of three α particles, finding, for instance, that electric dipole E1 and magnetic dipole M1 excitations are excluded from the model. The lowest active modes are therefore E2, E3, ... and M2, M3, ..., although there are further restrictions between certain types of bands. The selection rules summarized in the text provide a criterion for assigning observed lines to the alpha cluster model or not, and they might help to further unravel the electromagnetic properties of ^{12}C. With the perspective of new facilities (such as ELI), where photo-excitation and photo-dissociation experiments will play a major role, a complete understanding of e.m. selection rules as a tool to confirm or disprove nuclear structure models is mandatory.
Selection of Temporal Lags When Modeling Economic and Financial Processes.
Matilla-Garcia, Mariano; Ojeda, Rina B; Marin, Manuel Ruiz
2016-10-01
This paper suggests new nonparametric statistical tools and procedures for modeling linear and nonlinear univariate economic and financial processes. In particular, the tools presented help in selecting relevant lags in the model description of a general linear or nonlinear time series; that is, nonlinear models are not a restriction. The tests seem to be robust to the selection of free parameters. We also show that the test can be used as a diagnostic tool for well-defined models.
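For contrast with the nonparametric tests proposed in the paper, the conventional parametric baseline for lag selection is to fit AR(p) models by least squares over a range of candidate lags and keep the lag minimizing an information criterion. The sketch below is that baseline under invented data, not the paper's procedure.

```python
# Baseline lag selection: fit AR(p) by ordinary least squares for several
# candidate p and choose the one minimizing BIC = n*ln(RSS/n) + p*ln(n).
import math
import random

def solve(A, b):
    """Tiny Gaussian elimination for the small normal-equation systems."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for i in range(n):
        p = max(range(i, n), key=lambda r: abs(M[r][i]))
        M[i], M[p] = M[p], M[i]
        for r in range(i + 1, n):
            f = M[r][i] / M[i][i]
            for c in range(i, n + 1):
                M[r][c] -= f * M[i][c]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

def ar_bic(y, p):
    n = len(y) - p
    X = [[y[t - j] for j in range(1, p + 1)] for t in range(p, len(y))]
    A = [[sum(X[t][i] * X[t][j] for t in range(n)) for j in range(p)]
         for i in range(p)]
    b = [sum(X[t][i] * y[t + p] for t in range(n)) for i in range(p)]
    coef = solve(A, b)
    rss = sum((y[t + p] - sum(c * X[t][i] for i, c in enumerate(coef))) ** 2
              for t in range(n))
    return n * math.log(rss / n) + p * math.log(n)

# Synthetic AR(2) series: y_t = 0.6*y_{t-1} - 0.3*y_{t-2} + noise.
random.seed(7)
y = [1.0, 0.5]
for t in range(2, 200):
    y.append(0.6 * y[-1] - 0.3 * y[-2] + random.gauss(0.0, 0.5))

best_p = min((1, 2, 3, 4), key=lambda p: ar_bic(y, p))
print(best_p)  # a low lag order is selected for this short-memory series
```

The nonparametric statistics in the paper aim to do this relevant-lag detection without committing to a linear AR form at all.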
Maria, G; A Dan; Stefan, D.-N.
2010-01-01
The safe operation of a semi-batch catalytic reactor remains a sensitive issue when highly exothermic side reactions may occur, and various elements such as controllability, stability, safety, and economic aspects have to be considered in process development. Nominal operating conditions are set to avoid excessive thermal sensitivity to variations in the process parameters. Several shortcut or model-based methods are used to estimate the safety limits and runaway boundaries for the op...
Aschenbrenner, Mathias; Kulozik, Ulrich; Foerst, Petra
2012-12-01
The aim of this work was to describe the temperature dependence of microbial inactivation for several storage conditions and protective systems (lactose, trehalose and dextran) in relation to the physical state of the sample, i.e. the glassy or non-glassy state. The resulting inactivation rates k were described by applying two models, Arrhenius and Williams-Landel-Ferry (WLF), in order to evaluate the relevance of diffusional limitation as a protective mechanism. The application of the Arrhenius model revealed a significant decrease in activation energy E(a) for storage conditions close to T(g). This finding is an indication that the protective effect of a surrounding glassy matrix can at least partly be ascribed to its inherent restricted diffusion and mobility. The application of the WLF model revealed that the temperature dependence of microbial inactivation above T(g) is significantly weaker than predicted by the universal coefficients. Thus, it can be concluded that microbial inactivation is not directly linked with the mechanical relaxation behavior of the surrounding matrix, as was reported for viscosity and crystallization phenomena in the case of disaccharide systems.
Zhang, Sun
2015-01-01
In this paper, based on the works of Capozziello et al., we have studied the Noether symmetry approach in the cosmological model with scalar and gauge fields proposed recently by Soda et al. The correct Noether symmetries and the related Lie algebra are given according to the minisuperspace quantum cosmological model. The Wheeler-De Witt (WDW) equation is presented after quantization, and the classical trajectories are then obtained in the semi-classical limit. The oscillating features of the wave function in the cosmic evolution recover the so-called Hartle criterion, and the selection rule in minisuperspace quantum cosmology is strengthened. We have thus realized the proposition that Noether symmetries select classical universes.
Criterion Referenced Measures for Clinical Evaluations.
Pikulski, John J.
This paper discusses criterion referenced tests' characteristics and their use in clinical evaluation. The distinction between diagnostic tests and criterion referenced measures is largely a matter of emphasis. Some authorities believe that in diagnostic testing the emphasis is upon an evaluation of an individual's strengths and weaknesses in…
ON A GENERALIZED GAUSS CONVERGENCE CRITERION
Directory of Open Access Journals (Sweden)
ILEANA BUCUR
2015-07-01
In this paper we combine the well-known Raabe-Duhamel, Kummer, and Bertrand criteria of convergence for series with positive terms, and we obtain a new one which is more powerful than those cited before. Even the famous Gauss criterion, which was in fact our starting point, is a consequence of this new convergence test.
A derivability criterion based on the existence of adjunctions
Rodriguez-Gonzalez, Beatriz
2012-01-01
In this paper we introduce a derivability criterion for functors based on the existence of adjunctions rather than on the existence of resolutions. It constitutes a converse of the Quillen-Maltsiniotis derived adjunction theorem. We present two applications of our derivability criterion. On the one hand, we prove that the two notions of homotopy colimits corresponding to Grothendieck derivators and Quillen model categories are equivalent. On the other hand, we deduce that the internal hom for derived Morita theory constructed by B. Toën is indeed the right derived functor of the internal hom of dg-categories.
The Properties of Model Selection when Retaining Theory Variables
DEFF Research Database (Denmark)
Hendry, David F.; Johansen, Søren
Economic theories are often fitted directly to data to avoid possible model selection biases. We show that embedding a theory model that specifies the correct set of m relevant exogenous variables, x_t, within the larger set of m+k candidate variables, (x_t, w_t), then selection over the second...
Model Selection Criteria for Missing-Data Problems Using the EM Algorithm.
Ibrahim, Joseph G; Zhu, Hongtu; Tang, Niansheng
2008-12-01
We consider novel methods for the computation of model selection criteria in missing-data problems based on the output of the EM algorithm. The methodology is very general and can be applied to numerous situations involving incomplete data within an EM framework, from covariates missing at random in arbitrary regression models to nonignorably missing longitudinal responses and/or covariates. Toward this goal, we develop a class of information criteria for missing-data problems, called IC(H,Q), which yields the Akaike information criterion and the Bayesian information criterion as special cases. The computation of IC(H,Q) requires an analytic approximation to a complicated function, called the H-function, along with output from the EM algorithm used in obtaining maximum likelihood estimates. The approximation to the H-function leads to a large class of information criteria, called IC(H̃(k),Q). Theoretical properties of IC(H̃(k),Q), including consistency, are investigated in detail. To eliminate the analytic approximation to the H-function, a computationally simpler approximation to IC(H,Q), called IC(Q), is proposed, the computation of which depends solely on the Q-function of the EM algorithm. Advantages and disadvantages of IC(H̃(k),Q) and IC(Q) are discussed and examined in detail in the context of missing-data problems. Extensive simulations are given to demonstrate the methodology and examine the small-sample and large-sample performance of IC(H̃(k),Q) and IC(Q) in missing-data problems. An AIDS data set is also presented to illustrate the proposed methodology.
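The AIC and BIC special cases mentioned above are easy to illustrate numerically. The following is a minimal sketch for complete-data Gaussian likelihoods only (it does not implement the IC(H,Q) machinery); the polynomial-order comparison and all values are illustrative:

```python
import numpy as np

def aic(loglik, k):
    """Akaike information criterion: -2*loglik + 2*k."""
    return -2.0 * loglik + 2.0 * k

def bic(loglik, k, n):
    """Bayesian information criterion: -2*loglik + k*log(n)."""
    return -2.0 * loglik + k * np.log(n)

# Compare polynomial regression orders on synthetic data (true model: linear).
rng = np.random.default_rng(0)
n = 200
x = rng.uniform(-1, 1, n)
y = 1.0 + 2.0 * x + rng.normal(0, 0.5, n)

results = {}
for order in (1, 2, 5):
    X = np.vander(x, order + 1)                  # columns x^order ... x^0
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    rss = np.sum((y - X @ beta) ** 2)
    loglik = -0.5 * n * (np.log(2 * np.pi * rss / n) + 1)  # concentrated Gaussian MLE
    k = order + 2                                # coefficients + noise variance
    results[order] = (aic(loglik, k), bic(loglik, k, n))

for order, (a, b) in results.items():
    print(order, round(a, 1), round(b, 1))       # overfitted orders pay a penalty
```

Both criteria trade fit against complexity; BIC's log(n) penalty is the harsher one for n > 7, so it favors the smaller model more strongly.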
Astrophysical Model Selection in Gravitational Wave Astronomy
Adams, Matthew R.; Cornish, Neil J.; Littenberg, Tyson B.
2012-01-01
Theoretical studies in gravitational wave astronomy have mostly focused on the information that can be extracted from individual detections, such as the mass of a binary system and its location in space. Here we consider how the information from multiple detections can be used to constrain astrophysical population models. This seemingly simple problem is made challenging by the high dimensionality and high degree of correlation in the parameter spaces that describe the signals, and by the complexity of the astrophysical models, which can also depend on a large number of parameters, some of which might not be directly constrained by the observations. We present a method for constraining population models using a hierarchical Bayesian modeling approach which simultaneously infers the source parameters and population model and provides the joint probability distributions for both. We illustrate this approach by considering the constraints that can be placed on population models for galactic white dwarf binaries using a future space-based gravitational wave detector. We find that a mission that is able to resolve approximately 5000 of the shortest period binaries will be able to constrain the population model parameters, including the chirp mass distribution and a characteristic galaxy disk radius to within a few percent. This compares favorably to existing bounds, where electromagnetic observations of stars in the galaxy constrain disk radii to within 20%.
On Optimal Input Design and Model Selection for Communication Channels
Energy Technology Data Exchange (ETDEWEB)
Li, Yanyan [ORNL; Djouadi, Seddik M [ORNL; Olama, Mohammed M [ORNL
2013-01-01
In this paper, the optimal model (structure) selection and input design which minimize the worst-case identification error for communication systems are provided. The problem is formulated using metric complexity theory in a Hilbert space setting. It is pointed out that model selection and input design can be handled independently. The Kolmogorov n-width is used to characterize the representation error introduced by model selection, while the Gel'fand and time n-widths are used to represent the inherent error introduced by input design. After the model is selected, an optimal input which minimizes the worst-case identification error is shown to exist. In particular, it is proven that the optimal model for reducing the representation error is a Finite Impulse Response (FIR) model, and the optimal input is an impulse at the start of the observation interval. FIR models are widely popular in communication systems, such as in Orthogonal Frequency Division Multiplexing (OFDM) systems.
Model and Variable Selection Procedures for Semiparametric Time Series Regression
Directory of Open Access Journals (Sweden)
Risa Kato
2009-01-01
Semiparametric regression models are very useful for time series analysis. They facilitate the detection of features resulting from external interventions. The complexity of semiparametric models poses new challenges for issues of nonparametric and parametric inference and model selection that frequently arise from time series data analysis. In this paper, we propose penalized least squares estimators which can simultaneously select significant variables and estimate unknown parameters. An innovative class of variable selection procedures is proposed to select significant variables and basis functions in a semiparametric model. The asymptotic normality of the resulting estimators is established. Information criteria for model selection are also proposed. We illustrate the effectiveness of the proposed procedures with numerical simulations.
Gruzdkov, A A; Gromova, L V; Dmitrieva, Yu V; Alekseeva, A S
2015-06-01
The aim of the work is to analyze the relationship between the consumption of glucose solution by rats and its absorption, and to use this relationship to assess the absorptive capacity of the small intestine in non-anesthetized animals in vivo. Consumption of glucose solution (200 g/l) by fasted rats was recorded in the control condition, after administration of phloridzin (an inhibitor of active glucose transport), and 3 hours after restriction stress. Using a mathematical model, we studied the relative role of factors that can influence the temporal dynamics of glucose consumption by rats. The rate of glucose consumption decreased in the presence of phloridzin (1 mM) and increased after the stress. The results of the modeling are consistent with the experimental data and show that the rate of consumption of glucose solutions depends considerably more on the transport activity of the small intestine than on the glucose concentration in the solution or on the substrate regulation of stomach emptying. Analysis of the dynamics of consumption of glucose solution by intact rats may be considered one of the promising approaches to assessing the absorptive capacity of the small intestine under natural conditions.
Lost-sales inventory systems with a service level criterion
Bijvank, Marco; Vis, Iris F. A.
2012-01-01
Competitive retail environments are characterized by service levels and lost sales in case of excess demand. We contribute to research on lost-sales models with a service level criterion in multiple ways. First, we study the optimal replenishment policy for this type of inventory system as well as
Using multilevel models to quantify heterogeneity in resource selection
Wagner, T.; Diefenbach, D.R.; Christensen, S.A.; Norton, A.S.
2011-01-01
Models of resource selection are being used increasingly to predict or model the effects of management actions rather than simply quantifying habitat selection. Multilevel, or hierarchical, models are an increasingly popular method to analyze animal resource selection because they impose a relatively weak stochastic constraint to model heterogeneity in habitat use and also account for unequal sample sizes among individuals. However, few studies have used multilevel models to model coefficients as a function of predictors that may influence habitat use at different scales or to quantify differences in resource selection among groups. We used an example with white-tailed deer (Odocoileus virginianus) to illustrate how to model resource use as a function of distance to road that varies among deer by road density at the home range scale. We found that deer avoidance of roads decreased as road density increased. Also, we used multilevel models with sika deer (Cervus nippon) and white-tailed deer to examine whether resource selection differed between species. We failed to detect differences in resource use between these two species and showed how information-theoretic and graphical measures can be used to assess how resource use may have differed. Multilevel models can improve our understanding of how resource selection varies among individuals and provide an objective, quantifiable approach to assess differences or changes in resource selection. © The Wildlife Society, 2011.
He, Lirong; Cui, Guangmang; Feng, Huajun; Xu, Zhihai; Li, Qi; Chen, Yueting
2015-03-01
Coded exposure photography makes motion deblurring a well-posed problem. In coded exposure, the integration pattern of light is modulated by opening and closing the shutter within the exposure time, changing the traditional shutter frequency spectrum into a wider frequency band in order to preserve more image information in the frequency domain. The method used to search for the optimal code is significant for coded exposure. In this paper, an improved criterion for optimal code searching is proposed by analyzing the relationship between the code length and the number of ones in the code, and by considering the noise effect on code selection with an affine noise model. The optimal code is then obtained using a genetic search algorithm based on the proposed selection criterion. Experimental results show that the time consumed in searching for the optimal code decreases with the presented method. The restored image exhibits better subjective quality and superior objective evaluation values.
Python Program to Select HII Region Models
Miller, Clare; Lamarche, Cody; Vishwas, Amit; Stacey, Gordon J.
2016-01-01
HII regions are areas of singly ionized hydrogen formed by the ionizing radiation of upper main sequence stars. The infrared fine-structure line emissions, particularly of oxygen, nitrogen, and neon, can give important information about HII regions, including gas temperature and density, elemental abundances, and the effective temperature of the stars that form them. The processes involved in calculating this information from observational data are complex. Models, such as those provided in Rubin 1984 and those produced by Cloudy (Ferland et al., 2013), enable one to extract physical parameters from observational data. However, the multitude of search parameters can make sifting through models tedious. I digitized Rubin's models and wrote a Python program that is able to take observed line ratios and their uncertainties and find the Rubin or Cloudy model that best matches the observational data. By creating a Python script that is user friendly and able to quickly sort through models with a high level of accuracy, this work increases efficiency and reduces human error in matching HII region models to observational data.
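The matching step such a program performs can be sketched as a chi-square search over a model grid. The line-ratio names and grid values below are purely illustrative placeholders, not Rubin's or Cloudy's actual tables:

```python
# Hypothetical grid of model line ratios (illustrative values only).
models = {
    "model_A": {"OIII/OII": 1.8, "NeIII/NeII": 0.6},
    "model_B": {"OIII/OII": 0.9, "NeIII/NeII": 0.3},
    "model_C": {"OIII/OII": 2.5, "NeIII/NeII": 1.1},
}

def best_match(observed, uncertainties, models):
    """Return the model name minimizing chi-square against the observed ratios."""
    def chi2(name):
        return sum(((observed[r] - models[name][r]) / uncertainties[r]) ** 2
                   for r in observed)
    return min(models, key=chi2)

obs = {"OIII/OII": 1.0, "NeIII/NeII": 0.35}
err = {"OIII/OII": 0.2, "NeIII/NeII": 0.1}
print(best_match(obs, err, models))  # model_B lies closest in chi-square
```

Weighting each residual by its measurement uncertainty is what lets noisy ratios contribute less to the match than well-measured ones.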
Methods for model selection in applied science and engineering.
Energy Technology Data Exchange (ETDEWEB)
Field, Richard V., Jr.
2004-10-01
Mathematical models are developed and used to study the properties of complex systems and/or modify these systems to satisfy some performance requirements in just about every area of applied science and engineering. A particular reason for developing a model, e.g., performance assessment or design, is referred to as the model use. Our objective is the development of a methodology for selecting a model that is sufficiently accurate for an intended use. Information on the system being modeled is, in general, incomplete, so that there may be two or more models consistent with the available information. The collection of these models is called the class of candidate models. Methods are developed for selecting the optimal member from a class of candidate models for the system. The optimal model depends on the available information, the selected class of candidate models, and the model use. Classical methods for model selection, including the method of maximum likelihood and Bayesian methods, as well as a method employing a decision-theoretic approach, are formulated to select the optimal model for numerous applications. There is no requirement that the candidate models be random. Classical methods for model selection ignore model use and require data to be available. Examples are used to show that these methods can be unreliable when data is limited. The decision-theoretic approach to model selection does not have these limitations, and model use is included through an appropriate utility function. This is especially important when modeling high risk systems, where the consequences of using an inappropriate model for the system can be disastrous. The decision-theoretic method for model selection is developed and applied for a series of complex and diverse applications. These include the selection of the: (1) optimal order of the polynomial chaos approximation for non-Gaussian random variables and stationary stochastic processes, (2) optimal pressure load model to be
Bayesian Model Selection for LISA Pathfinder
Karnesis, Nikolaos; Sopuerta, Carlos F; Gibert, Ferran; Armano, Michele; Audley, Heather; Congedo, Giuseppe; Diepholz, Ingo; Ferraioli, Luigi; Hewitson, Martin; Hueller, Mauro; Korsakova, Natalia; Plagnol, Eric; Vitale, Stefano
2013-01-01
The main goal of the LISA Pathfinder (LPF) mission is to fully characterize the acceleration noise models and to test key technologies for future space-based gravitational-wave observatories similar to the LISA/eLISA concept. The Data Analysis (DA) team has developed complex three-dimensional models of the LISA Technology Package (LTP) experiment on-board LPF. These models are used for simulations, but more importantly, they will be used for parameter estimation purposes during flight operations. One of the tasks of the DA team is to identify the physical effects that contribute significantly to the properties of the instrument noise. One way of approaching this problem is to recover the essential parameters of the LTP which describe the data. Thus, we want to define the simplest model that efficiently explains the observations. To do so, adopting a Bayesian framework, one has to estimate the so-called Bayes Factor between two competing models. In our analysis, we use three different methods to estimate...
Control structure selection for vapor compression refrigeration cycle
Energy Technology Data Exchange (ETDEWEB)
Yin, Xiaohong; Li, Shaoyuan [Shanghai Jiao Tong Univ., Shanghai (China). Dept. of Automation; Shandong Jianzhu Univ., Jinan (China). School of Information and Electrical Engineering; Cai, Wenjian; Ding, Xudong [Nanyang Technological Univ., Singapore (Singapore). School of Electrical and Electronic Engineering
2013-07-01
A control structure selection criterion which can be used to evaluate the control performance of different control structures for the vapor compression refrigeration cycle is proposed in this paper. The calculation results of the proposed criterion based on the different reduction models are utilized to determine the optimized control model structure. The effectiveness of the criterion is verified by the control performance of the model predictive control (MPC) controllers which are designed based on different model structures. The responses of the different controllers applied to the actual vapor compression refrigeration system indicate that the best model structure is consistent with the one obtained by the proposed structure selection criterion, which is a trade-off between computational complexity and control performance.
Model selection in kernel ridge regression
DEFF Research Database (Denmark)
Exterkate, Peter
2013-01-01
Kernel ridge regression is a technique to perform ridge regression with a potentially infinite number of nonlinear transformations of the independent variables as regressors. This method is gaining popularity as a data-rich nonlinear forecasting tool, which is applicable in many different contexts. The influence of the choice of kernel and the setting of tuning parameters on forecast accuracy is investigated. Several popular kernels are reviewed, including polynomial kernels, the Gaussian kernel, and the Sinc kernel. The latter two kernels are interpreted in terms of their smoothing properties, and the tuning parameters associated to all these kernels are related to smoothness measures of the prediction function and to the signal-to-noise ratio. Based on these interpretations, guidelines are provided for selecting the tuning parameters from small grids using cross-validation. A Monte Carlo study...
Model Selection in Kernel Ridge Regression
DEFF Research Database (Denmark)
Exterkate, Peter
Kernel ridge regression is gaining popularity as a data-rich nonlinear forecasting tool, which is applicable in many different contexts. This paper investigates the influence of the choice of kernel and the setting of tuning parameters on forecast accuracy. We review several popular kernels, including polynomial kernels, the Gaussian kernel, and the Sinc kernel. We interpret the latter two kernels in terms of their smoothing properties, and we relate the tuning parameters associated to all these kernels to smoothness measures of the prediction function and to the signal-to-noise ratio. Based on these interpretations, we provide guidelines for selecting the tuning parameters from small grids using cross-validation. A Monte Carlo study confirms the practical usefulness of these rules of thumb. Finally, the flexible and smooth functional forms provided by the Gaussian and Sinc kernels make them widely...
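A minimal numpy sketch of the procedure both abstracts describe: kernel ridge regression with a Gaussian kernel and a small grid search over the tuning parameters. A simple hold-out split stands in for cross-validation here, and the data and grids are synthetic:

```python
import numpy as np

def gaussian_kernel(A, B, sigma):
    """Gaussian (RBF) kernel matrix between row sets A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def krr_predict(Xtr, ytr, Xte, sigma, lam):
    """Fit kernel ridge regression on (Xtr, ytr) and predict at Xte."""
    K = gaussian_kernel(Xtr, Xtr, sigma)
    alpha = np.linalg.solve(K + lam * np.eye(len(Xtr)), ytr)
    return gaussian_kernel(Xte, Xtr, sigma) @ alpha

rng = np.random.default_rng(1)
X = rng.uniform(-3, 3, (120, 1))
y = np.sin(X[:, 0]) + rng.normal(0, 0.1, 120)
Xtr, ytr, Xva, yva = X[:80], y[:80], X[80:], y[80:]

# Small grid over (kernel width, ridge penalty), scored on the validation split.
grid = [(s, l) for s in (0.3, 1.0, 3.0) for l in (1e-3, 1e-1, 10.0)]
best = min(grid, key=lambda p: np.mean((krr_predict(Xtr, ytr, Xva, *p) - yva) ** 2))
print(best)
```

The kernel width controls the smoothness of the prediction function and the ridge penalty its response to noise, which is exactly why the two are tuned jointly on a grid.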
Development of SPAWM: selection program for available watershed models.
Cho, Yongdeok; Roesner, Larry A
2014-01-01
A selection program for available watershed models (also known as SPAWM) was developed. Thirty-three commonly used watershed models were analyzed in depth and classified in accordance to their attributes. These attributes consist of: (1) land use; (2) event or continuous; (3) time steps; (4) water quality; (5) distributed or lumped; (6) subsurface; (7) overland sediment; and (8) best management practices. Each of these attributes was further classified into sub-attributes. Based on user selected sub-attributes, the most appropriate watershed model is selected from the library of watershed models. SPAWM is implemented using Excel Visual Basic and is designed for use by novices as well as by experts on watershed modeling. It ensures that the necessary sub-attributes required by the user are captured and made available in the selected watershed model.
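The sub-attribute matching SPAWM performs can be sketched as a simple filter over a model library. The entries and attribute names below are illustrative stand-ins, not SPAWM's actual library or schema:

```python
# Hypothetical watershed-model library (illustrative attributes only).
library = [
    {"name": "Model_X", "time": "continuous", "quality": True,  "spatial": "distributed"},
    {"name": "Model_Y", "time": "event",      "quality": False, "spatial": "lumped"},
    {"name": "Model_Z", "time": "continuous", "quality": True,  "spatial": "lumped"},
]

def select_models(library, **required):
    """Return names of models whose attributes match every user requirement."""
    return [m["name"] for m in library
            if all(m.get(k) == v for k, v in required.items())]

print(select_models(library, time="continuous", quality=True))
# → ['Model_X', 'Model_Z']
```

Each keyword argument plays the role of one user-selected sub-attribute; only models satisfying all of them survive the filter.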
Gerretzen, Jan; Szymańska, Ewa; Bart, Jacob; Davies, Antony N; van Manen, Henk-Jan; van den Heuvel, Edwin R; Jansen, Jeroen J; Buydens, Lutgarde M C
2016-09-28
The aim of data preprocessing is to remove data artifacts, such as a baseline, scatter effects or noise, and to enhance the contextually relevant information. Many preprocessing methods exist to deliver one or more of these benefits, but it is difficult to select which method or combination of methods should be used for the specific data being analyzed. Recently, we have shown that a preprocessing selection approach based on Design of Experiments (DoE) enables correct selection of highly appropriate preprocessing strategies within reasonable time frames. In that approach, the focus was solely on improving the predictive performance of the chemometric model. This is, however, only one of the two relevant criteria in modeling: interpretation of the model results can be just as important. Variable selection is often used to achieve such interpretation. Data artifacts, however, may hamper proper variable selection by masking the true relevant variables. The choice of preprocessing therefore has a huge impact on the outcome of variable selection methods and may thus hamper an objective interpretation of the final model. To enhance such objective interpretation, we here integrate variable selection into the preprocessing selection approach that is based on DoE. We show that the entanglement of preprocessing selection and variable selection not only improves the interpretation, but also the predictive performance of the model. This is achieved by analyzing several experimental data sets of which the true relevant variables are available as prior knowledge. We show that a selection of variables is provided that complies more with the true informative variables compared to individual optimization of both model aspects. Importantly, the approach presented in this work is generic. Different types of models (e.g. PCR, PLS, …) can be incorporated into it, as well as different variable selection methods and different preprocessing methods, according to the taste and experience of
Quantile hydrologic model selection and model structure deficiency assessment: 2. Applications
Pande, S.
2013-01-01
Quantile hydrologic model selection and structure deficiency assessment is applied in three case studies. The performance of the quantile model selection problem is rigorously evaluated using a model structure on the French Broad river basin data set. The case study shows that quantile model selection
The genealogy of samples in models with selection.
Neuhauser, C; Krone, S M
1997-02-01
We introduce the genealogy of a random sample of genes taken from a large haploid population that evolves according to random reproduction with selection and mutation. Without selection, the genealogy is described by Kingman's well-known coalescent process. In the selective case, the genealogy of the sample is embedded in a graph with a coalescing and branching structure. We describe this graph, called the ancestral selection graph, and point out differences and similarities with Kingman's coalescent. We present simulations for a two-allele model with symmetric mutation in which one of the alleles has a selective advantage over the other. We find that when the allele frequencies in the population are already in equilibrium, then the genealogy does not differ much from the neutral case. This is supported by rigorous results. Furthermore, we describe the ancestral selection graph for other selective models with finitely many selection classes, such as K-allele models, infinitely-many-alleles models, DNA sequence models, and infinitely-many-sites models, and briefly discuss the diploid case.
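The two-allele setting with symmetric mutation and a selective advantage can be simulated forward in time with a Wright-Fisher-type model. This sketches the population dynamics the abstract refers to, not the ancestral selection graph itself, and all parameter values are illustrative:

```python
import numpy as np

rng = np.random.default_rng(7)
N, s, mu, gens = 1000, 0.05, 0.01, 2000   # population size, advantage, mutation rate
freq, traj = 0.5, []
for _ in range(gens):
    p = freq * (1 + s) / (freq * (1 + s) + (1 - freq))  # selection favors allele A
    p = p * (1 - mu) + (1 - p) * mu                      # symmetric mutation
    freq = rng.binomial(N, p) / N                        # random reproduction (drift)
    traj.append(freq)

# With s > 0 the favored allele settles at a frequency well above 1/2,
# where selection pressure balances the symmetric mutation flux.
print(round(float(np.mean(traj[500:])), 2))
```

Setting s = 0 recovers the neutral case in which the frequency wanders around 1/2, which is the regime where the genealogy reduces to Kingman's coalescent.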
Adapting AIC to conditional model selection
M. van Ommen (Matthijs)
2012-01-01
In statistical settings such as regression and time series, we can condition on observed information when predicting the data of interest. For example, a regression model explains the dependent variables $y_1, \ldots, y_n$ in terms of the independent variables $x_1, \ldots, x_n$.
Random effect selection in generalised linear models
DEFF Research Database (Denmark)
Denwood, Matt; Houe, Hans; Forkman, Björn
We analysed abattoir recordings of meat inspection codes with possible relevance to onfarm animal welfare in cattle. Random effects logistic regression models were used to describe individual-level data obtained from 461,406 cattle slaughtered in Denmark. Our results demonstrate that the largest...
A Decision Model for Selecting Participants in Supply Chain
Institute of Scientific and Technical Information of China (English)
Anonymous
2001-01-01
In order to satisfy the rapidly changing requirements of customers, enterprises must cooperate with each other to form supply chains. The first and most important stage in forming a supply chain is the selection of participants. The article proposes a two-stage decision model to select partners. The first stage is an inter-company comparison in each business process to select high-efficiency candidates based on inside variables. The next stage is to analyse the combinations of different candidates in order to select the best partners according to a goal-programming model.
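The two stages can be sketched as follows; the candidate names, efficiency scores, and synergy bonuses are invented for illustration, and a simple additive score stands in for the goal-programming model:

```python
from itertools import product

# Stage-1 inputs: per-process efficiency scores from inside variables (hypothetical).
candidates = {
    "manufacturing": {"M1": 0.9, "M2": 0.6, "M3": 0.8},
    "logistics":     {"L1": 0.7, "L2": 0.85},
}
# Stage-2 input: synergy bonuses for specific candidate combinations (hypothetical).
synergy = {("M1", "L2"): 0.2, ("M3", "L1"): 0.3}

# Stage 1: keep the two most efficient candidates in each business process.
shortlist = {p: sorted(c, key=c.get, reverse=True)[:2] for p, c in candidates.items()}

# Stage 2: score every combination of shortlisted candidates.
def score(combo):
    return (sum(candidates[p][m] for p, m in zip(candidates, combo))
            + synergy.get(tuple(combo), 0.0))

best = max(product(*(shortlist[p] for p in candidates)), key=score)
print(best)  # ('M1', 'L2'): best combined efficiency plus synergy
```

The point of the two-stage structure is that the combinatorial search in stage 2 only runs over the shortlisted candidates, not the full candidate pool.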
Energy Technology Data Exchange (ETDEWEB)
Miekina, A.; Morawski, Roman Z, E-mail: r.morawski@ire.pw.edu.p [Warsaw University of Technology, Faculty of Electronics and Information Technology, Institute of Radioelectronics, Warsaw (Poland)
2010-07-01
The spectrophotometric analysis of oil mixtures containing olive oil is the subject of this paper. Its objective is to propose and evaluate a new method for wavelength selection aimed at optimising the numerical conditioning of the problem of determining a selected component of such a mixture. The performance of the proposed methodology is assessed using semi-synthetic data and a criterion related to measurement uncertainty.
Zhao, Xiaozhao; Hou, Yuexian; Song, Dawei; Li, Wenjie
2017-03-16
Typical dimensionality reduction (DR) methods are data-oriented, focusing on directly reducing the number of random variables (or features) while retaining the maximal variations in the high-dimensional data. Targeting unsupervised situations, this paper aims to address the problem from a novel perspective and considers model-oriented DR in parameter spaces of binary multivariate distributions. Specifically, we propose a general parameter reduction criterion, called confident-information-first (CIF) principle, to maximally preserve confident parameters and rule out less confident ones. Formally, the confidence of each parameter can be assessed by its contribution to the expected Fisher information distance within a geometric manifold over the neighborhood of the underlying real distribution. Then, we demonstrate two implementations of CIF in different scenarios. First, when there are no observed samples, we revisit the Boltzmann machines (BMs) from a model selection perspective and theoretically show that both the fully visible BM and the BM with hidden units can be derived from the general binary multivariate distribution using the CIF principle. This finding would help us uncover and formalize the essential parts of the target density that BM aims to capture and the nonessential parts that BM should discard. Second, when there exist observed samples, we apply CIF to the model selection for BM, which is in turn made adaptive to the observed samples. The sample-specific CIF is a heuristic method to decide the priority order of parameters, which can improve the search efficiency without degrading the quality of model selection results as shown in a series of density estimation experiments.
Modeling HIV-1 drug resistance as episodic directional selection.
Directory of Open Access Journals (Sweden)
Ben Murrell
The evolution of substitutions conferring drug resistance to HIV-1 is both episodic, occurring when patients are on antiretroviral therapy, and strongly directional, with site-specific resistant residues increasing in frequency over time. While methods exist to detect episodic diversifying selection and continuous directional selection, no evolutionary model combining these two properties has been proposed. We present two models of episodic directional selection (MEDS and EDEPS), which allow the a priori specification of lineages expected to have undergone directional selection. The models infer the sites and target residues that were likely subject to directional selection, using either codon or protein sequences. Compared to its null model of episodic diversifying selection, MEDS provides a superior fit to most sites known to be involved in drug resistance, and neither a test for episodic diversifying selection nor one for constant directional selection is able to detect as many true positives as MEDS and EDEPS while maintaining acceptable levels of false positives. This suggests that episodic directional selection is a better description of the process driving the evolution of drug resistance.
Asset pricing model selection: Indonesian Stock Exchange
Pasaribu, Rowland Bismark Fernando
2010-01-01
The Capital Asset Pricing Model (CAPM) has dominated finance theory for over thirty years; it suggests that the market beta alone is sufficient to explain stock returns. However, evidence shows that the cross-section of stock returns cannot be described solely by the one-factor CAPM. Therefore, the idea is to add other factors in order to complement the beta in explaining the price movements in the stock exchange. The Arbitrage Pricing Theory (APT) has been proposed as the first multifactor succ...
A mixed model reduction method for preserving selected physical information
Zhang, Jing; Zheng, Gangtie
2017-03-01
A new model reduction method in the frequency domain is presented. By mixedly using the model reduction techniques from both the time domain and the frequency domain, the dynamic model is condensed to selected physical coordinates, and the contribution of slave degrees of freedom is taken as a modification to the model in the form of effective modal mass of virtually constrained modes. The reduced model can preserve the physical information related to the selected physical coordinates such as physical parameters and physical space positions of corresponding structure components. For the cases of non-classical damping, the method is extended to the model reduction in the state space but still only contains the selected physical coordinates. Numerical results are presented to validate the method and show the effectiveness of the model reduction.
Two-step variable selection in quantile regression models
Directory of Open Access Journals (Sweden)
FAN Yali
2015-06-01
Full Text Available We propose a two-step variable selection procedure for high-dimensional quantile regressions, in which the dimension of the covariates, pn, is much larger than the sample size n. In the first step, we apply an l1 penalty and demonstrate that the first-step penalized estimator with the LASSO penalty can reduce the model from ultra-high dimensional to one whose size has the same order as that of the true model, and that the selected model covers the true model. The second step excludes the remaining irrelevant covariates by applying the adaptive LASSO penalty to the reduced model obtained from the first step. Under some regularity conditions, we show that our procedure enjoys model selection consistency. We conduct a simulation study and a real data analysis to evaluate the finite sample performance of the proposed approach.
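The two-step idea — LASSO screening followed by adaptive LASSO on the survivors — can be sketched with a plain coordinate-descent solver. This is our own minimal NumPy illustration (least-squares loss rather than the paper's quantile loss, and the data are simulated), not the authors' code:

```python
import numpy as np

def lasso_cd(X, y, lam, weights=None, n_iter=200):
    # Coordinate-descent LASSO for (1/2n)||y - Xb||^2 + lam * sum_j w_j |b_j|
    n, p = X.shape
    w = np.ones(p) if weights is None else weights
    beta = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0)
    for _ in range(n_iter):
        for j in range(p):
            r = y - X @ beta + X[:, j] * beta[j]      # partial residual excluding j
            rho = X[:, j] @ r
            beta[j] = np.sign(rho) * max(abs(rho) - lam * w[j] * n, 0.0) / col_sq[j]
    return beta

rng = np.random.default_rng(0)
n, p = 100, 50
X = rng.standard_normal((n, p))
beta_true = np.zeros(p)
beta_true[:3] = [3.0, -2.0, 1.5]
y = X @ beta_true + 0.1 * rng.standard_normal(n)

# Step 1: plain LASSO screens down to a candidate set covering the true model
b1 = lasso_cd(X, y, lam=0.1)
keep = np.flatnonzero(b1 != 0)
# Step 2: adaptive LASSO on the reduced model; weights 1/|b1| penalize weak signals harder
b2 = lasso_cd(X[:, keep], y, lam=0.1, weights=1.0 / np.abs(b1[keep]))
selected = keep[np.flatnonzero(b2 != 0)]
print(selected)
```

With this signal-to-noise ratio the three true covariates survive both steps, while weak spurious survivors of step 1 get large adaptive weights and are excluded.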
Hou, Yanqing; Verhagen, Sandra; Wu, Jie
2016-12-01
Ambiguity Resolution (AR) is a key technique in GNSS precise positioning. With weak models (i.e., low-precision data), however, the success rate of AR may be low, and wrong fixing may consequently introduce large errors into the baseline solution. Partial Ambiguity Resolution (PAR) has therefore been proposed, such that baseline precision can be improved by fixing only a subset of ambiguities with a high success rate. This contribution proposes a new PAR strategy in which the subset is selected so that the expected precision gain is maximized among a set of pre-selected subsets while the failure rate is controlled. These pre-selected subsets are chosen to achieve the highest success rate among those of the same size. The strategy is called the Two-step Success Rate Criterion (TSRC) because it first tries to fix a relatively large subset, using the fixed failure rate ratio test (FFRT) to decide on acceptance or rejection. In case of rejection, a smaller subset is fixed and validated by the ratio test so as to fulfill the overall failure rate criterion. A simulation validation shows how the method can be used in practice without a large additional computational effort and, more importantly, how it can improve (or at least not degrade) the availability in terms of baseline precision compared to the classical Success Rate Criterion (SRC) PAR strategy. In the simulation validation, significant improvements are obtained for single-GNSS on short baselines with dual-frequency observations. For dual-constellation GNSS, the improvement for single-frequency observations on short baselines is very significant, on average 68%. For medium to long baselines with dual-constellation GNSS, the average improvement is around 20-30%.
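The size-versus-success-rate trade-off that PAR strategies navigate can be made concrete with the standard bootstrapped success-rate formula, P_s = Π_i [2Φ(1/(2σ_i)) − 1], where the σ_i are the conditional standard deviations of the decorrelated ambiguities. The sketch below assumes those standard deviations are given and is not the authors' implementation:

```python
import math

def norm_cdf(x):
    # Standard normal CDF via the error function
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def bootstrap_success_rate(cond_stds):
    # P_s = prod_i [ 2 * Phi(1 / (2 * sigma_i)) - 1 ]
    p = 1.0
    for s in cond_stds:
        p *= 2.0 * norm_cdf(1.0 / (2.0 * s)) - 1.0
    return p

def largest_fixable_subset(cond_stds, p_min):
    # Fix ambiguities in order of precision until the success rate would drop below p_min
    k, p = 0, 1.0
    for s in sorted(cond_stds):
        p *= 2.0 * norm_cdf(1.0 / (2.0 * s)) - 1.0
        if p < p_min:
            break
        k += 1
    return k

print(largest_fixable_subset([0.05, 0.1, 0.3, 0.6], 0.99))  # -> 2
```

Only the two most precise ambiguities can be fixed here: adding the third (σ = 0.3 cycles) drags the success rate to about 0.90, below the 0.99 requirement.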
Selection of probability based weighting models for Boolean retrieval system
Energy Technology Data Exchange (ETDEWEB)
Ebinuma, Y. (Japan Atomic Energy Research Inst., Tokai, Ibaraki. Tokai Research Establishment)
1981-09-01
Automatic weighting models based on probability theory were studied to determine whether they can be applied to Boolean search logic including logical sums. The INIS database was searched with one particular search formula. Among sixteen models, three with good ranking performance were selected. These three models were then applied to searches with nine search formulas in the same database. Two of them showed slightly better average ranking performance, while the remaining model, the simplest one, also seems practical.
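The abstract does not name its sixteen models, but the general idea — take the document set matched by a Boolean formula and rank it with a probability-motivated term weight — can be shown with a simple inverse-document-frequency weight (our choice of weighting, purely illustrative):

```python
import math

def idf_weights(docs):
    # Rarer terms are stronger evidence of relevance, so they get higher weight
    n = len(docs)
    df = {}
    for d in docs:
        for t in set(d):
            df[t] = df.get(t, 0) + 1
    return {t: math.log(n / c) for t, c in df.items()}

def rank_boolean_or(docs, terms):
    # Boolean OR retrieval, then rank the hits by summed term weights
    w = idf_weights(docs)
    hits = [(i, sum(w.get(t, 0.0) for t in terms if t in d))
            for i, d in enumerate(docs) if any(t in d for t in terms)]
    return sorted(hits, key=lambda h: -h[1])

docs = [{"reactor", "safety"}, {"reactor"}, {"neutron", "flux"}, {"safety", "neutron"}]
print([i for i, _ in rank_boolean_or(docs, ["reactor", "flux"])])  # -> [2, 0, 1]
```

Document 2 ranks first because "flux" occurs in only one document and therefore outweighs the more common "reactor".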
Model Selection Through Sparse Maximum Likelihood Estimation
Banerjee, Onureena; D'Aspremont, Alexandre
2007-01-01
We consider the problem of estimating the parameters of a Gaussian or binary distribution in such a way that the resulting undirected graphical model is sparse. Our approach is to solve a maximum likelihood problem with an added l_1-norm penalty term. The problem as formulated is convex but the memory requirements and complexity of existing interior point methods are prohibitive for problems with more than tens of nodes. We present two new algorithms for solving problems with at least a thousand nodes in the Gaussian case. Our first algorithm uses block coordinate descent, and can be interpreted as recursive l_1-norm penalized regression. Our second algorithm, based on Nesterov's first order method, yields a complexity estimate with a better dependence on problem size than existing interior point methods. Using a log determinant relaxation of the log partition function (Wainwright & Jordan (2006)), we show that these same algorithms can be used to solve an approximate sparse maximum likelihood problem for...
Sensitivity of resource selection and connectivity models to landscape definition
Katherine A. Zeller; Kevin McGarigal; Samuel A. Cushman; Paul Beier; T. Winston Vickers; Walter M. Boyce
2017-01-01
Context: The definition of the geospatial landscape is the underlying basis for species-habitat models, yet sensitivity of habitat use inference, predicted probability surfaces, and connectivity models to landscape definition has received little attention. Objectives: We evaluated the sensitivity of resource selection and connectivity models to four landscape...
A Working Model of Natural Selection Illustrated by Table Tennis
Dinc, Muhittin; Kilic, Selda; Aladag, Caner
2013-01-01
Natural selection is one of the most important topics in biology and it helps to clarify the variety and complexity of organisms. However, students in almost every stage of education find it difficult to understand the mechanism of natural selection and they can develop misconceptions about it. This article provides an active model of natural…
Elementary Teachers' Selection and Use of Visual Models
Lee, Tammy D.; Gail Jones, M.
2017-07-01
As science grows in complexity, science teachers face an increasing challenge of helping students interpret models that represent complex science systems. Little is known about how teachers select and use models when planning lessons. This mixed methods study investigated the pedagogical approaches and visual models used by elementary in-service and preservice teachers in the development of a science lesson about a complex system (e.g., water cycle). Sixty-seven elementary in-service and 69 elementary preservice teachers completed a card sort task designed to document the types of visual models (e.g., images) that teachers choose when planning science instruction. Quantitative and qualitative analyses were conducted to analyze the card sort task. Semistructured interviews were conducted with a subsample of teachers to elicit the rationale for image selection. Results from this study showed that both experienced in-service teachers and novice preservice teachers tended to select similar models and use similar rationales for images to be used in lessons. Teachers tended to select models that were aesthetically pleasing and simple in design and illustrated specific elements of the water cycle. The results also showed that teachers were not likely to select images that represented the less obvious dimensions of the water cycle. Furthermore, teachers selected visual models more as a pedagogical tool to illustrate specific elements of the water cycle and less often as a tool to promote student learning related to complex systems.
A Robust Supply Chain Coordination Model Based on the Minimax Regret Criterion
Institute of Scientific and Technical Information of China (English)
邱若臻; 黄小原
2011-01-01
A supply chain robust coordination model based on a buyback contract is developed for a two-stage supply chain system with unknown demand distribution. The robust order policy for the integrated supply chain, together with the robust contract coordination policy for the decentralized supply chain under the minimax regret criterion, is derived by robust optimization using only the support information of demand. The regrets of a supply chain system and of its members that do not operate optimally for lack of information are analyzed under different service levels and contract parameters. Lastly, a numerical example is used to evaluate the optimal decision and the robust decision based on the minimax regret criterion, and to verify the effect of the robust buyback contract coordination strategy under different demand distributions. The results show that the supply chain coordination strategy with a buyback contract based on the minimax regret criterion is strongly robust and can effectively reduce the impact of demand uncertainty on the performance of the supply chain system and its members.
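The minimax-regret logic can be isolated in a one-line newsvendor example: with demand known only to lie in an interval [a, b], underage cost cu, and overage cost co, the worst-case regret of ordering q is max(cu(b − q), co(q − a)), which is minimized where the two worst cases are equal. This is a deliberately stripped-down sketch; the paper's buyback-contract model is much richer:

```python
def minimax_regret_order(a, b, cu, co):
    # Worst-case regret of ordering q over demand in [a, b]:
    #   max( cu * (b - q),  co * (q - a) )
    # Equalizing the two worst cases minimizes it.
    return (cu * b + co * a) / (cu + co)

q = minimax_regret_order(50, 150, cu=4, co=1)
# Brute-force check over integer order quantities
worst = lambda x: max(4 * (150 - x), 1 * (x - 50))
q_grid = min(range(50, 151), key=worst)
print(q, q_grid)  # -> 130.0 130
```

The higher the underage cost relative to the overage cost, the closer the robust order quantity moves to the upper end of the demand interval.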
Fluctuating selection models and McDonald-Kreitman type analyses.
Directory of Open Access Journals (Sweden)
Toni I Gossmann
Full Text Available It is likely that the strength of selection acting upon a mutation varies through time due to changes in the environment. However, most population genetic theory assumes that the strength of selection remains constant. Here we investigate the consequences of fluctuating selection pressures on the quantification of adaptive evolution using McDonald-Kreitman (MK) style approaches. In agreement with previous work, we show that fluctuating selection can generate evidence of adaptive evolution even when the expected strength of selection on a mutation is zero. However, we also find that the mutations which contribute to both polymorphism and divergence tend, on average, to be positively selected during their lifetime under fluctuating selection models. This is because mutations that fluctuate, by chance, to positively selected values tend to reach higher frequencies in the population than those that fluctuate towards negative values. Hence the evidence of positive adaptive evolution detected under a fluctuating selection model by MK type approaches is genuine, since fixed mutations tend to be advantageous on average during their lifetime. Nevertheless, we show that these methods tend to underestimate the rate of adaptive evolution when selection fluctuates.
The Optimal Portfolio Selection Model under g -Expectation
National Research Council Canada - National Science Library
Li Li
2014-01-01
This paper solves the optimal portfolio selection model under the framework of the prospect theory proposed by Kahneman and Tversky in the 1970s with decision rule replaced by the g -expectation introduced by Peng...
Robust Decision-making Applied to Model Selection
Energy Technology Data Exchange (ETDEWEB)
Hemez, Francois M. [Los Alamos National Laboratory
2012-08-06
The scientific and engineering communities are relying more and more on numerical models to simulate increasingly complex phenomena. Selecting a model, from among a family of models that meets the simulation requirements, presents a challenge to modern-day analysts. To address this concern, a framework anchored in info-gap decision theory is adopted. The framework proposes to select models by examining the trade-offs between prediction accuracy and sensitivity to epistemic uncertainty. The framework is demonstrated on two structural engineering applications by asking the following question: Which model, of several numerical models, approximates the behavior of a structure when the parameters that define each of those models are unknown? One observation is that models that are nominally more accurate are not necessarily more robust, and their accuracy can deteriorate greatly depending upon the assumptions made. It is posited that, as reliance on numerical models increases, establishing robustness will become as important as demonstrating accuracy.
Sensor Optimization Selection Model Based on Testability Constraint
Institute of Scientific and Technical Information of China (English)
YANG Shuming; QIU Jing; LIU Guanjun
2012-01-01
Sensor selection and optimization is one of the important parts of design for testability. To address the problems that the traditional sensor optimization selection model neither takes into account the testability requirements of prognostics and health management, especially fault prognostics, nor considers the impact of actual sensor attributes on fault detectability, a novel sensor optimization selection model is proposed. Firstly, a universal architecture for sensor selection and optimization is provided. Secondly, a new testability index named fault predictable rate is defined to describe fault prognostics requirements for testability. Thirdly, a sensor selection and optimization model for prognostics and health management is constructed, which takes sensor cost as the objective function and the defined testability indexes as constraint conditions. Due to the NP-hard property of the model, a genetic algorithm is designed to obtain the optimal solution. At last, a case study is presented to demonstrate the sensor selection approach for a stable tracking servo platform. The application results and comparison analysis show that the proposed model and algorithm are effective and feasible. This approach can be used to select sensors for prognostics and health management of any system.
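The shape of the optimization problem — minimize sensor cost subject to a detectability constraint — can be shown with a toy greedy heuristic that picks the most cost-effective sensor until every fault is detectable. The paper itself solves its richer model with a genetic algorithm; all names and data below are hypothetical:

```python
def select_sensors(sensors, faults):
    # sensors: name -> (cost, set of faults that sensor can detect)
    # Greedy by cost per newly covered fault; assumes full coverage is feasible.
    required = set(faults)
    chosen, covered = [], set()
    while covered < required:
        name, (cost, det) = min(
            ((n, s) for n, s in sensors.items() if n not in chosen and s[1] - covered),
            key=lambda item: item[1][0] / len(item[1][1] - covered))
        chosen.append(name)
        covered |= det & required
    return chosen

sensors = {"s1": (5, {"f1", "f2"}), "s2": (3, {"f2"}),
           "s3": (4, {"f3"}), "s4": (9, {"f1", "f2", "f3"})}
print(select_sensors(sensors, {"f1", "f2", "f3"}))  # -> ['s1', 's3']
```

The greedy pass prefers the pair s1 + s3 (total cost 9) over the single all-covering but expensive s4, illustrating why the selection must weigh cost against coverage rather than coverage alone.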
SELECTION MOMENTS AND GENERALIZED METHOD OF MOMENTS FOR HETEROSKEDASTIC MODELS
Directory of Open Access Journals (Sweden)
Constantin ANGHELACHE
2016-06-01
Full Text Available In this paper, the authors describe moment selection methods and the application of the generalized method of moments (GMM) to heteroskedastic models. The utility of GMM estimators is found in the study of financial market models. The moment selection criteria are applied for efficient GMM estimation of univariate time series with martingale difference errors, similar to those studied so far by Kuersteiner.
Modeling Suspicious Email Detection using Enhanced Feature Selection
2013-01-01
The paper presents a suspicious email detection model which incorporates enhanced feature selection. In the paper we propose the use of feature selection strategies along with classification techniques for terrorist email detection. The presented model focuses on the evaluation of machine learning algorithms such as decision tree (ID3), logistic regression, Naïve Bayes (NB), and Support Vector Machine (SVM) for detecting emails containing suspicious content. In the literature, various algo...
RUC at TREC 2014: Select Resources Using Topic Models
2014-11-01
Qiuyue Wang, Shaochen Shi, Wei Cao (School of Information, Renmin University of China, Beijing)
Trainable unit selection speech synthesis under statistical framework
Institute of Scientific and Technical Information of China (English)
WANG RenHua; DAI LiRong; LING ZhenHua; HU Yu
2009-01-01
This paper proposes a trainable unit selection speech synthesis method based on a statistical modeling framework. At the training stage, acoustic features are extracted from the training database and statistical models are estimated for each feature. During synthesis, the optimal candidate unit sequence is searched out from the database following the maximum likelihood criterion derived from the trained models. Finally, the waveforms of the optimal candidate units are concatenated to produce synthetic speech. Experimental results show that this method can improve the automation of system construction and the naturalness of synthetic speech effectively compared with the conventional unit selection synthesis method. Furthermore, this paper presents a minimum unit selection error criterion for model training according to the characteristics of unit selection speech synthesis and adopts discriminative training for model parameter estimation. This criterion can finally achieve the full automation of system construction and further improve the naturalness of synthetic speech.
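The maximum-likelihood search over candidate units is essentially a Viterbi pass over a lattice, trading a per-unit target cost (a negative log-likelihood under the trained models) against a concatenation cost between adjacent units. A generic sketch with made-up unit names and costs, not the authors' system:

```python
def select_units(candidates, target_cost, concat_cost):
    # Viterbi over the candidate lattice: minimize summed target + concatenation cost,
    # equivalent to maximum likelihood when costs are negative log-probabilities.
    best = {u: (target_cost(0, u), [u]) for u in candidates[0]}
    for t in range(1, len(candidates)):
        layer = {}
        for u in candidates[t]:
            prev, (cost, path) = min(best.items(),
                                     key=lambda kv: kv[1][0] + concat_cost(kv[0], u))
            layer[u] = (cost + concat_cost(prev, u) + target_cost(t, u), path + [u])
        best = layer
    return min(best.values(), key=lambda v: v[0])[1]

tc = {(0, "a1"): 1.0, (0, "a2"): 0.5, (1, "b1"): 1.0, (1, "b2"): 2.0}
cc = {("a1", "b1"): 0.1, ("a1", "b2"): 0.1, ("a2", "b1"): 2.0, ("a2", "b2"): 0.1}
path = select_units([["a1", "a2"], ["b1", "b2"]],
                    lambda t, u: tc[(t, u)], lambda p, u: cc[(p, u)])
print(path)  # -> ['a1', 'b1']
```

Note that the globally best sequence starts with a1 even though a2 has the lower target cost: the expensive a2→b1 join changes the optimum, which is exactly why a joint (Viterbi) search beats greedy per-position selection.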
A guide to Bayesian model selection for ecologists
Hooten, Mevin B.; Hobbs, N.T.
2015-01-01
The steady upward trend in the use of model selection and Bayesian methods in ecological research has made it clear that both approaches to inference are important for modern analysis of models and data. However, in teaching Bayesian methods and in working with our research colleagues, we have noticed a general dissatisfaction with the available literature on Bayesian model selection and multimodel inference. Students and researchers new to Bayesian methods quickly find that the published advice on model selection is often preferential in its treatment of options for analysis, frequently advocating one particular method above others. The recent appearance of many articles and textbooks on Bayesian modeling has provided welcome background on relevant approaches to model selection in the Bayesian framework, but most of these are either very narrowly focused in scope or inaccessible to ecologists. Moreover, the methodological details of Bayesian model selection approaches are spread thinly throughout the literature, appearing in journals from many different fields. Our aim with this guide is to condense the large body of literature on Bayesian approaches to model selection and multimodel inference and present it specifically for quantitative ecologists as neutrally as possible. We also bring to light a few important and fundamental concepts relating directly to model selection that seem to have gone unnoticed in the ecological literature. Throughout, we provide only a minimal discussion of philosophy, preferring instead to examine the breadth of approaches as well as their practical advantages and disadvantages. This guide serves as a reference for ecologists using Bayesian methods, so that they can better understand their options and can make an informed choice that is best aligned with their goals for inference.
Selection of an appropriately simple storm runoff model
Directory of Open Access Journals (Sweden)
A. I. J. M. van Dijk
2009-09-01
Full Text Available Alternative conceptual storm runoff models, including several published ones, were evaluated against storm flow time series for 260 catchments in Australia (23–1902 km^{2}). The original daily streamflow data were separated into baseflow and storm flow components, and from these, event rainfall and storm flow totals were estimated. For each tested model structure, the number of free parameters was reduced in stages. The appropriate balance between simplicity and explanatory power was decided based on Akaike's Final Prediction Error Criterion and evidence of parameter equivalence. The majority of catchments showed storm recession half-times on the order of a day, with more rapid drainage in dry catchments. Overland and channel travel time did not appear to be an important driver of storm flow recession. A storm runoff model with two free parameters (one related to storm event size, the other to antecedent baseflow) and a fixed initial loss of 12 mm provided the optimal model structure. The optimal model had some features similar to the Soil Conservation Service Curve Number technique, but performed on average 12 to 19% better. The non-linear relationship between event rainfall and event runoff may be associated with saturated area expansion during storms and/or the relationship between storm event size and peak rainfall intensity. Antecedent baseflow was a strong predictor of runoff response. A simple conceptual relationship between groundwater storage and saturated catchment area proved adequate and produced realistic estimates of saturated area of <0.1% for the driest and >5% for the wettest catchments.
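Akaike's Final Prediction Error criterion used above to balance simplicity against explanatory power is FPE = (RSS/n) · (n + k)/(n − k) for a model with k free parameters. The toy below (our own simulated data, not the paper's catchments) scores polynomial fits of increasing order on a truly linear signal:

```python
import numpy as np

def fpe(rss, n, k):
    # Akaike's Final Prediction Error: residual variance times a parameter penalty
    return (rss / n) * (n + k) / (n - k)

rng = np.random.default_rng(1)
x = np.linspace(0.0, 1.0, 80)
y = 2.0 * x - 1.0 + 0.05 * rng.standard_normal(80)   # truly linear signal

scores = {}
for k in range(1, 7):                  # k parameters = polynomial of degree k - 1
    coef = np.polyfit(x, y, k - 1)
    rss = float(((np.polyval(coef, x) - y) ** 2).sum())
    scores[k] = fpe(rss, len(x), k)
best = min(scores, key=scores.get)
print(best)   # typically the two-parameter (linear) model
```

Extra parameters always shrink the residual sum of squares, but the (n + k)/(n − k) factor grows with k, so the criterion typically settles on the simplest adequate model.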
The Use of Evolution in a Central Action Selection Model
Directory of Open Access Journals (Sweden)
F. Montes-Gonzalez
2007-01-01
Full Text Available The use of effective central selection provides flexibility in design by offering modularity and extensibility. In earlier papers we focused on the development of a simple centralized selection mechanism. Our current goal is to integrate evolutionary methods into the design of non-sequential behaviours and the tuning of specific parameters of the selection model. The foraging behaviour of an animal robot (animat) has been modelled in order to integrate the sensory information from the robot to perform selection that is nearly optimized by the use of genetic algorithms. In this paper we present how selection through optimization finally arranges the pattern of presented behaviours for the foraging task. Hence, the execution of specific parts of a behavioural pattern may be ruled out by the tuning of these parameters. Furthermore, the intensive use of colour segmentation from a colour camera for locating a cylinder sets a burden on the calculations carried out by the genetic algorithm.
Partner Selection Optimization Model of Agricultural Enterprises in Supply Chain
Directory of Open Access Journals (Sweden)
Feipeng Guo
2013-10-01
Full Text Available As correctly selecting partners in the supply chain of agricultural enterprises becomes more and more important, a large number of partner evaluation techniques are widely used in the field of agricultural science research. This study established a partner selection model to optimize the issue of agricultural supply chain partner selection. Firstly, it constructed a comprehensive evaluation index system after analyzing the real characteristics of the agricultural supply chain. Secondly, a heuristic method for attribute reduction based on rough set theory and principal component analysis was proposed, which can reduce multiple attributes to a few principal components while retaining the effective evaluation information. Finally, it used an improved BP neural network, which has a self-learning function, to select partners. The empirical analysis of an agricultural enterprise shows that this model is effective and feasible for practical partner selection.
A Hybrid Multiple Criteria Decision Making Model for Supplier Selection
Directory of Open Access Journals (Sweden)
Chung-Min Wu
2013-01-01
Full Text Available Sustainable supplier selection is a vital part of managing a sustainable supply chain. In this study, a hybrid multiple criteria decision making (MCDM) model is applied to select the optimal supplier. The fuzzy Delphi method, which can lead to better criteria selection, is used to modify the criteria. Considering the interdependence among the selection criteria, the analytic network process (ANP) is then used to obtain their weights. To avoid the calculation and additional pairwise comparisons of ANP, a technique for order preference by similarity to ideal solution (TOPSIS) is used to rank the alternatives. The unique features of this study are the combination of the fuzzy Delphi method, ANP, and TOPSIS into an MCDM model for supplier selection and its application to a real case.
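The TOPSIS ranking step of such a hybrid model fits in a few lines of NumPy. In the paper the criterion weights come from ANP; here they are simply assumed, and the supplier scores are invented for illustration:

```python
import numpy as np

def topsis(matrix, weights, benefit):
    # Rank alternatives (rows) by relative closeness to the ideal solution.
    # benefit[j] is True when larger values of criterion j are better.
    M = np.asarray(matrix, dtype=float)
    V = M / np.linalg.norm(M, axis=0) * np.asarray(weights)   # normalize, then weight
    ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
    worst = np.where(benefit, V.min(axis=0), V.max(axis=0))
    d_pos = np.linalg.norm(V - ideal, axis=1)
    d_neg = np.linalg.norm(V - worst, axis=1)
    return np.argsort(-(d_neg / (d_pos + d_neg)))             # best alternative first

# Three suppliers scored on cost (lower is better), quality, and service
ranking = topsis([[1, 9, 9], [5, 5, 5], [9, 1, 1]],
                 weights=[0.5, 0.3, 0.2], benefit=[False, True, True])
print(ranking)  # -> [0 1 2]
```

Supplier 0 dominates on every criterion, so it coincides with the ideal solution and ranks first regardless of the assumed weights.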
Sensor Calibration Design Based on D-Optimality Criterion
Directory of Open Access Journals (Sweden)
Hajiyev Chingiz
2016-09-01
Full Text Available In this study, a procedure for optimal selection of measurement points using the D-optimality criterion to find the best calibration curves of measurement sensors is proposed. The coefficients of calibration curve are evaluated by applying the classical Least Squares Method (LSM. As an example, the problem of optimal selection for standard pressure setters when calibrating a differential pressure sensor is solved. The values obtained from the D-optimum measurement points for calibration of the differential pressure sensor are compared with those from actual experiments. Comparison of the calibration errors corresponding to the D-optimal, A-optimal and Equidistant calibration curves is done.
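For a small candidate grid, D-optimal selection of measurement points can be done exhaustively: maximize det(XᵀX), where X is the design matrix of the calibration polynomial fitted by LSM. For a straight-line calibration curve this pushes the measurements to the ends of the range. A sketch under that simplification, not the authors' procedure:

```python
import itertools
import numpy as np

def d_optimal_points(candidates, k, degree=1):
    # Exhaustively pick k calibration points maximizing det(X^T X),
    # where X is the Vandermonde design matrix of the calibration polynomial.
    best_det, best_pts = -1.0, None
    for pts in itertools.combinations(candidates, k):
        X = np.vander(np.asarray(pts, dtype=float), degree + 1)
        d = np.linalg.det(X.T @ X)
        if d > best_det:
            best_det, best_pts = d, pts
    return best_pts

# Candidate pressure settings 0..10; two points for a linear calibration curve
pts = d_optimal_points(range(11), 2, degree=1)
print(pts)  # -> (0, 10): the endpoints are D-optimal for a straight line
```

For two points and a straight line, det(XᵀX) reduces to (x₁ − x₂)², which makes the endpoint answer easy to verify by hand; higher-degree calibration curves spread the optimal points across the interior as well.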
A complete generalized adjustment criterion
Perković, Emilija; Textor, Johannes; Kalisch, Markus; Maathuis, Marloes H.
2015-01-01
Covariate adjustment is a widely used approach to estimate total causal effects from observational data. Several graphical criteria have been developed in recent years to identify valid covariates for adjustment from graphical causal models. These criteria can handle multiple causes and latent confounders.
Statistical model selection with “Big Data”
Directory of Open Access Journals (Sweden)
Jurgen A. Doornik
2015-12-01
Full Text Available Big Data offer potential benefits for statistical modelling, but confront problems including an excess of false positives, mistaking correlations for causes, ignoring sampling biases, and selecting by inappropriate methods. We consider the many important requirements when searching for a data-based relationship using Big Data, and the possible role of Autometrics in that context. Paramount considerations include embedding relationships in general initial models, possibly restricting the number of variables to be selected over by non-statistical criteria (the formulation problem); using good quality data on all variables, analyzed with tight significance levels by a powerful selection procedure, retaining available theory insights (the selection problem); and testing for relationships being well specified and invariant to shifts in explanatory variables (the evaluation problem), using a viable approach that resolves the computational problem of immense numbers of possible models.
Selection Bias in Educational Transition Models: Theory and Empirical Evidence
DEFF Research Database (Denmark)
Holm, Anders; Jæger, Mads
Most studies using Mare's (1980, 1981) seminal model of educational transitions find that the effect of family background decreases across transitions. Recently, Cameron and Heckman (1998, 2001) have argued that the "waning coefficients" in the Mare model are driven by selection on unobserved variables. This paper, first, explains theoretically how selection on unobserved variables leads to waning coefficients and, second, illustrates empirically how selection leads to biased estimates of the effect of family background on educational transitions. Our empirical analysis, using data from the United States, United Kingdom, Denmark, and the Netherlands, shows that when we take selection into account the effect of family background variables on educational transitions is largely constant across transitions. We also discuss several difficulties in estimating educational transition models.
Multicriteria framework for selecting a process modelling language
Scanavachi Moreira Campos, Ana Carolina; Teixeira de Almeida, Adiel
2016-01-01
The choice of process modelling language can affect business process management (BPM) since each modelling language shows different features of a given process and may limit the ways in which a process can be described and analysed. However, choosing the appropriate modelling language for process modelling has become a difficult task because of the availability of a large number of modelling languages and also due to the lack of guidelines on evaluating and comparing languages so as to assist in selecting the most appropriate one. This paper proposes a framework for selecting a modelling language in accordance with the purposes of modelling. This framework is based on the semiotic quality framework (SEQUAL) for evaluating process modelling languages and a multicriteria decision aid (MCDA) approach in order to select the most appropriate language for BPM. This study does not attempt to set out new forms of assessment and evaluation criteria, but does attempt to demonstrate how two existing approaches can be combined so as to solve the problem of selecting a modelling language. The framework is described in this paper and then demonstrated by means of an example. Finally, the advantages and disadvantages of using SEQUAL and MCDA in an integrated manner are discussed.
Institute of Scientific and Technical Information of China (English)
袁小平; 刘红岩; 王志乔
2012-01-01
Most softening constitutive models of rock describe the hardening and softening properties of the material by introducing plastic internal variables into the hardening function, without considering the damage effects of micro-crack growth or the difference between the initial yield strength f0 and the yield limit fu under uniaxial tensile and compressive loadings. Here, a plastic yield criterion is used together with damage criteria to simulate the physical behavior of rock-like materials based on the D-P criterion, and an elastoplastic damage constitutive model with its numerical algorithm is proposed. Borja's hardening/softening strain function is employed as the plastic yield function, indicating that the plastic internal variables and stress states are two important factors in the hardening function. Volume expansion caused by micro-crack growth is responsible for rock damage evolution D, which can be characterized by the proposed function of volumetric strain. The code of the elastoplastic damage constitutive model of rock is implemented using a return-mapping implicit integration algorithm. The proposed model is applied to uniaxial tensile and compressive tests, and the results agree well with the characteristics of rock-like materials and experimental curves.
Brandt, Laura A.; Benscoter, Allison; Harvey, Rebecca G.; Speroterra, Carolina; Bucklin, David N.; Romanach, Stephanie; Watling, James I.; Mazzotti, Frank J.
2017-01-01
Climate envelope models are widely used to describe the potential future distribution of species under different climate change scenarios. It is broadly recognized that there are both strengths and limitations to using climate envelope models and that outcomes are sensitive to initial assumptions, inputs, and modeling methods. Selection of predictor variables, a central step in modeling, is one of the areas where different techniques can yield varying results. Selection of climate variables to use as predictors is often done using statistical approaches that develop correlations between occurrences and climate data. These approaches have received criticism in that they rely on the statistical properties of the data rather than directly incorporating biological information about species responses to temperature and precipitation. We evaluated and compared models and prediction maps for 15 threatened or endangered species in Florida based on two variable selection techniques: expert opinion and a statistical method. We compared model performance between these two approaches for contemporary predictions, and the spatial correlation, spatial overlap, and area predicted for contemporary and future climate predictions. In general, experts identified more variables as being important than the statistical method, and there was low overlap in the variable sets. Models had high performance metrics (>0.9 for area under the curve (AUC) and >0.7 for true skill statistic (TSS)). Spatial overlap, which compares the spatial configuration between maps constructed using the different variable selection techniques, was only moderate overall (about 60%), with a great deal of variability across species. The difference in spatial overlap was even greater under future climate projections, indicating additional divergence of model outputs from different variable selection techniques. Our work is in agreement with other studies which have found that for broad-scale species distribution modeling, using
Models of microbiome evolution incorporating host and microbial selection.
Zeng, Qinglong; Wu, Steven; Sukumaran, Jeet; Rodrigo, Allen
2017-09-25
Numerous empirical studies suggest that hosts and microbes exert reciprocal selective effects on their ecological partners. Nonetheless, we still lack an explicit framework to model the dynamics of both hosts and microbes under selection. In a previous study, we developed an agent-based forward-time computational framework to simulate the neutral evolution of host-associated microbial communities in a constant-sized, unstructured population of hosts. These neutral models allowed offspring to sample microbes randomly from parents and/or from the environment. Additionally, the environmental pool of available microbes was constituted by fixed and persistent microbial OTUs and by contributions from host individuals in the preceding generation. In this paper, we extend our neutral models to allow selection to operate on both hosts and microbes. We do this by constructing a phenome for each microbial OTU consisting of a sample of traits that influence host and microbial fitnesses independently. Microbial traits can influence the fitness of hosts ("host selection") and the fitness of microbes ("trait-mediated microbial selection"). Additionally, the fitness effects of traits on microbes can be modified by their hosts ("host-mediated microbial selection"). We simulate the effects of these three types of selection, individually or in combination, on microbiome diversities and the fitnesses of hosts and microbes over several thousand generations of hosts. We show that microbiome diversity is strongly influenced by selection acting on microbes. Selection acting on hosts only influences microbiome diversity when there is near-complete direct or indirect parental contribution to the microbiomes of offspring. Unsurprisingly, microbial fitness increases under microbial selection. Interestingly, when host selection operates, host fitness only increases under two conditions: (1) when there is a strong parental contribution to microbial communities or (2) in the absence of a strong
Testing exclusion restrictions and additive separability in sample selection models
DEFF Research Database (Denmark)
Huber, Martin; Mellace, Giovanni
2014-01-01
Standard sample selection models with non-randomly censored outcomes assume (i) an exclusion restriction (i.e., a variable affecting selection, but not the outcome) and (ii) additive separability of the errors in the selection process. This paper proposes tests for the joint satisfaction of these assumptions by applying the approach of Huber and Mellace (Testing instrument validity for LATE identification based on inequality moment constraints, 2011) (for testing instrument validity under treatment endogeneity) to the sample selection framework. We show that the exclusion restriction and additive separability imply two testable inequality constraints that come from both point identifying and bounding the outcome distribution of the subpopulation that is always selected/observed. We apply the tests to two variables for which the exclusion restriction is frequently invoked in female wage regressions: non...
Periodic Integration: Further Results on Model Selection and Forecasting
Ph.H.B.F. Franses (Philip Hans); R. Paap (Richard)
1996-01-01
This paper considers model selection and forecasting issues in two closely related models for nonstationary periodic autoregressive time series [PAR]. Periodically integrated seasonal time series [PIAR] need a periodic differencing filter to remove the stochastic trend. On the other
Institute of Scientific and Technical Information of China (English)
苗胜军; 杨志军; 龙超; 谭文辉
2013-01-01
To explore a strength criterion suited to the loading failure of brittle hard rock, migmatitic granite from the Xingshan iron mine was taken as the study object. Based on physical and mechanical parameters obtained from laboratory tests and scanned images of rock sample sections, a micro-geometric particle model of the migmatitic granite was built using particle flow theory and the PFC program; loading command streams were written in the Fish language and the relevant functions adjusted to simulate uniaxial and triaxial (σ3 = 40 MPa) rigid loading tests. Through comprehensive comparison of the complete stress-strain curves from experiment and simulation, together with acoustic emission and crack monitoring results, the micro-mechanical characteristics of migmatitic granite under load and its micro-to-macro fracture evolution laws were obtained. On this basis, combining the uniaxial rigid loading test curves, the curves relating crack number and frictional energy to axial strain, and FLAC simulations, the parameters of the cohesion weakening and frictional strengthening (CWFS) strength criterion model for brittle hard rock were optimized and verified. The CWFS model parameters obtained for the Xingshan migmatitic granite are: initial cohesion 23 MPa, residual cohesion 4.3 MPa, initial friction angle 0°, residual friction angle 46.3°, and critical plastic strains εpc and εpf of 0.0015 and 0.0037, respectively. These results are of significance for studying the failure mechanism and mechanical constitutive behaviour of the surrounding rock mass in the transition from open-pit to underground mining at the Xingshan iron mine, and for engineering stability analysis.
Quantile hydrologic model selection and model structure deficiency assessment: 1. Theory
Pande, S.
2013-01-01
A theory for quantile based hydrologic model selection and model structure deficiency assessment is presented. The paper demonstrates that the degree to which a model selection problem is constrained by the model structure (measured by the Lagrange multipliers of the constraints) quantifies
AN EXPERT SYSTEM MODEL FOR THE SELECTION OF TECHNICAL PERSONNEL
Directory of Open Access Journals (Sweden)
Emine COŞGUN
2005-03-01
In this study, a model has been developed for the selection of technical personnel. In the model, Visual Basic has been used as the user interface, Microsoft Access has been utilized as the database system, and the CLIPS program has been used as the expert system program. The proposed model has been developed by utilizing expert system technology. In the personnel selection process, only the pre-evaluation of the applicants has been taken into consideration. Instead of replacing the expert, a decision support program has been developed to analyze the data gathered from the job application forms. The attached study will assist the expert in making faster and more accurate decisions.
Novel web service selection model based on discrete group search.
Zhai, Jie; Shao, Zhiqing; Guo, Yi; Zhang, Haiteng
2014-01-01
In our earlier work, we present a novel formal method for the semiautomatic verification of specifications and for describing web service composition components by using abstract concepts. After verification, the instantiations of components were selected to satisfy the complex service performance constraints. However, selecting an optimal instantiation, which comprises different candidate services for each generic service, from a large number of instantiations is difficult. Therefore, we present a new evolutionary approach on the basis of the discrete group search service (D-GSS) model. With regard to obtaining the optimal multiconstraint instantiation of the complex component, the D-GSS model has competitive performance compared with other service selection models in terms of accuracy, efficiency, and ability to solve high-dimensional service composition component problems. We propose the cost function and the discrete group search optimizer (D-GSO) algorithm and study the convergence of the D-GSS model through verification and test cases.
Keily, Jack; MacGregor, Dana R; Smith, Robert W; Millar, Andrew J; Halliday, Karen J; Penfield, Steven
2013-10-01
Circadian clocks confer advantages by restricting biological processes to certain times of day through the control of specific phased outputs. Control of temperature signalling is an important function of the plant oscillator, but the architecture of the gene network controlling cold signalling by the clock is not well understood. Here we use a model ensemble fitted to time-series data and a corrected Akaike Information Criterion (AICc) analysis to extend a dynamic model to include the control of the key cold-regulated transcription factors C-REPEAT BINDING FACTORs 1-3 (CBF1, CBF2, CBF3). AICc was combined with in silico analysis of genetic perturbations in the model ensemble, and selected a model that predicted mutant phenotypes and connections between evening-phased circadian clock components and CBF3 transcriptional control, but these connections were not shared by CBF1 and CBF2. In addition, our model predicted the correct gating of CBF transcription by cold only when the cold signal originated from the clock mechanism itself, suggesting that the clock has an important role in temperature signal transduction. Our data shows that model selection could be a useful method for the expansion of gene network models. © 2013 The Authors The Plant Journal © 2013 John Wiley & Sons Ltd.
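The corrected Akaike Information Criterion used for the model ranking above can be sketched generically as follows. This is a minimal illustration of the AICc formula with made-up log-likelihoods and parameter counts, not the authors' model ensemble or fitting code:

```python
def aicc(log_likelihood, k, n):
    """Corrected Akaike Information Criterion for a model with k
    parameters fitted to n data points; lower values are preferred."""
    aic = 2 * k - 2 * log_likelihood
    # small-sample correction term; requires n > k + 1
    return aic + (2 * k * (k + 1)) / (n - k - 1)

# Hypothetical comparison: the richer model must improve the likelihood
# enough to offset its extra parameters, or the smaller model wins.
score_small = aicc(log_likelihood=-120.0, k=4, n=50)
score_large = aicc(log_likelihood=-118.5, k=9, n=50)
best = "small" if score_small < score_large else "large"
```

Because the correction term grows as k approaches n, AICc penalizes extra network connections more strongly than plain AIC on short time-series datasets.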
Selection of climate change scenario data for impact modelling
DEFF Research Database (Denmark)
Sloth Madsen, M; Fox Maule, C; MacKellar, N
2012-01-01
Impact models investigating climate change effects on food safety often need detailed climate data. The aim of this study was to select climate change projection data for selected crop phenology and mycotoxin impact models. Using the ENSEMBLES database of climate model output, this study illustrates how the projected climate change signal of important variables such as temperature, precipitation and relative humidity depends on the choice of the climate model. Using climate change projections from at least two different climate models is recommended to account for model uncertainty. To make the climate projections suitable for impact analysis at the local scale, a weather generator approach was adopted. As the weather generator did not treat all the necessary variables, an ad-hoc statistical method was developed to synthesise realistic values of missing variables. The method is presented...
Fuzzy MCDM Model for Risk Factor Selection in Construction Projects
Directory of Open Access Journals (Sweden)
Pejman Rezakhani
2012-11-01
Risk factor selection is an important step in a successful risk management plan. There are many risk factors in a construction project, and an effective and systematic risk selection process allows the most critical risks to be distinguished for closer attention. In this paper, through a comprehensive literature survey, the most significant risk factors in a construction project are classified in a hierarchical structure. For effective risk factor selection, a modified rational multi-criteria decision making (MCDM) model is developed. This model is a consensus rule based model and has the optimization property of rational models. By applying fuzzy logic to this model, uncertainty factors in group decision making, such as experts' influence weights and their preferences and judgments for the risk selection criteria, are assessed. An intelligent checking process to verify the logical consistency of experts' preferences is implemented during the decision making process. The solution inferred from this method has the highest degree of acceptance among group members, and the consistency of individual preferences is checked by inference rules. This is an efficient and effective approach to prioritize and select risks based on decisions made by a group of experts in construction projects. The applicability of the presented method is assessed through a case study.
A Hybrid Program Projects Selection Model for Nonprofit TV Stations
Directory of Open Access Journals (Sweden)
Kuei-Lun Chang
2015-01-01
This study develops a hybrid multiple criteria decision making (MCDM) model to select program projects for nonprofit TV stations on the basis of managers' perceptions. Using the concepts of the balanced scorecard (BSC) and corporate social responsibility (CSR), we collect criteria for selecting the best program project. The fuzzy Delphi method, which can lead to better criteria selection, is used to modify the criteria. Next, considering the interdependence among the selection criteria, the analytic network process (ANP) is used to obtain their weights. To avoid the calculation and additional pairwise comparisons of ANP, the technique for order preference by similarity to ideal solution (TOPSIS) is used to rank the alternatives. A case study is presented to demonstrate the applicability of the proposed model.
A SUPPLIER SELECTION MODEL FOR SOFTWARE DEVELOPMENT OUTSOURCING
Directory of Open Access Journals (Sweden)
Hancu Lucian-Viorel
2010-12-01
This paper presents a multi-criteria decision making model used for supplier selection for software development outsourcing on e-marketplaces. This model can be used in auctions. The supplier selection process has become complex and difficult over the last twenty years, since the Internet plays an important role in business management. Companies have to concentrate their efforts on their core activities, and the other activities should be realized by outsourcing. They can achieve significant cost reductions by using e-marketplaces in their purchase process and by using decision support systems for supplier selection. Many approaches for the supplier evaluation and selection process have been proposed in the literature. The performance of potential suppliers is evaluated using multi-criteria decision making methods rather than considering a single factor such as cost.
Adverse Selection Models with Three States of Nature
Directory of Open Access Journals (Sweden)
Daniela MARINESCU
2011-02-01
In the paper we analyze an adverse selection model with three states of nature, where both the Principal and the Agent are risk neutral. When solving the model, we use the informational rents and the efforts as variables. We derive the optimal contract in the situation of asymmetric information. The paper ends with the characteristics of the optimal contract and the main conclusions of the model.
Directory of Open Access Journals (Sweden)
E. Mrabet
2014-01-01
The modal parameters of a structure estimated from ambient vibration measurements are always subject to bias and variance errors. Accordingly, the concept of the stabilization diagram is introduced to help users identify the correct model. One of the most important problems in using this diagram is the appearance of spurious modes, which should be discriminated to simplify mode selection. This study presents a new stabilization criterion obtained through a novel numerical implementation of the stabilization diagram, and a discussion of model validation employing the power spectral density. As an application, an aircraft skeleton is used.
Bayesian model selection for constrained multivariate normal linear models
Mulder, J.
2010-01-01
The expectations that researchers have about the structure in the data can often be formulated in terms of equality constraints and/or inequality constraints on the parameters in the model that is used. In a (M)AN(C)OVA model, researchers have expectations about the differences between the
Filamentary and hierarchical pictures - Kinetic energy criterion
Klypin, Anatoly A.; Melott, Adrian L.
1992-01-01
We present a new criterion for the formation of second-generation filaments. The criterion, called the kinetic energy ratio, KR, is based on a comparison of peculiar velocities at different scales. We suggest that the clumpiness of the distribution in some cases might be less important than the 'coldness' or 'hotness' of the flow for the formation of coherent structures. The kinetic energy ratio is analogous to the Mach number except for one essential difference. If at some scale KR is greater than 1, as estimated at the linear stage, then when fluctuations of this scale reach nonlinearity, the objects they produce must be anisotropic ('filamentary'). In the case of power-law initial spectra, the kinetic energy ratio criterion suggests that the borderline is the power spectrum with slope n = -1.
Aesthetical criterion in art and science
Milovanović, Miloš
2016-01-01
In the paper, the authors elaborate on recently published research concerning the originality of artworks in terms of self-organization in complex systems physics. It has been demonstrated that the originality issue, so conceived, leads to the criterion of a substantial aesthetics whose applicability is not restricted to the fine arts domain, covering also physics, biology, cosmology and other fields construed in complex systems terms. Moreover, it is a truth criterion related to the traditional conception of personality, revealing an ontological context transcendent to the gnoseological dualism of subjective and objective reality that is characteristic of modern science and the humanities. Thus, it is considered to be an aesthetical criterion substantiating art and science, as well as the other developments of the postmodern era. Its impact on psychology, education, ecology, culture and other humanities is briefly indicated.
Genetic signatures of natural selection in a model invasive ascidian
Lin, Yaping; Chen, Yiyong; Yi, Changho; Fong, Jonathan J.; Kim, Won; Rius, Marc; Zhan, Aibin
2017-01-01
Invasive species represent promising models to study species' responses to rapidly changing environments. Although local adaptation frequently occurs during contemporary range expansion, the associated genetic signatures at both population and genomic levels remain largely unknown. Here, we use genome-wide gene-associated microsatellites to investigate genetic signatures of natural selection in a model invasive ascidian, Ciona robusta. Population genetic analyses of 150 individuals sampled in Korea, New Zealand, South Africa and Spain showed significant genetic differentiation among populations. Based on outlier tests, we found a high incidence of signatures of directional selection at 19 loci. Hitchhiking mapping analyses identified 12 directional selective sweep regions, and all selective sweep windows on chromosomes were narrow (~8.9 kb). Further analyses identified 132 candidate genes under selection. When we compared our genetic data and six crucial environmental variables, 16 putatively selected loci showed significant correlation with these environmental variables. This suggests that the local environmental conditions have left significant signatures of selection at both population and genomic levels. Finally, we identified "plastic" genomic regions and genes that are promising regions in which to investigate evolutionary responses to rapid environmental change in C. robusta. PMID:28266616
A Simple Isolation Criterion based on 3D Redshift Space Mapping
Spector, Oded
2009-01-01
We selected a sample of galaxies, extremely isolated in 3D redshift space, based on data from NED and the ongoing ALFALFA HI (21cm) survey. A simple selection criterion was employed: having no neighbors closer than 300 km/s in 3D redshift space. The environments of galaxies, selected using this criterion and NED data alone, were analyzed theoretically using a constrained simulation of the local Universe, and were found to be an order of magnitude less dense than environments around randomly selected galaxies. One third of the galaxies selected using NED data alone did not pass the criterion when tested with ALFALFA data, implying that the use of unbiased HI data significantly improves the quality of the sample.
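The selection rule described above, no neighbour closer than 300 km/s in 3D redshift space, amounts to a simple distance filter. The sketch below illustrates the idea on a toy catalogue; the coordinates and the Euclidean-distance treatment are illustrative assumptions, not the survey's actual data handling:

```python
import numpy as np

def isolated(positions, threshold=300.0):
    """Return a boolean mask marking galaxies with no neighbour closer
    than `threshold` (same units as `positions`, e.g. km/s) in 3D
    redshift space."""
    pos = np.asarray(positions, dtype=float)
    # pairwise Euclidean distances; diagonal set to inf so a galaxy
    # is never counted as its own neighbour
    diff = pos[:, None, :] - pos[None, :, :]
    dist = np.sqrt((diff ** 2).sum(axis=-1))
    np.fill_diagonal(dist, np.inf)
    return dist.min(axis=1) > threshold

# toy catalogue: two close galaxies and one far away
galaxies = [[0, 0, 0], [100, 0, 0], [5000, 0, 0]]
mask = isolated(galaxies)  # only the third galaxy passes the criterion
```

The O(n²) pairwise matrix is fine for small samples; a k-d tree would be the natural replacement for a full survey catalogue.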
IT vendor selection model by using structural equation model & analytical hierarchy process
Maitra, Sarit; Dominic, P. D. D.
2012-11-01
Selecting and evaluating the right vendors is imperative for an organization's global marketplace competitiveness. Improper selection and evaluation of potential vendors can dwarf an organization's supply chain performance. Numerous studies have demonstrated that firms consider multiple criteria when selecting key vendors. This research intends to develop a new hybrid model for vendor selection process with better decision making. The new proposed model provides a suitable tool for assisting decision makers and managers to make the right decisions and select the most suitable vendor. This paper proposes a Hybrid model based on Structural Equation Model (SEM) and Analytical Hierarchy Process (AHP) for long-term strategic vendor selection problems. The five steps framework of the model has been designed after the thorough literature study. The proposed hybrid model will be applied using a real life case study to assess its effectiveness. In addition, What-if analysis technique will be used for model validation purpose.
Robust model selection and the statistical classification of languages
García, J. E.; González-López, V. A.; Viola, M. L. L.
2012-10-01
In this paper we address the problem of model selection for the set of finite memory stochastic processes with finite alphabet, when the data is contaminated. We consider m independent samples, with more than half of them being realizations of the same stochastic process with law Q, which is the one we want to retrieve. We devise a model selection procedure such that for a sample size large enough, the selected process is the one with law Q. Our model selection strategy is based on estimating relative entropies to select a subset of samples that are realizations of the same law. Although the procedure is valid for any family of finite order Markov models, we will focus on the family of variable length Markov chain models, which include the fixed order Markov chain model family. We define the asymptotic breakdown point (ABDP) for a model selection procedure, and we show the ABDP for our procedure. This means that if the proportion of contaminated samples is smaller than the ABDP, then, as the sample size grows our procedure selects a model for the process with law Q. We also use our procedure in a setting where we have one sample conformed by the concatenation of sub-samples of two or more stochastic processes, with most of the subsamples having law Q. We conducted a simulation study. In the application section we address the question of the statistical classification of languages according to their rhythmic features using speech samples. This is an important open problem in phonology. A persistent difficulty on this problem is that the speech samples correspond to several sentences produced by diverse speakers, corresponding to a mixture of distributions. The usual procedure to deal with this problem has been to choose a subset of the original sample which seems to best represent each language. The selection is made by listening to the samples. In our application we use the full dataset without any preselection of samples. We apply our robust methodology estimating
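The core step of the procedure, estimating relative entropies between samples to flag those that follow a different law, can be illustrated in a much-simplified setting. The sketch below compares zeroth-order (i.i.d.) empirical symbol distributions rather than variable length Markov chains, and the samples and smoothing scheme are illustrative assumptions:

```python
from collections import Counter
import math

def empirical_dist(sample, alphabet):
    """Empirical symbol distribution with add-one smoothing so the
    relative entropy below is always finite."""
    counts = Counter(sample)
    total = len(sample) + len(alphabet)
    return {a: (counts[a] + 1) / total for a in alphabet}

def relative_entropy(p, q):
    """Kullback-Leibler divergence D(p || q) in nats."""
    return sum(p[a] * math.log(p[a] / q[a]) for a in p)

alphabet = "ab"
clean = "abababababab"      # roughly uniform over {a, b}
outlier = "aaaaaaaaaaab"    # heavily skewed towards 'a'
p = empirical_dist(clean, alphabet)
q = empirical_dist(outlier, alphabet)
# a large divergence flags the second sample as following a different law
d = relative_entropy(p, q)
```

In the paper's setting the same idea is applied pairwise across the m samples, and the majority subset with mutually small divergences is retained for model selection.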
Selecting Optimal Subset of Features for Student Performance Model
Directory of Open Access Journals (Sweden)
Hany M. Harb
2012-09-01
Educational data mining (EDM) is a new and growing research area in which data mining concepts are used in the educational field for the purpose of extracting useful information on student behavior in the learning process. Classification methods like decision trees, rule mining, and Bayesian networks can be applied to educational data to predict student behavior, such as performance in an examination. This prediction may help in student evaluation. As feature selection influences the predictive accuracy of any performance model, it is essential to study in detail the effectiveness of the student performance model in connection with feature selection techniques. The main objective of this work is to achieve high predictive performance by adopting various feature selection techniques to increase predictive accuracy with the least number of features. The outcomes show a reduction in computational time and construction cost in both the training and classification phases of the student performance model.
Short-Run Asset Selection using a Logistic Model
Directory of Open Access Journals (Sweden)
Walter Gonçalves Junior
2011-06-01
Investors constantly look for significant predictors and accurate models to forecast future results, whose occasional efficacy ends up being neutralized by market efficiency. Regardless, such predictors are widely used in seeking better (and more unique) perceptions. This paper aims to investigate to what extent some of the most notorious indicators have discriminatory power to select stocks, and whether it is feasible with such variables to build models that could anticipate those with good performance. To that end, logistic regressions were conducted with stocks traded at Bovespa, using the selected indicators as explanatory variables. Among the indicators investigated, Bovespa Index membership, liquidity, the Sharpe ratio, ROE, MB, size and age were evidenced to be significant predictors. Half-year logistic models were also examined, and were adjusted in order to check for acceptable discriminatory power for asset selection.
Sample selection and taste correlation in discrete choice transport modelling
DEFF Research Database (Denmark)
Mabit, Stefan Lindhard
2008-01-01
the question for a broader class of models. It is shown that the original result may be somewhat generalised. Another question investigated is whether mode choice operates as a self-selection mechanism in the estimation of the value of travel time. The results show that self-selection can at least partly explain counterintuitive results in value of travel time estimation. However, the results also point at the difficulty of finding suitable instruments for the selection mechanism. Taste heterogeneity is another important aspect of discrete choice modelling. Mixed logit models are designed to capture such heterogeneity, and contributions on taste correlation in willingness-to-pay estimation are presented. The first contribution addresses how to incorporate taste correlation in the estimation of the value of travel time for public transport. Given a limited dataset, the approach taken is to use theory on the value of travel time as guidance...
Ecological criterion effect on the forest road network longitudinal gradient
Directory of Open Access Journals (Sweden)
Petr Hrůza
2013-01-01
The specific way in which a forest road is designed affects management in the forest environment and timber transport. The aim of this study was to find out whether including an ecological criterion in forest road design changes the longitudinal gradient of forest hauling roads, and whether these changes affect the accessibility of forest stands by timber hauling machinery. Possible changes in the longitudinal gradient can also affect the technology of forest road surfacing and the selection of the appropriate surface type. We can state that including the ecological criterion in the forest road network design brings statistically significant changes in the longitudinal gradients of forest hauling roads. The mean longitudinal gradient of the current forest road network is 2.82%, and the mean longitudinal gradient of the forest road network designed with the ecological criterion included is 4.82%. The results show statistically significant changes in the longitudinal parameters of forest hauling roads. However, this will not require a change in construction technology, and will not affect the accessibility of forest stands by timber hauling machinery.
Fracture Criterion for Fracture Mechanics of Magnets
Institute of Scientific and Technical Information of China (English)
潘灏; 杨文涛
2003-01-01
The applicability and limitations of some fracture criteria in the fracture mechanics of magnets are studied. It is shown that the magnetic field intensity factor can be used as a fracture criterion when the crack in a magnet is affected only by a magnetic field. For some magnetostrictive materials in which the components of the magnetostriction strain do not satisfy the compatibility equation of deformation, the stress intensity factor is no longer effectively applicable as a fracture criterion when the crack in a magnet is affected by a magnetic field and mechanical loads simultaneously.
Financial applications of a Tabu search variable selection model
Directory of Open Access Journals (Sweden)
Zvi Drezner
2001-01-01
We illustrate how a comparatively new technique, a Tabu search variable selection model (Drezner, Marcoulides and Salhi, 1999), can be applied efficiently within finance when the researcher must select a subset of variables from among the whole set of explanatory variables under consideration. Several types of problems in finance, including corporate and personal bankruptcy prediction, mortgage and credit scoring, and the selection of variables for the Arbitrage Pricing Model, require the researcher to select a subset of variables from a larger set. In order to demonstrate the usefulness of the Tabu search variable selection model, we: (1) illustrate its efficiency in comparison to the main alternative search procedures, such as stepwise regression and the Maximum R2 procedure, and (2) show how a version of the Tabu search procedure may be implemented when attempting to predict corporate bankruptcy. We accomplish (2) by indicating that a Tabu search procedure increases the predictability of corporate bankruptcy by up to 10 percentage points in comparison to Altman's (1968) Z-Score model.
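The general idea of tabu search for subset selection can be sketched as follows. This is a generic single-flip tabu search over variable subsets with a toy scoring function, not the Drezner, Marcoulides and Salhi model itself; in the paper's application the score would be a regression fit criterion:

```python
def tabu_select(variables, score, n_iter=50, tabu_size=5):
    """Tabu search over variable subsets: at each step, flip (add or
    drop) the single variable that most improves `score`, forbidding
    recently flipped variables so the search can escape local optima."""
    current = frozenset()
    best, best_score = current, score(current)
    tabu = []  # recently flipped variables
    for _ in range(n_iter):
        moves = [v for v in variables if v not in tabu]
        if not moves:
            break
        # evaluate every allowed single-variable flip, take the best
        flipped = max(moves, key=lambda v: score(current ^ {v}))
        current = current ^ {flipped}
        tabu.append(flipped)
        if len(tabu) > tabu_size:
            tabu.pop(0)
        s = score(current)
        if s > best_score:
            best, best_score = current, s
    return set(best), best_score

# toy objective: subset {1, 3} is optimal, extra variables are penalized
def score(subset):
    return sum({1: 5, 3: 4}.get(v, -2) for v in subset)

chosen, value = tabu_select(variables=[1, 2, 3, 4], score=score)
```

Unlike stepwise regression, the tabu list forces the search to keep moving even after a score-worsening flip, which is what lets it leave local optima.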
The Properties of Model Selection when Retaining Theory Variables
DEFF Research Database (Denmark)
Hendry, David F.; Johansen, Søren
Economic theories are often fitted directly to data to avoid possible model selection biases. We show that, when embedding a theory model that specifies the correct set of m relevant exogenous variables, x{t}, within the larger set of m+k candidate variables, (x{t},w{t}), selection over the second set by their statistical significance can be undertaken without affecting the estimator distribution of the theory parameters. This strategy returns the theory-parameter estimates when the theory is correct, yet protects against the theory being under-specified because some w{t} are relevant.
Voltammetry as a Model for Teaching Chemical Instrumentation.
Gunasingham, H.; Ang, K. P.
1985-01-01
Voltammetry is used as a model for teaching chemical instrumentation to chemistry undergraduates at the National University of Singapore. Lists six criteria used to select a successful teaching model and shows how voltammetry satisfies each criterion. (JN)
Neighbourhood selection for local modelling and prediction of hydrological time series
Jayawardena, A. W.; Li, W. K.; Xu, P.
2002-02-01
The prediction of a time series using the dynamical systems approach requires the knowledge of three parameters: the time delay, the embedding dimension and the number of nearest neighbours. In this paper, a new criterion, based on the generalized degrees of freedom, for the selection of the number of nearest neighbours needed for a better local model for time series prediction is presented. The validity of the proposed method is examined using time series which are known to be chaotic under certain initial conditions (Lorenz map, Henon map and Logistic map), and real hydro-meteorological time series (discharge data from the Chao Phraya river in Thailand, the Mekong river in Thailand and Laos, and sea surface temperature anomaly data). The predicted results are compared with observations, and with similar predictions obtained by using arbitrarily fixed numbers of neighbours. The results indicate the superior predictive capability, as measured by the mean square errors and coefficients of variation, of the proposed approach when compared with the traditional approach of using a fixed number of neighbours.
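The local modelling scheme that the three parameters control can be sketched as follows: delay-embed the series, find the k nearest neighbours of the latest state, and average their observed successors. This is a minimal zeroth-order local predictor with illustrative parameter values, not the paper's generalized-degrees-of-freedom criterion for choosing k:

```python
import numpy as np

def knn_predict(series, delay=1, dim=2, k=3):
    """Predict the next value of `series` by delay embedding, nearest
    neighbours of the latest delay vector, and averaging each
    neighbour's observed successor."""
    x = np.asarray(series, dtype=float)
    # start index of the query vector, whose last element is x[-1]
    idx_last = len(x) - 1 - (dim - 1) * delay
    states, successors = [], []
    for i in range(idx_last):  # only states whose successor is known
        states.append(x[i:i + dim * delay:delay])
        successors.append(x[i + (dim - 1) * delay + 1])
    states = np.array(states)
    query = x[idx_last:idx_last + dim * delay:delay]
    dists = np.linalg.norm(states - query, axis=1)
    nearest = np.argsort(dists)[:k]
    return float(np.mean(np.array(successors)[nearest]))

# toy series: on a linear ramp the neighbours' successors are averaged,
# so the predictor interpolates rather than extrapolates the trend
pred = knn_predict([0, 1, 2, 3, 4, 5, 6, 7, 8, 9], delay=1, dim=2, k=3)
```

The criterion proposed in the paper addresses exactly the choice of `k` here, replacing the arbitrary fixed value used in the traditional approach.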
Duckstein, L.; Bobée, B.; Ashkar, F.
1991-09-01
The problem of fitting a probability distribution, here log-Pearson Type III distribution, to extreme floods is considered from the point of view of two numerical and three non-numerical criteria. The six techniques of fitting considered include classical techniques (maximum likelihood, moments of logarithms of flows) and new methods such as mixed moments and the generalized method of moments developed by two of the co-authors. The latter method consists of fitting the distribution using moments of different order, in particular the SAM method (Sundry Averages Method) uses the moments of order 0 (geometric mean), 1 (arithmetic mean), -1 (harmonic mean) and leads to a smaller variance of the parameters. The criteria used to select the method of parameter estimation are: - the two statistical criteria of mean square error and bias; - the two computational criteria of program availability and ease of use; - the user-related criterion of acceptability. These criteria are transformed into value functions or fuzzy set membership functions and then three Multiple Criteria Decision Modelling (MCDM) techniques, namely, composite programming, ELECTRE, and MCQA, are applied to rank the estimation techniques.
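The three moment orders used by the SAM method correspond to familiar means: order 1 is the arithmetic mean, order 0 the geometric mean, and order -1 the harmonic mean. A quick illustration with hypothetical flood-peak values (the numbers are invented):

```python
import math

flows = [120.0, 95.0, 210.0, 150.0, 80.0, 300.0, 175.0]  # hypothetical annual flood peaks
n = len(flows)

arithmetic = sum(flows) / n                                  # moment of order 1
geometric  = math.exp(sum(math.log(q) for q in flows) / n)   # moment of order 0
harmonic   = n / sum(1.0 / q for q in flows)                 # moment of order -1
```

For any sample of distinct positive values the three means satisfy harmonic < geometric < arithmetic, so matching all three pins down the fitted distribution more tightly than the arithmetic mean alone.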
Modeling heat stress effect on Holstein cows under hot and dry conditions: selection tools.
Carabaño, M J; Bachagha, K; Ramón, M; Díaz, C
2014-12-01
component, a constant term that is not affected by temperature, representing from 64% of the variation for SCS to 91% of the variation for milk. The second component, showing a flat pattern at intermediate temperatures and increasing or decreasing slopes for the extremes, gathered 15, 11, and 24% of the variation for fat and protein yield and SCS, respectively. This component could be further evaluated as a selection criterion for heat tolerance independently of the production level.
TIME SERIES FORECASTING WITH MULTIPLE CANDIDATE MODELS: SELECTING OR COMBINING?
Institute of Scientific and Technical Information of China (English)
YU Lean; WANG Shouyang; K. K. Lai; Y.Nakamori
2005-01-01
Various mathematical models have been commonly used in time series analysis and forecasting. In these processes, academic researchers and business practitioners often come up against two important problems. One is whether, for different/dissimilar modeling approaches, to select an appropriate approach for prediction purposes or to combine the different individual approaches into a single forecast. The other is whether, for the same/similar modeling approaches, to select the best candidate model for forecasting or to mix the various candidate models with different parameters into a new forecast. In this study, we propose a set of computational procedures to solve these two issues via two judgmental criteria. Meanwhile, in view of the problems reported in the literature, a novel modeling technique is also proposed to overcome the drawbacks of existing combined forecasting methods. To verify the efficiency and reliability of the proposed procedures and modeling technique, simulations and real-data examples are conducted in this study. The results reveal that the proposed procedures and modeling technique can serve as a feasible solution for time series forecasting with multiple candidate models.
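One concrete instance of the select-versus-combine question (not the authors' procedure; inverse-MSE weighting is a standard textbook combining rule, and the data here are synthetic): when two unbiased forecasters have independent errors, a weighted combination typically beats either one alone.

```python
import math, random

random.seed(7)
truth = [math.sin(i / 5.0) for i in range(200)]
# two unbiased forecasters with independent errors of different size
f1 = [t + random.gauss(0, 0.4) for t in truth]
f2 = [t + random.gauss(0, 0.6) for t in truth]

def mse(f):
    return sum((fi - ti) ** 2 for fi, ti in zip(f, truth)) / len(truth)

m1, m2 = mse(f1), mse(f2)

# inverse-MSE weights: the more accurate forecaster gets the larger weight
w1 = (1 / m1) / (1 / m1 + 1 / m2)
combo = [w1 * a + (1 - w1) * b for a, b in zip(f1, f2)]
m_combo = mse(combo)
```

Selecting would keep only `f1` (the lower-MSE forecaster); combining exploits the independence of the two error streams and yields a lower MSE than either candidate.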
Bayesian selection of nucleotide substitution models and their site assignments.
Wu, Chieh-Hsi; Suchard, Marc A; Drummond, Alexei J
2013-03-01
Probabilistic inference of a phylogenetic tree from molecular sequence data is predicated on a substitution model describing the relative rates of change between character states along the tree for each site in the multiple sequence alignment. Commonly, one assumes that the substitution model is homogeneous across sites within large partitions of the alignment, assigns these partitions a priori, and then fixes their underlying substitution model to the best-fitting model from a hierarchy of named models. Here, we introduce an automatic model selection and model averaging approach within a Bayesian framework that simultaneously estimates the number of partitions, the assignment of sites to partitions, the substitution model for each partition, and the uncertainty in these selections. This new approach is implemented as an add-on to the BEAST 2 software platform. We find that this approach dramatically improves the fit of the nucleotide substitution model compared with existing approaches, and we show, using a number of example data sets, that as many as nine partitions are required to explain the heterogeneity in nucleotide substitution process across sites in a single gene analysis. In some instances, this improved modeling of the substitution process can have a measurable effect on downstream inference, including the estimated phylogeny, relative divergence times, and effective population size histories.
An Integrated Model For Online shopping, Using Selective Models
Directory of Open Access Journals (Sweden)
Fereshteh Rabiei Dastjerdi
As in traditional shopping, customer acquisition and retention are critical issues in the success of an online store. Many factors impact how, and if, customers accept online shopping. Models presented in recent years only focus on behavioral or technolo ...
Selecting global climate models for regional climate change studies
Pierce, David W.; Barnett, Tim P.; Santer, Benjamin D.; Gleckler, Peter J.
2009-01-01
Regional or local climate change modeling studies currently require starting with a global climate model, then downscaling to the region of interest. How should global models be chosen for such studies, and what effect do such choices have? This question is addressed in the context of a regional climate detection and attribution (D&A) study of January-February-March (JFM) temperature over the western U.S. Models are often selected for a regional D&A analysis based on the quality of the simula...
Spatial Fleming-Viot models with selection and mutation
Dawson, Donald A
2014-01-01
This book constructs a rigorous framework for analysing selected phenomena in the evolutionary theory of populations that arise from the combined effects of migration, selection and mutation in a spatial stochastic population model, namely the evolution towards fitter and fitter types through punctuated equilibria. The discussion is based on a number of new methods, in particular multiple scale analysis, nonlinear Markov processes and their entrance laws, atomic measure-valued evolutions, and new forms of duality (for state-dependent mutation and multitype selection), which are used to prove ergodic theorems in this context and are applicable to many other questions, as well as renormalization analysis of a variety of phenomena (stasis, punctuated equilibrium, failure of naive branching approximations, biodiversity) that occur due to the combination of rare mutation, mutation, resampling, migration and selection, and that make it necessary to mathematically bridge the gap (in the limit) between time and space scales.
Selecting an optimal mixed products using grey relationship model
Directory of Open Access Journals (Sweden)
Farshad Faezy Razi
2013-06-01
This paper presents an integrated supplier selection and inventory management approach using the grey relationship model (GRM) as well as a multi-objective decision-making process. The proposed model first ranks different suppliers based on the GRM technique and then determines the optimum level of inventory by considering different objectives. To show the implementation of the proposed model, we use some benchmark data presented by Talluri and Baker [Talluri, S., & Baker, R. C. (2002). A multi-phase mathematical programming approach for effective supply chain design. European Journal of Operational Research, 141(3), 544-558.]. The preliminary results indicate that the proposed model is capable of handling different criteria for supplier selection.
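A minimal sketch of grey relational ranking (the supplier scores are hypothetical; the distinguishing coefficient 0.5 is the conventional default, not a value taken from the paper): each alternative is compared against an ideal reference built from the best score on each criterion.

```python
# normalized scores of four hypothetical suppliers on three criteria (higher = better)
suppliers = {
    "S1": [0.7, 0.9, 0.6],
    "S2": [0.9, 0.6, 0.8],
    "S3": [0.5, 0.8, 0.7],
    "S4": [0.8, 0.7, 0.9],
}
ncrit = 3
ref = [max(v[j] for v in suppliers.values()) for j in range(ncrit)]  # ideal alternative

zeta = 0.5  # distinguishing coefficient (conventional default)
deltas = {k: [abs(v[j] - ref[j]) for j in range(ncrit)] for k, v in suppliers.items()}
dmin = min(d for ds in deltas.values() for d in ds)
dmax = max(d for ds in deltas.values() for d in ds)

def grade(ds):
    # grey relational grade = mean of the grey relational coefficients
    coeffs = [(dmin + zeta * dmax) / (d + zeta * dmax) for d in ds]
    return sum(coeffs) / len(coeffs)

ranking = sorted(suppliers, key=lambda k: grade(deltas[k]), reverse=True)
```

With these numbers S4 ranks first (closest overall to the ideal profile) and S3 last; the inventory-optimization stage of the paper would then start from this ordering.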
An aerodynamic load criterion for airships
Woodward, D. E.
1975-01-01
A simple aerodynamic bending moment envelope is derived for conventionally shaped airships. This criterion is intended to be used, much like the Naval Architect's standard wave, for preliminary estimates of longitudinal strength requirements. It should be useful in tradeoff studies between speed, fineness ratio, block coefficient, structure weight, and other such general parameters of airship design.
Stability Criterion for Discrete-Time Systems
Directory of Open Access Journals (Sweden)
K. Ratchagit
2010-01-01
This paper is concerned with the problem of delay-dependent stability analysis for discrete-time systems with interval-like time-varying delays. The problem is solved by applying a novel Lyapunov functional, and an improved delay-dependent stability criterion is obtained in terms of a linear matrix inequality.
A Criterion-Referenced Test for Archery.
Shifflett, Bethany; Schuman, Barbara J.
1982-01-01
A criterion-referenced test for a beginning archery class was developed and evaluated. Techniques for estimating test validity and reliability were applied to data. A method developed by R. A. Burk (1976) was used to establish a cutoff score that would distinguish between those mastering the class and nonmasters. (Authors/PP)
Luoyang Dual Spatial Criterion Ecological City Construction
Institute of Scientific and Technical Information of China (English)
Wang Fazeng; Wang Shengnan
2007-01-01
The construction of an ecological city has two foundational platforms: the small platform, namely the urban district (the "city ecosystem"), and the big platform, namely the surrounding district within a certain regional scope (the "city-region ecosystem"). The construction of an ecological city must therefore be launched at dual spatial criteria: at the city (urban district) criterion, optimizing the city ecosystem; and at the city-region (city territory) criterion, optimizing the city-region ecosystem. Luoyang has a distinctive character and a typical image among the cities of China, and even of the world. The construction of an ecological city at the dual spatial criteria of city and city-region is of vital significance to the urbanization process and sustainable development of Luoyang. At the city-region criterion, the primary mission of Luoyang's ecological city construction is to create a sound ecological environment platform across its city territory. At the city criterion, the basic duty of Luoyang's ecological city construction is to enhance the ecological capacity and benefit of the central city.
A Criterion for the Generalized Riemann Hypothesis
Institute of Scientific and Technical Information of China (English)
Jin Hong LI
2009-01-01
In this paper, we study the automorphic L-functions attached to classical automorphic forms on GL(2), i.e., holomorphic cusp forms. We also give a criterion for the Generalized Riemann Hypothesis (GRH) for these L-functions.
Identical Synchronous Criterion for a Coupling System
Institute of Scientific and Technical Information of China (English)
HUANG Xiangao; AN Owei; LUO Xinmin; ZHU Fuchen
2004-01-01
A new identical synchronous criterion for a coupling system, the time average of the derivative of the Lyapunov function, is proposed to determine when synchronization of any coupling system occurs. Three examples with linear or nonlinear feedback synchronous systems are introduced to test several synchronization parameters: the conditional Lyapunov exponents, the time average of the derivative of the Lyapunov function, and the mean square error of the synchronization. Having obtained these parameters as the feedback gains change, we find that Pecora and Carroll's criterion and He and Vaidya's reduced criterion are only suited to determining synchronization of the identical self-synchronization system, which is a special case among coupling systems, and cannot be taken as a general identical synchronous criterion for arbitrary coupling systems. However, no matter whether the largest conditional Lyapunov exponent or the derivative of the Lyapunov function is positive or negative, synchronization of the coupling systems will occur as long as the time-average change ratio of the derivative of the Lyapunov function tends to zero.
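The flavour of such synchronization criteria can be illustrated with two unidirectionally coupled logistic maps, a standard textbook example rather than the systems of the paper (the map parameter and coupling strength are assumed values): when the coupling is strong enough that the response contracts faster than the chaotic drive diverges, the synchronization error decays to zero.

```python
# drive-response pair of logistic maps; with strong enough coupling
# the response y locks onto the chaotic drive x
r, c = 3.9, 0.8                 # chaotic regime; coupling strength (assumed)

def f(u):
    return r * u * (1.0 - u)

x, y = 0.40, 0.75
errs = []
for _ in range(2000):
    x, y = f(x), (1.0 - c) * f(y) + c * f(x)
    errs.append(abs(x - y))

final_err = max(errs[-100:])    # synchronization error near the end of the run
```

Here the per-step error contracts by roughly (1 - c)|f'(x)|, so with c = 0.8 the average contraction dominates the positive Lyapunov exponent of the drive and `final_err` collapses to numerical zero, even though the drive itself remains chaotic.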
Optimization of the Structures at Shakedown and Rosen's Optimality Criterion
Alawdin, Piotr; Atkociunas, Juozas; Liepa, Liudas
2016-09-01
This paper focuses on the application of extreme energy principles and nonlinear mathematical programming in the theory of structural shakedown. By means of energy principles, which describe the true stress-strain state conditions of the structure, dual mathematical models of analysis problems are formed (static and kinematic formulations). It is shown how a common mathematical model for the optimization of structures at shakedown, with safety and serviceability constraints (according to the ultimate limit state (ULS) and serviceability limit state (SLS) requirements), is formed on the basis of the previously mentioned mathematical models. The possibilities of solving the optimization problem in the context of the physical interpretation of the optimality criterion of Rosen's algorithm are analyzed.
A topic evolution model with sentiment and selective attention
Si, Xia-Meng; Wang, Wen-Dong; Zhai, Chun-Qing; Ma, Yan
2017-04-01
Topic evolution is a hybrid dynamics of information propagation and opinion interaction. The dynamics of opinion interaction is inherently interwoven with the dynamics of information propagation in the network, owing to the bidirectional influences between interaction and diffusion. The degree of sentiment determines whether the topic can continue to spread from a node, and selective attention determines the direction of information flow and the selection of communicatees. To this end, we put forward a sentiment-based mixed dynamics model with selective attention and apply Bayesian updating rules to it. Our model can also describe users who remain isolated from a topic, for whatever reason, even when everybody around them has heard about it. Numerical simulations show that more insiders initially, and fewer simultaneous spreaders, lessen extremism. To promote topic diffusion or restrain the prevalence of extremism, fewer agents with constructive motivation and more agents with no involving motivation are encouraged.
Evidence accumulation as a model for lexical selection.
Anders, R; Riès, S; van Maanen, L; Alario, F X
2015-11-01
We propose and demonstrate evidence accumulation as a plausible theoretical and/or empirical model for the lexical selection process of lexical retrieval. A number of current psycholinguistic theories consider lexical selection as a process related to selecting a lexical target from a number of alternatives, which each have varying activations (or signal supports), that are largely resultant of an initial stimulus recognition. We thoroughly present a case for how such a process may be theoretically explained by the evidence accumulation paradigm, and we demonstrate how this paradigm can be directly related or combined with conventional psycholinguistic theory and their simulatory instantiations (generally, neural network models). Then with a demonstrative application on a large new real data set, we establish how the empirical evidence accumulation approach is able to provide parameter results that are informative to leading psycholinguistic theory, and that motivate future theoretical development. Copyright © 2015 Elsevier Inc. All rights reserved.
Second-order model selection in mixture experiments
Energy Technology Data Exchange (ETDEWEB)
Redgate, P.E.; Piepel, G.F.; Hrma, P.R.
1992-07-01
Full second-order models for q-component mixture experiments contain q(q+1)/2 terms, which increases rapidly as q increases. Fitting full second-order models for larger q may involve problems with ill-conditioning and overfitting. These problems can be remedied by transforming the mixture components and/or fitting reduced forms of the full second-order mixture model. Various component transformation and model reduction approaches are discussed. Data from a 10-component nuclear waste glass study are used to illustrate ill-conditioning and overfitting problems that can be encountered when fitting a full second-order mixture model. Component transformation, model term selection, and model evaluation/validation techniques are discussed and illustrated for the waste glass example.
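The q(q+1)/2 growth is easy to check by enumerating the terms of a full Scheffé quadratic mixture model, which has q linear blending terms plus q(q-1)/2 two-component cross products (a small illustrative helper, not the authors' code):

```python
from itertools import combinations

def scheffe_quadratic_terms(components):
    # q linear blending terms plus all two-component cross products
    linear = list(components)
    cross = ["{}*{}".format(a, b) for a, b in combinations(components, 2)]
    return linear + cross

comps10 = ["x%d" % i for i in range(1, 11)]   # the 10-component glass study case
terms = scheffe_quadratic_terms(comps10)
count = len(terms)                            # q(q+1)/2 = 55 terms for q = 10
```

For the 10-component waste glass study this is already 55 coefficients, which makes the ill-conditioning and overfitting concerns in the abstract concrete.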
Measuring balance and model selection in propensity score methods
Belitser, S.; Martens, Edwin P.; Pestman, Wiebe R.; Groenwold, Rolf H.H.; De Boer, Anthonius; Klungel, Olaf H.
2011-01-01
Background: Propensity score (PS) methods focus on balancing confounders between groups to estimate an unbiased treatment or exposure effect. However, there is a lack of attention to actually measuring, reporting and using the information on balance, for instance for model selection. Objectives: To de
Selecting crop models for decision making in wheat insurance
Castaneda Vera, A.; Leffelaar, P.A.; Alvaro-Fuentes, J.; Cantero-Martinez, C.; Minguez, M.I.
2015-01-01
In crop insurance, the accuracy with which the insurer quantifies the actual risk is highly dependent on the availability of actual yield data. Crop models might be valuable tools to generate data on expected yields for risk assessment when no historical records are available. However, selecting a c
Cross-validation criteria for SETAR model selection
de Gooijer, J.G.
2001-01-01
Three cross-validation criteria, denoted C, C_c, and C_u, are proposed for selecting the orders of a self-exciting threshold autoregressive (SETAR) model when both the delay and the threshold value are unknown. The derivation of C is within a natural cross-validation framework. The criterion C_c is si
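A stripped-down version of order selection by out-of-sample validation, using plain AR models rather than SETAR ones (the data-generating coefficients, series length, and holdout size are invented for illustration): fit AR(1) and AR(2) by least squares on a training segment and pick the order with the smaller one-step-ahead error on a holdout segment.

```python
import random

random.seed(3)
a, b = 0.3, 0.6                           # true AR(2) coefficients (assumed example)
y = [0.0, 0.0]
for _ in range(2000):
    y.append(a * y[-1] + b * y[-2] + random.gauss(0, 1))
y = y[200:]                               # drop burn-in

split = len(y) - 500
train = y[:split]

def fit_ar1(s):
    num = sum(s[t] * s[t - 1] for t in range(1, len(s)))
    den = sum(s[t - 1] ** 2 for t in range(1, len(s)))
    return num / den

def fit_ar2(s):
    # 2x2 least-squares normal equations
    S11 = sum(s[t - 1] ** 2 for t in range(2, len(s)))
    S22 = sum(s[t - 2] ** 2 for t in range(2, len(s)))
    S12 = sum(s[t - 1] * s[t - 2] for t in range(2, len(s)))
    b1 = sum(s[t] * s[t - 1] for t in range(2, len(s)))
    b2 = sum(s[t] * s[t - 2] for t in range(2, len(s)))
    det = S11 * S22 - S12 ** 2
    return (S22 * b1 - S12 * b2) / det, (S11 * b2 - S12 * b1) / det

phi1 = fit_ar1(train)
p1, p2 = fit_ar2(train)

# one-step-ahead errors on the holdout segment
mse1 = sum((y[t] - phi1 * y[t - 1]) ** 2 for t in range(split, len(y))) / 500
mse2 = sum((y[t] - p1 * y[t - 1] - p2 * y[t - 2]) ** 2
           for t in range(split, len(y))) / 500
best_order = 1 if mse1 < mse2 else 2
```

Because the second lag genuinely matters here, validation recovers order 2; the paper's criteria extend this idea to the threshold and delay parameters of SETAR models.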
Lightweight Graphical Models for Selectivity Estimation Without Independence Assumptions
DEFF Research Database (Denmark)
Tzoumas, Kostas; Deshpande, Amol; Jensen, Christian S.
2011-01-01
’s optimizers are frequently caused by missed correlations between attributes. We present a selectivity estimation approach that does not make the independence assumptions. By carefully using concepts from the field of graphical models, we are able to factor the joint probability distribution of all...
Efficiency of event-based sampling according to error energy criterion.
Miskowicz, Marek
2010-01-01
This paper belongs to the line of studies that assess the effectiveness of particular event-based sampling schemes against conventional periodic sampling as a reference. In the present study, event-based sampling according to a constant energy of sampling error is analyzed. This criterion is suitable for applications where the energy of the sampling error should be bounded (e.g., in building automation, or in greenhouse climate monitoring and control). Compared to the integral sampling criteria, the error energy criterion gives more weight to extreme sampling error values. The proposed sampling principle extends the range of event-based sampling schemes and makes the choice of a particular sampling criterion more flexible with respect to application requirements. In the paper, it is proved analytically that the proposed event-based sampling criterion is more effective than periodic sampling by a factor defined by the ratio of the maximum to the mean of the cubic root of the squared signal time-derivative in the analyzed time interval. Furthermore, it is shown that sampling according to the energy criterion is less effective than the send-on-delta scheme but more effective than sampling according to the integral criterion. On the other hand, it is indicated that the higher effectiveness of sampling according to the selected event-based criterion is obtained at the cost of increasing the total sampling error, defined as the sum of errors for all the samples taken.
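A sketch contrasting the send-on-delta scheme mentioned above with the periodic rate needed for the same worst-case error (a synthetic two-tone signal; this illustrates the magnitude criterion, not the paper's error-energy analysis, which bounds the energy of the error instead):

```python
import math

T, dt, delta = 10.0, 0.0005, 0.05
n = int(T / dt)
sig = [math.sin(2 * math.pi * 0.4 * i * dt) + 0.3 * math.sin(2 * math.pi * 1.3 * i * dt)
       for i in range(n)]

# send-on-delta: transmit a sample only when the signal has moved by >= delta
last = sig[0]
events = 1
for v in sig[1:]:
    if abs(v - last) >= delta:
        last = v
        events += 1

# periodic sampling guaranteeing the same worst-case error needs a period
# of roughly delta / max|x'(t)|, i.e. it must track the fastest excursion
max_slope = max(abs(sig[i + 1] - sig[i]) / dt for i in range(n - 1))
periodic = int(T * max_slope / delta) + 1
```

The event count scales with the total variation of the signal divided by delta, while the periodic count scales with the maximum slope, so the event-based scheme transmits markedly fewer samples whenever the signal spends time moving slowly.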
Model selection for robust Bayesian mixture distributions
Institute of Scientific and Technical Information of China (English)
卿湘运; 王行愚
2009-01-01
Bayesian approaches to robust mixture modelling based on Student-t distributions are less sensitive to outliers, thereby preventing over-estimation of the number of mixture components. However, there are two intractable problems in previous methods for model selection under the variational Bayesian framework: (1) The variational approach converges to a local maximum of the lower bound on the log-evidence that depends on the initial parameter values. How can the variational approach guarantee that the initial settings for different models are consistent? (2) The lower bound is sensitive to the factorized approximation forms used in the inference process. How can the variational approach guarantee that the approximation errors for different models are equivalent? In this paper, we present a model selection algorithm for robust Bayesian mixture distributions based on the deviance information criterion (DIC) proposed by Spiegelhalter et al. in 2002. Unlike the Bayesian information criterion (BIC), the DIC is straightforward to calculate and has been adopted in many modern applications. Inspired by the work of McGrory et al., who used DIC values for model selection in finite Gaussian mixture distributions and hidden Markov models, the calculation of a DIC for the robust Bayesian mixture model is derived. The proposed algorithm can learn model parameters and perform model selection simultaneously, which avoids choosing an optimum among a large set of candidate models. A method to initialize the parameters of the algorithm is provided. Experimental results on simulated data and on the Old Faithful Geyser data, which contains a large number of outliers, show good performance: the algorithm learns the parameters of the mixture components robustly and the number of components precisely.
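The DIC quantities are straightforward to compute once posterior draws are available: DIC = D-bar + pD, where D-bar is the posterior mean deviance and pD = D-bar - D(theta-bar) is the effective number of parameters. A minimal sketch for a Gaussian-mean model with a flat prior (not the robust mixture model of the paper; data and sample sizes are invented):

```python
import math, random, statistics

random.seed(11)
# data from a unit-variance normal with unknown mean
data = [random.gauss(2.0, 1.0) for _ in range(50)]
n = len(data)

# conjugate posterior for the mean under a flat prior: N(xbar, 1/n)
xbar = statistics.fmean(data)
draws = [random.gauss(xbar, math.sqrt(1.0 / n)) for _ in range(5000)]

def deviance(mu):
    # -2 log-likelihood of the N(mu, 1) model
    return sum((x - mu) ** 2 for x in data) + n * math.log(2 * math.pi)

dbar = statistics.fmean(deviance(mu) for mu in draws)   # posterior mean deviance
dhat = deviance(statistics.fmean(draws))                # deviance at posterior mean
p_d = dbar - dhat            # effective number of parameters (about 1 here)
dic = dbar + p_d
```

With one free parameter, pD comes out near 1, which is the sanity check that makes DIC attractive for the mixture setting: it prices model complexity from the posterior draws directly, with no per-model bound comparison.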
Selective refinement and selection of near-native models in protein structure prediction.
Zhang, Jiong; Barz, Bogdan; Zhang, Jingfen; Xu, Dong; Kosztin, Ioan
2015-10-01
In recent years, in silico protein structure prediction has reached a level where fully automated servers can generate large pools of near-native structures. However, the identification and further refinement of the best structures from the pool of models remain problematic. To address these issues, we have developed (i) a target-specific selective refinement (SR) protocol; and (ii) a molecular dynamics (MD) simulation-based ranking (SMDR) method. In SR, the all-atom refinement of structures is accomplished via the Rosetta Relax protocol, subject to specific constraints determined by the size and complexity of the target. The best-refined models are selected with SMDR by testing their relative stability against gradual heating through all-atom MD simulations. Through extensive testing we have found that Mufold-MD, our fully automated protein structure prediction server updated with the SR and SMDR modules, consistently outperformed its previous versions.
A model selection approach to analysis of variance and covariance.
Alber, Susan A; Weiss, Robert E
2009-06-15
An alternative to analysis of variance is a model selection approach where every partition of the treatment means into clusters with equal value is treated as a separate model. The null hypothesis that all treatments are equal corresponds to the partition with all means in a single cluster. The alternative hypothesis corresponds to the set of all other partitions of treatment means. A model selection approach can also be used for a treatment by covariate interaction, where the null hypothesis and each alternative correspond to a partition of treatments into clusters with equal covariate effects. We extend the partition-as-model approach to simultaneous inference for both the treatment main effect and the treatment interaction with a continuous covariate, with separate partitions for the intercepts and treatment-specific slopes. The model space is the Cartesian product of the intercept partition and the slope partition, and we develop five joint priors for this model space. In four of these priors the intercept and slope partitions are dependent. We advise on setting priors over models, and we use the model to analyze an orthodontic data set that compares the frictional resistance created by orthodontic fixtures. Copyright (c) 2009 John Wiley & Sons, Ltd.
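The partition-as-model space can be enumerated directly for a small number of treatments; for four treatments there are Bell(4) = 15 partitions, one of which (all means in a single cluster) is the classical null hypothesis. An illustrative generator (treatment labels are invented):

```python
def partitions(items):
    # yield all ways to split items into non-empty, unlabeled clusters
    if not items:
        yield []
        return
    first, rest = items[0], items[1:]
    for part in partitions(rest):
        # put `first` into each existing cluster, or start a new cluster
        for i in range(len(part)):
            yield part[:i] + [[first] + part[i]] + part[i + 1:]
        yield [[first]] + part

models = list(partitions(["A", "B", "C", "D"]))
n_models = len(models)          # Bell(4) = 15 candidate mean structures
```

Each element of `models` is one hypothesis about which treatment means are equal, so model selection over this list replaces the single F-test of classical ANOVA with a comparison of 15 explicit models.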
Statistical modelling in biostatistics and bioinformatics selected papers
Peng, Defen
2014-01-01
This book presents selected papers on statistical model development related mainly to the fields of Biostatistics and Bioinformatics. The coverage of the material falls squarely into the following categories: (a) Survival analysis and multivariate survival analysis, (b) Time series and longitudinal data analysis, (c) Statistical model development and (d) Applied statistical modelling. Innovations in statistical modelling are presented throughout each of the four areas, with some intriguing new ideas on hierarchical generalized non-linear models and on frailty models with structural dispersion, just to mention two examples. The contributors include distinguished international statisticians such as Philip Hougaard, John Hinde, Il Do Ha, Roger Payne and Alessandra Durio, among others, as well as promising newcomers. Some of the contributions have come from researchers working in the BIO-SI research programme on Biostatistics and Bioinformatics, centred on the Universities of Limerick and Galway in Ireland and fu...
Institute of Scientific and Technical Information of China (English)
徐兵; 贾艳丽
2013-01-01
The paper proposes decentralized-decision and centralized-decision models of a closed-loop supply chain (CLSC) with stochastic demand, consisting of one risk-neutral manufacturer and one risk-averse retailer, using game theory and the conditional value-at-risk (CVaR) criterion. The purpose is to characterize the impact of the retailer's degree of risk aversion on the order-quantity decision and the profit of the supply chain, and to analyze the effect of repairing and remanufacturing inferior products. Based on contract theory, a profit-sharing contract is put forward to coordinate the CLSC. The study reveals that the decentralized order quantity is lower than the centralized order quantity; that the two decision models of the CLSC with a risk-averse retailer generalize the corresponding models with a risk-neutral retailer, the classic newsvendor model being a special case of the decentralized model; and that the profit-sharing contract can coordinate the CLSC and generate a win-win situation between the manufacturer and the retailer. Sensitivity analysis shows that the more risk-averse the retailer, the lower the order quantity, and the lower the product qualification rate, the larger the order quantity; in both cases the profits of the manufacturer, the retailer and the supply chain all decline.
PROPOSAL OF AN EMPIRICAL MODEL FOR SUPPLIERS SELECTION
Directory of Open Access Journals (Sweden)
Paulo Ávila
2015-03-01
The problem of selecting suppliers/partners is a crucial and important part of the decision-making process for companies that intend to perform competitively in their area of activity. The selection of a supplier/partner is a time- and resource-consuming task that involves data collection and a careful analysis of the factors that can positively or negatively influence the choice. Nevertheless it is a critical process that significantly affects the operational performance of each company. In this work, through the literature review, five broad supplier selection criteria were identified: Quality, Financial, Synergies, Cost, and Production System. Five sub-criteria were also included within these criteria. Thereafter, a survey was developed and companies were contacted in order to answer which factors have more relevance in their decisions to choose suppliers. After interpreting the results and processing the data, a linear weighting model was adopted to reflect the importance of each factor. The model has a hierarchical structure and can be applied with the Analytic Hierarchy Process (AHP) method or the Simple Multi-Attribute Rating Technique (SMART). The result of the research undertaken by the authors is a reference model that represents a decision-making support for the supplier/partner selection process.
Supplier Selection in Virtual Enterprise Model of Manufacturing Supply Network
Kaihara, Toshiya; Opadiji, Jayeola F.
The market-based approach to manufacturing supply network planning focuses on the competitive attitudes of various enterprises in the network to generate plans that seek to maximize the throughput of the network. It is this competitive behaviour of the member units that we explore in proposing a solution model for a supplier selection problem in convergent manufacturing supply networks. We present a formulation of autonomous units of the network as trading agents in a virtual enterprise network interacting to deliver value to market consumers and discuss the effect of internal and external trading parameters on the selection of suppliers by enterprise units.
A model-based approach to selection of tag SNPs
Directory of Open Access Journals (Sweden)
Sun Fengzhu
2006-06-01
Abstract Background Single Nucleotide Polymorphisms (SNPs) are the most common type of polymorphism found in the human genome. Effective genetic association studies require the identification of sets of tag SNPs that capture as much haplotype information as possible. Tag SNP selection is analogous to the problem of data compression in information theory. According to Shannon's framework, the optimal tag set maximizes the entropy of the tag SNPs subject to constraints on the number of SNPs. This approach requires an appropriate probabilistic model. Compared to simple measures of Linkage Disequilibrium (LD), a good model of haplotype sequences can more accurately account for LD structure. It also provides machinery for the prediction of tagged SNPs, and thereby for assessing the performance of tag sets through their ability to predict larger SNP sets. Results Here, we compute the description code-lengths of SNP data for an array of models and we develop tag SNP selection methods based on these models and the strategy of entropy maximization. Using data sets from the HapMap and ENCODE projects, we show that the hidden Markov model introduced by Li and Stephens outperforms the other models in several aspects: description code-length of SNP data, information content of tag sets, and prediction of tagged SNPs. This is the first use of this model in the context of tag SNP selection. Conclusion Our study provides strong evidence that the tag sets selected by our best method, based on the Li and Stephens model, outperform those chosen by several existing methods. The results also suggest that information content evaluated with a good model is more sensitive for assessing the quality of a tagging set than the correct prediction rate of tagged SNPs. Besides, we show that haplotype phase uncertainty has an almost negligible impact on the ability of good tag sets to predict tagged SNPs. This justifies the selection of tag SNPs on the basis of haplotype
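Entropy-maximizing tag selection can be sketched as a greedy forward search over a toy haplotype table (the haplotypes and the choice of two tags are invented; the paper uses the Li-Stephens hidden Markov model as the probabilistic model, whereas this sketch simply uses the empirical haplotype distribution):

```python
import math
from collections import Counter

# hypothetical phased haplotypes over 5 SNP sites (0/1 alleles)
haps = [
    (0, 0, 1, 0, 1),
    (0, 0, 1, 0, 1),
    (1, 1, 0, 0, 1),
    (1, 1, 0, 1, 0),
    (1, 1, 0, 1, 0),
    (0, 1, 1, 0, 1),
]

def entropy(tags):
    # Shannon entropy of the haplotype distribution restricted to the tag sites
    counts = Counter(tuple(h[i] for i in tags) for h in haps)
    total = sum(counts.values())
    return -sum(c / total * math.log2(c / total) for c in counts.values())

# greedy forward selection: repeatedly add the SNP that raises joint entropy most
selected = []
for _ in range(2):
    best = max((s for s in range(5) if s not in selected),
               key=lambda s: entropy(selected + [s]))
    selected.append(best)
```

The selected pair distinguishes more haplotype patterns than any single site, which is exactly the compression view of tagging: under a fixed budget, pick the sites whose joint distribution carries the most information about the full haplotypes.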
A Dynamic Stability Criterion for Ice Shelves and Tidewater Glaciers
Bassis, J. N.; Fricker, H. A.; Minster, J.
2006-12-01
The collapse of the Antarctic ice shelves could have dramatic consequences for the mass balance of the Antarctic ice sheet and, as a result, sea level rise. It is therefore imperative to improve our knowledge of the mechanisms that lead to ice shelf retreat. The mechanism that has the potential to remove the largest amounts of mass rapidly is iceberg calving. However, the processes and mechanisms that lead to iceberg calving are still poorly understood. Motivated by the complexity of the short-time scale behavior of ice fracture, we seek a dynamic stability criterion that predicts the onset of ice shelf retreat based on dimensional analysis. In our approach, rather than attempt to model the initiation and propagation of individual fractures, we look for a non-dimensional number that describes the overall ice shelf stability. We also make the assumption that the same criterion, valid for ice shelves, also applies to tidewater glaciers. This enables us to test our criterion against a larger set of ice shelves and calving glaciers. Our analysis predicts that retreat will occur when a non-dimensional number that we call the "terminus stability number" decreases below a critical value. We show that this criterion is valid for calving glaciers in Alaska, for several glaciers around Greenland, as well as for three Antarctic ice shelves. This stability analysis has much in common with classic hydrodynamic stability theory, where the onset of instability is related to non-dimensional numbers that are largely independent of geometry or other situation-specific variables.
Wang, Cong; Shang, De-Guang; Wang, Xiao-Wei
2015-02-01
An improved high-cycle multiaxial fatigue criterion based on the critical plane is proposed in this paper. The critical plane is defined as the plane of maximum shear stress (MSS) in the proposed multiaxial fatigue criterion, which differs from the traditional critical plane based on the MSS amplitude. The proposed criterion is extended into a fatigue life prediction model applicable to both ductile and brittle materials. The fatigue life prediction model based on the proposed high-cycle multiaxial fatigue criterion was validated against experimental results obtained from tests of 7075-T651 aluminum alloy and from the literature.
Institute of Scientific and Technical Information of China (English)
朱建明; 程海峰; 姚仰平
2013-01-01
Rock is a heterogeneous material, and broken rock refers to rock masses containing large numbers of defects such as fissures, voids, and interfaces, whose micro-units fail with pronounced randomness under load. Starting from a random distribution of micro-unit strength on the basis of damage theory, a method for measuring the micro-unit strength of rock is presented that accounts for the influence of a damage threshold. Assuming that the micro-unit strength of broken rock follows a Weibull distribution, and combining this with the SMP criterion, which considers the intermediate principal stress, a statistical damage softening constitutive model for broken rock is built, and the model parameters m and F0 are determined using several computational methods. Validation against two kinds of broken rock from the Xiaoguanzhuang iron mine, diorite porphyrite and skarn, shows that the predicted stress-strain curves agree well with the full stress-strain test data under different confining pressures, reflect the influence of the damage threshold, and perform best at lower confining pressures. The model also captures the increase in peak strength and ductility with increasing confining pressure, further confirming its applicability and practical value.
Models of cultural niche construction with selection and assortative mating.
Creanza, Nicole; Fogarty, Laurel; Feldman, Marcus W
2012-01-01
Niche construction is a process through which organisms modify their environment and, as a result, alter the selection pressures on themselves and other species. In cultural niche construction, one or more cultural traits can influence the evolution of other cultural or biological traits by affecting the social environment in which the latter traits may evolve. Cultural niche construction may include either gene-culture or culture-culture interactions. Here we develop a model of this process and suggest some applications of this model. We examine the interactions between cultural transmission, selection, and assorting, paying particular attention to the complexities that arise when selection and assorting are both present, in which case stable polymorphisms of all cultural phenotypes are possible. We compare our model to a recent model for the joint evolution of religion and fertility and discuss other potential applications of cultural niche construction theory, including the evolution and maintenance of large-scale human conflict and the relationship between sex ratio bias and marriage customs. The evolutionary framework we introduce begins to address complexities that arise in the quantitative analysis of multiple interacting cultural traits.
Bayesian nonparametric centered random effects models with variable selection.
Yang, Mingan
2013-03-01
In a linear mixed effects model, it is common practice to assume that the random effects follow a parametric distribution such as a normal distribution with mean zero. However, in the case of variable selection, substantial violation of the normality assumption can potentially impact the subset selection and result in poor interpretation and even incorrect results. In nonparametric random effects models, the random effects generally have a nonzero mean, which causes an identifiability problem for the fixed effects that are paired with the random effects. In this article, we focus on a Bayesian method for variable selection. We characterize the subject-specific random effects nonparametrically with a Dirichlet process and resolve the bias simultaneously. In particular, we propose flexible modeling of the conditional distribution of the random effects with changes across the predictor space. The approach is implemented using a stochastic search Gibbs sampler to identify subsets of fixed effects and random effects to be included in the model. Simulations are provided to evaluate and compare the performance of our approach to the existing ones. We then apply the new approach to a real data example, cross-country and interlaboratory rodent uterotrophic bioassay.
QOS Aware Formalized Model for Semantic Web Service Selection
Directory of Open Access Journals (Sweden)
Divya Sachan
2014-10-01
Selecting the most relevant Web Service for a client requirement is an onerous task, since innumerable functionally equivalent Web Services (WS) are listed in the UDDI registry. Although functionally the same, these services vary in quality and performance across service providers. A Web Service selection process involves two major points: recommending the pertinent Web Service and avoiding unjustifiable ones. The deficiency of keyword-based searching is that it does not handle the client request accurately, as a keyword may have ambiguous meanings in different scenarios. UDDI and search engines are all based on keyword search and therefore lag behind in pertinent Web Service selection, so the search mechanism must incorporate the semantic behavior of Web Services. To strengthen this approach, the proposed model incorporates Quality of Service (QoS) based ranking of semantic Web Services.
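The QoS-based ranking stage can be sketched as a weighted-sum score over normalized attributes. This is a minimal illustration, not the paper's formalized model: the attribute names (`latency`, `reliability`), the cost/benefit split, and the weights are all assumptions:

```python
def rank_services(services, weights):
    """Rank services by a weighted sum of min-max normalized QoS attributes.
    Cost attributes (lower is better) are inverted after normalization."""
    cost_attrs = {"latency", "price"}  # assumed cost-type attributes
    attrs = list(weights)
    lo = {a: min(s[a] for s in services.values()) for a in attrs}
    hi = {a: max(s[a] for s in services.values()) for a in attrs}

    def norm(a, v):
        if hi[a] == lo[a]:
            return 1.0
        x = (v - lo[a]) / (hi[a] - lo[a])
        return 1.0 - x if a in cost_attrs else x

    scores = {name: sum(weights[a] * norm(a, s[a]) for a in attrs)
              for name, s in services.items()}
    return sorted(scores, key=scores.get, reverse=True)

svcs = {"A": {"latency": 10, "reliability": 0.99},
        "B": {"latency": 50, "reliability": 0.90}}
print(rank_services(svcs, {"latency": 0.5, "reliability": 0.5}))  # ['A', 'B']
```

Service A dominates on both attributes here, so any positive weighting ranks it first; with conflicting attributes the weights encode the client's trade-off.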
Modelling autophagy selectivity by receptor clustering on peroxisomes
Brown, Aidan I
2016-01-01
When subcellular organelles are degraded by autophagy, typically some, but not all, of each targeted organelle type are degraded. Autophagy selectivity must not only select the correct type of organelle, but must discriminate between individual organelles of the same kind. In the context of peroxisomes, we use computational models to explore the hypothesis that physical clustering of autophagy receptor proteins on the surface of each organelle provides an appropriate all-or-none signal for degradation. The pexophagy receptor proteins NBR1 and p62 are well characterized, though only NBR1 is essential for pexophagy (Deosaran et al., 2013). Extending earlier work by addressing the initial nucleation of NBR1 clusters on individual peroxisomes, we find that larger peroxisomes nucleate NBR1 clusters first and lose them due to competitive coarsening last, resulting in significant size-selectivity favouring large peroxisomes. This effect can explain the increased catalase signal that results from experimental s...
Numerical Model based Reliability Estimation of Selective Laser Melting Process
DEFF Research Database (Denmark)
Mohanty, Sankhya; Hattel, Jesper Henri
2014-01-01
Selective laser melting is developing into a standard manufacturing technology with applications in various sectors. However, the process is still far from being on par with conventional processes such as welding and casting, the primary reason being the unreliability of the process. While...... of the selective laser melting process. A validated 3D finite-volume alternating-direction-implicit numerical technique is used to model the selective laser melting process, and is calibrated against results from single-track formation experiments. Correlation coefficients are determined for process input...... parameters such as laser power, speed, beam profile, etc. Subsequently, uncertainties in the processing parameters are utilized to predict a range for the various outputs, using a Monte Carlo method based uncertainty analysis methodology, and the reliability of the process is established....
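The Monte Carlo uncertainty-propagation step described above can be sketched generically: sample each input parameter within its tolerance band and record the spread of the model output. The surrogate melt model below (depth proportional to power over speed) is a toy placeholder, not the calibrated finite-volume model, and the tolerance bands are assumed:

```python
import random

def monte_carlo_range(model, nominal, tolerances, n=2000, seed=7):
    """Sample each parameter uniformly within +/- its relative tolerance and
    return the (min, max) range of the model output over n draws."""
    rng = random.Random(seed)
    outs = []
    for _ in range(n):
        params = {k: v * (1 + rng.uniform(-tolerances[k], tolerances[k]))
                  for k, v in nominal.items()}
        outs.append(model(params))
    return min(outs), max(outs)

def melt_depth(p):
    """Toy surrogate: melt depth scales with line energy (power / scan speed)."""
    return p["power"] / p["speed"]

lo, hi = monte_carlo_range(melt_depth,
                           {"power": 200.0, "speed": 1000.0},   # nominal W, mm/s
                           {"power": 0.05, "speed": 0.05})      # assumed 5% tolerances
print(lo, hi)  # an interval bracketing the nominal value 0.2
```

Comparing the predicted output range against a tolerance band on the part property is one simple way to turn such samples into a reliability statement.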
Directory of Open Access Journals (Sweden)
Henry de-Graft Acquah
2013-01-01
Information criteria provide an attractive basis for selecting the best model from a set of competing asymmetric price transmission models or theories. However, little is understood about the sensitivity of model selection methods to model complexity. This study therefore fits competing asymmetric price transmission models that differ in complexity to simulated data and evaluates the ability of the model selection methods to recover the true model. The results of Monte Carlo experimentation suggest that, in general, BIC, CAIC, and DIC were superior to AIC when the true data-generating process was the standard error correction model, whereas AIC was more successful when the true model was the complex error correction model. It is also shown that the model selection methods performed better in large samples for a complex asymmetric data-generating process than for a standard asymmetric data-generating process. Except for complex models, AIC's recovery rate did not improve substantially as sample size increased. These findings demonstrate the influence of model complexity on asymmetric price transmission model comparison and selection.
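The trade-off the study exploits follows directly from the definitions AIC = 2k − 2 ln L and BIC = k ln n − 2 ln L: BIC's penalty grows with the sample size, so it punishes extra parameters harder. The log-likelihoods and parameter counts below are hypothetical, chosen only so the two criteria disagree:

```python
import math

def aic(loglik, k):
    """Akaike information criterion: 2k - 2 ln L (lower is better)."""
    return 2 * k - 2 * loglik

def bic(loglik, k, n):
    """Bayesian information criterion: k ln n - 2 ln L (lower is better)."""
    return k * math.log(n) - 2 * loglik

# hypothetical fits: (maximized log-likelihood, number of parameters)
standard = (-520.0, 4)   # e.g. a standard error correction model
complex_ = (-514.0, 9)   # e.g. a complex ECM with extra asymmetric terms
n = 200                  # sample size

print(aic(*standard), aic(*complex_))                  # AIC prefers the complex fit here
print(bic(*standard, n=n), bic(*complex_, n=n))        # BIC prefers the standard fit
```

With these numbers the complex model gains 6 log-likelihood units for 5 extra parameters: enough to win under AIC's penalty of 2 per parameter, but not under BIC's penalty of ln 200 ≈ 5.3 per parameter.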
Exploratory Bayesian model selection for serial genetics data.
Zhao, Jing X; Foulkes, Andrea S; George, Edward I
2005-06-01
Characterizing the process by which molecular and cellular level changes occur over time will have broad implications for clinical decision making and help further our knowledge of disease etiology across many complex diseases. However, this presents an analytic challenge due to the large number of potentially relevant biomarkers and the complex, uncharacterized relationships among them. We propose an exploratory Bayesian model selection procedure that searches for model simplicity through independence testing of multiple discrete biomarkers measured over time. Bayes factor calculations are used to identify and compare models that are best supported by the data. For large model spaces, i.e., a large number of multi-leveled biomarkers, we propose a Markov chain Monte Carlo (MCMC) stochastic search algorithm for finding promising models. We apply our procedure to explore the extent to which HIV-1 genetic changes occur independently over time.
Stationary solutions for metapopulation Moran models with mutation and selection
Constable, George W. A.; McKane, Alan J.
2015-03-01
We construct an individual-based metapopulation model of population genetics featuring migration, mutation, selection, and genetic drift. In the case of a single "island," the model reduces to the Moran model. Using the diffusion approximation and time-scale separation arguments, an effective one-variable description of the model is developed. The effective description bears similarities to the well-mixed Moran model with effective parameters that depend on the network structure and island sizes, and it is amenable to analysis. Predictions from the reduced theory match the results from stochastic simulations across a range of parameters. The nature of the fast-variable elimination technique we adopt is further studied by applying it to a linear system, where it provides a precise description of the slow dynamics in the limit of large time-scale separation.
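A single-island, well-mixed Moran model with selection — the case to which the metapopulation model above reduces — can be simulated in a few lines. Mutation and migration are omitted for brevity, and the population size, initial frequency, and selection coefficient are arbitrary illustrative choices:

```python
import random

def moran_step(pop, fitness, rng):
    """One Moran event: fitness-weighted choice of a parent to reproduce,
    uniform choice of an individual to die (population size stays fixed)."""
    birth = rng.choices(range(len(pop)), weights=[fitness[a] for a in pop])[0]
    death = rng.randrange(len(pop))
    pop[death] = pop[birth]

def fixation(n=50, k=25, s=0.1, seed=3):
    """Run until allele 1 (fitness 1+s) fixes or is lost; return its final frequency."""
    rng = random.Random(seed)
    pop = [1] * k + [0] * (n - k)
    fitness = {0: 1.0, 1: 1.0 + s}
    while 0 < sum(pop) < n:
        moran_step(pop, fitness, rng)
    return sum(pop) / n

print(fixation())  # either 0.0 or 1.0: genetic drift always absorbs a finite population
```

Averaging `fixation` over many seeds estimates the fixation probability, which selection tilts above the neutral value k/n.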
Predicting artificially drained areas by means of selective model ensemble
DEFF Research Database (Denmark)
Møller, Anders Bjørn; Beucher, Amélie; Iversen, Bo Vangsø
...... out since the mid-19th century, and it has been estimated that half of the cultivated area is artificially drained (Olesen, 2009). A number of machine learning approaches can be used to predict artificially drained areas in geographic space. However, instead of choosing the most accurate model...... The study aims firstly to train a large number of models to predict the extent of artificially drained areas using various machine learning approaches. Secondly, the study will develop a method for selecting the models which give a good prediction of artificially drained areas when used in conjunction...... The approaches employed include decision trees, discriminant analysis, regression models, neural networks and support vector machines amongst others. Several models are trained with each method, using variously the original soil covariates and principal components of the covariates. With a large ensemble......
Model Selection Framework for Graph-based data
Caceres, Rajmonda S; Schmidt, Matthew C; Miller, Benjamin A; Campbell, William M
2016-01-01
Graphs are powerful abstractions for capturing complex relationships in diverse application settings. An active area of research focuses on theoretical models that define the generative mechanism of a graph. Yet given the complexity and inherent noise in real datasets, it is still very challenging to identify the best model for a given observed graph. We discuss a framework for graph model selection that leverages a long list of graph topological properties and a random forest classifier to learn and classify different graph instances. We fully characterize the discriminative power of our approach as we sweep through the parameter space of two generative models, the Erdos-Renyi and the stochastic block model. We show that our approach gets very close to known theoretical bounds and we provide insight on which topological features play a critical discriminating role.
Feature selection and survival modeling in The Cancer Genome Atlas
Directory of Open Access Journals (Sweden)
Kim H
2013-09-01
Hyunsoo Kim (Department of Pathology, The University of Alabama at Birmingham, Birmingham, AL, USA) and Markus Bredel (Department of Radiation Oncology, and Comprehensive Cancer Center, The University of Alabama at Birmingham, Birmingham, AL, USA). Purpose: Personalized medicine is predicated on the concept of identifying subgroups of a common disease for better treatment. Identifying biomarkers that predict disease subtypes has been a major focus of biomedical science. In the era of genome-wide profiling, there is controversy as to the optimal number of genes to use as input to a feature selection algorithm for survival modeling. Patients and methods: The expression profiles and outcomes of 544 patients were retrieved from The Cancer Genome Atlas. We compared four different survival prediction methods: (1) the 1-nearest neighbor (1-NN) survival prediction method; (2) a random patient selection method and a Cox-based regression method with nested cross-validation; (3) least absolute shrinkage and selection operator (LASSO) optimization using whole-genome gene expression profiles; or (4) gene expression profiles of cancer pathway genes. Results: The 1-NN method performed better than the random patient selection method in terms of survival prediction, although it does not include a feature selection step. The Cox-based regression method with LASSO optimization using whole-genome gene expression data demonstrated higher survival prediction power than the 1-NN method, but was outperformed by the same method using gene expression profiles of cancer pathway genes alone. Conclusion: The 1-NN survival prediction method may require more patients for better performance, even when omitting censored data. Using preexisting biological knowledge for survival prediction is reasonable as a means to understand the biological system of a cancer, unless the analysis goal is to identify completely unknown genes relevant to cancer biology. Keywords: brain, feature selection
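The 1-NN survival baseline from method (1) can be sketched as follows. Euclidean distance over expression profiles is an assumption, and the toy profiles and survival times are illustrative, not TCGA data; censoring, which the abstract notes is omitted by the method, is likewise ignored:

```python
def one_nn_survival(train_profiles, train_times, query):
    """Predict a patient's survival time as that of the nearest training
    patient in expression space (squared Euclidean distance)."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    best = min(range(len(train_profiles)),
               key=lambda i: dist2(train_profiles[i], query))
    return train_times[best]

# toy 2-gene expression profiles and observed survival times (months)
profiles = [(0.0, 0.0), (1.0, 1.0)]
times = [12.0, 30.0]
print(one_nn_survival(profiles, times, (0.1, -0.1)))  # 12.0: nearest to the first patient
```

Because the prediction simply copies a neighbor's outcome, accuracy hinges on having enough patients to populate expression space densely, consistent with the abstract's conclusion.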
Information criterion based fast PCA adaptive algorithm
Institute of Scientific and Technical Information of China (English)
Li Jiawen; Li Congxin
2007-01-01
The novel information criterion (NIC) algorithm can find the principal subspace quickly, but it is not an actual principal component analysis (PCA) algorithm and hence cannot find the orthonormal eigenspace corresponding to the principal components of the input vector. This defect limits its application in practice. By weighting the neural network's output of NIC, a modified novel information criterion (MNIC) algorithm is presented. MNIC extracts the principal components and corresponding eigenvectors in a parallel online learning program, and overcomes the NIC's defect. It is proved to have a single global optimum and a nonquadratic convergence rate, which is superior to conventional online PCA algorithms such as Oja and LMSER. The relationship among Oja, LMSER and MNIC is exhibited. Simulations show that MNIC converges to the optimum quickly. The validity of MNIC is proved.
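Oja's rule, which the abstract names as a conventional online PCA baseline, updates a weight vector sample-by-sample so that it converges to a unit-norm principal eigenvector of the input correlation matrix. The learning rate, epoch count, and toy data below are illustrative choices:

```python
import math
import random

def oja(samples, lr=0.01, epochs=200, seed=0):
    """Oja's online rule: for each input x, y = w.x and w += lr * y * (x - y*w).
    The -y^2*w term keeps ||w|| near 1 while w rotates toward the top eigenvector."""
    rng = random.Random(seed)
    dim = len(samples[0])
    w = [rng.uniform(-1, 1) for _ in range(dim)]
    for _ in range(epochs):
        for x in samples:
            y = sum(wi * xi for wi, xi in zip(w, x))
            w = [wi + lr * y * (xi - y * wi) for wi, xi in zip(w, x)]
    return w

# toy data: dominant variance along the (1, 1) direction plus small noise
data = [(t, t + 0.1 * ((i % 3) - 1))
        for i, t in enumerate([-2, -1, 0, 1, 2] * 10)]
w = oja(data)
print(w)  # roughly +/-(0.71, 0.71), the unit principal direction
```

Note that plain Oja extracts only the first component; the abstract's point is that multi-component extensions (LMSER, NIC, MNIC) differ in whether they recover the true orthonormal eigenvectors or merely the subspace.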
Sampling Criterion for EMC Near Field Measurements
DEFF Research Database (Denmark)
Franek, Ondrej; Sørensen, Morten; Ebert, Hans;
2012-01-01
An alternative, quasi-empirical sampling criterion for EMC near field measurements intended for close coupling investigations is proposed. The criterion is based on maximum error caused by sub-optimal sampling of near fields in the vicinity of an elementary dipole, which is suggested as a worst-case representative of a signal trace on a typical printed circuit board. It has been found that the sampling density derived in this way is in fact very similar to that given by the antenna near field sampling theorem, if an error less than 1 dB is required. The principal advantage of the proposed formulation is its parametrization with respect to the desired maximum error in measurements. This allows the engineer performing the near field scan to choose a suitable compromise between accuracy and measurement time.
Directory of Open Access Journals (Sweden)
Majid Mousanejad
2016-03-01
In this study, a feasibility study and implementation of electronic commerce for export businesses at the Mehran border was carried out, with the goal of implementing one electronic commerce model and migrating from a traditional to an electronic manner of doing business. To this end, questionnaires were distributed to and collected from export business owners at the Mehran border. From the information obtained, three main indices for each criterion were identified and reviewed by an expert group for use in a multi-criterion decision making method. Two multi-criterion decision making methods, AHP and TOPSIS, were applied; both were first simulated in the MATLAB programming environment, after which the decision making process was carried out. Following the feasibility study and implementation of electronic commerce at the Mehran border, the business-to-business method was selected as the best model for implementation. A program titled "Mehran border export terminal", based on the Android system and capable of online updating and order recording, was written experimentally for four export businesses, and was then evaluated through assessment forms distributed among the research stakeholders; the results indicate a positive outcome of the process.
Optimization of laminates subjected to failure criterion
Directory of Open Access Journals (Sweden)
E. Kormaníková
2011-01-01
The paper addresses laminate optimization subject to the maximum strain criterion. The optimization problem is based on the use of continuous design variables. The thicknesses of layers with known orientations are used as design variables. The optimization problem with strain constraints is formulated to minimize the laminate weight. The final thickness design is rounded off to integer multiples of the commercially available layer thickness.
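The final step above — rounding a continuous optimal thickness up to a whole number of commercially available plies so the strain constraint stays satisfied — can be sketched as follows. The one-dimensional strain model, the loads, and the 0.125 mm ply thickness are illustrative assumptions, not the paper's laminate formulation:

```python
import math

def min_thickness(load, modulus, strain_limit):
    """Continuous minimum thickness so that strain = load / (E * t) <= limit
    (toy 1-D membrane model; load per unit width, consistent units)."""
    return load / (modulus * strain_limit)

def round_up_plies(required_thickness, ply=0.125):
    """Round a continuous optimal thickness UP to whole commercial plies,
    so the strain constraint remains satisfied after rounding."""
    return math.ceil(required_thickness / ply) * ply

t_star = min_thickness(load=1000.0, modulus=70e3, strain_limit=0.004)  # ~3.571 mm
print(round_up_plies(t_star))  # 3.625 mm, i.e. 29 plies of 0.125 mm
```

Rounding up rather than to the nearest multiple trades a small weight penalty for guaranteed feasibility of the strain constraint.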
Ensemble feature selection integrating elitist roles and quantum game model
Institute of Scientific and Technical Information of China (English)
Weiping Ding; Jiandong Wang; Zhijin Guan; Quan Shi
2015-01-01
To accelerate the selection process of feature subsets in rough set theory (RST), an ensemble elitist roles based quantum game (EERQG) algorithm is proposed for feature selection. Firstly, the multilevel elitist roles based dynamics equilibrium strategy is established, and both immigration and emigration of elitists are able to self-adapt to balance exploration against exploitation in feature selection. Secondly, the utility matrix of trust margins is introduced into the model of multilevel elitist roles to enhance the various elitist roles' performance in searching for optimal feature subsets, and win-win utility solutions for feature selection can be attained. Meanwhile, a novel ensemble quantum game strategy is designed as an intriguing exhibiting structure to perfect the dynamics equilibrium of multilevel elitist roles. Finally, the ensemble manner of multilevel elitist roles is employed to achieve the global minimal feature subset, which will greatly improve feasibility and effectiveness. Experiment results show the proposed EERQG algorithm has superiority compared to the existing feature selection algorithms.
Transitions in a genotype selection model driven by coloured noises
Institute of Scientific and Technical Information of China (English)
Wang Can-Jun; Mei Dong-Cheng
2008-01-01
This paper investigates a genotype selection model subjected to both a multiplicative coloured noise and an additive coloured noise with different correlation times T1 and T2 by means of numerical techniques. By directly simulating the Langevin equation, the following results are obtained. (1) The multiplicative coloured noise dominates; however, the effect of the additive coloured noise cannot be neglected in a practical gene selection process. The selection rate μ decides whether the selection favours gene A haploids or gene B haploids. (2) The additive coloured noise intensity α and the correlation time T2 play opposite roles. It is noted that α and T2 cannot separate the single peak, while α can make the peak disappear and T2 can make the peak sharp. (3) The multiplicative coloured noise intensity D and the correlation time T1 can induce a phase transition; at the same time they play opposite roles, and a reentrance phenomenon appears. In this case, it is easy to select one type of haploid from the group by increasing D and decreasing T1.
Forecasting house prices in the 50 states using Dynamic Model Averaging and Dynamic Model Selection
DEFF Research Database (Denmark)
Bork, Lasse; Møller, Stig Vinther
2015-01-01
We examine house price forecastability across the 50 states using Dynamic Model Averaging and Dynamic Model Selection, which allow for model change and parameter shifts. By allowing the entire forecasting model to change over time and across locations, the forecasting accuracy improves...
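One step of Dynamic Model Averaging can be sketched as a forgetting-factor prediction of the model probabilities followed by a Bayesian update against each model's predictive likelihood. The forgetting factor and the per-model log predictive likelihoods below are illustrative values, not estimates from the housing data:

```python
import math

def dma_update(weights, logliks, alpha=0.99):
    """One DMA recursion: raise current model probabilities to the power alpha
    (forgetting, which lets weights adapt over time), renormalize, then apply
    a Bayesian update with each model's one-step predictive log-likelihood."""
    pred = [w ** alpha for w in weights]
    s = sum(pred)
    pred = [p / s for p in pred]
    post = [p * math.exp(ll) for p, ll in zip(pred, logliks)]
    z = sum(post)
    return [p / z for p in post]

# two models start equally weighted; model 0 forecasts consistently better
w = [0.5, 0.5]
for _ in range(5):
    w = dma_update(w, [-1.0, -2.0])
print(w)  # weight concentrates on model 0
```

Dynamic Model Selection simply picks the highest-weight model at each date instead of averaging forecasts with these weights; with alpha = 1 and a fixed model set, the recursion reduces to standard Bayesian model averaging.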
Selection between Linear Factor Models and Latent Profile Models Using Conditional Covariances
Halpin, Peter F.; Maraun, Michael D.
2010-01-01
A method for selecting between K-dimensional linear factor models and (K + 1)-class latent profile models is proposed. In particular, it is shown that the conditional covariances of observed variables are constant under factor models but nonlinear functions of the conditioning variable under latent profile models. The performance of a convenient…
Age Effects in Adaptive Criterion Learning.
Cassidy, Brittany S; Gutchess, Angela H
2016-11-01
Although prior work has examined age-related changes to criterion placement and flexibility, no study tested these constructs through a paradigm that employs adaptive feedback to encourage specific criterion changes. The goal of this study was to assess age differences in how young and older adults adapt and shift criteria in recognition memory decisions based on trial-by-trial feedback. Young and older adults completed an adaptive criterion learning paradigm. Over 3 study/test cycles, a biased feedback technique at test encouraged more liberal or strict responding by false-positive feedback toward false alarms or misses. Older adults were more conservative than young, even when feedback first encouraged a liberal response bias, and older adults adaptively placed criteria in response to biased feedback, much like young adults. After first being encouraged to respond conservatively, older adults shifted criteria less than young when feedback encouraged more lenient responding. These findings evidence labile adaptive criteria placement and criteria shifting with age. However, age-related tendencies toward conservative response biases may limit the extent to which criteria can be shifted in a lenient direction.
Social influences on adaptive criterion learning.
Cassidy, Brittany S; Dubé, Chad; Gutchess, Angela H
2015-07-01
People adaptively shift decision criteria when given biased feedback encouraging specific types of errors. Given that work on this topic has been conducted in nonsocial contexts, we extended the literature by examining adaptive criterion learning in both social and nonsocial contexts. Specifically, we compared potential differences in criterion shifting given performance feedback from social sources varying in reliability and from a nonsocial source. Participants became lax when given false positive feedback for false alarms, and became conservative when given false positive feedback for misses, replicating prior work. In terms of a social influence on adaptive criterion learning, people became more lax in response style over time if feedback was provided by a nonsocial source or by a social source meant to be perceived as unreliable and low-achieving. In contrast, people adopted a more conservative response style over time if performance feedback came from a high-achieving and reliable source. Awareness that a reliable and high-achieving person had not provided their feedback reduced the tendency to become more conservative, relative to those unaware of the source manipulation. Because teaching and learning often occur in a social context, these findings may have important implications for many scenarios in which people fine-tune their behaviors, given cues from others.
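The trial-by-trial criterion shifting studied in the two abstracts above can be caricatured in a few lines of signal detection terms. This simplified sketch reinforces every "old" (or "new") response rather than only false alarms (or misses) as the biased-feedback technique does, and the stimulus strengths and step size are arbitrary:

```python
def run_session(stimuli, criterion=0.0, step=0.1, bias="liberal"):
    """Nudge a recognition decision criterion trial by trial.
    Under 'liberal' feedback, every 'old' response (strength > criterion) is
    reinforced, pushing the criterion down; under 'strict' feedback, every
    'new' response is reinforced, pushing it up. Returns the final criterion."""
    for strength in stimuli:
        said_old = strength > criterion
        if bias == "liberal" and said_old:
            criterion -= step
        elif bias == "strict" and not said_old:
            criterion += step
    return criterion

stim = [0.5, -0.5] * 20  # alternating strong/weak memory-strength signals
print(run_session(stim, bias="liberal"), run_session(stim, bias="strict"))
```

The liberal regime drives the criterion negative (more "old" responses) and the strict regime drives it positive, mirroring the lax and conservative response styles the feedback manipulations induce.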
On the hodological criterion for homology.
Faunes, Macarena; Francisco Botelho, João; Ahumada Galleguillos, Patricio; Mpodozis, Jorge
2015-01-01
Owen's pre-evolutionary definition of a homolog as "the same organ in different animals under every variety of form and function" and its redefinition after Darwin as "the same trait in different lineages due to common ancestry" entail the same heuristic problem: how to establish "sameness." Although different criteria for homology often conflict, there is currently a generalized acceptance of gene expression as the best criterion. This gene-centered view of homology results from a reductionist and preformationist concept of living beings. Here, we adopt an alternative organismic-epigenetic viewpoint, and conceive living beings as systems whose identity is given by the dynamic interactions between their components at their multiple levels of composition. We posit that there cannot be an absolute homology criterion, and instead, homology should be inferred from comparisons at the levels and developmental stages where the delimitation of the compared trait lies. In this line, we argue that neural connectivity, i.e., the hodological criterion, should prevail in the determination of homologies between brain supra-cellular structures, such as the vertebrate pallium.
On the hodological criterion for homology
Faunes, Macarena; Francisco Botelho, João; Ahumada Galleguillos, Patricio; Mpodozis, Jorge
2015-01-01
Owen's pre-evolutionary definition of a homolog as “the same organ in different animals under every variety of form and function” and its redefinition after Darwin as “the same trait in different lineages due to common ancestry” entail the same heuristic problem: how to establish “sameness.”Although different criteria for homology often conflict, there is currently a generalized acceptance of gene expression as the best criterion. This gene-centered view of homology results from a reductionist and preformationist concept of living beings. Here, we adopt an alternative organismic-epigenetic viewpoint, and conceive living beings as systems whose identity is given by the dynamic interactions between their components at their multiple levels of composition. We posit that there cannot be an absolute homology criterion, and instead, homology should be inferred from comparisons at the levels and developmental stages where the delimitation of the compared trait lies. In this line, we argue that neural connectivity, i.e., the hodological criterion, should prevail in the determination of homologies between brain supra-cellular structures, such as the vertebrate pallium. PMID:26157357
Modeling selective attention using a neuromorphic analog VLSI device.
Indiveri, G
2000-12-01
Attentional mechanisms are required to overcome the problem of flooding a limited processing capacity system with information. They are present in biological sensory systems and can be a useful engineering tool for artificial visual systems. In this article we present a hardware model of a selective attention mechanism implemented on a very large-scale integration (VLSI) chip, using analog neuromorphic circuits. The chip exploits a spike-based representation to receive, process, and transmit signals. It can be used as a transceiver module for building multichip neuromorphic vision systems. We describe the circuits that carry out the main processing stages of the selective attention mechanism and provide experimental data for each circuit. We demonstrate the expected behavior of the model at the system level by stimulating the chip with both artificially generated control signals and signals obtained from a saliency map, computed from an image containing several salient features.
Model Order Selection Rules for Covariance Structure Classification in Radar
Carotenuto, Vincenzo; De Maio, Antonio; Orlando, Danilo; Stoica, Petre
2017-10-01
The adaptive classification of the interference covariance matrix structure for radar signal processing applications is addressed in this paper. This represents a key issue because many detection architectures are synthesized assuming a specific covariance structure which may not necessarily coincide with the actual one due to the joint action of the system and environment uncertainties. The considered classification problem is cast in terms of a multiple hypotheses test with some nested alternatives and the theory of Model Order Selection (MOS) is exploited to devise suitable decision rules. Several MOS techniques, such as the Akaike, Takeuchi, and Bayesian information criteria are adopted and the corresponding merits and drawbacks are discussed. At the analysis stage, illustrating examples for the probability of correct model selection are presented showing the effectiveness of the proposed rules.
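The information-criterion rules mentioned above all trade goodness of fit against parameter count. A minimal sketch of AIC/BIC order selection, with made-up log-likelihoods and parameter counts standing in for fitted covariance-structure models:

```python
import numpy as np

def aic(loglik, k):
    # Akaike information criterion: penalize each free parameter by 2.
    return 2 * k - 2 * loglik

def bic(loglik, k, n):
    # Bayesian information criterion: penalty grows with sample size n.
    return k * np.log(n) - 2 * loglik

# Hypothetical fits of three nested covariance models (loglik, #parameters).
models = {"white": (-250.0, 1), "structured": (-240.0, 3), "unstructured": (-238.5, 10)}
n = 100  # number of training snapshots (illustrative)

best_aic = min(models, key=lambda m: aic(*models[m]))
best_bic = min(models, key=lambda m: bic(models[m][0], models[m][1], n))
print(best_aic, best_bic)
```

With these toy numbers both criteria reject the fully unstructured model: its small likelihood gain does not pay for seven extra parameters.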
Autoregressive model selection with simultaneous sparse coefficient estimation
Sang, Hailin
2011-01-01
In this paper we propose a sparse coefficient estimation procedure for autoregressive (AR) models based on penalized conditional maximum likelihood. The penalized conditional maximum likelihood estimator (PCMLE) thus developed has the advantage of performing simultaneous coefficient estimation and model selection. Mild conditions are given on the penalty function and the innovation process, under which the PCMLE satisfies strong consistency, local $N^{-1/2}$ consistency, and the oracle property, respectively, where N is the sample size. Two penalty functions, the least absolute shrinkage and selection operator (LASSO) and the smoothly clipped absolute deviation (SCAD), are considered as examples, and SCAD is shown to perform better than LASSO. A simulation study confirms our theoretical results. Finally, we provide an application of our method to historical price data of the US Industrial Production Index for consumer goods, and the results are very promising.
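The LASSO branch of this idea can be illustrated with a proximal-gradient (ISTA) solver for a penalized conditional least-squares AR fit. This is a simplified stand-in for the paper's PCMLE: the data, penalty weight, and step size below are illustrative choices, not the authors' settings.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate a sparse AR(5) process: only lags 1 and 3 are active.
true_phi = np.array([0.5, 0.0, -0.3, 0.0, 0.0])
N, p = 2000, 5
x = np.zeros(N)
for t in range(p, N):
    x[t] = true_phi @ x[t - p:t][::-1] + rng.standard_normal()

# Conditional least-squares design: column k holds the lag-k values.
X = np.column_stack([x[p - k:N - k] for k in range(1, p + 1)])
y = x[p:]

# ISTA: gradient step on the squared loss, then soft-threshold (LASSO prox).
lam = 40.0
step = 1.0 / np.linalg.norm(X, 2) ** 2   # 1 / Lipschitz constant of the gradient
phi = np.zeros(p)
for _ in range(500):
    phi -= step * X.T @ (X @ phi - y)
    phi = np.sign(phi) * np.maximum(np.abs(phi) - step * lam, 0.0)

print(np.round(phi, 2))  # lags 2, 4, 5 should shrink to (near) zero
```

The inactive lags are driven toward zero by the soft-threshold step, which is the mechanism behind the simultaneous estimation and selection described in the abstract; SCAD replaces the soft-threshold with a nonconvex proximal map that reduces the shrinkage bias on large coefficients.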
Parameter estimation and model selection in computational biology.
Directory of Open Access Journals (Sweden)
Gabriele Lillacci
2010-03-01
A central challenge in computational modeling of biological systems is the determination of the model parameters. Typically, only a fraction of the parameters (such as kinetic rate constants) are experimentally measured, while the rest are often fitted. The fitting process is usually based on experimental time-course measurements of observables, which are used to assign parameter values that minimize some measure of the error between these measurements and the corresponding model prediction. The measurements, which can come from immunoblotting assays, fluorescent markers, etc., tend to be very noisy and taken at a limited number of time points. In this work we present a new approach to the problem of parameter selection for biological models. We show how one can use a dynamic recursive estimator, known as the extended Kalman filter, to arrive at estimates of the model parameters. The proposed method proceeds as follows. First, we use a variation of the Kalman filter that is particularly well suited to biological applications to obtain a first guess for the unknown parameters. Second, we employ an a posteriori identifiability test to check the reliability of the estimates. Finally, we solve an optimization problem to refine the first guess in case it is not accurate enough. The final estimates are guaranteed to be statistically consistent with the measurements. Furthermore, we show how the same tools can be used to discriminate among alternative models of the same biological process. We demonstrate these ideas by applying our methods to two examples, namely a model of the heat shock response in E. coli and a model of a synthetic gene regulation system. The methods presented are quite general and may be applied to a wide class of biological systems where noisy measurements are used for parameter estimation or model selection.
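A joint state-parameter extended Kalman filter of the kind described can be sketched on a toy system. The logistic-growth dynamics, noise levels, and tuning matrices below are illustrative assumptions, not the authors' setup: the unknown rate k is appended to the state and estimated alongside it.

```python
import numpy as np

rng = np.random.default_rng(1)
dt, k_true = 0.1, 0.8

# Simulate noisy measurements of logistic growth x' = k x (1 - x).
x, ys = 0.05, []
for _ in range(200):
    x = x + dt * k_true * x * (1 - x)
    ys.append(x + 0.01 * rng.standard_normal())

# Joint EKF: state z = [x, k], with k treated as a slowly varying state.
z = np.array([0.05, 0.3])      # initial guess: x0 roughly known, k unknown
P = np.diag([1e-4, 1.0])       # large uncertainty on k
Q = np.diag([1e-7, 1e-7])      # small process noise keeps k adaptable
R = 1e-4                       # measurement noise variance
H = np.array([[1.0, 0.0]])     # we observe x only
for y in ys:
    xh, kh = z
    F = np.array([[1 + dt * kh * (1 - 2 * xh), dt * xh * (1 - xh)],
                  [0.0, 1.0]])                       # dynamics Jacobian
    z = np.array([xh + dt * kh * xh * (1 - xh), kh])  # predict
    P = F @ P @ F.T + Q
    S = H @ P @ H.T + R                               # innovation variance
    K = P @ H.T / S                                   # Kalman gain
    z = z + (K * (y - z[0])).ravel()                  # update
    P = (np.eye(2) - K @ H) @ P

print(round(z[1], 2))  # estimated k, should approach 0.8
```

Note the identifiability point raised in the abstract: once x saturates near 1, the Jacobian entry coupling k to the observation vanishes, so virtually all the information about k comes from the growth transient.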
Structure and selection in an autocatalytic binary polymer model
DEFF Research Database (Denmark)
Tanaka, Shinpei; Fellermann, Harold; Rasmussen, Steen
2014-01-01
An autocatalytic binary polymer system is studied as an abstract model for a chemical reaction network capable of evolving. Due to autocatalysis, long polymers appear spontaneously and their concentration is shown to be maintained at the same level as that of monomers. When the reaction starts from […]. Stability, fluctuations, and dynamic selection mechanisms are investigated for the involved self-organizing processes. Copyright (C) EPLA, 2014
Velocity selection in the symmetric model of dendritic crystal growth
Barbieri, Angelo; Hong, Daniel C.; Langer, J. S.
1987-01-01
An analytic solution of the problem of velocity selection in a fully nonlocal model of dendritic crystal growth is presented. The analysis uses a WKB technique to derive and evaluate a solvability condition for the existence of steady-state needle-like solidification fronts in the limit of small undercooling Δ. For the two-dimensional symmetric model with a capillary anisotropy of strength α, it is found that the velocity is proportional to Δ^4 α^(7/4). The application of the method in three dimensions is also described.
Small populations corrections for selection-mutation models
Jabin, Pierre-Emmanuel
2012-01-01
We consider integro-differential models describing the evolution of a population structured by a quantitative trait. Individuals interact competitively, creating a strong selection pressure on the population. On the other hand, mutations are assumed to be small. Following the formalism of Diekmann, Jabin, Mischler, and Perthame, this creates concentration phenomena, typically consisting of a sum of Dirac masses slowly evolving in time. We propose a modification to those classical models that takes the effect of small populations into account and corrects some abnormal behaviours.
Process chain modeling and selection in an additive manufacturing context
DEFF Research Database (Denmark)
Thompson, Mary Kathryn; Stolfi, Alessandro; Mischkot, Michael
2016-01-01
This paper introduces a new two-dimensional approach to modeling manufacturing process chains. This approach is used to consider the role of additive manufacturing technologies in process chains for a part with micro-scale features and no internal geometry. It is shown that additive manufacturing […] can compete with traditional process chains for small production runs. Combining both types of technology added cost but no benefit in this case. The new process chain model can be used to explain the results and support process selection, but process chain prototyping is still important for rapidly […]
Selecting, weeding, and weighting biased climate model ensembles
Jackson, C. S.; Picton, J.; Huerta, G.; Nosedal Sanchez, A.
2012-12-01
In the Bayesian formulation, the "log-likelihood" is a test statistic for selecting, weeding, or weighting climate model ensembles with observational data. This statistic has the potential to synthesize the physical and data constraints on quantities of interest. One of the thorny issues in formulating the log-likelihood is how one should account for biases. While in the past we have included a generic discrepancy term, not all biases affect predictions of quantities of interest. We make use of a 165-member ensemble of CAM3.1/slab-ocean climate models with different parameter settings to think through the issues involved in predicting each model's sensitivity to greenhouse gas forcing given what can be observed from the base state. In particular we use multivariate empirical orthogonal functions to decompose the differences that exist among this ensemble to discover which fields and regions matter to the model's sensitivity. We find that the differences that matter are a small fraction of the total discrepancy. Moreover, weighting members of the ensemble using this knowledge does a relatively poor job of adjusting the ensemble mean toward the known answer. This points out the shortcomings of using weights to correct for biases in climate model ensembles created by a selection process that does not emphasize the priorities of the log-likelihood.
Bayesian Model Selection with Network Based Diffusion Analysis.
Whalen, Andrew; Hoppitt, William J E
2016-01-01
A number of recent studies have used Network Based Diffusion Analysis (NBDA) to detect the role of social transmission in the spread of a novel behavior through a population. In this paper we present a unified framework for performing NBDA in a Bayesian setting, and demonstrate how the Watanabe-Akaike Information Criterion (WAIC) can be used for model selection. We present a specific example of applying this method to Time to Acquisition Diffusion Analysis (TADA). To examine the robustness of this technique, we performed a large-scale simulation study and found that NBDA using WAIC could recover the correct model of social transmission under a wide range of cases, including in the presence of random effects, individual-level variables, and alternative models of social transmission. This work suggests that NBDA is an effective and widely applicable tool for uncovering whether social transmission underpins the spread of a novel behavior, and may still provide accurate results even when key model assumptions are relaxed.
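WAIC can be computed directly from a matrix of pointwise log-likelihoods over posterior draws. The sketch below uses a toy normal-mean model with synthetic data and draws; it is not the NBDA likelihood, just the generic WAIC recipe:

```python
import numpy as np

def waic(loglik):
    """WAIC from an (S posterior draws) x (n observations) log-likelihood matrix."""
    lppd = np.sum(np.log(np.mean(np.exp(loglik), axis=0)))  # log pointwise predictive density
    p_waic = np.sum(np.var(loglik, axis=0, ddof=1))         # effective number of parameters
    return -2 * (lppd - p_waic)                             # lower is better

rng = np.random.default_rng(0)
data = rng.normal(0.0, 1.0, size=50)

def pointwise_ll(mu_draws):
    # Normal(mu, 1) log-density of each observation under each posterior draw.
    return -0.5 * np.log(2 * np.pi) - 0.5 * (data[None, :] - mu_draws[:, None]) ** 2

good = rng.normal(data.mean(), 1 / np.sqrt(len(data)), size=1000)  # posterior for the true model
shifted = good + 2.0                                               # a deliberately misspecified rival
print(waic(pointwise_ll(good)) < waic(pointwise_ll(shifted)))      # the better model wins
```

Because WAIC is assembled from pointwise quantities, it applies to any model for which per-observation log-likelihoods can be evaluated at each posterior draw, which is what makes it convenient for Bayesian NBDA-style comparisons.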
Selection of productivity improvement techniques via mathematical modeling
Directory of Open Access Journals (Sweden)
Mahassan M. Khater
2011-07-01
This paper presents a new mathematical model to select an optimal combination of productivity improvement techniques. The proposed model considers a four-stage productivity cycle, and productivity is assumed to be a linear function of fifty-four improvement techniques. The proposed model is implemented for a real-world case study of a manufacturing plant. The resulting problem is formulated as a mixed integer program, which can be solved to optimality using traditional methods. The preliminary results of the implementation indicate that productivity can be improved through changes in equipment, and the model can be easily applied in both manufacturing and service industries.
Selection of key terrain attributes for SOC model
DEFF Research Database (Denmark)
Greve, Mogens Humlekrog; Adhikari, Kabindra; Chellasamy, Menaka
As an important component of the global carbon pool, soil organic carbon (SOC) plays an important role in the global carbon cycle. The SOC pool is basic information for carrying out global-warming research and for the sustainable use of land resources. Digital terrain attributes are often used […] was selected; in total, 2,514,820 data-mining models were constructed across 71 different grids from 12 m to 2304 m and 22 attributes (21 attributes derived from the DTM plus the original elevation). The relative importance and usage of each attribute in every model were calculated. Comprehensive impact rates of each attribute […] (standh) are the first three key terrain attributes in the 5-attribute model at all resolutions; the remaining 2 of the 5 attributes are Normal High (NormalH) and Valley Depth (Vall_depth) at resolutions finer than 40 m, and Elevation and Channel Base (Chnl_base) at resolutions coarser than 40 m. The models at pixel sizes of 88 m […]
Unifying models for X-ray selected and Radio selected BL Lac Objects
Fossati, G; Ghisellini, G; Maraschi, L; Brera-Merate, O A
1997-01-01
We discuss alternative interpretations of the differences in the Spectral Energy Distributions (SEDs) of BL Lacs found in complete radio or X-ray surveys. A large body of observations in different bands suggests that the SEDs of BL Lac objects appearing in X-ray surveys differ from those appearing in radio surveys mainly in having a (synchrotron) spectral cut-off (or break) at much higher frequency. In order to explain the different properties of radio- and X-ray-selected BL Lacs, Giommi and Padovani proposed a model based on a common radio luminosity function. At each radio luminosity, objects with high-frequency spectral cut-offs are assumed to be a minority. Nevertheless they dominate the X-ray-selected population due to their larger X-ray-to-radio flux ratio. An alternative model explored here (reminiscent of the orientation models previously proposed) is that the X-ray luminosity function is "primary" and that at each X-ray luminosity a minority of objects has a larger radio-to-X-ray flux ratio. The prediction...
The Impact of Varied Discrimination Parameters on Mixed-Format Item Response Theory Model Selection
Whittaker, Tiffany A.; Chang, Wanchen; Dodd, Barbara G.
2013-01-01
Whittaker, Chang, and Dodd compared the performance of model selection criteria when selecting among mixed-format IRT models and found that the criteria did not perform adequately when selecting the more parameterized models. It was suggested by M. S. Johnson that the problems when selecting the more parameterized models may be because of the low…
The Hierarchical Sparse Selection Model of Visual Crowding
Directory of Open Access Journals (Sweden)
Wesley Chaney
2014-09-01
Because the environment is cluttered, objects rarely appear in isolation. The visual system must therefore attentionally select behaviorally relevant objects from among many irrelevant ones. A limit on our ability to select individual objects is revealed by the phenomenon of visual crowding: an object seen in the periphery, easily recognized in isolation, can become impossible to identify when surrounded by other, similar objects. The neural basis of crowding is hotly debated: while prevailing theories hold that crowded information is irrecoverable, destroyed due to over-integration in early-stage visual processing, recent evidence demonstrates otherwise. Crowding can occur between high-level, configural object representations, and crowded objects can contribute with high precision to judgments about the gist of a group of objects, even when they are individually unrecognizable. While existing models can account for the basic diagnostic criteria of crowding (e.g., specific critical spacing, spatial anisotropies, and temporal tuning), no present model explains how crowding can operate simultaneously at multiple levels in the visual processing hierarchy, including at the level of whole objects. Here, we present a new model of visual crowding: the hierarchical sparse selection (HSS) model, which accounts for object-level crowding, as well as a number of puzzling findings in the recent literature. Counter to existing theories, we posit that crowding occurs not due to degraded visual representations in the brain, but due to impoverished sampling of visual representations for the sake of perception. The HSS model unifies findings from a disparate array of visual crowding studies and makes testable predictions about how information in crowded scenes can be accessed.
A Criterion for Stability of Synchronization and Application to Coupled Chua's Systems
Institute of Scientific and Technical Information of China (English)
WANG Hai-Xia; LU Qi-Shao; WANG Qing-Yun
2009-01-01
We investigate synchronization in an array network of nearest-neighbor coupled chaotic oscillators. By using the Lyapunov stability theory and matrix theory, a criterion for the stability of complete synchronization is deduced. Meanwhile, an estimate of the critical coupling strength is obtained to ensure the achievement of chaos synchronization. As an example application, a model of coupled Chua's circuits with linearly bidirectional coupling is studied to verify the validity of the criterion.
Finite element model selection using Particle Swarm Optimization
Mthembu, Linda; Friswell, Michael I; Adhikari, Sondipon
2009-01-01
This paper proposes the application of particle swarm optimization (PSO) to the problem of finite element model (FEM) selection. This problem arises when a choice of the best model for a system has to be made from a set of competing models, each developed a priori from engineering judgment. PSO is a population-based stochastic search algorithm inspired by the behaviour of biological entities in nature when they are foraging for resources. Each potentially correct model is represented as a particle that exhibits both individualistic and group behaviour. Each particle moves within the model search space looking for the best solution by updating the parameter values that define it. The most important step in the particle swarm algorithm is the method of representing models, which should take into account the number, location and variables of the parameters to be updated. One example structural system is used to show the applicability of PSO in finding an optimal FEM. An optimal model is defined as the model that has t...
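The particle dynamics described above can be sketched with a standard PSO loop. Here a toy quadratic misfit with a known optimum stands in for the FEM-updating objective (matching measured responses); the target vector, swarm size, and coefficients are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for the FEM misfit: each "model" is a parameter
# vector (e.g. stiffness corrections) scored against a known optimum.
target = np.array([2.0, -1.0, 0.5])
def misfit(theta):
    return np.sum((theta - target) ** 2, axis=-1)

n, d, iters = 30, 3, 200
pos = rng.uniform(-5, 5, (n, d))     # particle positions (candidate models)
vel = np.zeros((n, d))
pbest, pbest_f = pos.copy(), misfit(pos)
gbest = pbest[np.argmin(pbest_f)].copy()

w, c1, c2 = 0.7, 1.5, 1.5            # inertia, cognitive, social weights
for _ in range(iters):
    r1, r2 = rng.random((n, d)), rng.random((n, d))
    # Individualistic pull toward each particle's own best, plus a
    # group pull toward the swarm's global best.
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = pos + vel
    f = misfit(pos)
    improved = f < pbest_f
    pbest[improved], pbest_f[improved] = pos[improved], f[improved]
    gbest = pbest[np.argmin(pbest_f)].copy()

print(np.round(gbest, 2))  # should approach [2.0, -1.0, 0.5]
```

In the FEM-selection setting the misfit would instead compare predicted and measured modal data, and the particle encoding would also carry which parameters are included in each candidate model.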
Whelan, Simon; Allen, James E; Blackburne, Benjamin P; Talavera, David
2015-01-01
Molecular phylogenetics is a powerful tool for inferring both the process and pattern of evolution from genomic sequence data. Statistical approaches, such as maximum likelihood and Bayesian inference, are now established as the preferred methods of inference. The choice of models that a researcher uses for inference is of critical importance, and there are established methods for model selection conditioned on a particular type of data, such as nucleotides, amino acids, or codons. A major limitation of existing model selection approaches is that they can only compare models acting upon a single type of data. Here, we extend model selection to allow comparisons between models describing different types of data by introducing the idea of adapter functions, which project aggregated models onto the originally observed sequence data. These projections are implemented in the program ModelOMatic and used to perform model selection on 3722 families from the PANDIT database, 68 genes from an arthropod phylogenomic data set, and 248 genes from a vertebrate phylogenomic data set. For the PANDIT and arthropod data, we find that amino acid models are selected for the overwhelming majority of alignments, with progressively smaller numbers of alignments selecting codon and nucleotide models, and no families selecting RY-based models. In contrast, nearly all alignments from the vertebrate data set select codon-based models. The sequence divergence, the number of sequences, and the degree of selection acting upon the protein sequences may contribute to explaining this variation in model selection. Our ModelOMatic program is fast, with most families from PANDIT taking fewer than 150 s to complete, and should therefore be easily incorporated into existing phylogenetic pipelines. ModelOMatic is available at https://code.google.com/p/modelomatic/.
Selection of Representative Models for Decision Analysis Under Uncertainty
Meira, Luis A. A.; Coelho, Guilherme P.; Santos, Antonio Alberto S.; Schiozer, Denis J.
2016-03-01
The decision-making process in oil fields includes a risk-analysis step associated with the uncertainties present in the variables of the problem. Such uncertainties lead to hundreds, even thousands, of possible scenarios that must be analyzed so that an effective production strategy can be selected. Given this high number of scenarios, a technique to reduce the set to a smaller, feasible subset of representative scenarios is imperative. The selected scenarios must be representative of the original set and also free of optimistic and pessimistic bias. This paper proposes an assisted methodology to identify representative models in oil fields. To do so, first a mathematical function was developed to model the representativeness of a subset of models with respect to the full set that characterizes the problem. Then, an optimization tool was implemented to identify the representative models of any problem, considering not only the cross-plots of the main output variables, but also the risk curves and the probability distribution of the attribute levels of the problem. The proposed technique was applied to two benchmark cases and the results, evaluated by experts in the field, indicate that the obtained solutions are richer than those identified by previously adopted manual approaches. The program bytecode is available upon request.
Directory of Open Access Journals (Sweden)
Ana Pilipović
2014-03-01
Additive manufacturing (AM) is increasingly applied in development projects, from the initial idea to the finished product. The reasons are multiple, but what should be emphasised is the possibility of relatively rapid manufacturing of products of complicated geometry based on a computer 3D model of the product. There are numerous limitations, primarily in the number of available materials and their properties, which may be quite different from the properties of the material of the finished product. Therefore, it is necessary to know the properties of the product materials. In AM procedures the mechanical properties of materials are affected by the manufacturing procedure and the production parameters. During SLS procedures it is possible to adjust various manufacturing parameters, which are used to improve various mechanical and other properties of the products. The paper establishes a new mathematical model to determine the influence of individual manufacturing parameters on a polymer product made by selective laser sintering. The old mathematical model is checked by a statistical method with a central composite plan, and it is established that the old mathematical model must be expanded with a new parameter, the beam overlay ratio. Verification of the new mathematical model and optimization of the processing parameters are carried out on an SLS machine.
Selecting global climate models for regional climate change studies.
Pierce, David W; Barnett, Tim P; Santer, Benjamin D; Gleckler, Peter J
2009-05-26
Regional or local climate change modeling studies currently require starting with a global climate model, then downscaling to the region of interest. How should global models be chosen for such studies, and what effect do such choices have? This question is addressed in the context of a regional climate detection and attribution (D&A) study of January-February-March (JFM) temperature over the western U.S. Models are often selected for a regional D&A analysis based on the quality of the simulated regional climate. Accordingly, 42 performance metrics based on seasonal temperature and precipitation, the El Nino/Southern Oscillation (ENSO), and the Pacific Decadal Oscillation are constructed and applied to 21 global models. However, no strong relationship is found between the score of the models on the metrics and results of the D&A analysis. Instead, the importance of having ensembles of runs with enough realizations to reduce the effects of natural internal climate variability is emphasized. Also, the superiority of the multimodel ensemble average (MM) to any one individual model, already found in global studies examining the mean climate, is true in this regional study that includes measures of variability as well. Evidence is shown that this superiority is largely caused by the cancellation of offsetting errors in the individual global models. Results with both the MM and models picked randomly confirm the original D&A results of anthropogenically forced JFM temperature changes in the western U.S. Future projections of temperature do not depend on model performance until the 2080s, after which the better performing models show warmer temperatures.
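The error-cancellation argument for the multimodel mean can be illustrated numerically. The "models" below are a synthetic truth plus independent noise, a deliberately idealized assumption (real model errors are partially shared, which weakens the effect):

```python
import numpy as np

rng = np.random.default_rng(0)
truth = np.sin(np.linspace(0, 2 * np.pi, 100))   # stand-in observed field

# 21 synthetic "models": truth plus independent errors.
models = truth + rng.normal(0.0, 0.5, size=(21, 100))

rmse_each = np.sqrt(((models - truth) ** 2).mean(axis=1))
rmse_mm = np.sqrt(((models.mean(axis=0) - truth) ** 2).mean())
print(rmse_mm < rmse_each.min())   # multimodel mean beats every single model
```

With independent errors the ensemble-mean error shrinks roughly like 1/sqrt(21), so even the best individual model cannot match it; correlated errors across models would reduce this advantage.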
Multilevel selection in a resource-based model
Ferreira, Fernando Fagundes; Campos, Paulo R. A.
2013-07-01
In the present work we investigate the emergence of cooperation in a multilevel selection model that assumes limiting resources. Following the work by R. J. Requejo and J. Camacho [Phys. Rev. Lett. 108, 038701 (2012)], the interaction among individuals is initially ruled by a prisoner's dilemma (PD) game. The payoff matrix may change, influenced by the resource availability, and hence may also evolve to a non-PD game. Furthermore, one assumes that the population is divided into groups, whose local dynamics is driven by the payoff matrix, whereas an intergroup competition results from the nonuniformity of the growth rate of groups. We study the probability that a single cooperator can invade and establish in a population initially dominated by defectors. Cooperation is strongly favored when group sizes are small. We observe the existence of a critical group size beyond which cooperation becomes counterselected. Although the critical size depends on the parameters of the model, it is seen that a saturation value for the critical group size is achieved. The results conform to the thought that the evolutionary history of life repeatedly involved transitions from smaller selective units to larger selective units.
A Reliability Based Model for Wind Turbine Selection
Directory of Open Access Journals (Sweden)
A.K. Rajeevan
2013-06-01
A wind turbine generator's output at a specific site depends on many factors, particularly the cut-in, rated, and cut-out wind speed parameters. Hence power output varies from turbine to turbine. The objective of this paper is to develop a mathematical relationship between reliability and wind power generation. The analytical computation of monthly wind power is obtained from a Weibull statistical model using the cubic mean cube root of wind speed. The reliability calculation is based on failure probability analysis. There are many different types of wind turbines commercially available in the market. From a reliability point of view, to obtain optimum reliability in power generation, it is desirable to select the wind turbine generator best suited to a site. The mathematical relationship developed in this paper can be used for site-matching turbine selection from a reliability point of view.
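A minimal sketch of the wind-power side of such a model: the cube root of the Weibull third moment gives an energy-equivalent wind speed, which is then passed through a generic cut-in/rated/cut-out power curve. All parameter values below are illustrative, not the paper's.

```python
import math

def cubic_mean_speed(c, k):
    """Cube root of E[v^3] for a Weibull(shape k, scale c) wind speed."""
    return c * math.gamma(1 + 3 / k) ** (1 / 3)

def turbine_power(v, v_in=3.0, v_rated=12.0, v_out=25.0, p_rated=2000.0):
    """Simple cubic power curve in kW; cut-in/rated/cut-out are hypothetical."""
    if v < v_in or v > v_out:
        return 0.0          # below cut-in or above cut-out: no output
    if v >= v_rated:
        return p_rated      # rated region: constant output
    return p_rated * (v ** 3 - v_in ** 3) / (v_rated ** 3 - v_in ** 3)

v_eq = cubic_mean_speed(c=8.0, k=2.0)   # hypothetical site Weibull parameters
print(round(v_eq, 2), round(turbine_power(v_eq), 1))
```

Site-matching in the spirit of the abstract would then compare turbines by evaluating each candidate's power curve (and failure probability) against the site's Weibull parameters.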