WorldWideScience

Sample records for model selection procedures

  1. Model and Variable Selection Procedures for Semiparametric Time Series Regression

    Directory of Open Access Journals (Sweden)

    Risa Kato

    2009-01-01

    Semiparametric regression models are very useful for time series analysis. They facilitate the detection of features resulting from external interventions. The complexity of semiparametric models poses new challenges for issues of nonparametric and parametric inference and for model selection, which frequently arise in time series data analysis. In this paper, we propose penalized least squares estimators that can simultaneously select significant variables and estimate unknown parameters. An innovative class of variable selection procedures is proposed to select significant variables and basis functions in a semiparametric model. The asymptotic normality of the resulting estimators is established. Information criteria for model selection are also proposed. We illustrate the effectiveness of the proposed procedures with numerical simulations.
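
    For illustration, the following is a minimal sketch of the flavor of penalized least squares selection (not the authors' semiparametric estimator, which also selects basis functions): an L1 penalty zeroes out coefficients, so variable selection and estimation happen in a single step. The data and true coefficient vector are synthetic assumptions.

```python
# Generic penalized-least-squares variable selection sketch (synthetic data);
# not the paper's estimator, just the simultaneous select-and-estimate idea.
import numpy as np
from sklearn.linear_model import LassoCV

rng = np.random.default_rng(0)
n, p = 200, 10
X = rng.normal(size=(n, p))
beta = np.array([2.0, 0.0, -1.5, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 0.0])
y = X @ beta + rng.normal(scale=0.5, size=n)

model = LassoCV(cv=5).fit(X, y)          # penalty weight chosen by CV
selected = np.flatnonzero(model.coef_)   # indices of retained variables
print("selected variables:", selected)
print("estimates:", np.round(model.coef_[selected], 2))
```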

  2. A MODEL SELECTION PROCEDURE IN MIXTURE-PROCESS EXPERIMENTS FOR INDUSTRIAL PROCESS OPTIMIZATION

    Directory of Open Access Journals (Sweden)

    Márcio Nascimento de Souza Leão

    2015-08-01

    We present a model selection procedure for use in mixture and mixture-process experiments. Certain combinations of restrictions on the proportions of the mixture components can result in a highly constrained experimental region. This in turn produces collinearity among the covariates of the model, which can make it difficult to fit the model using the traditional method based on the significance of the coefficients. For this reason, a model selection methodology based on information criteria is proposed for process optimization. Two examples are presented to illustrate this model selection procedure.
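
    As a hedged sketch of the underlying idea (not the paper's mixture designs), candidate models can be ranked by information criteria rather than by coefficient t-tests; the covariates and data below are synthetic stand-ins for mixture-process variables.

```python
# Rank nested candidate regression models by AIC/BIC instead of t-tests.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 60
x1, x2 = rng.uniform(size=n), rng.uniform(size=n)
y = 1.0 + 2.0 * x1 + 1.5 * x1 * x2 + rng.normal(scale=0.3, size=n)

candidates = {
    "x1":          np.column_stack([x1]),
    "x1+x2":       np.column_stack([x1, x2]),
    "x1+x2+x1:x2": np.column_stack([x1, x2, x1 * x2]),
}
for name, X in candidates.items():
    fit = sm.OLS(y, sm.add_constant(X)).fit()
    print(f"{name:12s} AIC={fit.aic:8.2f}  BIC={fit.bic:8.2f}")
```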

  3. Penalized variable selection procedure for Cox models with semiparametric relative risk

    CERN Document Server

    Du, Pang; Liang, Hua; 10.1214/09-AOS780

    2010-01-01

    We study the Cox models with semiparametric relative risk, which can be partially linear with one nonparametric component, or multiple additive or nonadditive nonparametric components. A penalized partial likelihood procedure is proposed to simultaneously estimate the parameters and select variables for both the parametric and the nonparametric parts. Two penalties are applied sequentially. The first penalty, governing the smoothness of the multivariate nonlinear covariate effect function, provides a smoothing spline ANOVA framework that is exploited to derive an empirical model selection tool for the nonparametric part. The second penalty, either the smoothly-clipped-absolute-deviation (SCAD) penalty or the adaptive LASSO penalty, achieves variable selection in the parametric part. We show that the resulting estimator of the parametric part possesses the oracle property, and that the estimator of the nonparametric part achieves the optimal rate of convergence. The proposed procedures are shown to work well i...

  4. The selective therapeutic apheresis procedures.

    Science.gov (United States)

    Sanchez, Amber P; Cunard, Robyn; Ward, David M

    2013-02-01

    Selective apheresis procedures have been developed to target specific molecules, antibodies, or cellular elements in a variety of diseases. The advantage of the selective apheresis procedures over conventional therapeutic plasmapheresis is preservation of other essential plasma components such as albumin, immunoglobulins, and clotting factors. These procedures are more commonly employed in Europe and Japan, and few are available in the USA. Apheresis procedures discussed in this review include the various technologies available for low-density lipoprotein (LDL) apheresis, double filtration plasmapheresis (DFPP), cryofiltration, immunoadsorption procedures, adsorption resins that process plasma, extracorporeal photopheresis, and leukocyte apheresis.

  5. Developing a spatial-statistical model and map of historical malaria prevalence in Botswana using a staged variable selection procedure

    Directory of Open Access Journals (Sweden)

    Mabaso Musawenkosi LH

    2007-09-01

    Background Several malaria risk maps have been developed in recent years, many from the prevalence of infection data collated by the MARA (Mapping Malaria Risk in Africa) project, and using various environmental data sets as predictors. Variable selection is a major obstacle due to analytical problems caused by over-fitting, confounding and non-independence in the data. Testing and comparing every combination of explanatory variables in a Bayesian spatial framework remains unfeasible for most researchers. The aim of this study was to develop a malaria risk map using a systematic and practicable variable selection process for spatial analysis and mapping of historical malaria risk in Botswana. Results Of 50 potential explanatory variables from eight environmental data themes, 42 were significantly associated with malaria prevalence in univariate logistic regression and were ranked by the Akaike Information Criterion. Those correlated with higher-ranking relatives of the same environmental theme were temporarily excluded. The remaining 14 candidates were ranked by selection frequency after running automated step-wise selection procedures on 1000 bootstrap samples drawn from the data. A non-spatial multiple-variable model was developed through step-wise inclusion in order of selection frequency. Previously excluded variables were then re-evaluated for inclusion, using further step-wise bootstrap procedures, resulting in the exclusion of another variable. Finally a Bayesian geo-statistical model using Markov Chain Monte Carlo simulation was fitted to the data, resulting in a final model of three predictor variables, namely summer rainfall, mean annual temperature and altitude. Each was independently and significantly associated with malaria prevalence after allowing for spatial correlation. This model was used to predict malaria prevalence at unobserved locations, producing a smooth risk map for the whole country. Conclusion We have
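
    The bootstrap step of this staged procedure can be sketched as follows, under stated assumptions: a simple forward-AIC selection is re-run on many bootstrap resamples and variables are ranked by how often they are chosen. A synthetic continuous response stands in for malaria prevalence.

```python
# Bootstrap selection-frequency ranking via forward stepwise AIC (sketch).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n, p, B = 150, 6, 200
X = rng.normal(size=(n, p))
y = 1.5 * X[:, 0] - 1.0 * X[:, 2] + rng.normal(size=n)

def forward_aic(Xb, yb):
    remaining, chosen = set(range(p)), []
    best_aic = sm.OLS(yb, np.ones((len(yb), 1))).fit().aic
    improved = True
    while improved and remaining:
        improved = False
        for j in sorted(remaining):
            aic = sm.OLS(yb, sm.add_constant(Xb[:, chosen + [j]])).fit().aic
            if aic < best_aic - 1e-9:
                best_aic, best_j, improved = aic, j, True
        if improved:
            chosen.append(best_j)
            remaining.discard(best_j)
    return chosen

counts = np.zeros(p, dtype=int)
for _ in range(B):
    idx = rng.integers(0, n, size=n)          # bootstrap resample
    counts[forward_aic(X[idx], y[idx])] += 1
print("selection frequency:", counts / B)     # rank candidates by this
```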

  6. A general procedure to generate models for urban environmental-noise pollution using feature selection and machine learning methods.

    Science.gov (United States)

    Torija, Antonio J; Ruiz, Diego P

    2015-02-01

    The prediction of environmental noise in urban environments requires the solution of a complex and non-linear problem, since there are complex relationships among the multitude of variables involved in the characterization and modelling of environmental noise and environmental-noise magnitudes. Moreover, the inclusion of the great spatial heterogeneity characteristic of urban environments seems to be essential in order to achieve an accurate environmental-noise prediction in cities. This problem is addressed in this paper, where a procedure based on feature-selection techniques and machine-learning regression methods is proposed and applied to this environmental problem. Three machine-learning regression methods, which are considered very robust in solving non-linear problems, are used to estimate the energy-equivalent sound-pressure level descriptor (LAeq). These three methods are: (i) multilayer perceptron (MLP), (ii) sequential minimal optimisation (SMO), and (iii) Gaussian processes for regression (GPR). In addition, because of the high number of input variables involved in environmental-noise modelling and estimation in urban environments, which make LAeq prediction models quite complex and costly in terms of time and resources for application to real situations, three different techniques are used to approach feature selection or data reduction. The feature-selection techniques used are: (i) correlation-based feature-subset selection (CFS), (ii) wrapper for feature-subset selection (WFS), and the data reduction technique is principal-component analysis (PCA). The subsequent analysis leads to a proposal of different schemes, depending on the needs regarding data collection and accuracy. The use of WFS as the feature-selection technique with the implementation of SMO or GPR as regression algorithm provides the best LAeq estimation (R² = 0.94 and mean absolute error (MAE) = 1.14-1.16 dB(A)). Copyright © 2014 Elsevier B.V. All rights reserved.
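
    One of the paper's scheme families (data reduction feeding a non-linear regressor) can be sketched as below, with synthetic stand-ins for the urban descriptors; the CFS/WFS feature-selection variants are not reproduced here.

```python
# PCA data reduction feeding a Gaussian-process regressor (hedged sketch).
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)
X = rng.normal(size=(120, 15))   # invented urban/traffic descriptors
laeq = 55 + 3 * X[:, 0] - 2 * X[:, 3] + rng.normal(scale=1.0, size=120)

model = make_pipeline(
    StandardScaler(),
    PCA(n_components=5),
    GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True),
)
scores = cross_val_score(model, X, laeq, cv=5,
                         scoring="neg_mean_absolute_error")
print("MAE (dB, synthetic):", round(-scores.mean(), 2))
```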

  7. Used-habitat calibration plots: A new procedure for validating species distribution, resource selection, and step-selection models

    Science.gov (United States)

    Fieberg, John R.; Forester, James D.; Street, Garrett M.; Johnson, Douglas H.; ArchMiller, Althea A.; Matthiopoulos, Jason

    2017-01-01

    “Species distribution modeling” was recently ranked as one of the top five “research fronts” in ecology and the environmental sciences by ISI's Essential Science Indicators (Renner and Warton 2013), reflecting the importance of predicting how species distributions will respond to anthropogenic change. Unfortunately, species distribution models (SDMs) often perform poorly when applied to novel environments. Compounding this problem is the shortage of methods for evaluating SDMs (hence, we may be getting our predictions wrong and not even know it). Traditional methods for validating SDMs quantify a model's ability to classify locations as used or unused. Instead, we propose to focus on how well SDMs can predict the characteristics of used locations. This subtle shift in viewpoint leads to a more natural and informative evaluation and validation of models across the entire spectrum of SDMs. Through a series of examples, we show how simple graphical methods can help with three fundamental challenges of habitat modeling: identifying missing covariates, non-linearity, and multicollinearity. Identifying habitat characteristics that are not well-predicted by the model can provide insights into variables affecting the distribution of species, suggest appropriate model modifications, and ultimately improve the reliability and generality of conservation and management recommendations.
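
    A much-simplified sketch of this viewpoint shift, under invented assumptions: rather than scoring used/unused classification, compare the covariate distribution at observed used locations with the distribution the fitted model implies for used locations.

```python
# Compare observed vs model-implied covariate values at used locations.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(4)
n = 2000
elev = rng.normal(size=n)                       # habitat covariate
p_use = 1 / (1 + np.exp(-(-1.0 + 1.2 * elev)))  # true selection process
used = rng.uniform(size=n) < p_use

fit = LogisticRegression().fit(elev.reshape(-1, 1), used)
w = fit.predict_proba(elev.reshape(-1, 1))[:, 1]
sim_used = elev[rng.uniform(size=n) < w]        # model-implied used points

# A well-calibrated model matches the covariate distribution at used points.
print("observed mean elev at used points :", round(elev[used].mean(), 2))
print("predicted mean elev at used points:", round(sim_used.mean(), 2))
```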

  8. 45 CFR 1217.4 - Selection procedure.

    Science.gov (United States)

    2010-10-01

    § 1217.4 Selection procedure. (a) Nomination. Candidates may be nominated in writing to the Regional Director by the Program Officer or the State Program Director in whose area the... Director's review. (b) Selection. VISTA volunteer leaders will be selected by the Regional Director (or his...

  9. Reasons for being selective when choosing personnel selection procedures

    NARCIS (Netherlands)

    König, C.J.; Klehe, U.-C.; Berchtold, M.; Kleinmann, M.

    2010-01-01

    The scientist-practitioner gap in personnel selection is large. Thus, it is important to gain a better understanding of the reasons that make organizations use or not use certain selection procedures. Based on institutional theory, we predicted that six variables should determine the use of selection procedures.

  11. Simulation of psychophysical stimulus selection procedures for dynamic threshold tracking

    NARCIS (Netherlands)

    Doll, Robert; Yang, H.; Meijer, Hil Gaétan Ellart; Buitenweg, Jan R.

    2011-01-01

    Stimulus selection procedures are of importance for adequate psychophysical nociceptive threshold estimation. Various stimulus selection procedures were analyzed by means of simulations. Precision, bias, efficiency, and time constants of the various stimulus selection procedures were determined in a

  12. [The systematic selection of speech audiometric procedures].

    Science.gov (United States)

    Steffens, T

    2017-03-01

    The impact of hearing loss on the ability to participate in verbal communication can be directly quantified through the use of speech audiometry. Advances in technology and the associated reduction in background noise interference for hearing aids have allowed the reproduction of very complex acoustic environments, analogous to those in which conversations occur in daily life. These capabilities have led to the creation of numerous advanced speech audiometry measures, test procedures and environments, far beyond the presentation of isolated words in an otherwise noise-free testing booth. The aim of this study was to develop a set of systematic criteria for the appropriate selection of speech audiometric material, which are presented in this article in relation to the most widely used test procedures. Before an appropriate speech test can be selected from the numerous procedures available, the precise aims of the evaluation should first be defined. Specific test characteristics, such as validity, objectivity, reliability and sensitivity, are important for selecting the correct test for the specific goals. A concrete understanding of the goals of the evaluation, as well as of specific test criteria, plays a crucial role in the selection of speech audiometry testing procedures.

  13. An iterative sensory procedure to select odor-active associations in complex consortia of microorganisms: application to the construction of a cheese model.

    Science.gov (United States)

    Bonaïti, C; Irlinger, F; Spinnler, H E; Engel, E

    2005-05-01

    The aim of this study was to develop and validate an iterative procedure based on odor assessment to select odor-active associations of microorganisms from a starting association of 82 strains (G1), chosen to be representative of Livarot cheese biodiversity. A 3-step dichotomous procedure was applied to reduce the starting association G1. At each step, 3 methods were used to evaluate the odor proximity between mother (n strains) and daughter (n/2 strains) associations: a direct assessment of odor dissimilarity using an original bidimensional scale system and 2 indirect methods based on comparisons of odor profiles or hedonic scores. Odor dissimilarity ratings and odor profiles gave reliable and sometimes complementary criteria to select G3 and G4 at the first iteration, G31 and G42 at the second iteration, and G312 and G421 at the final iteration. Principal component analysis of odor profile data permitted the interpretation, at least in part, of the 2D multidimensional scaling representation of the similarity data. The second part of the study was dedicated to 1) validating the choice of the dichotomous procedure made at each iteration, and 2) evaluating the overall magnitude of odor differences that may exist between G1 and its subsequent simplified associations. The strategy consisted of assessing odor similarity between the 13 cheese models by comparing the contents of their odor-active compounds. By using a purge-and-trap gas chromatography-olfactometry/mass spectrometry device, 50 potent odorants were identified in models G312, G421, and in a typical Protected Denomination of Origin Livarot cheese. Their contributions to the odor profile of both selected model cheeses are discussed. These compounds were quantified by purge-and-trap gas chromatography-mass spectrometry in the 13 products, and the normalized data matrix was transformed to a between-product distance matrix. This instrumental assessment of odor similarities allowed validation of the choice

  14. 28 CFR 50.14 - Guidelines on employee selection procedures.

    Science.gov (United States)

    2010-07-01

    ... credentials of sellers, users, or consultants; and other nonempirical or anecdotal accounts of selection... close review. The appropriateness of a selection procedure is best evaluated in each particular...

  15. Selective Maintenance Model Considering Time Uncertainty

    OpenAIRE

    Le Chen; Zhengping Shu; Yuan Li; Xuezhi Lv

    2012-01-01

    This study proposes a selective maintenance model for a weapon system during mission intervals. First, it gives relevant definitions and describes the operational process of the material support system. Then, it reviews current research on selective maintenance modeling. Finally, it establishes a numerical model for selecting corrective and preventive maintenance tasks, considering the time uncertainty brought by the unpredictability of maintenance procedures, indetermination of downtime for spares and difference of skil...

  16. Model Selection for Geostatistical Models

    Energy Technology Data Exchange (ETDEWEB)

    Hoeting, Jennifer A.; Davis, Richard A.; Merton, Andrew A.; Thompson, Sandra E.

    2006-02-01

    We consider the problem of model selection for geospatial data. Spatial correlation is typically ignored in the selection of explanatory variables and this can influence model selection results. For example, the inclusion or exclusion of particular explanatory variables may not be apparent when spatial correlation is ignored. To address this problem, we consider the Akaike Information Criterion (AIC) as applied to a geostatistical model. We offer a heuristic derivation of the AIC in this context and provide simulation results that show that using AIC for a geostatistical model is superior to the often used approach of ignoring spatial correlation in the selection of explanatory variables. These ideas are further demonstrated via a model for lizard abundance. We also employ the principle of minimum description length (MDL) to variable selection for the geostatistical model. The effect of sampling design on the selection of explanatory covariates is also explored.
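
    A hedged sketch of the central comparison follows: the Gaussian likelihood, and hence the AIC, is computed once assuming independent errors and once under an exponential spatial covariance. The correlation range is fixed here for brevity, whereas the paper's setting estimates the covariance structure as well; all data are simulated.

```python
# AIC for a regression fit with vs without a spatial covariance model.
import numpy as np
from scipy.linalg import cho_factor, cho_solve

rng = np.random.default_rng(5)
n = 100
coords = rng.uniform(0, 10, size=(n, 2))
d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
Sigma = np.exp(-d / 2.0)                     # exponential correlation
L = np.linalg.cholesky(Sigma)
x = rng.normal(size=n)
y = 1.0 + 0.8 * x + L @ rng.normal(size=n)   # spatially correlated errors
X = np.column_stack([np.ones(n), x])

def gaussian_aic(Sig, k_extra=0):
    c = cho_factor(Sig)
    XtSi = cho_solve(c, X)                   # Sigma^{-1} X
    beta = np.linalg.solve(X.T @ XtSi, XtSi.T @ y)   # GLS estimate
    r = y - X @ beta
    sigma2 = (r @ cho_solve(c, r)) / n       # profiled error variance
    logdet = 2 * np.log(np.diag(c[0])).sum()
    ll = -0.5 * (n * np.log(2 * np.pi * sigma2) + logdet + n)
    return -2 * ll + 2 * (X.shape[1] + 1 + k_extra)

print("AIC ignoring correlation:", round(gaussian_aic(np.eye(n)), 1))
print("AIC with spatial model  :", round(gaussian_aic(Sigma, k_extra=1), 1))
```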

  17. A practical procedure for the selection of time-to-failure models based on the assessment of trends in maintenance data

    Energy Technology Data Exchange (ETDEWEB)

    Louit, D.M. [Komatsu Chile, Av. Americo Vespucio 0631, Quilicura, Santiago (Chile)], E-mail: rpascual@ing.puc.cl; Pascual, R. [Centro de Mineria, Pontificia Universidad Catolica de Chile, Av. Vicuna Mackenna 4860, Santiago (Chile); Jardine, A.K.S. [Department of Mechanical and Industrial Engineering, University of Toronto, 5 King' s College Road, Toronto, Ont., M5S 3G8 (Canada)

    2009-10-15

    Many times, reliability studies rely on false premises, such as the assumption of independent and identically distributed times between failures (a renewal process). This can lead to the selection of an erroneous model for the time to failure of a particular component or system, which can in turn lead to wrong conclusions and decisions. A strong statistical focus, the lack of a systematic approach and sometimes inadequate theoretical background seem to have made it difficult for maintenance analysts to adopt the necessary stage of data testing before selecting a suitable model. In this paper, a framework for selecting a model to represent the failure process of a component or system is presented, based on a review of available trend tests. The paper focuses only on single-time-variable models and is primarily directed at analysts responsible for reliability analyses in an industrial maintenance environment. The model selection framework discriminates between the use of statistical distributions to represent the time to failure ('renewal approach') and the use of stochastic point processes ('repairable systems approach'), when there may be system ageing or reliability growth. An illustrative example based on failure data from a fleet of backhoes is included.
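
    One widely used trend test of the kind reviewed is the Laplace test, sketched below for the time-truncated case; the failure times are invented. A significant trend argues against a renewal model and for a non-homogeneous Poisson process (repairable-systems approach).

```python
# Laplace trend test for failure times observed over (0, T].
import numpy as np
from scipy.stats import norm

def laplace_trend(failure_times, T):
    """U > 0 suggests deterioration, U < 0 improvement (time-truncated)."""
    t = np.asarray(failure_times, dtype=float)
    n = len(t)
    U = (t.mean() - T / 2.0) / (T * np.sqrt(1.0 / (12.0 * n)))
    return U, 2 * (1 - norm.cdf(abs(U)))    # two-sided p-value

# Example: failures clustering late in the window (an ageing system).
times = [210.0, 450.0, 600.0, 700.0, 760.0, 810.0, 850.0, 880.0, 900.0]
U, p = laplace_trend(times, T=1000.0)
print(f"U = {U:.2f}, p = {p:.3f}")   # significant trend -> use an NHPP model
```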

  18. The selected response procedure: a variation on Appelbaum's altered atmosphere procedure for the Rorschach.

    Science.gov (United States)

    Jaffe, L

    1988-01-01

    This article introduces the Selected Response Procedure, which is a supplementary technique for expanding the scope of the Rorschach test. The procedure is conducted as follows: After the standard administration of the Rorschach test, patients are asked to look through all of the cards a second time and select one more response from any card of their choice. A rationale for this procedure is developed through a comparison to another supplementary Rorschach technique, the Altered Atmosphere Procedure. The importance of understanding the selected response within a theoretical framework, as well as the clinical context of each selected response, is highlighted by a clinical example using object relations theory. Finally, a number of didactic questions are offered as potential ways to query the possible meaning of selected responses.

  19. Recruiter Selection Model

    Science.gov (United States)

    2006-05-01

    His research interests include feature selection, statistical learning, multivariate statistics, market research, and classification. He may be contacted at... current youth market, and reducing barriers to Army enlistment. Part of the Army Recruiting Initiatives was the creation of a recruiter selection... Selection Model developed by the Operations Research Center of Excellence, Systems Engineering Department, United States Military Academy, West Point.

  20. MODEL SELECTION FOR LOG-LINEAR MODELS OF CONTINGENCY TABLES

    Institute of Scientific and Technical Information of China (English)

    ZHAO Lincheng; ZHANG Hong

    2003-01-01

    In this paper, we propose an information-theoretic-criterion-based model selection procedure for log-linear models of contingency tables under multinomial sampling, and establish the strong consistency of the method under some mild conditions. An exponential bound on the probability of missed detection is also obtained. The selection procedure is modified so that it can be used in practice. Simulation shows that the modified method is valid. To avoid selecting the penalty coefficient in the information criteria, an alternative selection procedure is given.
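
    A hedged sketch of the setting: competing log-linear models for a small contingency table are fit as Poisson regressions and compared by information criteria. The counts are invented, and BIC here uses the total count as the sample size, which is one common convention rather than the paper's specific criterion.

```python
# Compare independence vs saturated log-linear models for a 2x3 table.
import numpy as np
import statsmodels.api as sm

counts = np.array([20, 35, 45, 40, 30, 10])        # flattened 2x3 table
row = np.repeat([0, 1], 3)
col = np.tile([0, 1, 2], 2)

row_d = (row == 1).astype(float)                   # dummy coding
col_d = np.column_stack([col == 1, col == 2]).astype(float)
inter = row_d[:, None] * col_d                     # interaction terms

def fit_poisson(*terms):
    X = sm.add_constant(np.column_stack(terms))
    return sm.GLM(counts, X, family=sm.families.Poisson()).fit()

for name, m in [("independence", fit_poisson(row_d, col_d)),
                ("saturated", fit_poisson(row_d, col_d, inter))]:
    bic = -2 * m.llf + len(m.params) * np.log(counts.sum())
    print(f"{name:12s} AIC = {m.aic:7.2f}  BIC = {bic:7.2f}")
```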

  1. 40 CFR 24.08 - Selection of appropriate hearing procedures.

    Science.gov (United States)

    2010-07-01

    § 24.08 Selection of appropriate hearing procedures (40 CFR, Protection of Environment, Environmental Protection Agency). ... complex and are necessary to protect human health and the environment prior to development of a permanent...

  2. New rules for visual selection: Isolating procedural attention.

    Science.gov (United States)

    Ramamurthy, Mahalakshmi; Blaser, Erik

    2017-02-01

    High performance in well-practiced, everyday tasks (driving, sports, gaming) suggests a kind of procedural attention that can allocate processing resources to behaviorally relevant information in an unsupervised manner. Here we show that training can lead to a new, automatic attentional selection rule that operates in the absence of bottom-up, salience-driven triggers and willful top-down selection. Taking advantage of the fact that attention modulates motion aftereffects, observers were presented with a bivectorial display with overlapping, iso-salient red and green dot fields moving to the right and left, respectively, while distracted by a demanding auditory two-back memory task. Before training, since the motion vectors canceled each other out, no net motion aftereffect (MAE) was found. However, after 3 days (0.5 hr/day) of training, during which observers practiced selectively attending to the red, rightward field, a significant net MAE was observed, even when top-down selection was again distracted. Further experiments showed that these results were not due to perceptual learning, and that the new rule targeted the motion, not the color, of the target dot field, and global, not local, motion signals; thus, the new rule was: "select the rightward field." This study builds on recent work on selection history-driven and reward-driven biases, but uses a novel paradigm in which the allocation of visual processing resources is measured passively, offline, and when the observer's ability to execute top-down selection is defeated.

  3. Applying Modeling Tools to Ground System Procedures

    Science.gov (United States)

    Di Pasquale, Peter

    2012-01-01

    As part of a long-term effort to revitalize the Ground Systems (GS) Engineering Section practices, Systems Modeling Language (SysML) and Business Process Model and Notation (BPMN) have been used to model existing GS products and the procedures GS engineers use to produce them.

  4. A Bayesian variable selection procedure to rank overlapping gene sets

    Directory of Open Access Journals (Sweden)

    Skarman Axel

    2012-05-01

    Background Genome-wide expression profiling using microarrays or sequence-based technologies allows us to identify genes and genetic pathways whose expression patterns influence complex traits. Different methods to prioritize gene sets, such as the genes in a given molecular pathway, have been described. In many cases, these methods test one gene set at a time, and therefore do not consider overlaps among the pathways. Here, we present a Bayesian variable selection method to prioritize gene sets that overcomes this limitation by considering all gene sets simultaneously. We applied Bayesian variable selection to differential expression to prioritize the molecular and genetic pathways involved in the responses to Escherichia coli infection in Danish Holstein cows. Results We used a Bayesian variable selection method to prioritize Kyoto Encyclopedia of Genes and Genomes pathways. We used our data to study how the variable selection method was affected by overlaps among the pathways. In addition, we compared our approach to another that ignores the overlaps, and studied the differences in the prioritization. The variable selection method was robust to a change in prior probability and stable given a limited number of observations. Conclusions Bayesian variable selection is a useful way to prioritize gene sets while considering their overlaps. Ignoring the overlaps gives different and possibly misleading results. Additional procedures may be needed in cases of highly overlapping pathways that are hard to prioritize.
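
    A toy alternative to the paper's full Bayesian machinery, under invented assumptions: every combination of gene-set indicators is scored jointly (so overlapping sets compete for the same signal), with BIC used as a crude approximation to model evidence and sets ranked by approximate posterior inclusion probability.

```python
# Joint scoring of overlapping gene sets via BIC-weighted model enumeration.
import itertools
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(6)
n, n_sets = 80, 4
shared = rng.normal(size=n)                   # component shared by sets 0, 1
scores = np.column_stack([
    shared + rng.normal(scale=0.5, size=n),   # set 0
    shared + rng.normal(scale=0.5, size=n),   # set 1 (overlaps set 0)
    rng.normal(size=n),                       # set 2
    rng.normal(size=n),                       # set 3
])
y = 1.0 * scores[:, 0] + rng.normal(size=n)   # only set 0 is truly active

post = {}
for gamma in itertools.product([0, 1], repeat=n_sets):
    cols = [j for j in range(n_sets) if gamma[j]]
    X = sm.add_constant(scores[:, cols]) if cols else np.ones((n, 1))
    post[gamma] = np.exp(-0.5 * sm.OLS(y, X).fit().bic)  # ~ model evidence
Z = sum(post.values())
incl = [sum(p for g, p in post.items() if g[j]) / Z for j in range(n_sets)]
print("approx. posterior inclusion:", np.round(incl, 2))
```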

  5. Procedural Modeling for Digital Cultural Heritage

    Directory of Open Access Journals (Sweden)

    Müller Pascal

    2009-01-01

    The rapid development of computer graphics and imaging provides the modern archeologist with several tools to realistically model and visualize archeological sites in 3D. This, however, creates a tension between veridical and realistic modeling. Visually compelling models may lead people to falsely believe that there exists very precise knowledge about the past appearance of a site. In order to make the underlying uncertainty visible, it has been proposed to encode this uncertainty with different levels of transparency in the rendering, or with decoloration of the textures. We argue that procedural modeling technology based on shape grammars provides an interesting alternative to such measures, as these tend to spoil the experience for the observer. Both its efficiency and compactness make procedural modeling a tool to produce multiple models, which together sample the space of possibilities. Variations between the different models express levels of uncertainty implicitly, while letting each individual model keep its realistic appearance. The underlying, structural description makes the uncertainty explicit. Additionally, procedural modeling yields the flexibility to incorporate changes as knowledge of an archeological site is refined. Annotations explaining modeling decisions can be included. We demonstrate our procedural modeling implementation with several recent examples.
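
    A toy illustration of the shape-grammar idea (the rules and terminals below are invented, far simpler than the paper's system): stochastic rule choices expand a start symbol into different facade descriptions, so repeated derivations sample the space of plausible reconstructions instead of committing to one.

```python
# Minimal stochastic shape grammar producing alternative "reconstructions".
import random

RULES = {
    "BUILDING": [["FLOOR", "ROOF"], ["FLOOR", "FLOOR", "ROOF"]],
    "FLOOR": [["window", "door", "window"], ["window", "window", "window"]],
    "ROOF": [["flat_roof"], ["gabled_roof"]],
}

def derive(symbol, rng):
    if symbol not in RULES:                 # terminal shape
        return [symbol]
    production = rng.choice(RULES[symbol])  # uncertainty -> random rule choice
    return [s for part in production for s in derive(part, rng)]

rng = random.Random(7)
for i in range(3):                          # three alternative variants
    print(f"variant {i}:", " ".join(derive("BUILDING", rng)))
```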

  6. Complexity regularized hydrological model selection

    NARCIS (Netherlands)

    Pande, S.; Arkesteijn, L.; Bastidas, L.A.

    2014-01-01

    This paper uses a recently proposed measure of hydrological model complexity in a model selection exercise. It demonstrates that a robust hydrological model is selected by penalizing model complexity while maximizing a model performance measure. This especially holds when limited data is available.

  8. Individual Influence on Model Selection

    Science.gov (United States)

    Sterba, Sonya K.; Pek, Jolynn

    2012-01-01

    Researchers in psychology are increasingly using model selection strategies to decide among competing models, rather than evaluating the fit of a given model in isolation. However, such interest in model selection outpaces an awareness that one or a few cases can have disproportionate impact on the model ranking. Though case influence on the fit…
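
    A hedged sketch of how case influence on a model ranking can be probed (not the authors' diagnostics): refit two competing models with each case deleted in turn and flag cases whose removal flips the information-criterion ranking. Data and the planted outlier are synthetic.

```python
# Leave-one-out check of whether single cases flip an AIC-based ranking.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(8)
n = 40
x = rng.normal(size=n)
y = 0.5 * x + rng.normal(size=n)
x[0], y[0] = 4.0, 8.0                        # one influential case

def aic_gap(xs, ys):
    """AIC(linear) - AIC(quadratic); the sign encodes the ranking."""
    lin = sm.OLS(ys, sm.add_constant(xs)).fit()
    quad = sm.OLS(ys, sm.add_constant(np.column_stack([xs, xs**2]))).fit()
    return lin.aic - quad.aic

full = aic_gap(x, y)
for i in range(n):
    loo = aic_gap(np.delete(x, i), np.delete(y, i))
    if np.sign(loo) != np.sign(full):
        print(f"case {i} flips the model ranking ({full:.2f} -> {loo:.2f})")
```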

  9. Selected Logistics Models and Techniques.

    Science.gov (United States)

    1984-09-01

    ACCESS PROCEDURE: On-Line System (OLS), UNINET. RCA maintains proprietary control of this model, and the model is available only through a lease arrangement. SPONSOR: ASD/ACCC

  10. Procedural Personas for Player Decision Modeling and Procedural Content Generation

    DEFF Research Database (Denmark)

    Holmgård, Christoffer

    2016-01-01

    This thesis explores methods for creating low-complexity, easily interpretable, generative AI agents for use in game and simulation design. Based on insights from decision theory and behavioral economics, the thesis investigates how player decision making styles may be defined, operationalised, and measured in specific games. It further explores how simple utility functions, easily defined and changed by game designers, can be used to construct agents expressing a variety of decision making styles within a game, using a variety of contemporary AI approaches, naming the resulting agents "Procedural Personas." These methods for constructing procedural personas are then integrated with existing procedural content generation systems, acting as critics that shape the output of these systems, optimizing generated content for different personas and by extension, different kinds of players and their decision making styles.
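
    The utility-function idea can be sketched in a few lines, under invented game states and personas: each persona is a utility over action outcomes, and the same greedy agent expresses different play styles depending on which utility it maximizes.

```python
# Personas as designer-defined utility functions over action outcomes.
ACTIONS = {  # action -> (treasure gained, damage taken, steps to exit)
    "fight_monster": (5, 3, 2),
    "grab_treasure": (3, 1, 2),
    "run_to_exit":   (0, 0, 1),
}

PERSONAS = {
    "treasure_hunter": lambda t, d, s: 2.0 * t - 0.5 * d - 0.1 * s,
    "survivor":        lambda t, d, s: 0.2 * t - 3.0 * d - 0.5 * s,
    "speedrunner":     lambda t, d, s: -2.0 * s,
}

for name, utility in PERSONAS.items():
    best = max(ACTIONS, key=lambda a: utility(*ACTIONS[a]))
    print(f"{name:15s} -> {best}")
```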

  11. A Procedural Model for Process Improvement Projects

    OpenAIRE

    Kreimeyer, Matthias; Daniilidis, Charampos; Lindemann, Udo

    2017-01-01

    Process improvement projects are of a complex nature. It is therefore necessary to use experience and knowledge gained in previous projects when executing a new project. Yet, there are few pragmatic planning aids, and transferring the institutional knowledge from one project to the next is difficult. This paper proposes a procedural model that extends common models for project planning to enable staff on a process improvement project to adequately plan their projects, enabling them to documen...

  12. A procedure for building product models

    DEFF Research Database (Denmark)

    Hvam, Lars; Riis, Jesper; Malis, Martin

    2001-01-01

    with product models. The next phase includes an analysis of the product assortment, and the set up of a so-called product master. Finally the product model is designed and implemented using object oriented modelling. The procedure is developed in order to ensure that the product models constructed are fit for the business processes they support, and properly structured and documented, in order to facilitate that the systems can be maintained continually and further developed. The research has been carried out at the Centre for Industrialisation of Engineering, Department of Manufacturing Engineering, Technical...

  13. Model selection for Gaussian kernel PCA denoising

    DEFF Research Database (Denmark)

    Jørgensen, Kasper Winther; Hansen, Lars Kai

    2012-01-01

    We propose kernel Parallel Analysis (kPA) for automatic kernel scale and model order selection in Gaussian kernel PCA. Parallel Analysis [1] is based on a permutation test for covariance and has previously been applied for model order selection in linear PCA; we here augment the procedure to also tune the Gaussian kernel scale of radial basis function based kernel PCA. We evaluate kPA for denoising of simulated data and the US Postal data set of handwritten digits. We find that kPA outperforms other heuristics to choose the model order and kernel scale in terms of signal-to-noise ratio (SNR...
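
    A hedged sketch of the permutation idea behind this approach: retain the leading kernel-PCA components whose eigenvalues exceed those obtained from data with independently permuted columns. The kernel scale is fixed here, although kPA tunes it too; the noisy-circle data are an assumption.

```python
# Parallel-analysis-style model order selection for Gaussian kernel PCA.
import numpy as np

def centered_kernel_eigs(X, scale):
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-d2 / (2 * scale**2))               # RBF Gram matrix
    n = len(X)
    H = np.eye(n) - np.ones((n, n)) / n            # centering matrix
    return np.sort(np.linalg.eigvalsh(H @ K @ H))[::-1]

rng = np.random.default_rng(9)
n = 100
t = rng.uniform(0, 2 * np.pi, n)
X = np.column_stack([np.cos(t), np.sin(t)]) + rng.normal(scale=0.1, size=(n, 2))

obs = centered_kernel_eigs(X, scale=1.0)
perm = np.mean([centered_kernel_eigs(rng.permuted(X, axis=0), scale=1.0)
                for _ in range(20)], axis=0)       # null spectrum

k = 0
while k < n and obs[k] > perm[k]:                  # keep while above the null
    k += 1
print("selected model order:", k)
```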

  14. Adaptive Covariance Estimation with model selection

    CERN Document Server

    Biscay, Rolando; Loubes, Jean-Michel

    2012-01-01

    We provide in this paper a fully adaptive penalized procedure to select a covariance among a collection of models, observing i.i.d. replications of the process at fixed observation points. For this we generalize previous results of Bigot et al. and propose to use a data-driven penalty to obtain an oracle inequality for the estimator. We prove that this method is an extension to the matricial regression model of the work by Baraud.

  16. The HSG procedure for modelling integrated urban wastewater systems.

    Science.gov (United States)

    Muschalla, D; Schütze, M; Schroeder, K; Bach, M; Blumensaat, F; Gruber, G; Klepiszewski, K; Pabst, M; Pressl, A; Schindler, N; Solvi, A-M; Wiese, J

    2009-01-01

    Whilst the importance of integrated modelling of urban wastewater systems is ever increasing, there is still no concise procedure for how to carry out such modelling studies. After briefly discussing some earlier approaches, the guideline for integrated modelling developed by the Central European Simulation Research Group (HSG - Hochschulgruppe) is presented. This contribution suggests a six-step standardised procedure for integrated modelling. This commences with an analysis of the system and definition of objectives and criteria, covers selection of modelling approaches, analysis of data availability, calibration and validation, and also includes the steps of scenario analysis and reporting. Recent research findings as well as experience gained from several application projects in Central Europe have been integrated in this guideline.

  17. Selected System Models

    Science.gov (United States)

    Schmidt-Eisenlohr, F.; Puñal, O.; Klagges, K.; Kirsche, M.

    Apart from the general issues of modeling the channel, the PHY and the MAC of wireless networks, there are specific modeling assumptions that are considered for different systems. In this chapter we consider three specific wireless standards and highlight modeling options for them. These are IEEE 802.11 (as an example of wireless local area networks), IEEE 802.16 (as an example of wireless metropolitan networks) and IEEE 802.15 (as an example of body area networks). Each section on these three systems also concludes with a discussion of the model implementations available today.

  18. Transport Simulation Model Calibration with Two-Step Cluster Analysis Procedure

    Directory of Open Access Journals (Sweden)

    Zenina Nadezda

    2015-12-01

    The calibration results of a transport simulation model depend on the selected parameters and their values. The aim of the present paper is to calibrate a transport simulation model with a two-step cluster analysis procedure to improve the reliability of simulation model results. Two global parameters have been considered: headway and simulation step. Normal, uniform and exponential headway generation models have been considered. Applying the two-step cluster analysis procedure to calibration reduced the time needed to select values for the simulation step and the headway generation model.

  19. Procedural Optimization Models for Multiobjective Flexible JSSP

    Directory of Open Access Journals (Sweden)

    Elena Simona NICOARA

    2013-01-01

    The most challenging issues related to manufacturing efficiency occur if the jobs to be scheduled are structurally different, if these jobs allow flexible routings on the equipment and if multiple objectives are required. This framework, called Multi-objective Flexible Job Shop Scheduling Problems (MOFJSSP), applicable to many real processes, has been less reported in the literature than the JSSP framework, which has been extensively formalized, modeled and analyzed from many perspectives. MOFJSSP lie, like many other NP-hard problems, at the point where the vast optimization theory meets the real-world context. The paper discusses the optimization models best suited to MOFJSSP and analyzes in detail genetic algorithms and agent-based models as the most appropriate procedural models.

  20. Launch vehicle selection model

    Science.gov (United States)

    Montoya, Alex J.

    1990-01-01

    Over the next 50 years, humans will be heading for the Moon and Mars to build scientific bases to gain further knowledge about the universe and to develop rewarding space activities. These large scale projects will last many years and will require large amounts of mass to be delivered to Low Earth Orbit (LEO). It will take a great deal of planning to complete these missions in an efficient manner. The planning of a future Heavy Lift Launch Vehicle (HLLV) will significantly impact the overall multi-year launching cost for the vehicle fleet, depending upon when the HLLV will be ready for use. It is desirable to develop a model in which many trade studies can be performed. In one sample multi-year space program analysis, the total launch vehicle cost of implementing the program was reduced from 50 percent to 25 percent. This indicates how critical it is to reduce space logistics costs. A linear programming model has been developed to answer such questions. The model is now in its second phase of development, and this paper will address the capabilities of the model and its intended uses. The main emphasis over the past year was to make the model user friendly and to incorporate additional realistic constraints that are difficult to represent mathematically. We have developed a methodology in which the user has to be knowledgeable about the mission model and the requirements of the payloads. We have found a representation that will cut down the solution space of the problem by inserting some preliminary tests to eliminate some infeasible vehicle solutions. The paper will address the handling of these additional constraints and the methodology for incorporating new costing information utilizing learning curve theory. The paper will review several test cases that will explore the preferred vehicle characteristics and the preferred period of construction, i.e., within the next decade, or in the first decade of the next century. Finally, the paper will explore the interaction
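
    A toy version of the linear programming core, with invented vehicle data: choose launch counts that minimize fleet cost subject to delivering the required mass to LEO. The real model carries many more constraints (schedules, payload fit, learning curves) and integer launch counts.

```python
# Minimal launch-vehicle mix LP (costs and capacities are invented).
from scipy.optimize import linprog

cost = [90, 150, 400]            # $M per launch: small, medium, heavy (HLLV)
mass = [10, 25, 100]             # tonnes to LEO per launch
required = 500                   # tonnes for the multi-year program

res = linprog(
    c=cost,
    A_ub=[[-m for m in mass]],   # -mass @ x <= -required  (deliver enough)
    b_ub=[-required],
    bounds=[(0, None)] * 3,      # LP relaxation; a real model uses integers
)
print("launches per vehicle:", [round(v, 1) for v in res.x])
print("program cost ($M):", round(res.fun))
```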

  1. Los Angeles Community College District: It Has Improved Its Procedures for Selecting College Presidents.

    Science.gov (United States)

    California State Office of the Auditor General, Sacramento.

    This auditor's report reviews the procedures used by the Los Angeles Community College District (California) to select its college presidents. In September 1999 the district revised its procedures by designating a person who is solely responsible for ensuring compliance with board procedure, establishing timelines for the selection process, and…

  2. Model Selection Principles in Misspecified Models

    CERN Document Server

    Lv, Jinchi

    2010-01-01

    Model selection is of fundamental importance to high dimensional modeling featured in many contemporary applications. Classical principles of model selection include the Kullback-Leibler divergence principle and the Bayesian principle, which lead to the Akaike information criterion and Bayesian information criterion when models are correctly specified. Yet model misspecification is unavoidable when we have no knowledge of the true model or when we have the correct family of distributions but miss some true predictor. In this paper, we propose a family of semi-Bayesian principles for model selection in misspecified models, which combine the strengths of the two well-known principles. We derive asymptotic expansions of the semi-Bayesian principles in misspecified generalized linear models, which give the new semi-Bayesian information criteria (SIC). A specific form of SIC admits a natural decomposition into the negative maximum quasi-log-likelihood, a penalty on model dimensionality, and a penalty on model miss...

  3. Bayesian Model Selection and Statistical Modeling

    CERN Document Server

    Ando, Tomohiro

    2010-01-01

    Bayesian model selection is a fundamental part of the Bayesian statistical modeling process. The quality of these solutions usually depends on the goodness of the constructed Bayesian model. Realizing how crucial this issue is, many researchers and practitioners have been extensively investigating the Bayesian model selection problem. This book provides comprehensive explanations of the concepts and derivations of the Bayesian approach for model selection and related criteria, including the Bayes factor, the Bayesian information criterion (BIC), the generalized BIC, and the pseudo marginal lik

  4. A Heckman Selection-t Model

    KAUST Repository

    Marchenko, Yulia V.

    2012-03-01

    Sample selection arises often in practice as a result of the partial observability of the outcome of interest in a study. In the presence of sample selection, the observed data do not represent a random sample from the population, even after controlling for explanatory variables. That is, data are missing not at random. Thus, standard analysis using only complete cases will lead to biased results. Heckman introduced a sample selection model to analyze such data and proposed a full maximum likelihood estimation method under the assumption of normality. The method was criticized in the literature because of its sensitivity to the normality assumption. In practice, data, such as income or expenditure data, often violate the normality assumption because of heavier tails. We first establish a new link between sample selection models and recently studied families of extended skew-elliptical distributions. Then, this allows us to introduce a selection-t (SLt) model, which models the error distribution using a Student's t distribution. We study its properties and investigate the finite-sample performance of the maximum likelihood estimators for this model. We compare the performance of the SLt model to the conventional Heckman selection-normal (SLN) model and apply it to analyze ambulatory expenditures. Unlike the SLN model, our analysis using the SLt model provides statistical evidence for the existence of sample selection bias in these data. We also investigate the performance of the test for sample selection bias based on the SLt model and compare it with the performances of several tests used with the SLN model. Our findings indicate that the latter tests can be misleading in the presence of heavy-tailed data. © 2012 American Statistical Association.
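
    For background, here is a sketch of Heckman's classic two-step estimator for the normal case (the paper's SLt model instead fits a Student's t error by maximum likelihood). The simulated data have correlated selection and outcome errors, so a nonzero coefficient on the inverse Mills ratio flags selection bias.

```python
# Heckman two-step: probit selection equation, then OLS with the IMR.
import numpy as np
import statsmodels.api as sm
from scipy.stats import norm

rng = np.random.default_rng(10)
n = 5000
z = rng.normal(size=n)                     # exclusion restriction variable
x = rng.normal(size=n)
u = rng.multivariate_normal([0, 0], [[1, 0.6], [0.6, 1]], size=n)
selected = (0.5 + 1.0 * z + u[:, 0]) > 0   # y observed only if selected
y = 1.0 + 2.0 * x + u[:, 1]

# Step 1: probit for selection, then the inverse Mills ratio.
Zmat = sm.add_constant(np.column_stack([z, x]))
probit = sm.Probit(selected.astype(float), Zmat).fit(disp=0)
xb = probit.fittedvalues                   # linear predictor
imr = norm.pdf(xb) / norm.cdf(xb)

# Step 2: outcome regression on the selected sample with the IMR regressor.
X2 = sm.add_constant(np.column_stack([x[selected], imr[selected]]))
ols = sm.OLS(y[selected], X2).fit()
print(ols.params.round(2))  # [const, beta_x, lambda]; lambda != 0 => bias
```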

  5. Robust estimation procedure in panel data model

    Energy Technology Data Exchange (ETDEWEB)

    Shariff, Nurul Sima Mohamad [Faculty of Science of Technology, Universiti Sains Islam Malaysia (USIM), 71800, Nilai, Negeri Sembilan (Malaysia); Hamzah, Nor Aishah [Institute of Mathematical Sciences, Universiti Malaya, 50630, Kuala Lumpur (Malaysia)

    2014-06-19

    Panel data modeling has received great attention in econometric research recently. This is due to the availability of data sources and the interest in studying cross sections of individuals observed over time. However, problems may arise in modeling the panel in the presence of cross sectional dependence and outliers. Even though a few methods take into consideration the presence of cross sectional dependence in the panel, these methods may provide inconsistent parameter estimates and inferences when outliers occur in the panel. As such, an alternative method that is robust to outliers and cross sectional dependence is introduced in this paper. The properties and construction of the confidence interval for the parameter estimates are also considered. The robustness of the procedure is investigated and comparisons are made to the existing method via simulation studies. Our results show that the robust approach is able to produce accurate and reliable parameter estimates under the conditions considered.

  6. Introduction. Modelling natural action selection.

    Science.gov (United States)

    Prescott, Tony J; Bryson, Joanna J; Seth, Anil K

    2007-09-29

    Action selection is the task of resolving conflicts between competing behavioural alternatives. This theme issue is dedicated to advancing our understanding of the behavioural patterns and neural substrates supporting action selection in animals, including humans. The scope of problems investigated includes: (i) whether biological action selection is optimal (and, if so, what is optimized), (ii) the neural substrates for action selection in the vertebrate brain, (iii) the role of perceptual selection in decision-making, and (iv) the interaction of group and individual action selection. A second aim of this issue is to advance methodological practice with respect to modelling natural action selection. A wide variety of computational modelling techniques are therefore employed, ranging from formal mathematical approaches through to computational neuroscience, connectionism and agent-based modelling. The research described has broad implications for both natural and artificial sciences. One example, highlighted here, is its application to medical science, where models of the neural substrates for action selection are contributing to the understanding of brain disorders such as Parkinson's disease, schizophrenia and attention deficit/hyperactivity disorder.

  7. A Multi-objective Procedure for Efficient Regression Modeling

    CERN Document Server

    Sinha, Ankur; Kuosmanen, Timo

    2012-01-01

    Variable selection is recognized as one of the most critical steps in statistical modeling. The problems encountered in engineering and the social sciences are commonly characterized by an over-abundance of explanatory variables, non-linearities and unknown interdependencies between the regressors. An added difficulty is that analysts may have little or no prior knowledge of the relative importance of the variables. To provide a robust method for model selection, this paper introduces a technique called the Multi-objective Genetic Algorithm for Variable Selection (MOGA-VS), which provides the user with an efficient set of regression models for a given data set. The algorithm treats the regression problem as a two-objective task, where the purpose is to prefer models which have fewer regression coefficients and better goodness of fit. In MOGA-VS, the model selection procedure is implemented in two steps. First, we generate the frontier of all efficient or non-dominated regression m...
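
    The two-objective view can be sketched by brute force on a small synthetic problem (MOGA-VS searches this space with a genetic algorithm instead): enumerate candidate subsets and keep the non-dominated models in (number of coefficients, residual sum of squares).

```python
# Pareto frontier of regression models: model size vs residual SS.
import itertools
import numpy as np

rng = np.random.default_rng(11)
n, p = 100, 5
X = rng.normal(size=(n, p))
y = 2 * X[:, 0] - X[:, 1] + rng.normal(scale=0.5, size=n)

models = []
for k in range(p + 1):
    for cols in itertools.combinations(range(p), k):
        A = np.column_stack([np.ones(n)] + [X[:, j] for j in cols])
        beta, *_ = np.linalg.lstsq(A, y, rcond=None)
        rss = ((y - A @ beta) ** 2).sum()
        models.append((len(cols), rss, cols))

frontier = [m for m in models
            if not any(o[0] <= m[0] and o[1] < m[1] for o in models)]
for size, rss, cols in sorted(frontier):
    print(f"k={size} vars={cols} RSS={rss:.1f}")
```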

  8. Estimation of a multivariate mean under model selection uncertainty

    Directory of Open Access Journals (Sweden)

    Georges Nguefack-Tsague

    2014-05-01

    Model selection uncertainty would occur if we selected a model based on one data set and subsequently applied it for statistical inferences, because the "correct" model would not be selected with certainty. When the selection and inference are based on the same dataset, some additional problems arise due to the correlation of the two stages (selection and inference). In this paper model selection uncertainty is considered and model averaging is proposed. The proposal is related to the theory of James and Stein on estimating more than three parameters from independent normal observations. We suggest that a model averaging scheme taking into account the selection procedure could be more appropriate than model selection alone. Some properties of this model averaging estimator are investigated; in particular, we show using Stein's results that it is a minimax estimator and can outperform Stein-type estimators.

  9. Bayesian model selection in Gaussian regression

    CERN Document Server

    Abramovich, Felix

    2009-01-01

    We consider a Bayesian approach to model selection in Gaussian linear regression, where the number of predictors might be much larger than the number of observations. From a frequentist view, the proposed procedure results in the penalized least squares estimation with a complexity penalty associated with a prior on the model size. We investigate the optimality properties of the resulting estimator. We establish the oracle inequality and specify conditions on the prior that imply its asymptotic minimaxity within a wide range of sparse and dense settings for "nearly-orthogonal" and "multicollinear" designs.

  10. A Bayesian variable selection procedure for ranking overlapping gene sets

    DEFF Research Database (Denmark)

    Skarman, Axel; Mahdi Shariati, Mohammad; Janss, Luc

    2012-01-01

    Background Genome-wide expression profiling using microarrays or sequence-based technologies allows us to identify genes and genetic pathways whose expression patterns influence complex traits. Different methods to prioritize gene sets, such as the genes in a given molecular pathway, have been described. In many cases, these methods test one gene set at a time, and therefore do not consider overlaps among the pathways. Here, we present a Bayesian variable selection method to prioritize gene sets that overcomes this limitation by considering all gene sets simultaneously. We applied Bayesian variable selection to differential expression to prioritize the molecular and genetic pathways involved in the responses to Escherichia coli infection in Danish Holstein cows. Results We used a Bayesian variable selection method to prioritize Kyoto Encyclopedia of Genes and Genomes pathways. We used our data to study how the variable selection method was affected by overlaps among the pathways. In addition, we compared our approach to another that ignores the overlaps, and studied the differences in the prioritization. The variable selection method was robust to a change in prior probability and stable given a limited number of observations.

  11. Compositional Analysis Procedures for Selected Elastomers Used in Sonar Transducers

    Science.gov (United States)

    1987-03-16

    in ASTM D297, Method 54.* Permitted deviations from this procedure include the use of boric acid solution as the trapping medium with subsequent... ratio, 3,788. * D297-81, "Standard Methods for Rubber Products—Chemical Analysis," Annual Book of ASTM Standards, Part 37. ** D3533-76, "Standard...

  12. An item selection procedure to maximise scale reliability and validity

    Directory of Open Access Journals (Sweden)

    J. Raubenheimer

    2004-10-01

    Wille (1996) proposed an item selection strategy which may be used to maximise, first, the internal consistency and, next, the convergent and discriminant validity of items in multi-dimensional Likert-type questionnaires or scales. In terms of his strategy, the latter aspects of validity are maximised by means of exploratory factor analyses. In this article, it is done by means of Tateneni, Mels, Cudeck and Browne's (2001) Comprehensive Exploratory Factor Analysis (CEFA) program, which implements exploratory factor analysis but provides the advantages of standard confirmatory factor analysis (e.g., the computation of the standard errors of the rotated factor loadings and measures of model fit). The benefits that accrue by using this incremental approach are demonstrated in terms of Allport and Ross's (1967) Religious Orientation Scale, a widely-used psychological instrument.

  13. Procedure for the Selection of Maximum Pipe Thickness for Efficient Thermal Insulation in Piping with Steam Trace

    Directory of Open Access Journals (Sweden)

    Amauris Gilbert-Hernández

    2016-05-01

    A procedure for the selection of the maximum pipe thickness that achieves efficient thermal insulation in piping with steam tracing was developed. The literature review identified the limitations of previous investigations regarding the selection of pipe thickness in transfer systems with steam tracing. A model for calculating the overall heat loss was prepared. The procedure applies economic criteria to the selection of pipe thickness and establishes an optimal thickness value which guarantees a minimum total cost by balancing the expenditures resulting from heat loss against the project costs.
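
    The economic balance the procedure formalizes can be illustrated with a toy cost model (all coefficients below are invented): annual heat-loss cost falls with insulation thickness while material cost rises, and the optimum minimizes their sum.

```python
# Optimal insulation thickness as a one-dimensional cost minimization.
from scipy.optimize import minimize_scalar

def total_cost(thickness_m):
    heat_loss = 120.0 / (1.0 + 25.0 * thickness_m)   # $/m/yr, falls with t
    material = 300.0 * thickness_m                   # $/m/yr, amortized
    return heat_loss + material

res = minimize_scalar(total_cost, bounds=(0.01, 0.30), method="bounded")
print(f"optimal thickness: {res.x * 1000:.0f} mm, "
      f"total cost {res.fun:.1f} $/m/yr")
```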

  14. Federal Source Selection Procedures in Competitive Negotiated Acquisitions.

    Science.gov (United States)

    1982-05-01

    officials, see EPSCO, Incorporated, B-183816, November 21, 1975, 55 Comp. Gen., 75-2 CPD 338, we have recognized that selection officials are not bound... is vested in the "considerable range of judgment and discretion" of the selection officials, EPSCO, Incorporated, supra, who have a "very broad... Edward E. Davis Contracting, Inc., Comp. Gen. Dec. B-199542, January 13, 1981, 81-1 CPD 120. 97. EPSCO, Incorporated, Comp. Gen. Dec. B-183816

  15. Procedure selection for the flexible adult acquired flatfoot deformity.

    Science.gov (United States)

    Hentges, Matthew J; Moore, Kyle R; Catanzariti, Alan R; Derner, Richard

    2014-07-01

    Adult acquired flatfoot represents a spectrum of deformities affecting the foot and the ankle. The flexible, or nonfixed, deformity must be treated appropriately to decrease the morbidity that accompanies a fixed flatfoot deformity or deformity extending into the ankle joint. A comprehensive approach must be taken, including addressing equinus deformity, hindfoot valgus, forefoot supinatus, and medial column instability. A combination of osteotomies, limited arthrodesis, and medial column stabilization procedures is required to completely address the deformity.

  16. 47 CFR 80.103 - Digital selective calling (DSC) operating procedures.

    Science.gov (United States)

    2010-10-01

    § 80.103 Digital selective calling (DSC) operating procedures (47 CFR, Telecommunication, Operating Procedures-General). ... in accordance with 5 U.S.C. 552(a) and 1 CFR part 51. Copies of this standard can be inspected at the Federal... (a) Operating...

  17. 48 CFR 36.301 - Use of two-phase design-build selection procedures.

    Science.gov (United States)

    2010-10-01

... 48 Federal Acquisition Regulations System 1 (2010-10-01): ACQUISITION REGULATION, SPECIAL CATEGORIES OF CONTRACTING, CONSTRUCTION AND ARCHITECT-ENGINEER CONTRACTS, Two-Phase Design-Build Selection Procedures, 36.301 Use of two-phase design-build selection procedures....

  18. 23 CFR 636.202 - When are two-phase design-build selection procedures appropriate?

    Science.gov (United States)

    2010-04-01

... 23 Highways 1 (2010-04-01): When are two-phase design-build selection procedures appropriate? 636.202 Section 636.202 Highways, FEDERAL HIGHWAY ADMINISTRATION, DEPARTMENT OF TRANSPORTATION, ENGINEERING AND TRAFFIC OPERATIONS, DESIGN-BUILD CONTRACTING, Selection Procedures, Award Criteria §...

  19. Post-model selection inference and model averaging

    Directory of Open Access Journals (Sweden)

    Georges Nguefack-Tsague

    2011-07-01

Full Text Available Although model selection is routinely used in practice nowadays, little is known about its precise effects on any subsequent inference that is carried out. The same goes for the effects induced by the closely related technique of model averaging. This paper is concerned with the use of the same data first to select a model and then to carry out inference, in particular point estimation and point prediction. The properties of the resulting estimator, called a post-model-selection estimator (PMSE), are hard to derive. Using selection criteria such as hypothesis testing, AIC, BIC, HQ and Cp, we illustrate that, in terms of risk function, no single PMSE dominates the others. The same conclusion holds more generally for any penalised likelihood information criterion. We also compare various model averaging schemes and show that no single one dominates the others in terms of risk function. Since PMSEs can be regarded as a special case of model averaging, with 0-1 random weights, we propose a connection between the two theories, in the frequentist approach, by taking account of the selection procedure when performing model averaging. We illustrate the point by simulating a simple linear regression model.
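A toy simulation in this spirit is easy to set up. The sketch below (all settings are illustrative, not the paper's design) computes a post-model-selection prediction with 0-1 AIC weights and a smooth AIC-weighted model average over all candidate submodels of a small linear regression:

```python
# Compare a post-model-selection estimator (AIC winner) with simple
# AIC-weight model averaging; a minimal sketch, assuming this toy setup.
import numpy as np
from itertools import combinations

rng = np.random.default_rng(0)
n, beta = 50, np.array([1.0, 0.5, 0.0])   # third regressor is irrelevant
X = rng.normal(size=(n, 3))
y = X @ beta + rng.normal(size=n)

def fit_aic(cols):
    """OLS on the given columns; return (AIC, prediction at x0 = ones)."""
    Xs = X[:, cols]
    b, *_ = np.linalg.lstsq(Xs, y, rcond=None)
    rss = np.sum((y - Xs @ b) ** 2)
    aic = n * np.log(rss / n) + 2 * len(cols)
    return aic, np.ones(3)[list(cols)] @ b

models = [c for k in (1, 2, 3) for c in combinations(range(3), k)]
fits = [fit_aic(c) for c in models]
aics = np.array([a for a, _ in fits])
preds = np.array([p for _, p in fits])

best = preds[np.argmin(aics)]                 # PMSE: 0-1 weights
w = np.exp(-0.5 * (aics - aics.min()))
w /= w.sum()                                  # smooth AIC weights
print(f"post-selection prediction: {best:.3f}, averaged: {w @ preds:.3f}")
```

Repeating the draw many times and comparing squared errors reproduces the qualitative point above: neither rule dominates uniformly.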

  20. The Optimal Selection for Restricted Linear Models with Average Estimator

    Directory of Open Access Journals (Sweden)

    Qichang Xie

    2014-01-01

Full Text Available The essential task of risk investment is to select an optimal tracking portfolio among various portfolios. Statistically, this process can be achieved by choosing an optimal restricted linear model. This paper develops a statistical procedure to do this, based on selecting appropriate weights for averaging approximately restricted models. The method of weighted average least squares is adopted to estimate the approximately restricted models under a dependent error setting. The optimal weights are selected by minimizing a k-class generalized information criterion (k-GIC), which is an estimate of the average squared error from the model average fit. This model selection procedure is shown to be asymptotically optimal in the sense of obtaining the lowest possible average squared error. Monte Carlo simulations illustrate that the suggested method has comparable efficiency to some alternative model selection techniques.

  1. Bayesian Evidence and Model Selection

    CERN Document Server

    Knuth, Kevin H; Malakar, Nabin K; Mubeen, Asim M; Placek, Ben

    2014-01-01

In this paper we review the concept of the Bayesian evidence and its application to model selection. The theory is presented along with a discussion of analytic, approximate and numerical techniques. Applications to several practical examples within the context of signal processing are discussed.
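For low-dimensional models the evidence can be computed by direct numerical integration. The sketch below is a generic illustration (not taken from the paper): it compares a fixed-mean Gaussian model against a model with a N(0,1) prior on the mean by integrating the one-dimensional prior on a grid:

```python
# Bayesian evidence for model selection; a minimal sketch assuming two
# simple candidate models M0: y ~ N(0,1) and M1: y ~ N(mu,1), mu ~ N(0,1).
import numpy as np
from scipy import stats
from scipy.integrate import trapezoid

rng = np.random.default_rng(1)
y = rng.normal(0.3, 1.0, size=20)         # data with a small true mean

# Evidence of M0: likelihood at the fixed parameter value mu = 0.
log_ev0 = stats.norm.logpdf(y, 0.0, 1.0).sum()

# Evidence of M1: integrate likelihood * prior over mu on a grid.
mu = np.linspace(-5, 5, 2001)
log_lik = stats.norm.logpdf(y[:, None], mu[None, :], 1.0).sum(axis=0)
log_post = log_lik + stats.norm.logpdf(mu, 0.0, 1.0)
shift = log_post.max()                     # stabilize the exponentiation
log_ev1 = np.log(trapezoid(np.exp(log_post - shift), mu)) + shift

print(f"log Bayes factor (M1 vs M0): {log_ev1 - log_ev0:.2f}")
```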

  2. A Survey on Procedural Modelling for Virtual Worlds

    NARCIS (Netherlands)

    Smelik, R.M.; Tutenel, T.; Bidarra, R.; Benes, B.

    2014-01-01

Procedural modelling deals with (semi-)automatic content generation by means of a program or procedure. Among its other advantages, its data compression and its potential to generate a large variety of detailed content with reduced human intervention have made procedural modelling attractive for creating virtual worlds.

  3. Selection of Temporal Lags When Modeling Economic and Financial Processes.

    Science.gov (United States)

    Matilla-Garcia, Mariano; Ojeda, Rina B; Marin, Manuel Ruiz

    2016-10-01

    This paper suggests new nonparametric statistical tools and procedures for modeling linear and nonlinear univariate economic and financial processes. In particular, the tools presented help in selecting relevant lags in the model description of a general linear or nonlinear time series; that is, nonlinear models are not a restriction. The tests seem to be robust to the selection of free parameters. We also show that the test can be used as a diagnostic tool for well-defined models.
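The authors' specific statistics are not reproduced here, but a nonparametric lag screen in the same spirit can be illustrated with mutual information; the estimator choice, the toy process, and all settings below are assumptions:

```python
# Screen candidate lags of a nonlinear series by mutual information;
# a minimal sketch, not the paper's test, assuming this synthetic process.
import numpy as np
from sklearn.feature_selection import mutual_info_regression

rng = np.random.default_rng(2)
n = 500
x = np.zeros(n)
for t in range(2, n):                       # nonlinear dependence on lag 2
    x[t] = 0.8 * np.tanh(x[t - 2]) + rng.normal(scale=0.5)

max_lag = 6
Y = x[max_lag:]                             # targets
L = np.column_stack([x[max_lag - k:-k] for k in range(1, max_lag + 1)])
mi = mutual_info_regression(L, Y, random_state=0)
for k, m in enumerate(mi, start=1):
    print(f"lag {k}: MI = {m:.3f}")         # lag 2 should stand out
```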

  4. Information Overload in Multi-Stage Selection Procedures

    NARCIS (Netherlands)

    S.S. Ficco (Stefano); V.A. Karamychev (Vladimir)

    2004-01-01

The paper studies information processing imperfections in a fully rational decision-making network. It is shown that imperfect information transmission and imperfect information acquisition in a multi-stage selection game yield information overload. The paper analyses the mechanisms responsible…

  5. 24 CFR 984.203 - FSS family selection procedures.

    Science.gov (United States)

    2010-04-01

... Action Plan. (c) Motivation as a selection factor—(1) General. A PHA may screen families for interest and motivation to participate in the FSS program, provided that the factors utilized by the PHA are those which solely measure the family's interest and motivation to participate in the FSS program....

  7. Investing in Computer Technology: Criteria and Procedures for System Selection.

    Science.gov (United States)

    Hofstetter, Fred T.

    The criteria used by the University of Delaware in selecting the PLATO computer-based educational system are discussed in this document. Consideration was given to support for instructional strategies, requirements of the student learning station, features for instructors and authors of instructional materials, general operational characteristics,…

  9. Bayesian model evidence for order selection and correlation testing.

    Science.gov (United States)

    Johnston, Leigh A; Mareels, Iven M Y; Egan, Gary F

    2011-01-01

    Model selection is a critical component of data analysis procedures, and is particularly difficult for small numbers of observations such as is typical of functional MRI datasets. In this paper we derive two Bayesian evidence-based model selection procedures that exploit the existence of an analytic form for the linear Gaussian model class. Firstly, an evidence information criterion is proposed as a model order selection procedure for auto-regressive models, outperforming the commonly employed Akaike and Bayesian information criteria in simulated data. Secondly, an evidence-based method for testing change in linear correlation between datasets is proposed, which is demonstrated to outperform both the traditional statistical test of the null hypothesis of no correlation change and the likelihood ratio test.
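The evidence information criterion itself is not reproduced here, but the AR order selection task it addresses can be sketched with the standard AIC/BIC comparison it is benchmarked against:

```python
# AR model order selection via AIC and BIC on a simulated AR(2) series;
# a minimal sketch using statsmodels' helper, with illustrative settings.
import numpy as np
from statsmodels.tsa.ar_model import ar_select_order

rng = np.random.default_rng(3)
n = 300
x = np.zeros(n)
for t in range(2, n):                       # true AR(2) process
    x[t] = 0.6 * x[t - 1] - 0.3 * x[t - 2] + rng.normal()

for ic in ("aic", "bic"):
    sel = ar_select_order(x, maxlag=8, ic=ic)
    print(ic, "->", sel.ar_lags)            # lags retained by each criterion
```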

  10. Cognition and procedure representational requirements for predictive human performance models

    Science.gov (United States)

    Corker, K.

    1992-01-01

Models and modeling environments for human performance are becoming significant contributors to early system design and analysis procedures. Issues of levels of automation, physical environment, informational environment, and manning requirements are being addressed by such man/machine analysis systems. The research reported here investigates the close interaction between models of human cognition and models that describe procedural performance. We describe a methodology for the decomposition of aircrew procedures that supports interaction with models of cognition on the basis of procedures observed; that serves to identify cockpit/avionics information sources and crew information requirements; and that provides the structure to support methods for function allocation among crew and aiding systems. Our approach is to develop an object-oriented, modular, executable software representation of the aircrew, the aircraft, and the procedures necessary to satisfy flight-phase goals. We then encode, in a time-based language, taxonomies of the conceptual, relational, and procedural constraints among the cockpit avionics and control system and the aircrew. We have designed and implemented a goals/procedures hierarchic representation sufficient to describe procedural flow in the cockpit. We then execute the procedural representation in simulation software and calculate the values of the flight instruments, aircraft state variables and crew resources using the constraints available from the relationship taxonomies. The system provides a flexible, extensible, manipulable and executable representation of aircrew and procedures that is generally applicable to crew/procedure task analysis. The representation supports developed methods of intent inference, and is extensible to include issues of information requirements and functional allocation. We are attempting to link the procedural representation to models of cognitive functions to establish several intent inference methods.
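As a rough illustration of the kind of executable goals/procedures hierarchy described (the class names, fields, and timings below are invented for illustration, not the authors' software):

```python
# A toy goals/procedures hierarchy that can be "executed" to accumulate
# completion time; a minimal sketch under assumed names and durations.
from dataclasses import dataclass, field

@dataclass
class Procedure:
    name: str
    info_sources: list[str]          # cockpit/avionics info required
    duration_s: float

@dataclass
class Goal:
    name: str
    subgoals: list["Goal"] = field(default_factory=list)
    procedures: list[Procedure] = field(default_factory=list)

    def execute(self, t: float = 0.0) -> float:
        """Walk the hierarchy depth-first, returning completion time."""
        for proc in self.procedures:
            t += proc.duration_s
        for goal in self.subgoals:
            t = goal.execute(t)
        return t

descend = Goal("initiate_descent", procedures=[
    Procedure("set_altitude", ["MCP altitude window"], 4.0),
    Procedure("verify_fms", ["FMS CDU"], 6.0)])
phase = Goal("approach_phase", subgoals=[descend])
print("completion time:", phase.execute(), "s")
```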

  11. Entropic criterion for model selection

    Science.gov (United States)

    Tseng, Chih-Yuan

    2006-10-01

Model or variable selection is usually achieved by ranking models according to increasing order of preference. One such method applies the Kullback-Leibler distance, or relative entropy, as a selection criterion. Yet this raises two questions: why use this criterion, and are there any other criteria? Besides, conventional approaches require a reference prior, which is usually difficult to obtain. Following the logic of inductive inference proposed by Caticha [Relative entropy and inductive inference, in: G. Erickson, Y. Zhai (Eds.), Bayesian Inference and Maximum Entropy Methods in Science and Engineering, AIP Conference Proceedings, vol. 707, 2004 (available from arXiv.org/abs/physics/0311093)], we show relative entropy to be a unique criterion which requires no prior information and can be applied to different fields. We examine this criterion by considering a physical problem, simple fluids, and the results are promising.
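A toy version of entropy-based ranking: bin the data, compute the relative entropy from the empirical distribution to each candidate's predictive distribution, and prefer the smallest. The binning and candidate set below are assumptions, not the paper's setup:

```python
# Rank candidate models by KL(empirical || model) over histogram bins;
# a minimal sketch with an illustrative candidate set.
import numpy as np
from scipy import stats
from scipy.special import rel_entr

rng = np.random.default_rng(4)
data = rng.normal(1.0, 2.0, size=2000)

edges = np.linspace(-8, 10, 40)
dens, _ = np.histogram(data, bins=edges, density=True)
p_emp = dens * np.diff(edges)               # empirical bin probabilities

candidates = {"N(0,1)": stats.norm(0, 1),
              "N(1,2)": stats.norm(1, 2),
              "t(3)":   stats.t(3)}
for name, dist in candidates.items():
    q = np.diff(dist.cdf(edges))            # model bin probabilities
    kl = rel_entr(p_emp, q).sum()           # relative entropy
    print(f"{name}: KL = {kl:.3f}")         # smaller = preferred
```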

  12. A Selective Review of Group Selection in High Dimensional Models

    CERN Document Server

    Huang, Jian; Ma, Shuangge

    2012-01-01

    Grouping structures arise naturally in many statistical modeling problems. Several methods have been proposed for variable selection that respect grouping structure in variables. Examples include the group LASSO and several concave group selection methods. In this article, we give a selective review of group selection concerning methodological developments, theoretical properties, and computational algorithms. We pay particular attention to group selection methods involving concave penalties. We address both group selection and bi-level selection methods. We describe several applications of these methods in nonparametric additive models, semiparametric regression, seemingly unrelated regressions, genomic data analysis and genome wide association studies. We also highlight some issues that require further study.
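One of the reviewed methods, the group LASSO, admits a compact proximal-gradient sketch; the data, group structure, and penalty level below are illustrative choices:

```python
# Group LASSO by proximal gradient (ISTA) with block soft-thresholding;
# a minimal sketch assuming three groups of three coefficients each.
import numpy as np

rng = np.random.default_rng(5)
n, groups = 100, [np.arange(0, 3), np.arange(3, 6), np.arange(6, 9)]
X = rng.normal(size=(n, 9))
beta_true = np.r_[1.0, -1.0, 0.5, 0, 0, 0, 0, 0, 0]   # only group 1 active
y = X @ beta_true + 0.3 * rng.normal(size=n)

lam, step = 8.0, 1.0 / np.linalg.norm(X, 2) ** 2      # step = 1/L
beta = np.zeros(9)
for _ in range(500):
    z = beta - step * X.T @ (X @ beta - y)            # gradient step
    for g in groups:                                  # block soft threshold
        norm_g = np.linalg.norm(z[g])
        if norm_g > 0:
            z[g] = max(0.0, 1 - step * lam / norm_g) * z[g]
    beta = z
print(np.round(beta, 2))      # groups 2 and 3 should be zeroed out entirely
```

The key property of the penalty is visible in the output: whole groups are switched off together, rather than individual coefficients.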

  13. Selected soil thermal conductivity models

    Directory of Open Access Journals (Sweden)

    Rerak Monika

    2017-01-01

Full Text Available The paper presents soil thermal conductivity models collected from the literature. Thermal conductivity is a very important parameter, which allows one to assess how much heat can be transferred from underground power cables through the soil. The models are presented in table form, so that when the properties of the soil are given, it is possible to select the most accurate method of calculating its thermal conductivity. Precise determination of this parameter allows the cable line to be designed in such a way that cable overheating does not occur.

  14. Model selection for Poisson processes with covariates

    CERN Document Server

    Sart, Mathieu

    2011-01-01

We observe $n$ inhomogeneous Poisson processes with covariates and aim at estimating their intensities. To handle this problem, we assume that the intensity of each Poisson process is of the form $s(\cdot, x)$ where $x$ is the covariate and where $s$ is an unknown function. We propose a model selection approach where the models are used to approximate the multivariate function $s$. We show that our estimator satisfies an oracle-type inequality under very weak assumptions both on the intensities and the models. By using a Hellinger-type loss, we establish non-asymptotic risk bounds and specify them under various kinds of assumptions on the target function $s$, such as being smooth or composite. Besides, we show that our estimation procedure is robust with respect to these assumptions.

  15. Biological chip technology to quickly batch select optimum cryopreservation procedure

    Institute of Scientific and Technical Information of China (English)

    YU Lina; LIU Jing; ZHOU Yixin; HUA Zezhao

    2007-01-01

In the practice of cryobiology, selection of an optimum freeze/thaw program and an ideal cryoprotective agent often requires rather tedious, time-consuming and repetitive tests. Integrating the functions of sample preparation and viability detection, the concept of biochip technology was introduced to the field of cryopreservation, aiming at quickly finding an optimum freezing and thawing program. Prototype devices were fabricated and corresponding experimental tests were performed. It was shown that the microflow-channel chip could not offer a high-quality solution distribution. As an alternative, the spot-dropping chip proved to be an excellent way to load the sample quickly and reliably. Infrared thermal mapping of such a chip showed that it had a rather uniform heat transfer boundary. Applying the spot-dropping chip combined with the thermoelectric cooling device, the final output of cryopreservation of multiple samples was tested, and the optimal freeze/thaw program as well as the potentially best concentration of the cryoprotective agent was found by analyzing the results. Further, application of this technique to measure the thermo-physical properties of the cryoprotective agent was also investigated. The study demonstrated that a biochip with integrated automatic loading and inspection units opens the possibility of massive optimization of complex cryopreservation programs in a quicker and more economical way.

  16. Robust inference in sample selection models

    KAUST Repository

    Zhelonkin, Mikhail

    2015-11-20

The problem of non-random sample selectivity often occurs in practice in many fields. The classical estimators introduced by Heckman are the backbone of the standard statistical analysis of these models. However, these estimators are very sensitive to small deviations from the distributional assumptions which are often not satisfied in practice. We develop a general framework to study the robustness properties of estimators and tests in sample selection models. We derive the influence function and the change-of-variance function of Heckman's two-stage estimator, and we demonstrate the non-robustness of this estimator and its estimated variance to small deviations from the assumed model. We propose a procedure for robustifying the estimator, prove its asymptotic normality and give its asymptotic variance. Both cases with and without an exclusion restriction are covered. This allows us to construct a simple robust alternative to the sample selection bias test. We illustrate the use of our new methodology in an analysis of ambulatory expenditures and we compare the performance of the classical and robust methods in a Monte Carlo simulation study.
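The classical (non-robust) two-step estimator that the paper robustifies can be sketched as a probit first stage followed by an OLS second stage augmented with the inverse Mills ratio; the simulated design below is an assumption:

```python
# Heckman's classical two-step estimator on simulated selection data;
# a minimal sketch, not the paper's robustified version.
import numpy as np
import statsmodels.api as sm
from scipy import stats

rng = np.random.default_rng(6)
n = 2000
z = rng.normal(size=n)                      # exclusion restriction
x = rng.normal(size=n)
u, e = rng.multivariate_normal([0, 0], [[1, 0.6], [0.6, 1]], size=n).T
select = (0.5 + 1.0 * z + u > 0)            # selection equation
y = 1.0 + 2.0 * x + e                       # outcome equation, seen if selected

# Step 1: probit for selection, then the inverse Mills ratio.
Zmat = sm.add_constant(np.column_stack([z, x]))
probit = sm.Probit(select.astype(float), Zmat).fit(disp=0)
xb = Zmat @ probit.params
imr = stats.norm.pdf(xb) / stats.norm.cdf(xb)

# Step 2: OLS on the selected sample, augmented with the Mills ratio.
Xmat = sm.add_constant(np.column_stack([x[select], imr[select]]))
ols = sm.OLS(y[select], Xmat).fit()
print(ols.params)   # intercept, slope on x (~2), and the selectivity term
```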

  17. Modeling and prediction of surgical procedure times

    NARCIS (Netherlands)

    P.S. Stepaniak (Pieter); C. Heij (Christiaan); G. de Vries (Guus)

    2009-01-01

Accurate prediction of medical operation times is of crucial importance for cost-efficient operating room planning in hospitals. This paper investigates the possible dependence of procedure times on surgeon factors like age, experience, gender, and team composition. The effect of these factors…

  18. Two-step variable selection in quantile regression models

    Directory of Open Access Journals (Sweden)

    FAN Yali

    2015-06-01

Full Text Available We propose a two-step variable selection procedure for high dimensional quantile regression, in which the dimension of the covariates, p_n, is much larger than the sample size n. In the first step, we apply an l1 penalty, and we demonstrate that the first-step penalized estimator with the LASSO penalty can reduce the model from ultra-high dimensional to a model whose size has the same order as that of the true model, and that the selected model covers the true model. The second step excludes the remaining irrelevant covariates by applying the adaptive LASSO penalty to the reduced model obtained from the first step. Under some regularity conditions, we show that our procedure enjoys model selection consistency. We conduct a simulation study and a real data analysis to evaluate the finite sample performance of the proposed approach.
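A rough sketch of the two-step idea, using sklearn's QuantileRegressor as a stand-in for the authors' estimator; the penalty levels and the column-rescaling trick for the adaptive step are assumptions:

```python
# Two-step penalized median regression: l1 screen, then adaptive refit;
# a minimal sketch with illustrative settings.
import numpy as np
from sklearn.linear_model import QuantileRegressor

rng = np.random.default_rng(7)
n, p = 200, 30
X = rng.normal(size=(n, p))
beta = np.zeros(p); beta[:3] = [2.0, -1.5, 1.0]
y = X @ beta + rng.standard_t(df=3, size=n)   # heavy-tailed noise

# Step 1: l1-penalized median regression screens the covariates.
step1 = QuantileRegressor(quantile=0.5, alpha=0.05, solver="highs").fit(X, y)
keep = np.flatnonzero(np.abs(step1.coef_) > 1e-6)

# Step 2: adaptive weights via column rescaling by |beta_hat|, then refit;
# rescaling makes the effective penalty on coefficient j equal alpha/|b_j|.
w = np.abs(step1.coef_[keep])
step2 = QuantileRegressor(quantile=0.5, alpha=0.05, solver="highs")
step2.fit(X[:, keep] * w, y)
final_coef = step2.coef_ * w                  # undo the rescaling
print(dict(zip(keep.tolist(), np.round(final_coef, 2))))
```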

  19. A New Variable Weighting and Selection Procedure for K-Means Cluster Analysis

    Science.gov (United States)

    Steinley, Douglas; Brusco, Michael J.

    2008-01-01

    A variance-to-range ratio variable weighting procedure is proposed. We show how this weighting method is theoretically grounded in the inherent variability found in data exhibiting cluster structure. In addition, a variable selection procedure is proposed to operate in conjunction with the variable weighting technique. The performances of these…

  20. Weighted overlap dominance – a procedure for interactive selection on multidimensional interval data

    DEFF Research Database (Denmark)

    Hougaard, Jens Leth; Nielsen, Kurt

    2011-01-01

    We present an outranking procedure that supports selection of alternatives represented by multiple attributes with interval valued data. The procedure is interactive in the sense that the decision maker directs the search for preferred alternatives by providing weights of the different attributes...

  1. A procedure for Building Product Models

    DEFF Research Database (Denmark)

    Hvam, Lars

    1999-01-01

… easily adaptable concepts and methods from data modeling (object-oriented analysis) and domain modeling (product modeling). The concepts are general and can be used for modeling all types of specifications in the different phases of the product life cycle. The modeling techniques presented have been…

  2. Model selection for pion photoproduction

    Science.gov (United States)

    Landay, J.; Döring, M.; Fernández-Ramírez, C.; Hu, B.; Molina, R.

    2017-01-01

Partial-wave analysis of meson and photon-induced reactions is needed to enable the comparison of many theoretical approaches to data. In both energy-dependent and independent parametrizations of partial waves, the selection of the model amplitude is crucial. Principles of the S matrix are implemented to different degrees in different approaches, but an often overlooked aspect concerns the selection of undetermined coefficients and functional forms for fitting, leading to a minimal yet sufficient parametrization. We present an analysis of low-energy neutral pion photoproduction using the least absolute shrinkage and selection operator (LASSO) in combination with criteria from information theory and K-fold cross validation. These methods are not yet widely known in the analysis of excited hadrons but will become relevant in the era of precision spectroscopy. The principle is first illustrated with synthetic data; then, its feasibility for real data is demonstrated by analyzing the latest available measurements of differential cross sections (dσ/dΩ), photon-beam asymmetries (Σ), and target asymmetry differential cross sections (dσT/dΩ ≡ T dσ/dΩ) in the low-energy regime.

  3. Improvement and Validation of Weld Residual Stress Modelling Procedure

    Energy Technology Data Exchange (ETDEWEB)

    Zang, Weilin; Gunnars, Jens (Inspecta Technology AB, Stockholm (Sweden)); Dong, Pingsha; Hong, Jeong K. (Center for Welded Structures Research, Battelle, Columbus, OH (United States))

    2009-06-15

The objective of this work is to identify and evaluate improvements to the residual stress modelling procedure currently used in Sweden. There is a growing demand to eliminate any unnecessary conservatism involved in residual stress assumptions. The study focused on the development and validation of an improved weld residual stress modelling procedure, taking advantage of recent advances in residual stress modelling and stress measurement techniques. The major changes applied in the new weld residual stress modelling procedure are: an improved procedure for heat source calibration based on the use of analytical solutions; use of an isotropic hardening model where mixed hardening data are not available; and use of an annealing model for improved simulation of strain relaxation in re-heated material. The new modelling procedure is demonstrated to capture the main characteristics of the through-thickness stress distributions by validation against experimental measurements. Three austenitic stainless steel butt-weld cases are analysed, covering a large range of pipe geometries. From the cases it is evident that there can be large differences between the residual stresses predicted using the new procedure and those from the earlier procedure or handbook recommendations. Previously recommended profiles could give misleading fracture assessment results. The stress profiles according to the new procedure agree well with the measured data. If data are available, then a mixed hardening model should be used.

  4. Variable Selection for Generalized Varying Coefficient Partially Linear Models with Diverging Number of Parameters

    Institute of Scientific and Technical Information of China (English)

    Zheng-yan Lin; Yu-ze Yuan

    2012-01-01

Semiparametric models with a diverging number of predictors arise in many contemporary scientific areas. Variable selection for these models consists of two components: model selection for the nonparametric components and selection of significant variables for the parametric portion. In this paper, we consider a variable selection procedure that combines basis function approximation with the SCAD penalty. The proposed procedure simultaneously selects significant variables in the parametric components and the nonparametric components. With appropriate selection of tuning parameters, we establish the consistency and sparseness of this procedure.

  5. Procedures for Geometric Data Reduction in Solid Log Modelling

    Science.gov (United States)

    Luis G. Occeña; Wenzhen Chen; Daniel L. Schmoldt

    1995-01-01

    One of the difficulties in solid log modelling is working with huge data sets, such as those that come from computed axial tomographic imaging. Algorithmic procedures are described in this paper that have successfully reduced data without sacrificing modelling integrity.

  6. CTEPP STANDARD OPERATING PROCEDURE FOR SAMPLE SELECTION (SOP-1.10)

    Science.gov (United States)

    The procedures for selecting CTEPP study subjects are described in the SOP. The primary, county-level stratification is by region and urbanicity. Six sample counties in each of the two states (North Carolina and Ohio) are selected using stratified random sampling and reflect ...

  7. Computation of the Nash Equilibrium Selected by the Tracing Procedure in N-Person Games

    NARCIS (Netherlands)

    Herings, P.J.J.; van den Elzen, A.H.

    1998-01-01

Harsanyi and Selten (1988) have proposed a theory of equilibrium selection that selects a unique Nash equilibrium for any non-cooperative N-person game. The heart of their theory is given by the tracing procedure, a mathematical construction that adjusts arbitrary prior beliefs into equilibrium beliefs…

  8. Regression model development and computational procedures to support estimation of real-time concentrations and loads of selected constituents in two tributaries to Lake Houston near Houston, Texas, 2005-9

    Science.gov (United States)

    Lee, Michael T.; Asquith, William H.; Oden, Timothy D.

    2012-01-01

In December 2005, the U.S. Geological Survey (USGS), in cooperation with the City of Houston, Texas, began collecting discrete water-quality samples for nutrients, total organic carbon, bacteria (Escherichia coli and total coliform), atrazine, and suspended sediment at two USGS streamflow-gaging stations that represent watersheds contributing to Lake Houston (08068500 Spring Creek near Spring, Tex., and 08070200 East Fork San Jacinto River near New Caney, Tex.). Data from the discrete water-quality samples collected during 2005–9, in conjunction with continuously monitored real-time data that included streamflow and other physical water-quality properties (specific conductance, pH, water temperature, turbidity, and dissolved oxygen), were used to develop regression models for the estimation of concentrations of water-quality constituents of substantial source watersheds to Lake Houston. The potential explanatory variables included discharge (streamflow), specific conductance, pH, water temperature, turbidity, dissolved oxygen, and time (to account for seasonal variations inherent in some water-quality data). The response variables (the selected constituents) at each site were nitrite plus nitrate nitrogen, total phosphorus, total organic carbon, E. coli, atrazine, and suspended sediment. The explanatory variables provide easily measured quantities to serve as potential surrogate variables to estimate concentrations of the selected constituents through statistical regression. Statistical regression also facilitates accompanying estimates of uncertainty in the form of prediction intervals. Each regression model potentially can be used to estimate concentrations of a given constituent in real time. Among other regression diagnostics, the diagnostics used as indicators of general model reliability and reported herein include the adjusted R-squared, the residual standard error, residual plots, and p-values. Adjusted R-squared values for the Spring Creek models ranged…
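The surrogate-regression idea generalizes readily: regress (log) constituent concentration on (log) surrogates and report prediction intervals alongside point estimates. The sketch below uses synthetic data and assumed variable names, not the USGS dataset:

```python
# Surrogate regression with prediction intervals; a minimal sketch on
# synthetic data (a log-linear form and these names are assumptions).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(8)
n = 120
df = pd.DataFrame({
    "discharge": np.exp(rng.normal(3, 0.8, n)),       # streamflow surrogate
    "turbidity": np.exp(rng.normal(2, 0.6, n)),
})
df["sediment"] = np.exp(0.5 + 0.6 * np.log(df.discharge)
                        + 0.8 * np.log(df.turbidity)
                        + rng.normal(0, 0.3, n))       # synthetic truth

fit = smf.ols("np.log(sediment) ~ np.log(discharge) + np.log(turbidity)",
              data=df).fit()
new = pd.DataFrame({"discharge": [40.0], "turbidity": [12.0]})
pred = fit.get_prediction(new).summary_frame(alpha=0.05)
# Naive back-transform to concentration units (ignores retransformation bias).
print(np.exp(pred[["mean", "obs_ci_lower", "obs_ci_upper"]]))
```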

  9. Variable Selection for Varying-Coefficient Models with Missing Response at Random

    Institute of Scientific and Technical Information of China (English)

    Pei Xin ZHAO; Liu Gen XUE

    2011-01-01

    In this paper, we present a variable selection procedure by combining basis function approximations with penalized estimating equations for varying-coefficient models with missing response at random. With appropriate selection of the tuning parameters, we establish the consistency of the variable selection procedure and the optimal convergence rate of the regularized estimators. A simulation study is undertaken to assess the finite sample performance of the proposed variable selection procedure.

  10. Robust model selection and the statistical classification of languages

    Science.gov (United States)

    García, J. E.; González-López, V. A.; Viola, M. L. L.

    2012-10-01

In this paper we address the problem of model selection for the set of finite memory stochastic processes with finite alphabet, when the data are contaminated. We consider m independent samples, with more than half of them being realizations of the same stochastic process with law Q, which is the one we want to retrieve. We devise a model selection procedure such that, for a sample size large enough, the selected process is the one with law Q. Our model selection strategy is based on estimating relative entropies to select a subset of samples that are realizations of the same law. Although the procedure is valid for any family of finite order Markov models, we focus on the family of variable length Markov chain models, which includes the family of fixed order Markov chain models. We define the asymptotic breakdown point (ABDP) for a model selection procedure, and we derive the ABDP for our procedure. This means that if the proportion of contaminated samples is smaller than the ABDP, then, as the sample size grows, our procedure selects a model for the process with law Q. We also use our procedure in a setting where one sample is formed by the concatenation of sub-samples of two or more stochastic processes, with most of the sub-samples having law Q. We conducted a simulation study. In the application section we address the question of the statistical classification of languages according to their rhythmic features using speech samples. This is an important open problem in phonology. A persistent difficulty in this problem is that the speech samples comprise several sentences produced by diverse speakers, corresponding to a mixture of distributions. The usual procedure to deal with this has been to choose a subset of the original sample which seems to best represent each language, the selection being made by listening to the samples. In our application we use the full dataset without any preselection of samples. We apply our robust methodology, estimating…

  11. Financial applications of a Tabu search variable selection model

    Directory of Open Access Journals (Sweden)

    Zvi Drezner

    2001-01-01

Full Text Available We illustrate how a comparatively new technique, a Tabu search variable selection model (Drezner, Marcoulides and Salhi, 1999), can be applied efficiently within finance when the researcher must select a subset of variables from among the whole set of explanatory variables under consideration. Several types of problems in finance, including corporate and personal bankruptcy prediction, mortgage and credit scoring, and the selection of variables for the Arbitrage Pricing Model, require the researcher to select a subset of variables from a larger set. In order to demonstrate the usefulness of the Tabu search variable selection model, we: (1) illustrate its efficiency in comparison to the main alternative search procedures, such as stepwise regression and the Maximum R2 procedure; and (2) show how a version of the Tabu search procedure may be implemented when attempting to predict corporate bankruptcy. We accomplish (2) by indicating that a Tabu search procedure increases the predictability of corporate bankruptcy by up to 10 percentage points in comparison to Altman's (1968) Z-Score model.
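A bare-bones tabu search for subset selection looks like the following; the move set (single-variable flips), tabu tenure, and BIC objective are illustrative choices, not the cited model:

```python
# Tabu search over variable subsets, scoring each subset by OLS BIC;
# a minimal sketch with assumed settings.
import numpy as np

rng = np.random.default_rng(9)
n, p = 150, 12
X = rng.normal(size=(n, p))
beta = np.zeros(p); beta[[0, 3, 7]] = [1.5, -2.0, 1.0]
y = X @ beta + rng.normal(size=n)

def score(mask):
    """BIC of OLS on the selected columns (lower is better)."""
    k = int(mask.sum())
    if k == 0:
        return n * np.log(np.var(y))
    b, *_ = np.linalg.lstsq(X[:, mask], y, rcond=None)
    rss = np.sum((y - X[:, mask] @ b) ** 2)
    return n * np.log(rss / n) + k * np.log(n)

mask = np.zeros(p, dtype=bool)
best_mask, best = mask.copy(), score(mask)
tabu = {}                                   # variable -> iteration it frees up
for it in range(100):
    moves = [j for j in range(p) if tabu.get(j, -1) < it]
    j = min(moves, key=lambda j: score(mask ^ (np.arange(p) == j)))
    mask = mask ^ (np.arange(p) == j)       # flip the best admissible move
    tabu[j] = it + 5                        # tabu tenure of 5 iterations
    if score(mask) < best:
        best_mask, best = mask.copy(), score(mask)
print("selected:", np.flatnonzero(best_mask), "BIC:", round(best, 1))
```

Unlike stepwise regression, the tabu list lets the search accept temporarily worse subsets and so escape local optima.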

  12. Power mos devices: structures and modelling procedures

    Energy Technology Data Exchange (ETDEWEB)

    Rossel, P.; Charitat, G.; Tranduc, H.; Morancho, F.; Moncoqut

    1997-05-01

In this survey, the historical evolution of power MOS transistor structures is presented and currently used devices are described. General considerations on current and voltage capabilities are discussed and configurations of popular structures are given. A synthesis of the different modelling approaches proposed over the last three years is then presented, including analytical solutions for basic electrical parameters such as threshold voltage, on-resistance, saturation and quasi-saturation effects, temperature influence and voltage handling capability. The numerical solution of basic semiconductor device equations is then briefly reviewed, along with some typical problems which can be solved this way. A compact circuit modelling method is finally explained, with emphasis on dynamic behavior modelling.

  13. Lessons learned before and after cardiomyoplasty: risk sensitive patient selection and post procedure quality of life.

    Science.gov (United States)

    Furnary, A P; Swanson, J S; Grunkemeier, G; Starr, A

    1996-01-01

This paper unveils some of the clinical lessons we have learned from caring for cardiomyoplasty patients over the past 7 years. We examine both the clinical and scientific rationale for expanding the time frame of "procedural mortality" from 30 days to 90 days. Utilizing this definition of procedural mortality, preoperative patient variables were applied to postoperative patient outcomes in order to develop a risk-sensitive method of patient selection. Preoperative atrial fibrillation, elevated pulmonary capillary wedge pressure, decreased peak oxygen consumption, and the requirement for an intra-aortic balloon pump at the time of cardiomyoplasty were all found to be independent risk factors for early death following cardiomyoplasty. This analysis, which has been previously published, is reviewed and enhanced with the mathematical equations for duplicating these relative risk calculations. The mathematical model presented herein allows a method of risk stratification, which obviates the need for randomized congestive heart failure controls in the future. In the absence of a statistically regulated control population, we also examine the 1-year clinical outcomes of the nonrandomized control group of patients, who were followed during the North American FDA Phase II Cardiomyoplasty Trial. This quality of life comparison with cardiomyoplasty patients at 1 year revealed a significant decrease in intensive care unit patient-days, a significant increase in activity of daily living score, and a significant improvement in New York Heart Association functional class as compared to control.

  14. Selection of unidimensional scales from a multidimensional item bank in the polytomous Mokken IRT model

    NARCIS (Netherlands)

    Hemker, BT; Sijtsma, Klaas; Molenaar, Ivo W

    1995-01-01

An automated item selection procedure for selecting unidimensional scales of polytomous items from multidimensional datasets is developed for use in the context of the Mokken item response theory model of monotone homogeneity (Mokken & Lewis, 1982). The selection procedure is directly based on the s…

  15. A procedure for Applying a Maturity Model to Process Improvement

    Directory of Open Access Journals (Sweden)

    Elizabeth Pérez Mergarejo

    2014-09-01

Full Text Available A maturity model is an evolutionary roadmap for implementing the vital practices from one or more domains of organizational process. The use of maturity models is poor in the Latin-American context. This paper presents a procedure for applying the Process and Enterprise Maturity Model developed by Michael Hammer [1]. The procedure is divided into three steps: preparation, evaluation and improvement plan. Hammer's maturity model, together with the proposed procedure, can be used by organizations to improve their processes, involving managers and employees.

  16. Multi-block and path modelling procedures

    DEFF Research Database (Denmark)

    Høskuldsson, Agnar

    2008-01-01

    The author has developed a unified theory of path and multi-block modelling of data. The data blocks are arranged in a directional path. Each data block can lead to one or more data blocks. It is assumed that there is given a collection of input data blocks. Each of them is supposed to describe one...

  17. Inference-based procedural modeling of solids

    KAUST Repository

    Biggers, Keith

    2011-11-01

    As virtual environments become larger and more complex, there is an increasing need for more automated construction algorithms to support the development process. We present an approach for modeling solids by combining prior examples with a simple sketch. Our algorithm uses an inference-based approach to incrementally fit patches together in a consistent fashion to define the boundary of an object. This algorithm samples and extracts surface patches from input models, and develops a Petri net structure that describes the relationship between patches along an imposed parameterization. Then, given a new parameterized line or curve, we use the Petri net to logically fit patches together in a manner consistent with the input model. This allows us to easily construct objects of varying sizes and configurations using arbitrary articulation, repetition, and interchanging of parts. The result of our process is a solid model representation of the constructed object that can be integrated into a simulation-based environment. © 2011 Elsevier Ltd. All rights reserved.

  18. Parametric or nonparametric? A parametricness index for model selection

    CERN Document Server

    Liu, Wei; 10.1214/11-AOS899

    2012-01-01

In model selection literature, two classes of criteria perform well asymptotically in different situations: the Bayesian information criterion (BIC) (as a representative) is consistent in selection when the true model is finite dimensional (the parametric scenario); Akaike's information criterion (AIC) performs well in terms of asymptotic efficiency when the true model is infinite dimensional (the nonparametric scenario). But there is little work that addresses whether it is possible, and how, to detect which situation a specific model selection problem is in. In this work, we differentiate the two scenarios theoretically under some conditions. We develop a measure, the parametricness index (PI), to assess whether a model selected by a potentially consistent procedure can be practically treated as the true model, which also hints at whether AIC or BIC is better suited to the data for the goal of estimating the regression function. A consequence is that by switching between AIC and BIC based on the PI, the resulting regression estimator is si…

  19. Estimation and variable selection for generalized additive partial linear models

    KAUST Repository

    Wang, Li

    2011-08-01

    We study generalized additive partial linear models, proposing the use of polynomial spline smoothing for estimation of nonparametric functions, and deriving quasi-likelihood based estimators for the linear parameters. We establish asymptotic normality for the estimators of the parametric components. The procedure avoids solving large systems of equations as in kernel-based procedures and thus results in gains in computational simplicity. We further develop a class of variable selection procedures for the linear parameters by employing a nonconcave penalized quasi-likelihood, which is shown to have an asymptotic oracle property. Monte Carlo simulations and an empirical example are presented for illustration. © Institute of Mathematical Statistics, 2011.

  1. Executive Selection in the European Union: Does the Commission President Investiture Procedure Reduce the Democratic Deficit?

    Directory of Open Access Journals (Sweden)

    Simon Hix

    1997-11-01

    Full Text Available Central to all democratic systems is the ability of citizens to choose who holds executive power. To reduce the democratic-deficit in the EU, therefore, the Maastricht and Amsterdam Treaties give the European Parliament (EP a vote on the European Council nominee for Commission President. The effect, so many commentators claim, is a parliamentary model: where EP elections are connected via an EP majority to executive selection. However, these claims are misplaced. There are no incentives for national parties to compete for the Commission President, and every incentive for MEPs to abide by national-party rather than EP-party wishes. The result is that EP elections are ‘second-order national contests’, fought by national parties on national executive performance, and that the winning coalition in the investiture procedure is of ‘prime ministers’ parties’ not of ‘EP election victors’. Consequently, for a parliamentary model to work, either the EP should ‘go first’ in the investiture process, or the link between domestic parties and MEPs should be broken. However, if EP elections remain second-order, the only option may be a presidential model, where the Commission President is directly-elected.

  2. Variable Selection for Semiparametric Varying-Coefficient Partially Linear Models with Missing Response at Random

    Institute of Scientific and Technical Information of China (English)

    Pei Xin ZHAO; Liu Gen XUE

    2011-01-01

In this paper, we present a variable selection procedure by combining basis function approximations with penalized estimating equations for semiparametric varying-coefficient partially linear models with missing response at random. The proposed procedure simultaneously selects significant variables in the parametric components and the nonparametric components. With appropriate selection of the tuning parameters, we establish the consistency of the variable selection procedure and the convergence rate of the regularized estimators. A simulation study is undertaken to assess the finite sample performance of the proposed variable selection procedure.

  3. Recruitment and Selection of Foreign Professionals In the South African Job Market: Procedures and Processes

    Directory of Open Access Journals (Sweden)

    Chao Nkhungulu Mulenga

    2007-07-01

Full Text Available This study investigated the procedures and processes used by recruitment agencies in South Africa in the selection of prospective foreign applicants. An electronic survey was distributed to the accessible population of 244 agencies on a national employment website, yielding 57 respondents. The results indicate that the recruitment industry does not have standard, well-articulated procedures for identifying and selecting prospective foreign employees, and that it considers processing foreign applicants difficult. Difficulties with the Department of Home Affairs were a major hindrance to recruiting foreign applicants.

  4. Design Transformations for Rule-based Procedural Modeling

    KAUST Repository

    Lienhard, Stefan

    2017-05-24

    We introduce design transformations for rule-based procedural models, e.g., for buildings and plants. Given two or more procedural designs, each specified by a grammar, a design transformation combines elements of the existing designs to generate new designs. We introduce two technical components to enable design transformations. First, we extend the concept of discrete rule switching to rule merging, leading to a very large shape space for combining procedural models. Second, we propose an algorithm to jointly derive two or more grammars, called grammar co-derivation. We demonstrate two applications of our work: we show that our framework leads to a larger variety of models than previous work, and we show fine-grained transformation sequences between two procedural models.

  5. Empirical aspects about Heckman Procedure Application: Is there sample selection bias in the Brazilian Industry

    Directory of Open Access Journals (Sweden)

    Flávio Kaue Fiuza-Moura

    2015-12-01

Full Text Available There are several labor market studies whose main goal is to analyze the probability of employment and the structure of wage determination, and, for empirical purposes, most of them deploy Heckman's procedure for detecting and correcting sample selection bias. However, few Brazilian studies have focused on the applicability of this procedure, especially within specific industries. This paper addresses these issues by testing for the existence of sample selection bias in the Brazilian manufacturing industry and by analyzing the impact of the bias correction procedure on the estimated coefficients of OLS Mincer equations. We found a hazard of sample selection bias only in manufacturing segments whose average wages are lower than the market average, and only in groups of workers whose average wage level is below the market average (women, especially black women). The analysis and comparison of Mincer equations with and without Heckman's sample selection bias correction showed that the estimated coefficients for the wage differential of male over female workers and of urban over non-urban workers tend to be overestimated when sample selection bias is not corrected.

  6. Comparisons of Estimation Procedures for Nonlinear Multilevel Models

    Directory of Open Access Journals (Sweden)

    Ali Reza Fotouhi

    2003-05-01

Full Text Available We introduce general multilevel models and discuss the estimation procedures that may be used to fit them. We apply the proposed procedures to three-level binary data generated in a simulation study. We compare the procedures by two criteria, bias and efficiency. We find that the estimates of the fixed effects and variance components are substantially and significantly biased using Longford's approximation and Goldstein's generalized least squares approaches as implemented in the software packages VARCL and ML3. These estimates are not significantly biased and are very close to the real values when we use Markov chain Monte Carlo (MCMC) with Gibbs sampling or the nonparametric maximum likelihood (NPML) approach. The Gaussian quadrature (GQ) approach, even with a small number of mass points, results in consistent estimates but is computationally problematic. We conclude that the MCMC and NPML approaches are the recommended procedures for fitting multilevel models.
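The GQ approach mentioned above can be sketched directly: the cluster-specific random intercept is integrated out of a logit likelihood with Gauss-Hermite nodes. The simulated data and the 15-point rule below are assumptions:

```python
# Marginal log-likelihood of a random-intercept logit via Gauss-Hermite
# quadrature; a minimal sketch of the GQ idea with illustrative data.
import numpy as np
from numpy.polynomial.hermite_e import hermegauss

rng = np.random.default_rng(10)
m, n_j = 100, 10                              # clusters, obs per cluster
u = rng.normal(0, 1.0, size=m)                # true random intercepts
x = rng.normal(size=(m, n_j))
p = 1 / (1 + np.exp(-(0.5 + 1.0 * x + u[:, None])))
yobs = rng.binomial(1, p)

nodes, weights = hermegauss(15)               # probabilists' Hermite rule
weights = weights / np.sqrt(2 * np.pi)        # normalize against N(0,1)

def loglik(beta0, beta1, sigma):
    eta = beta0 + beta1 * x[:, :, None] + sigma * nodes[None, None, :]
    pij = 1 / (1 + np.exp(-eta))
    lik_j = np.prod(np.where(yobs[:, :, None] == 1, pij, 1 - pij), axis=1)
    return np.sum(np.log(lik_j @ weights))    # integrate out the intercept

print(round(loglik(0.5, 1.0, 1.0), 2))        # evaluate at the true values
```

Maximizing this function over (beta0, beta1, sigma) with any generic optimizer gives the GQ estimates; accuracy and cost both grow with the number of mass points, which is the computational issue the abstract notes.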

  7. Bayesian Constrained-Model Selection for Factor Analytic Modeling

    OpenAIRE

    Peeters, Carel F.W.

    2016-01-01

    My dissertation revolves around Bayesian approaches towards constrained statistical inference in the factor analysis (FA) model. Two interconnected types of restricted-model selection are considered. These types have a natural connection to selection problems in the exploratory FA (EFA) and confirmatory FA (CFA) model and are termed Type I and Type II model selection. Type I constrained-model selection is taken to mean the determination of the appropriate dimensionality of a model. This type ...

  8. Comparing Preference Assessments: Selection- versus Duration-Based Preference Assessment Procedures

    Science.gov (United States)

    Kodak, Tiffany; Fisher, Wayne W.; Kelley, Michael E.; Kisamore, April

    2009-01-01

In the current investigation, the results of a selection- and a duration-based preference assessment procedure were compared. A Multiple Stimulus With Replacement (MSW) preference assessment (Windsor, J., Piche, L. M., & Locke, P. A. (1994). Preference testing: A comparison of two presentation methods. Research in Developmental Disabilities,…

  9. 41 CFR 60-3.6 - Use of selection procedures which have not been validated.

    Science.gov (United States)

    2010-07-01

... 41 Public Contracts and Property Management 1 (2010-07-01): Use of selection procedures which have not been validated. 60-3.6 Section 60-3.6 Public Contracts and Property Management, Other Provisions Relating to Public Contracts, OFFICE OF FEDERAL CONTRACT COMPLIANCE PROGRAMS,...

  10. 41 CFR 60-3.3 - Discrimination defined: Relationship between use of selection procedures and discrimination.

    Science.gov (United States)

    2010-07-01

... 41 Public Contracts and Property Management 1 (2010-07-01): Discrimination defined: Relationship between use of selection procedures and discrimination. 60-3.3 Section 60-3.3 Public Contracts and Property Management, Other Provisions Relating to Public Contracts, OFFICE OF FEDERAL CONTRACT...

  11. A Rapid Selection Procedure for Simple Commercial Implementation of omega-Transaminase Reactions

    DEFF Research Database (Denmark)

    Gundersen Deslauriers, Maria; Tufvesson, Pär; Rackham, Emma J.

    2016-01-01

    A stepwise selection procedure is presented to quickly evaluate whether a given omega-transaminase reaction is suitable for a so-called "simple" scale-up for fast industrial implementation. Here "simple" is defined as a system without the need for extensive process development or specialized...

  12. 5 CFR 335.106 - Special selection procedures for certain veterans under merit promotion.

    Science.gov (United States)

    2010-01-01

    ... veterans under merit promotion. 335.106 Section 335.106 Administrative Personnel OFFICE OF PERSONNEL MANAGEMENT CIVIL SERVICE REGULATIONS PROMOTION AND INTERNAL PLACEMENT General Provisions § 335.106 Special selection procedures for certain veterans under merit promotion. Preference eligibles or veterans who...

  13. 48 CFR 570.105-2 - Two-phase design-build selection procedures.

    Science.gov (United States)

    2010-10-01

... 48 Federal Acquisition Regulations System 4 (2010-10-01): ADMINISTRATION, SPECIAL CONTRACTING PROGRAMS, ACQUIRING LEASEHOLD INTERESTS IN REAL PROPERTY, General, 570.105-2 Two-phase design-build..., you must use the two-phase design-build selection procedures in section 303M of the Federal Property...

  14. Procedural advice on self-assessment and task selection in learner-controlled education

    NARCIS (Netherlands)

    Taminiau, Bettine; Kester, Liesbeth; Corbalan, Gemma; Van Merriënboer, Jeroen; Kirschner, Paul A.

    2010-01-01

    Taminiau, E. M. C., Kester, L., Corbalan, G., Van Merriënboer, J. J. G., & Kirschner, P. A. (2010, July). Procedural advice on self-assessment and task selection in learner-controlled education. Paper presented at the Junior Researchers of EARLI Conference 2010, Frankfurt, Germany.

  15. Procedural advice on self-assessment and task selection in learner-controlled education

    NARCIS (Netherlands)

    Taminiau, Bettine; Corbalan, Gemma; Kester, Liesbeth; Van Merriënboer, Jeroen; Kirschner, Paul A.

    2011-01-01

    Taminiau, E. M. C., Corbalan, G., Kester, L., Van Merriënboer, J. J. G., & Kirschner, P. A. (2010, March). Procedural advice on self-assessment and task selection in learner-controlled education. Presentation at the ICO Springschool, Niederalteich, Germany.

  16. 48 CFR 570.305 - Two-phase design-build selection procedures.

    Science.gov (United States)

    2010-10-01

    ... proposals and their relative importance. (3) The maximum number of offerors to be selected to submit... importance. (c) The following procedures apply to phase-one evaluation factors: (1) Phase one factors include... performance of the offeror's team (including architect-engineer and construction members of the team)....

  17. Comparing the Performance of Five Multidimensional CAT Selection Procedures with Different Stopping Rules

    Science.gov (United States)

    Yao, Lihua

    2013-01-01

    Through simulated data, five multidimensional computerized adaptive testing (MCAT) selection procedures with varying test lengths are examined and compared using different stopping rules. Fixed item exposure rates are used for all the items, and the Priority Index (PI) method is used for the content constraints. Two stopping rules, standard error…

  18. 34 CFR 654.41 - What are the selection criteria and procedures?

    Science.gov (United States)

    2010-07-01

...) Whether the secondary school each scholar attends is within or outside the scholar's State of legal... 34 Education 3 (2010-07-01): What are the selection criteria and procedures? 654.41 Section 654.41 Education Regulations of the Offices of the Department of Education (Continued) OFFICE...

  19. modelling room cooling capacity with fuzzy logic procedure

    African Journals Online (AJOL)

    user

Modelling with fuzzy logic is an approach to forming… the way humans think and make judgments [10].… artificial intelligence and expert systems [17, 18] to… from selected cases, human professional computation and the Model predictions.

  20. Dresden Faculty selection procedure for medical students: what impact does it have, what is the outcome?

    Science.gov (United States)

    Hänsel, Mike; Klupp, S; Graupner, Anke; Dieter, Peter; Koch, Thea

    2010-01-01

Since 2004 German universities have been able to use a selection procedure to admit up to 60 percent of new students. In 2005, the Carl Gustav Carus Faculty of Medicine at Dresden introduced a new admission procedure. In order to take account of cognitive as well as non-cognitive competencies, the Faculty used the following selection criteria, based on the legal regulations for university admissions: the grade point average of the school-leaving exam (SSC, Abitur); marks in relevant school subjects; profession and work experience; premedical education; and a structured interview. In order to evaluate the effects of the Faculty admission procedures applied in the years 2005, 2006 and 2007, the results on the First National Medical Examination (FNME) were compared between the candidates selected by the Faculty procedures (CSF group) and the group of candidates admitted by the Central Office for the Allocation of Places in Higher Education (the ZVS group, comprising the subgroups ZVS best, ZVS rest and ZVS total). The rates of participation in the FNME within the required minimum time of 2 years of medical studies were higher in the CSF group than in the ZVS total group. The FNME pass rates were lowest in the ZVS rest group and highest in the ZVS best group. The ZVS best group and the ZVS total group showed the best FNME results, whereas the results of the CSF group were equal to or worse than those of the ZVS rest group. No correlation was found between the interview results and the FNME results. According to studies of the prognostic value of various selection instruments, the school-leaving grade point average seems the best predictor of success on the FNME. In order to validate the non-cognitive selection instruments of the Faculty procedure, complementary instruments are needed to measure non-cognitive aspects that are not captured by the FNME results.

  1. Dresden Faculty selection procedure for medical students: what impact does it have, what is the outcome?

    Directory of Open Access Journals (Sweden)

    Koch, Thea

    2010-04-01

    Full Text Available Since 2004 German universities have been able to use a selection procedure to admit up to 60 percent of new students. In 2005, the Carl Gustav Carus Faculty of Medicine at Dresden introduced a new admission procedure. In order to take account of cognitive as well as non-cognitive competencies, the Faculty used a set of selection criteria based on the legal regulations for university admissions. In order to evaluate the effects of the Faculty admission procedures applied in the years 2005, 2006 and 2007, the results on the First National Medical Examination (FNME) were compared between the candidates selected by the Faculty procedures (CSF group) and the group of candidates admitted by the Central Office for the Allocation of Places in Higher Education (the ZVS group, comprising the subgroups: ZVS best, ZVS rest and ZVS total). The rates of participation in the FNME within the required minimum time of 2 years of medical studies were higher in the CSF group compared to the ZVS total group. The FNME pass rates were lowest in the ZVS rest group and highest in the ZVS best group. The ZVS best group and the ZVS total group showed the best FNME results, whereas the results of the CSF group were equal or worse compared to the ZVS rest group. No correlation was found between the interview results and the FNME results. According to studies of the prognostic value of various selection instruments, the school-leaving grade point average seems the best predictor of success on the FNME. In order to validate the non-cognitive selection instruments of the Faculty procedure, complementary instruments are needed to measure non-cognitive aspects that are not captured by the FNME results.

  2. Model selection bias and Freedman's paradox

    Science.gov (United States)

    Lukacs, P.M.; Burnham, K.P.; Anderson, D.R.

    2010-01-01

    In situations where limited knowledge of a system exists and the ratio of data points to variables is small, variable selection methods can often be misleading. Freedman (Am Stat 37:152-155, 1983) demonstrated how common it is to select completely unrelated variables as highly "significant" when the number of data points is similar in magnitude to the number of variables. A new type of model averaging estimator based on model selection with Akaike's AIC is used with linear regression to investigate the problems of likely inclusion of spurious effects and model selection bias, the bias introduced while using the data to select a single seemingly "best" model from an (often large) set of models employing many predictor variables. The new model averaging estimator helps reduce these problems and provides confidence interval coverage at the nominal level, while traditional stepwise selection has poor inferential properties. © The Institute of Statistical Mathematics, Tokyo 2009.
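
    The two-stage screening effect Freedman described is easy to reproduce. The sketch below is a purely illustrative simulation, not code from the paper: the 50-by-40 pure-noise design, the 0.25 screening level, and all names are assumptions.

```python
# A minimal simulation of Freedman's paradox: every predictor is pure
# noise, yet naive two-stage selection still finds "significant" ones.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n, p = 50, 40
X = rng.standard_normal((n, p))
y = rng.standard_normal(n)          # response is unrelated to every predictor

def ols_pvalues(X, y):
    """Two-sided t-test p-values for OLS slope coefficients."""
    Xd = np.column_stack([np.ones(len(y)), X])
    beta, _, _, _ = np.linalg.lstsq(Xd, y, rcond=None)
    resid = y - Xd @ beta
    df = len(y) - Xd.shape[1]
    sigma2 = resid @ resid / df
    se = np.sqrt(np.diag(sigma2 * np.linalg.inv(Xd.T @ Xd)))
    return 2 * stats.t.sf(np.abs(beta / se), df)[1:]   # drop the intercept

# Stage 1: screen at the 0.25 level, as in Freedman (1983).
keep = ols_pvalues(X, y) < 0.25
# Stage 2: refit on the survivors and count "significant" noise variables.
p2 = ols_pvalues(X[:, keep], y)
print(f"{keep.sum()} of {p} noise variables survive screening")
print(f"{(p2 < 0.05).sum()} look 'significant' at the 0.05 level on refit")
```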

  3. TIME SERIES FORECASTING WITH MULTIPLE CANDIDATE MODELS: SELECTING OR COMBINING?

    Institute of Scientific and Technical Information of China (English)

    YU Lean; WANG Shouyang; K. K. Lai; Y.Nakamori

    2005-01-01

    Various mathematical models have been commonly used in time series analysis and forecasting. In these processes, academic researchers and business practitioners often come up against two important problems. One is whether to select an appropriate modeling approach for prediction purposes or to combine these different individual approaches into a single forecast for different/dissimilar modeling approaches. Another is whether to select the best candidate model for forecasting or to mix the various candidate models with different parameters into a new forecast for the same/similar modeling approaches. In this study, we propose a set of computational procedures to solve the above two issues via two judgmental criteria. Meanwhile, in view of the problems presented in the literature, a novel modeling technique is also proposed to overcome the drawbacks of existing combined forecasting methods. To verify the efficiency and reliability of the proposed procedure and modeling technique, simulations and real-data examples are conducted in this study. The results obtained reveal that the proposed procedure and modeling technique can be used as a feasible solution for time series forecasting with multiple candidate models.

  4. A new selective enrichment procedure for isolating Pasteurella multocida from avian and environmental samples

    Science.gov (United States)

    Moore, M.K.; Cicnjak-Chubbs, L.; Gates, R.J.

    1994-01-01

    A selective enrichment procedure, using two new selective media, was developed to isolate Pasteurella multocida from wild birds and environmental samples. These media were developed by testing 15 selective agents with six isolates of P. multocida from wild avian origin and seven other bacteria representing genera frequently found in environmental and avian samples. The resulting media—Pasteurella multocida selective enrichment broth and Pasteurella multocida selective agar—consisted of a blood agar medium at pH 10 containing gentamicin, potassium tellurite, and amphotericin B. Media were tested to determine: 1) selectivity when attempting isolation from pond water and avian carcasses, 2) sensitivity for detection of low numbers of P. multocida from pure and mixed cultures, 3) host range specificity of the media, and 4) performance compared with standard blood agar. With the new selective enrichment procedure, P. multocida was isolated from inoculated (60 organisms/ml) pond water 84% of the time, whereas when standard blood agar was used, the recovery rate was 0%.

  5. Analysis of selection procedures to determine priority areas for payment for water ecosystem services programs

    Directory of Open Access Journals (Sweden)

    Ana Feital Gjorup

    2016-03-01

    Full Text Available The approach of ecosystem services has shown promise for the evaluation of interactions between ecosystems and society, integrating environmental and socioeconomic concepts which require interdisciplinary knowledge. However, its usefulness in decision making is limited due to information gaps. This study was therefore developed in order to contribute to the application of principles of ecosystem services in the decision-making for water resources management. It aims to identify procedures and methodologies used for decision-making in order to select priority areas to be included in projects or compensation programs for environmental services. To do so, we searched technical and scientific literature describing methods and experiences used to select priority areas. Key steps in the process of selecting priority areas were identified; then a survey was conducted of the procedures adopted for each key step considering the literature selected; and, finally, the information collected was analyzed and classified. Considering the study’s sample, we noted that the selection of priority areas was based on the direct use of predetermined criteria. The use of indicators and spatial analyses are practices still scarcely employed. We must highlight, however, that most of the analyzed documents did not aim to describe the process of selecting priority areas in detail, which may have resulted in some omissions. Although these conditions may limit the analysis in this study, the results presented here allow us to identify the main objectives, actions and criteria used to select priority areas for programs or compensation projects for environmental services.

  6. Generic Graph Grammar: A Simple Grammar for Generic Procedural Modelling

    DEFF Research Database (Denmark)

    Christiansen, Asger Nyman; Bærentzen, Jakob Andreas

    2012-01-01

    Methods for procedural modelling tend to be designed either for organic objects, which are described well by skeletal structures, or for man-made objects, which are described well by surface primitives. Procedural methods which allow for modelling of both kinds of objects are few and usually ... in a directed cyclic graph. Furthermore, the basic productions are chosen such that Generic Graph Grammar seamlessly combines the capabilities of L-systems to imitate biological growth (to model trees, animals, etc.) and those of split grammars to design structured objects (chairs, houses, etc.). This results ...

  7. A Stepwise Time Series Regression Procedure for Water Demand Model Identification

    Science.gov (United States)

    Miaou, Shaw-Pin

    1990-09-01

    Annual time series water demand has traditionally been studied through multiple linear regression analysis. Four associated model specification problems have long been recognized: (1) the length of the available time series data is relatively short, (2) a large set of candidate explanatory or "input" variables needs to be considered, (3) input variables can be highly correlated with each other (multicollinearity problem), and (4) model error series are often highly autocorrelated or even nonstationary. A stepwise time series regression identification procedure is proposed to alleviate these problems. The proposed procedure adopts the sequential input variable selection concept of stepwise regression and the "three-step" time series model building strategy of Box and Jenkins. Autocorrelated model error is assumed to follow an autoregressive integrated moving average (ARIMA) process. The stepwise selection procedure begins with a univariate time series demand model with no input variables. Subsequently, input variables are selected and inserted into the equation one at a time until the last entered variable is found to be statistically insignificant. The order of insertion is determined by a statistical measure called between-variable partial correlation. This correlation measure is free from the contamination of serial autocorrelation. Three data sets from previous studies are employed to illustrate the proposed procedure. The results are then compared with those from their original studies.
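
    The core of such a procedure, choosing the next input by its partial correlation with the residualized demand series, can be sketched as follows. This is a hedged simplification that omits the ARIMA error modelling of the full Box-Jenkins strategy; the threshold and data are synthetic assumptions.

```python
# Forward stepwise input selection by partial correlation, computed by
# residualizing both the response and each candidate on the inputs
# already selected.
import numpy as np

def forward_stepwise(X, y, threshold=0.3):
    """Greedily add the input whose partial correlation with y is largest."""
    n, p = X.shape
    selected = []
    while len(selected) < p:
        Z = np.column_stack([np.ones(n)] + [X[:, j] for j in selected])
        ry = y - Z @ np.linalg.lstsq(Z, y, rcond=None)[0]   # residualized y
        best_j, best_r = None, 0.0
        for j in range(p):
            if j in selected:
                continue
            rx = X[:, j] - Z @ np.linalg.lstsq(Z, X[:, j], rcond=None)[0]
            r = np.corrcoef(rx, ry)[0, 1]
            if abs(r) > abs(best_r):
                best_j, best_r = j, r
        if best_j is None or abs(best_r) < threshold:
            break   # the best remaining input no longer explains enough
        selected.append(best_j)
    return selected

rng = np.random.default_rng(1)
X = rng.standard_normal((30, 6))
y = 2.0 * X[:, 0] - 1.5 * X[:, 3] + rng.standard_normal(30)
print(forward_stepwise(X, y))   # typically recovers columns 0 and 3
```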

  8. Statistical model selection with “Big Data”

    Directory of Open Access Journals (Sweden)

    Jurgen A. Doornik

    2015-12-01

    Full Text Available Big Data offer potential benefits for statistical modelling, but confront problems including an excess of false positives, mistaking correlations for causes, ignoring sampling biases and selecting by inappropriate methods. We consider the many important requirements when searching for a data-based relationship using Big Data, and the possible role of Autometrics in that context. Paramount considerations include embedding relationships in general initial models, possibly restricting the number of variables to be selected over by non-statistical criteria (the formulation problem), using good quality data on all variables, analyzed with tight significance levels by a powerful selection procedure, retaining available theory insights (the selection problem), while testing for relationships being well specified and invariant to shifts in explanatory variables (the evaluation problem), and using a viable approach that resolves the computational problem of immense numbers of possible models.

  9. MODEL SELECTION FOR SPECTROPOLARIMETRIC INVERSIONS

    Energy Technology Data Exchange (ETDEWEB)

    Asensio Ramos, A.; Manso Sainz, R.; Martinez Gonzalez, M. J.; Socas-Navarro, H. [Instituto de Astrofisica de Canarias, E-38205, La Laguna, Tenerife (Spain); Viticchie, B. [ESA/ESTEC RSSD, Keplerlaan 1, 2200 AG Noordwijk (Netherlands); Orozco Suarez, D., E-mail: aasensio@iac.es [National Astronomical Observatory of Japan, Mitaka, Tokyo 181-8588 (Japan)

    2012-04-01

    Inferring magnetic and thermodynamic information from spectropolarimetric observations relies on the assumption of a parameterized model atmosphere whose parameters are tuned by comparison with observations. Often, the choice of the underlying atmospheric model is based on subjective reasons. In other cases, complex models are chosen based on objective reasons (for instance, the necessity to explain asymmetries in the Stokes profiles) but it is not clear what degree of complexity is needed. The lack of an objective way of comparing models has, sometimes, led to opposing views of the solar magnetism because the inferred physical scenarios are essentially different. We present the first quantitative model comparison based on the computation of the Bayesian evidence ratios for spectropolarimetric observations. Our results show that there is not a single model appropriate for all profiles simultaneously. Data with moderate signal-to-noise ratios (S/Ns) favor models without gradients along the line of sight. If the observations show clear circular and linear polarization signals above the noise level, models with gradients along the line are preferred. As a general rule, observations with large S/Ns favor more complex models. We demonstrate that the evidence ratios correlate well with simple proxies. Therefore, we propose to calculate these proxies when carrying out standard least-squares inversions to allow for model comparison in the future.
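
    Since full evidence integrals are costly, and the authors note that simple proxies track the evidence ratios well, the idea can be illustrated with the common BIC approximation ln Z ≈ -BIC/2 for comparing a simple and a complex fit of the same profile. The residuals, parameter counts, and all names below are hypothetical, not taken from the paper.

```python
# Evidence-ratio proxy via BIC for two nested fits of one Stokes profile.
import numpy as np

def bic(residuals, k):
    """BIC for a Gaussian fit with k free parameters."""
    n = len(residuals)
    sigma2 = np.mean(residuals**2)
    loglik = -0.5 * n * (np.log(2 * np.pi * sigma2) + 1)
    return k * np.log(n) - 2 * loglik

rng = np.random.default_rng(2)
resid_simple = rng.normal(0, 1.00, 200)    # constant atmosphere, k = 3
resid_complex = rng.normal(0, 0.98, 200)   # line-of-sight gradients, k = 9

# ln(Z_complex / Z_simple) ~ (BIC_simple - BIC_complex) / 2
delta = 0.5 * (bic(resid_simple, 3) - bic(resid_complex, 9))
print(f"approximate ln evidence ratio (complex vs simple): {delta:.2f}")
# A positive value would favour the complex model; the tiny gain in fit
# here typically fails to pay the Occam penalty of six extra parameters.
```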

  10. Improvement of procedures for evaluating photochemical models. Final report

    Energy Technology Data Exchange (ETDEWEB)

    Tesche, T.W.; Lurmann, F.R.; Roth, P.M.; Georgopoulos, P.; Seinfeld, J.H.

    1990-08-01

    The study establishes a set of procedures that should be used by all groups evaluating the performance of a photochemical model application. A set of ten numerical measures is recommended for evaluating a photochemical model's accuracy in predicting ozone concentrations. Nine graphical methods and six investigative simulations are also recommended to give additional insight into model performance. Standards are presented that each modeling study should try to meet. To complement the operational model evaluation procedures, several diagnostic procedures are suggested. The sensitivity of the model to uncertainties in hydrocarbon emission rates and speciation, and other parameters, should be assessed. Uncertainty bounds of key input variables and parameters can be propagated through the model to provide estimated uncertainties in the ozone predictions. Comparisons between measurements and predictions of species other than ozone will help ensure that the model is predicting the right ozone for the right reasons. Plotting concentration residuals (differences) against a variety of variables may give insight into the reasons for poor model performance. Mass flux and balance calculations can identify the relative importance of emissions and transport. The study also identifies testing a model's response to emission changes as the most important research need. Another important area is testing the emissions inventory.
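
    Two operational measures commonly used for ozone evaluations, mean normalized bias and mean normalized gross error, are straightforward to compute. The sketch below is a generic illustration; the 60 ppb cutoff and the paired data are assumptions, and the report's full set of ten measures is not reproduced here.

```python
# Paired observed/predicted hourly ozone, evaluated above a cutoff.
import numpy as np

def mean_normalized_bias(obs, pred, cutoff=60.0):
    """Mean normalized bias (%) over observations above a cutoff (ppb)."""
    mask = obs >= cutoff
    return 100.0 * np.mean((pred[mask] - obs[mask]) / obs[mask])

def mean_normalized_gross_error(obs, pred, cutoff=60.0):
    """Mean normalized gross error (%) over observations above a cutoff."""
    mask = obs >= cutoff
    return 100.0 * np.mean(np.abs(pred[mask] - obs[mask]) / obs[mask])

obs = np.array([72.0, 95.0, 110.0, 64.0, 88.0])    # ppb, hypothetical
pred = np.array([65.0, 101.0, 98.0, 70.0, 90.0])
print(f"bias:  {mean_normalized_bias(obs, pred):+.1f}%")
print(f"error: {mean_normalized_gross_error(obs, pred):.1f}%")
```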

  11. Landing Procedure in Model Ditching Tests of Bf 109

    Science.gov (United States)

    Sottorf, W.

    1949-01-01

    The purpose of the model tests is to clarify the motions of a landplane alighting on water. After a discussion of the model laws, the test method and test procedure are described. The deceleration-time diagrams of the landing of a model of the Bf 109 show a high deceleration peak of more than 20g, which can be lowered to 4 to 6g by radiator cowling and brake skid.

  12. Genetic search feature selection for affective modeling

    DEFF Research Database (Denmark)

    Martínez, Héctor P.; Yannakakis, Georgios N.

    2010-01-01

    Automatic feature selection is a critical step towards the generation of successful computational models of affect. This paper presents a genetic search-based feature selection method which is developed as a global-search algorithm for improving the accuracy of the affective models built...

  13. Procedural Skills Education – Colonoscopy as a Model

    Directory of Open Access Journals (Sweden)

    Maitreyi Raman

    2008-01-01

    Full Text Available Traditionally, surgical and procedural apprenticeship has been an assumed activity of students, without a formal educational context. With increasing barriers to patient and operating room access, such as shorter work week hours for residents, and operating room and endoscopy time at a premium, alternate strategies for maximizing procedural skill development are being considered. Recently, the traditional surgical apprenticeship model has been challenged, with greater emphasis on the need for surgical and procedural skills training to be more transparent and for alternatives to patient-based training to be considered. Colonoscopy performance is a complex psychomotor skill requiring practitioners to integrate multiple sensory inputs and involving higher cortical centres for optimal performance. Colonoscopy skills involve mastery in the cognitive, technical and process domains. In the present review, we propose a model for teaching colonoscopy to the novice trainee based on educational theory.

  14. Predictive market segmentation model: An application of logistic regression model and CHAID procedure

    Directory of Open Access Journals (Sweden)

    Soldić-Aleksić Jasna

    2009-01-01

    Full Text Available Market segmentation presents one of the key concepts of modern marketing. The main goal of market segmentation is to create groups (segments) of customers that have similar characteristics, needs, wishes and/or similar behavior regarding the purchase of a concrete product/service. Companies can create a specific marketing plan for each of these segments and thereby gain a short- or long-term competitive advantage in the market. Depending on the concrete marketing goal, different segmentation schemes and techniques may be applied. This paper presents a predictive market segmentation model based on the application of a logistic regression model and CHAID analysis. The logistic regression model was used for variable selection (from the initial pool of eleven variables), identifying those that are statistically significant for explaining the dependent variable. Selected variables were afterwards included in the CHAID procedure that generated the predictive market segmentation model. The model results are presented on a concrete empirical example in the following form: summary model results, CHAID tree, Gain chart, Index chart, and risk and classification tables.
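
    A minimal sketch of this two-stage design follows. Since scikit-learn provides no CHAID implementation, an ordinary CART tree stands in for the CHAID step, and the data and parameters are synthetic assumptions rather than the paper's.

```python
# Stage 1: logistic regression screens candidate predictors.
# Stage 2: a shallow decision tree builds the segments.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(3)
X = rng.standard_normal((500, 11))             # eleven candidate variables
logits = 1.2 * X[:, 0] - 0.8 * X[:, 4]         # only two really matter
y = rng.random(500) < 1 / (1 + np.exp(-logits))

# Keep variables with non-negligible L1-penalized coefficients.
screen = LogisticRegression(penalty="l1", solver="liblinear", C=0.1).fit(X, y)
keep = np.flatnonzero(np.abs(screen.coef_[0]) > 1e-6)
print("selected variables:", keep)

# Grow a shallow tree on the survivors to define the segments.
tree = DecisionTreeClassifier(max_depth=2).fit(X[:, keep], y)
print(export_text(tree))
```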

  15. Exploratory Bayesian model selection for serial genetics data.

    Science.gov (United States)

    Zhao, Jing X; Foulkes, Andrea S; George, Edward I

    2005-06-01

    Characterizing the process by which molecular and cellular level changes occur over time will have broad implications for clinical decision making and help further our knowledge of disease etiology across many complex diseases. However, this presents an analytic challenge due to the large number of potentially relevant biomarkers and the complex, uncharacterized relationships among them. We propose an exploratory Bayesian model selection procedure that searches for model simplicity through independence testing of multiple discrete biomarkers measured over time. Bayes factor calculations are used to identify and compare models that are best supported by the data. For large model spaces, i.e., a large number of multi-leveled biomarkers, we propose a Markov chain Monte Carlo (MCMC) stochastic search algorithm for finding promising models. We apply our procedure to explore the extent to which HIV-1 genetic changes occur independently over time.
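
    The flavour of such pairwise independence testing can be illustrated with the common BIC approximation to the Bayes factor, 2 ln BF ≈ G2 - df·ln n. This is only a stand-in for the paper's exact Bayes factor calculations, and the contingency table below is hypothetical.

```python
# BIC-approximated Bayes factor for independence of two discrete
# biomarkers, using the G-test statistic from a contingency table.
import numpy as np
from scipy.stats import chi2_contingency

table = np.array([[30, 10, 5],     # hypothetical 2 x 3 table of two
                  [12, 25, 18]])   # biomarkers at one time point
n = table.sum()

g2, _, df, _ = chi2_contingency(table, lambda_="log-likelihood")
two_ln_bf = g2 - df * np.log(n)    # > 0 favours dependence, < 0 independence
print(f"2 ln BF (dependence vs independence): {two_ln_bf:.2f}")
```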

  16. Model selection for amplitude analysis

    CERN Document Server

    Guegan, Baptiste; Stevens, Justin; Williams, Mike

    2015-01-01

    Model complexity in amplitude analyses is often a priori under-constrained since the underlying theory permits a large number of amplitudes to contribute to most physical processes. The use of an overly complex model results in reduced predictive power and worse resolution on unknown parameters of interest. Therefore, it is common to reduce the complexity by removing from consideration some subset of the allowed amplitudes. This paper studies a data-driven method for limiting model complexity through regularization during regression in the context of a multivariate (Dalitz-plot) analysis. The regularization technique applied greatly improves the performance. A method is also proposed for obtaining the significance of a resonance in a multivariate amplitude analysis.

  17. Vector operations for modelling data-conversion procedures

    Energy Technology Data Exchange (ETDEWEB)

    Rivkin, M.N.

    1992-03-01

    This article presents a set of vector operations that permit effective modelling of operations from extended relational algebra for implementations of variable-construction procedures in data-conversion processors. Vector operations are classified, and test results are given for the ARIUS UP and other popular database management systems for PC`s. 10 refs., 5 figs.

  18. Selection of Variables in Cluster Analysis: An Empirical Comparison of Eight Procedures

    Science.gov (United States)

    Steinley, Douglas; Brusco, Michael J.

    2008-01-01

    Eight different variable selection techniques for model-based and non-model-based clustering are evaluated across a wide range of cluster structures. It is shown that several methods have difficulties when non-informative variables (i.e., random noise) are included in the model. Furthermore, the distribution of the random noise greatly impacts the…

  19. Markov chain decision model for urinary incontinence procedures.

    Science.gov (United States)

    Kumar, Sameer; Ghildayal, Nidhi; Ghildayal, Neha

    2017-03-13

    Purpose Urinary incontinence (UI) is a common chronic health condition, a problem specifically among elderly women that impacts quality of life negatively. However, UI is usually viewed as a likely result of old age, and as such is generally not evaluated or even managed appropriately. Many treatments are available to manage incontinence, such as bladder training and numerous surgical procedures such as Burch colposuspension and Sling for UI, which have high success rates. The purpose of this paper is to analyze which of these popular surgical procedures for UI is effective. Design/methodology/approach This research employs randomized, prospective studies to obtain robust cost and utility data used in the Markov chain decision model for examining which of these surgical interventions is more effective in treating women with stress UI based on two measures: number of quality adjusted life years (QALY) and cost per QALY. TreeAge Pro Healthcare software was employed in the Markov decision analysis. Findings Results showed the Sling procedure is a more effective surgical intervention than the Burch. However, if a utility greater than a certain value, at which both procedures are equally effective, is assigned to persistent incontinence, the Burch procedure is more effective than the Sling procedure. Originality/value This paper demonstrates the efficacy of a Markov chain decision modeling approach for comparative effectiveness analysis of available treatments for patients with UI, an important public health issue, widely prevalent among elderly women in developed and developing countries. This research also improves upon other analyses using a Markov chain decision modeling process to analyze various strategies for treating UI.
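
    The mechanics of such a Markov cohort comparison are compact. The sketch below uses entirely hypothetical transition probabilities, costs, and utilities (not the paper's calibrated values) to show how QALYs and cost per QALY emerge from the matrix iteration.

```python
# Toy Markov cohort model comparing two surgical strategies.
import numpy as np

utility = np.array([0.95, 0.70, 0.0])   # continent, incontinent, dead (assumed)
CYCLES = 20                             # one-year cycles

def run(p_success, proc_cost, annual_care_cost):
    T = np.array([[0.97, 0.02, 0.01],   # assumed annual transition matrix
                  [0.00, 0.99, 0.01],
                  [0.00, 0.00, 1.00]])
    dist = np.array([p_success, 1 - p_success, 0.0])   # post-surgery cohort
    qalys, cost = 0.0, proc_cost
    for _ in range(CYCLES):
        qalys += dist @ utility
        cost += dist[1] * annual_care_cost   # ongoing cost if incontinent
        dist = dist @ T
    return qalys, cost

for name, args in {"Sling": (0.85, 12000, 900), "Burch": (0.80, 10000, 900)}.items():
    q, c = run(*args)
    print(f"{name}: {q:.2f} QALYs, ${c / q:,.0f} per QALY")
```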

  20. The Ouroboros Model, selected facets.

    Science.gov (United States)

    Thomsen, Knud

    2011-01-01

    The Ouroboros Model features a biologically inspired cognitive architecture. At its core lies a self-referential recursive process with alternating phases of data acquisition and evaluation. Memory entries are organized in schemata. The activation of part of a schema at a given time biases the whole structure and, in particular, its missing features, thus triggering expectations. An iterative recursive monitor process termed 'consumption analysis' then checks how well such expectations fit with successive activations. Mismatches between anticipations based on previous experience and actual current data are highlighted and used for controlling the allocation of attention. A measure for the goodness of fit provides feedback as a (self-)monitoring signal. The basic algorithm works for goal-directed movements and memory search as well as during abstract reasoning. It is sketched how the Ouroboros Model can shed light on characteristics of human behavior including attention, emotions, priming, masking, learning, sleep and consciousness.

  1. Random Effect and Latent Variable Model Selection

    CERN Document Server

    Dunson, David B

    2008-01-01

    Presents various methods for accommodating model uncertainty in random effects and latent variable models. This book covers frequentist likelihood ratio and score tests for zero variance components, as well as Bayesian methods for random effects selection in linear mixed effects and generalized linear mixed models.

  2. Review and selection of unsaturated flow models

    Energy Technology Data Exchange (ETDEWEB)

    Reeves, M.; Baker, N.A.; Duguid, J.O. [INTERA, Inc., Las Vegas, NV (United States)

    1994-04-04

    Since the 1960s, ground-water flow models have been used for analysis of water resources problems. In the 1970s, emphasis began to shift to analysis of waste management problems. This shift in emphasis was largely brought about by site selection activities for geologic repositories for disposal of high-level radioactive wastes. Model development during the 1970s and well into the 1980s focused primarily on saturated ground-water flow because geologic repositories in salt, basalt, granite, shale, and tuff were envisioned to be below the water table. Selection of the unsaturated zone at Yucca Mountain, Nevada, for potential disposal of waste began to shift model development toward unsaturated flow models. Under the US Department of Energy (DOE), the Civilian Radioactive Waste Management System Management and Operating Contractor (CRWMS M&O) has the responsibility to review, evaluate, and document existing computer models; to conduct performance assessments; and to develop performance assessment models, where necessary. This document describes the CRWMS M&O approach to model review and evaluation (Chapter 2), and the requirements for unsaturated flow models which are the bases for selection from among the current models (Chapter 3). Chapter 4 identifies existing models, and their characteristics. Through a detailed examination of characteristics, Chapter 5 presents the selection of models for testing. Chapter 6 discusses the testing and verification of selected models. Chapters 7 and 8 give conclusions and make recommendations, respectively. Chapter 9 records the major references for each of the models reviewed. Appendix A, a collection of technical reviews for each model, contains a more complete list of references. Finally, Appendix B characterizes the problems used for model testing.

  3. Resampling procedures to validate dendro-auxometric regression models

    Directory of Open Access Journals (Sweden)

    2009-03-01

    Full Text Available Regression analysis is widely used in several sectors of forest research. The validation of a dendro-auxometric model is a basic step in the building of the model itself: the more a model resists attempts to demonstrate its groundlessness, the more its reliability increases. In recent decades many new methods that exploit the computational speed of modern computers have been formulated. Here we show the results obtained by applying a bootstrap resampling procedure as a validation tool.
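
    A minimal version of bootstrap validation for a regression model resamples observation pairs, refits, and inspects the spread of the coefficients. The simple diameter-height relation and all numbers below are synthetic assumptions, not the study's data.

```python
# Pairs bootstrap of a linear dendro-auxometric regression.
import numpy as np

rng = np.random.default_rng(4)
diameter = rng.uniform(10, 60, 80)                      # cm, hypothetical
height = 1.3 + 0.45 * diameter + rng.normal(0, 2, 80)   # m

def fit(x, y):
    return np.polyfit(x, y, 1)   # slope, intercept

B = 2000
boot = np.empty((B, 2))
n = len(diameter)
for b in range(B):
    idx = rng.integers(0, n, n)          # resample pairs with replacement
    boot[b] = fit(diameter[idx], height[idx])

slope_ci = np.percentile(boot[:, 0], [2.5, 97.5])
print(f"slope 95% bootstrap CI: [{slope_ci[0]:.3f}, {slope_ci[1]:.3f}]")
# A stable, narrow interval across resamples is evidence the fitted
# relation is not an artifact of particular observations.
```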

  4. Model of Procedure Usage – Results from a Qualitative Study to Inform Design of Computer-Based Procedures

    Energy Technology Data Exchange (ETDEWEB)

    Johanna H Oxstrand; Katya L Le Blanc

    2012-07-01

    The nuclear industry is constantly trying to find ways to decrease the human error rate, especially the human errors associated with procedure use. As a step toward the goal of improving procedure use performance, researchers, together with the nuclear industry, have been looking at replacing the current paper-based procedures with computer-based procedure systems. The concept of computer-based procedures is not new by any means; however most research has focused on procedures used in the main control room. Procedures reviewed in these efforts are mainly emergency operating procedures and normal operating procedures. Based on lessons learned for these previous efforts we are now exploring a more unknown application for computer based procedures - field procedures, i.e. procedures used by nuclear equipment operators and maintenance technicians. The Idaho National Laboratory, the Institute for Energy Technology, and participants from the U.S. commercial nuclear industry are collaborating in an applied research effort with the objective of developing requirements and specifications for a computer-based procedure system to be used by field operators. The goal is to identify the types of human errors that can be mitigated by using computer-based procedures and how to best design the computer-based procedures to do this. The underlying philosophy in the research effort is “Stop – Start – Continue”, i.e. what features from the use of paper-based procedures should we not incorporate (Stop), what should we keep (Continue), and what new features or work processes should be added (Start). One step in identifying the Stop – Start – Continue was to conduct a baseline study where affordances related to the current usage of paper-based procedures were identified. The purpose of the study was to develop a model of paper based procedure use which will help to identify desirable features for computer based procedure prototypes. Affordances such as note taking, markups

  5. Genetic search feature selection for affective modeling

    DEFF Research Database (Denmark)

    Martínez, Héctor P.; Yannakakis, Georgios N.

    2010-01-01

    Automatic feature selection is a critical step towards the generation of successful computational models of affect. This paper presents a genetic search-based feature selection method which is developed as a global-search algorithm for improving the accuracy of the affective models built. The method is tested and compared against sequential forward feature selection and random search in a dataset derived from a game survey experiment which contains bimodal input features (physiological and gameplay) and expressed pairwise preferences of affect. Results suggest that the proposed method ...
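
    A compact genetic search over binary feature masks might look like the following. The fitness function, operators, and data are illustrative assumptions rather than the authors' exact design, which learns from pairwise preferences.

```python
# Genetic-algorithm feature selection with classifier accuracy as fitness.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(5)
X = rng.standard_normal((200, 12))
y = (X[:, 1] - X[:, 7] + 0.5 * rng.standard_normal(200)) > 0

def fitness(mask):
    if not mask.any():
        return 0.0
    clf = LogisticRegression(max_iter=200)
    return cross_val_score(clf, X[:, mask], y, cv=3).mean()

POP, GENS, P_MUT = 20, 15, 0.1
pop = rng.random((POP, X.shape[1])) < 0.5            # random initial masks
for _ in range(GENS):
    scores = np.array([fitness(m) for m in pop])
    parents = pop[np.argsort(scores)[-POP // 2:]]    # truncation selection
    children = []
    for _ in range(POP - len(parents)):
        a, b = parents[rng.integers(len(parents), size=2)]
        cut = rng.integers(1, X.shape[1])            # one-point crossover
        child = np.concatenate([a[:cut], b[cut:]])
        child ^= rng.random(X.shape[1]) < P_MUT      # bit-flip mutation
        children.append(child)
    pop = np.vstack([parents, children])

best = pop[np.argmax([fitness(m) for m in pop])]
print("selected feature indices:", np.flatnonzero(best))
```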

  6. Empirical Likelihood Based Variable Selection for Varying Coefficient Partially Linear Models with Censored Data

    Institute of Scientific and Technical Information of China (English)

    Peixin ZHAO

    2013-01-01

    In this paper, we consider the variable selection for the parametric components of varying coefficient partially linear models with censored data. By constructing a penalized auxiliary vector ingeniously, we propose an empirical likelihood based variable selection procedure, and show that it is consistent and satisfies the sparsity. The simulation studies show that the proposed variable selection method is workable.

  7. Generalizability of a composite student selection procedure at a university-based chiropractic program

    DEFF Research Database (Denmark)

    O'Neill, Lotte D; Korsholm, Lars; Wallstedt, Birgitta;

    2009-01-01

    generalizability of composites of non-cognitive admission variables in admission to health science programs. The aim of this study was to estimate the generalizability of a composite selection to a chiropractic program, consisting of: application form information, a written motivational essay, a common knowledge...... generalizability was found for the common knowledge test (G=1.00) and the admission interview (G=0.88). Good generalizability was found for application form information (G=0.75) and moderate generalizability (G=0.50) for the written motivation essay. The generalizability of the final composite admission procedure...... for future research are discussed....

  8. Performance on Selected Candidate Screening Test Procedures Before and After Army Basic and Advanced Individual Training

    Science.gov (United States)

    1985-06-01

    The physical demands classification system assigned each MOS cluster both a lifting-capacity and an aerobic-capacity requirement: ALPHA, >40 kg lifting and >2.25 l/min aerobic capacity; BRAVO, >40 kg and 1.5-2.25 l/min; CHARLIE, >40 kg and <1.50 l/min ... between tests. Isometric Handgrip Strength (HG): the handgrip apparatus and procedure were those of Knapik and Ramos (17). This test was selected because it ...

  9. Linear regression model selection using p-values when the model dimension grows

    CERN Document Server

    Pokarowski, Piotr; Teisseyre, Paweł

    2012-01-01

    We consider a new criterion-based approach to model selection in linear regression. Properties of selection criteria based on p-values of a likelihood ratio statistic are studied for families of linear regression models. We prove that such procedures are consistent, i.e., the minimal true model is chosen with probability tending to 1 even when the number of models under consideration increases slowly with the sample size. A simulation study indicates that the introduced methods perform promisingly compared with the Akaike and Bayesian Information Criteria.

  10. Technical Description of the Use of Selective Perfusion Techniques During the Norwood Procedure for Hypoplastic Left Heart Syndrome

    Science.gov (United States)

    Chabot, David Leonard; Polimenakos, Anastasios C.

    2011-01-01

    Abstract: Since the introduction of the Norwood procedure for surgical palliation of hypoplastic left heart syndrome in 1983, refinements have been made to the original procedure to improve patient outcomes while still accomplishing the original goals of the procedure. One of these refinements has been the introduction of regional selective perfusion to limit the duration of circulatory arrest times and optimize the regional flow distribution. In this paper we describe our technique for performing selective cerebral and lower body perfusion during the Norwood procedure. PMID:22416608

  11. Modeling a radiotherapy clinical procedure: total body irradiation.

    Science.gov (United States)

    Esteban, Ernesto P; García, Camille; De La Rosa, Verónica

    2010-09-01

    Leukemia, non-Hodgkin's lymphoma, and neuroblastoma patients prior to bone marrow transplants may be subject to a clinical radiotherapy procedure called total body irradiation (TBI). To mimic a TBI procedure, we modified the Jones model of bone marrow radiation cell kinetics by adding mutant and cancerous cell compartments. The modified Jones model is mathematically described by a set of n + 4 differential equations, where n is the number of mutations before a normal cell becomes a cancerous cell. Assuming a standard TBI radiotherapy treatment with a total dose of 1320 cGy fractionated over four days, two cases were considered. In the first, repopulation and sub-lethal repair in the different cell populations were not taken into account (model I). In this case, the proposed modified Jones model could be solved in a closed form. In the second, repopulation and sub-lethal repair were considered, and thus, we found that the modified Jones model could only be solved numerically (model II). After a numerical and graphical analysis, we concluded that the expected results of TBI treatment can be mimicked using model I. Model II can also be used, provided the cancer repopulation factor is less than the normal cell repopulation factor. However, model I has fewer free parameters compared to model II. In either case, our results are in agreement that the standard dose fractionated over four days, with two irradiations each day, provides the needed conditioning treatment prior to bone marrow transplant. Partial support for this research was supplied by the NIH-RISE program, the LSAMP-Puerto Rico program, and the University of Puerto Rico-Humacao.

  12. Sensitivity analysis of standardization procedures in drought indices to varied input data selections

    Science.gov (United States)

    Liu, Yi; Ren, Liliang; Hong, Yang; Zhu, Ye; Yang, Xiaoli; Yuan, Fei; Jiang, Shanhu

    2016-07-01

    Reasonable input data selection is of great significance for accurate computation of drought indices. In this study, a comprehensive comparison is conducted on the sensitivity of two commonly used standardization procedures (SP) in drought indices to datasets, namely the probability distribution based SP and the self-calibrating Palmer SP. The standardized Palmer drought index (SPDI) and the self-calibrating Palmer drought severity index (SC-PDSI) are selected as representatives of the two SPs, respectively. Using meteorological observations (1961-2012) in the Yellow River basin, 23 sub-datasets with a length of 30 years are firstly generated with the moving window method. Then we use the whole time series and the 23 sub-datasets to compute the two indices separately, and compare their spatiotemporal differences, as well as performances in capturing drought areas. Finally, a systematic investigation in terms of changing climatic conditions and varied parameters in each SP is conducted. Results show that SPDI is less sensitive to data selection than SC-PDSI. SPDI series derived from different datasets are highly correlated, and consistent in drought area characterization. Sensitivity analysis shows that among the three parameters in the generalized extreme value (GEV) distribution, SPDI is most sensitive to changes in the scale parameter, followed by the location and shape parameters. For SC-PDSI, its inconsistent behaviors among different datasets are primarily induced by the self-calibrated duration factors (p and q). In addition, it is found that the introduction of the self-calibrating procedure for duration factors further aggravates the dependence of the drought index on input datasets compared with the original empirical algorithm that Palmer used, making SC-PDSI more sensitive to variations in the data sample. This study clearly demonstrates the impacts of dataset selection on the sensitivity of drought index computation, which has significant implications for proper usage of drought ...
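
    The probability-distribution-based standardization step itself is simple to sketch: fit a GEV to the series, map values through the fitted CDF, and then through the standard normal quantile function. Everything below, including the synthetic monthly series, is an assumption for illustration rather than the SPDI implementation.

```python
# GEV-based standardization of a moisture series into a drought index.
import numpy as np
from scipy import stats

moisture = stats.genextreme.rvs(c=0.1, loc=0, scale=1, size=360,
                                random_state=42)   # synthetic monthly series

params = stats.genextreme.fit(moisture)            # shape, loc, scale
cdf = stats.genextreme.cdf(moisture, *params)
cdf = np.clip(cdf, 1e-6, 1 - 1e-6)                 # guard the extreme tails
index = stats.norm.ppf(cdf)                        # standardized index

print(f"mean {index.mean():.3f}, std {index.std():.3f}")   # roughly 0 and 1
# Perturbing the fitted scale parameter shifts the index most strongly,
# which is one way to probe the sensitivity discussed in the paper.
```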

  13. Mathematical Model for the Selection of Processing Parameters in Selective Laser Sintering of Polymer Products

    Directory of Open Access Journals (Sweden)

    Ana Pilipović

    2014-03-01

    Full Text Available Additive manufacturing (AM) is increasingly applied in development projects from the initial idea to the finished product. The reasons are multiple, but what should be emphasised is the possibility of relatively rapid manufacturing of products of complicated geometry based on the computer 3D model of the product. There are numerous limitations, primarily in the number of available materials and their properties, which may be quite different from the properties of the material of the finished product. Therefore, it is necessary to know the properties of the product materials. In AM procedures the mechanical properties of materials are affected by the manufacturing procedure and the production parameters. During SLS procedures it is possible to adjust various manufacturing parameters which are used to improve various mechanical and other properties of the products. The paper develops a new mathematical model to determine the influence of individual manufacturing parameters on a polymer product made by selective laser sintering. The old mathematical model is checked by a statistical method with a central composite plan, and it is established that it must be expanded with a new parameter, the beam overlay ratio. Verification of the new mathematical model and optimization of the processing parameters are carried out on an SLS machine.

  14. Recreation of architectural structures using procedural modeling based on volumes

    Directory of Open Access Journals (Sweden)

    Santiago Barroso Juan

    2013-11-01

    Full Text Available While the procedural modeling of buildings and other architectural structures has evolved very significantly in recent years, there is a noticeable absence of high-level tools that allow a designer, an artist or a historian to recreate important buildings or architectural structures of a particular city. In this paper we present a tool for creating buildings in a simple and clear way, following rules that use the language and methodology of the buildings themselves, while hiding the algorithmic details of the model's creation from the user.

  15. Melody Track Selection Using Discriminative Language Model

    Science.gov (United States)

    Wu, Xiao; Li, Ming; Suo, Hongbin; Yan, Yonghong

    In this letter we focus on the task of selecting the melody track from a polyphonic MIDI file. Based on the intuition that music and language are similar in many aspects, we solve the selection problem by introducing an n-gram language model to learn the melody co-occurrence patterns in a statistical manner and determine the melodic degree of a given MIDI track. Furthermore, we propose the idea of using background model and posterior probability criteria to make modeling more discriminative. In the evaluation, the achieved 81.6% correct rate indicates the feasibility of our approach.
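
    A toy version of the melodic-degree idea can be built from a bigram model of pitch transitions, scoring each track by its normalized log-probability. The training melodies and candidate tracks below are made up, and the paper's discriminative background model and posterior criterion are omitted.

```python
# Bigram language model over MIDI pitches for melody track selection.
import math
from collections import defaultdict

def train_bigram(melodies, smoothing=1.0, vocab=128):
    counts = defaultdict(lambda: defaultdict(float))
    for mel in melodies:
        for a, b in zip(mel, mel[1:]):
            counts[a][b] += 1
    def logprob(a, b):
        total = sum(counts[a].values()) + smoothing * vocab
        return math.log((counts[a][b] + smoothing) / total)
    return logprob

def melodic_score(track, logprob):
    if len(track) < 2:
        return float("-inf")
    lp = sum(logprob(a, b) for a, b in zip(track, track[1:]))
    return lp / (len(track) - 1)        # normalize by transition count

melodies = [[60, 62, 64, 65, 67, 65, 64, 62, 60],
            [67, 69, 71, 72, 71, 69, 67]]
logprob = train_bigram(melodies)

tracks = {"track0": [60, 62, 64, 62, 60, 62, 64, 65],   # stepwise, melodic
          "track1": [36, 36, 43, 36, 36, 43, 36, 36]}   # bass ostinato
best = max(tracks, key=lambda t: melodic_score(tracks[t], logprob))
print("melody track:", best)
```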

  16. Procedural Modeling for Rapid-Prototyping of Multiple Building Phases

    Science.gov (United States)

    Saldana, M.; Johanson, C.

    2013-02-01

    RomeLab is a multidisciplinary working group at UCLA that uses the city of Rome as a laboratory for the exploration of research approaches and dissemination practices centered on the intersection of space and time in antiquity. In this paper we present a multiplatform workflow for the rapid-prototyping of historical cityscapes through the use of geographic information systems, procedural modeling, and interactive game development. Our workflow begins by aggregating archaeological data in a GIS database. Next, 3D building models are generated from the ArcMap shapefiles in Esri CityEngine using procedural modeling techniques. A GIS-based terrain model is also adjusted in CityEngine to fit the building elevations. Finally, the terrain and city models are combined in Unity, a game engine which we used to produce web-based interactive environments which are linked to the GIS data using keyhole markup language (KML). The goal of our workflow is to demonstrate that knowledge generated within a first-person virtual world experience can inform the evaluation of data derived from textual and archaeological sources, and vice versa.

  17. Left atrial appendage closure: patient, device and post-procedure drug selection.

    Science.gov (United States)

    Tzikas, Apostolos; Bergmann, Martin W

    2016-05-17

    Left atrial appendage closure (LAAC), a device-based therapy for stroke prevention in patients with atrial fibrillation, is considered an alternative to oral anticoagulation therapy, particularly for patients at high risk of bleeding. Proof of concept has been demonstrated by the PROTECT AF and PREVAIL trials which evaluated the WATCHMAN device (Boston Scientific, Marlborough, MA, USA) versus warfarin, showing favourable outcome for the device group. The most commonly used devices for LAAC are the WATCHMAN and its successor, the WATCHMAN FLX (Boston Scientific) and the AMPLATZER Cardiac Plug and more recently the AMPLATZER Amulet device (both St. Jude Medical, St. Paul, MN, USA). The procedure is typically performed via a transseptal puncture under fluoroscopic and echocardiographic guidance. Technically, it is considered quite demanding due to the anatomic variability and fragility of the appendage. Careful material manipulation, adequate operator training, and good cardiac imaging and device sizing allow a safe, uneventful procedure. Post-procedure antithrombotic drug selection is based on the patient's history, indication and quality of LAAC.

  18. Biotechnological procedures to select white rot fungi for the degradation of PAHs.

    Science.gov (United States)

    Lee, Hwanhwi; Jang, Yeongseon; Choi, Yong-Seok; Kim, Min-Ji; Lee, Jaejung; Lee, Hanbyul; Hong, Joo-Hyun; Lee, Young Min; Kim, Gyu-Hyeok; Kim, Jae-Jin

    2014-02-01

    White rot fungi are essential in forest ecology and are deeply involved in wood decomposition and the biodegradation of various xenobiotics. The fungal ligninolytic enzymes involved in these processes have recently become the focus of much attention for their possible biotechnological applications. Successful bioremediation requires the selection of species with desirable characteristics. In this study, 150 taxonomically and physiologically diverse white rot fungi, including 55 species, were investigated for their performance in a variety of biotechnological procedures, such as dye decolorization, gallic acid reaction, ligninolytic enzyme production, and tolerance to four PAHs: phenanthrene, anthracene, fluoranthene, and pyrene. Among these fungi, six isolates showed the highest (>90%) tolerance to both individual and mixed PAHs, and six isolates oxidized gallic acid to a dark brown color and rapidly decolorized RBBR within ten days. These fungi revealed various profiles when their biotechnological performance was evaluated to compare the PAH-degrading capability of the two selected groups. The results demonstrated that the six species selected by the gallic acid reaction degraded the four PAHs more extensively than the isolates selected via the tolerance test, indicating that the gallic acid reaction test can be used to rank fungi by their ability to degrade PAHs. Above all, Peniophora incarnata KUC8836 and Phlebia brevispora KUC9033 significantly degraded the four PAHs and can be considered prime candidates for the degradation of xenobiotic compounds in environmental settings.

  19. Preoperative testing and risk assessment: perspectives on patient selection in ambulatory anesthetic procedures

    Directory of Open Access Journals (Sweden)

    Stierer TL

    2015-08-01

    Full Text Available Tracey L Stierer,1,2 Nancy A Collop3,4; 1Department of Anesthesiology, 2Department of Critical Care Medicine, Otolaryngology Head and Neck Surgery, Johns Hopkins Medicine, Baltimore, MD, USA; 3Department of Medicine, 4Department of Neurology, Emory University, Emory Sleep Center, Wesley Woods Center, Atlanta, GA, USA. Abstract: With recent advances in surgical and anesthetic technique, there has been a growing emphasis on the delivery of care to patients undergoing ambulatory procedures of increasing complexity. Appropriate patient selection and meticulous preparation are vital to the provision of a safe, quality perioperative experience. It is not unusual for patients with complex medical histories and substantial systemic disease to be scheduled for discharge on the same day as their surgical procedure. The trend to “push the envelope” by triaging progressively sicker patients to ambulatory surgical facilities has resulted in a number of challenges for the anesthesia provider who will assume their care. It is well known that certain patient diseases are associated with increased perioperative risk. It is therefore important to define clinical factors that warrant more extensive testing of the patient and medical conditions that present a prohibitive risk for an adverse outcome. The preoperative assessment is an opportunity for the anesthesia provider to determine the status and stability of the patient’s health, provide preoperative education and instructions, and offer support and reassurance to the patient and the patient’s family members. Communication between the surgeon/proceduralist and the anesthesia provider is critical in achieving optimal outcome. A multifaceted approach is required when considering whether a specific patient will be best served having their procedure on an outpatient basis. Not only should the patient's comorbidities be stable and optimized, but details regarding the planned procedure and the resources available ...

  20. Expert System Model for Educational Personnel Selection

    Directory of Open Access Journals (Sweden)

    Héctor A. Tabares-Ospina

    2013-06-01

    Full Text Available Staff selection is a difficult task owing to the subjectivity that evaluation involves. This process can be complemented using a decision-support system. This paper presents the implementation of an expert system to systematize the selection process of professors. The management of software development is divided into 4 parts: requirements, design, implementation and commissioning. The proposed system models specific knowledge through relationships between evidence variables and objective variables.

  1. Bayesian variable selection for latent class models.

    Science.gov (United States)

    Ghosh, Joyee; Herring, Amy H; Siega-Riz, Anna Maria

    2011-09-01

    In this article, we develop a latent class model with class probabilities that depend on subject-specific covariates. One of our major goals is to identify important predictors of latent classes. We consider methodology that allows estimation of latent classes while allowing for variable selection uncertainty. We propose a Bayesian variable selection approach and implement a stochastic search Gibbs sampler for posterior computation to obtain model-averaged estimates of quantities of interest such as marginal inclusion probabilities of predictors. Our methods are illustrated through simulation studies and application to data on weight gain during pregnancy, where it is of interest to identify important predictors of latent weight gain classes.

  2. Parent-Implemented Procedural Modification of Escape Extinction in the Treatment of Food Selectivity in a Young Child with Autism

    Science.gov (United States)

    Tarbox, Jonathan; Schiff, Averil; Najdowski, Adel C.

    2010-01-01

    Food selectivity is characterized by the consumption of an inadequate variety of foods. The effectiveness of behavioral treatment procedures, particularly nonremoval of the spoon, is well validated by research. The role of parents in the treatment of feeding disorders and the feasibility of behavioral procedures for parent implementation in the…

  3. [Indications of lung transplantation: Patients selection, timing of listing, and choice of procedure].

    Science.gov (United States)

    Morisse Pradier, H; Sénéchal, A; Philit, F; Tronc, F; Maury, J-M; Grima, R; Flamens, C; Paulus, S; Neidecker, J; Mornex, J-F

    2016-02-01

    Lung transplantation (LT) is now considered as an excellent treatment option for selected patients with end-stage pulmonary diseases, such as COPD, cystic fibrosis, idiopathic pulmonary fibrosis, and pulmonary arterial hypertension. The 2 goals of LT are to provide a survival benefit and to improve quality of life. The 3-step decision process leading to LT is discussed in this review. The first step is the selection of candidates, which requires a careful examination in order to check absolute and relative contraindications. The second step is the timing of listing for LT; it requires the knowledge of disease-specific prognostic factors available in international guidelines, and discussed in this paper. The third step is the choice of procedure: indications of heart-lung, single-lung, and bilateral-lung transplantation are described. In conclusion, this document provides guidelines to help pulmonologists in the referral and selection processes of candidates for transplantation in order to optimize the outcome of LT. Copyright © 2014 Elsevier Masson SAS. All rights reserved.

  4. A Procedural Solution to Model Roman Masonry Structures

    Science.gov (United States)

    Cappellini, V.; Saleri, R.; Stefani, C.; Nony, N.; De Luca, L.

    2013-07-01

    The paper will describe a new approach based on the development of a procedural modelling methodology for archaeological data representation. This is a custom-designed solution based on the recognition of the rules belonging to the construction methods used in Roman times. We have conceived a tool for 3D reconstruction of masonry structures starting from photogrammetric surveying. Our protocol considers different steps. Firstly, we focused on the classification of opus based on the basic interconnections that can lead to a descriptive system used for their unequivocal identification and design. Secondly, we chose an automatic, accurate, flexible and open-source photogrammetric pipeline named Pastis Apero Micmac - PAM, developed by IGN (Paris). We employed it to generate ortho-images from non-oriented images, using a user-friendly interface implemented by CNRS Marseille (France). Thirdly, the masonry elements are created in a parametric and interactive way, and finally they are adapted to the photogrammetric data. The presented application, currently under construction, is developed with an open source programming language called Processing, useful for visual, animated or static, 2D or 3D, interactive creations. Using this computer language, a Java environment has been developed. Therefore, even if the procedural modelling reveals an accuracy level inferior to the one obtained by manual modelling (brick by brick), this method can be useful when taking into account the static evaluation of buildings (requiring quantitative aspects) and metric measures for restoration purposes.

  5. Adverse selection model regarding tobacco consumption

    Directory of Open Access Journals (Sweden)

    Dumitru MARIN

    2006-01-01

    Full Text Available The impact of introducing a tax on tobacco consumption can be studied through an adverse selection model. The objective of the model presented in the following is to characterize the optimal contractual relationship between the governmental authorities and the two types of employees, smokers and non-smokers, taking into account that the consumers’ decision to smoke or not represents an element of risk and uncertainty. Two scenarios are run using the General Algebraic Modeling System (GAMS) software: one without taxes on tobacco consumption and another one with taxes on tobacco consumption, based on the adverse selection model described previously. The results of the two scenarios are compared at the end of the paper: the wage earnings levels and the social welfare in the case of a smoking agent and in the case of a non-smoking agent.

  6. A Theoretical Model for Selective Exposure Research.

    Science.gov (United States)

    Roloff, Michael E.; Noland, Mark

    This study tests the basic assumptions underlying Fishbein's Model of Attitudes by correlating an individual's selective exposure to types of television programs (situation comedies, family drama, and action/adventure) with the attitudinal similarity between individual attitudes and attitudes characterized on the programs. Twenty-three college…

  7. Analysis of boutique arrays: a universal method for the selection of the optimal data normalization procedure.

    Science.gov (United States)

    Uszczyńska, Barbara; Zyprych-Walczak, Joanna; Handschuh, Luiza; Szabelska, Alicja; Kaźmierczak, Maciej; Woronowicz, Wiesława; Kozłowski, Piotr; Sikorski, Michał M; Komarnicki, Mieczysław; Siatkowski, Idzi; Figlerowicz, Marek

    2013-09-01

    DNA microarrays, which are among the most popular genomic tools, are widely applied in biology and medicine. Boutique arrays, which are small, spotted, dedicated microarrays, constitute an inexpensive alternative to whole-genome screening methods. The data extracted from each microarray-based experiment must be transformed and processed prior to further analysis to eliminate any technical bias. The normalization of the data is the most crucial step of microarray data pre-processing and this process must be carefully considered as it has a profound effect on the results of the analysis. Several normalization algorithms have been developed and implemented in data analysis software packages. However, most of these methods were designed for whole-genome analysis. In this study, we tested 13 normalization strategies (ten for double-channel data and three for single-channel data) available on R Bioconductor and compared their effectiveness in the normalization of four boutique array datasets. The results revealed that boutique arrays can be successfully normalized using standard methods, but not every method is suitable for each dataset. We also suggest a universal seven-step workflow that can be applied for the selection of the optimal normalization procedure for any boutique array dataset. The described workflow enables the evaluation of the investigated normalization methods based on the bias and variance values for the control probes, a differential expression analysis and a receiver operating characteristic curve analysis. The analysis of each component results in a separate ranking of the normalization methods. A combination of the ranks obtained from all the normalization procedures facilitates the selection of the most appropriate normalization method for the studied dataset and determines which methods can be used interchangeably.
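
    One representative normalization strategy of the kind compared in such studies is quantile normalization. The sketch below is a generic re-implementation on a synthetic probes-by-arrays matrix, not the Bioconductor code evaluated in the paper.

```python
# Quantile normalization: force every array (column) to share the same
# empirical intensity distribution.
import numpy as np

def quantile_normalize(X):
    ranks = np.argsort(np.argsort(X, axis=0), axis=0)   # rank within column
    mean_of_sorted = np.sort(X, axis=0).mean(axis=1)    # reference quantiles
    return mean_of_sorted[ranks]

rng = np.random.default_rng(7)
raw = rng.lognormal(mean=2.0, sigma=1.0, size=(200, 4))   # 200 probes, 4 arrays
raw[:, 2] *= 1.8                                          # one biased array

norm = quantile_normalize(raw)
print("column medians before:", np.round(np.median(raw, axis=0), 2))
print("column medians after: ", np.round(np.median(norm, axis=0), 2))
```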

  8. Autoregressive model selection with simultaneous sparse coefficient estimation

    CERN Document Server

    Sang, Hailin

    2011-01-01

    In this paper we propose a sparse coefficient estimation procedure for autoregressive (AR) models based on penalized conditional maximum likelihood. The penalized conditional maximum likelihood estimator (PCMLE) thus developed has the advantage of performing simultaneous coefficient estimation and model selection. Mild conditions are given on the penalty function and the innovation process, under which the PCMLE satisfies strong consistency, local $N^{-1/2}$ consistency, and an oracle property, where N is the sample size. Two penalty functions, the least absolute shrinkage and selection operator (LASSO) and the smoothly clipped absolute deviation (SCAD), are considered as examples, and SCAD is shown to have better performance than LASSO. A simulation study confirms our theoretical results. At the end, we provide an application of our method to historical price data of the US Industrial Production Index for consumer goods, and the result is very promising.
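
As an illustration of penalized coefficient estimation for AR models, here is a minimal sketch using the LASSO penalty via scikit-learn on a lagged design matrix (SCAD would require a custom solver); for Gaussian innovations, the L1-penalized least-squares objective stands in for the penalized conditional likelihood. The simulated process and penalty weight are arbitrary choices:

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)

# Simulate an AR(5) process in which only lags 1 and 3 are active.
n, p = 500, 5
true = np.array([0.5, 0.0, -0.3, 0.0, 0.0])
x = np.zeros(n + p)
for t in range(p, n + p):
    x[t] = true @ x[t - p:t][::-1] + rng.normal()
x = x[p:]

# Lagged design matrix: column k holds the series shifted by lag k+1.
X = np.column_stack([x[p - k:-k] for k in range(1, p + 1)])
y = x[p:]

# The L1 penalty shrinks irrelevant lags to exactly zero, so coefficient
# estimation and order selection happen simultaneously.
fit = Lasso(alpha=0.05).fit(X, y)
print(np.round(fit.coef_, 2))   # zeros mark the de-selected lags
```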

  9. Lumping procedure for a kinetic model of catalytic naphtha reforming

    Directory of Open Access Journals (Sweden)

    H. M. Arani

    2009-12-01

    Full Text Available A lumping procedure is developed for obtaining kinetic and thermodynamic parameters of catalytic naphtha reforming. All kinetic and deactivation parameters are estimated from industrial data and thermodynamic parameters are calculated from derived mathematical expressions. The proposed model contains 17 lumps that include the C6 to C8+ hydrocarbon range and 15 reaction pathways. Hougen-Watson Langmuir-Hinshelwood type reaction rate expressions are used for kinetic simulation of catalytic reactions. The kinetic parameters are benchmarked with several sets of plant data and estimated by the SQP optimization method. After calculation of deactivation and kinetic parameters, plant data are compared with model predictions and only minor deviations between experimental and calculated data are generally observed.

  10. Procedures and Methods of Digital Modeling in Representation Didactics

    Science.gov (United States)

    La Mantia, M.

    2011-09-01

    In the Bachelor degree course in Engineering/Architecture at the University "La Sapienza" of Rome, the courses in Design and Survey address not only the learning of methods of representation and the application of descriptive geometry and surveying, intended to expand the student's vision and spatial conception, but also pay particular attention to the use of information technology for the preparation of design and survey drawings. They achieve their goals through an educational path of "learning techniques, procedures and methods of modeling architectural structures." The fields of application involve two different educational areas, design analysis and survey, ranging from the acquisition of metric data (from design or survey) to the development of a three-dimensional virtual model.

  11. VARIABLE SELECTION BY PSEUDO WAVELETS IN HETEROSCEDASTIC REGRESSION MODELS INVOLVING TIME SERIES

    Institute of Scientific and Technical Information of China (English)

    2006-01-01

    A simple but efficient method has been proposed to select variables in heteroscedastic regression models. It is shown that the pseudo empirical wavelet coefficients corresponding to the significant explanatory variables in the regression models are clearly larger than the nonsignificant ones, on the basis of which a procedure is developed to select variables in regression models. The coefficients of the models are also estimated. All estimators are proved to be consistent.

  12. Learning curve estimation in medical devices and procedures: hierarchical modeling.

    Science.gov (United States)

    Govindarajulu, Usha S; Stillo, Marco; Goldfarb, David; Matheny, Michael E; Resnic, Frederic S

    2017-07-30

    In the use of medical device procedures, learning effects have been shown to be a critical component of medical device safety surveillance. To support the estimation of these effects, we evaluated multiple methods for modeling learning rates within a complex simulated dataset representing patients treated by physicians clustered within institutions. We employed dedicated modeling for the learning curves to incorporate the learning hierarchy between institutions and physicians, and then embedded the curves within established methods for hierarchical data, namely generalized estimating equations (GEE) and generalized linear mixed effect models. We found that both methods performed well, but that GEE may have some advantages over generalized linear mixed effect models in ease of modeling and a substantially lower rate of model convergence failures. We then focused on GEE and performed a separate simulation to vary the shape of the learning curve, and also applied various smoothing methods to the plots. We concluded that while both hierarchical methods can be used with our mathematical modeling of the learning curve, GEE tended to perform better across multiple simulated scenarios in accurately modeling the learning effect as a function of physician and hospital hierarchical data in the use of a novel medical device. We found that the choice of shape used to produce the 'learning-free' dataset would be dataset specific, while the smoothing methods were negligibly different from one another. This is an important application for understanding how best to fit this unique learning curve function to hierarchical physician and hospital data. Copyright © 2017 John Wiley & Sons, Ltd.
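
A minimal sketch of fitting a learning curve with GEE in the spirit of the approach described (not the authors' code): clustering is on physician only (the institution level is omitted for brevity), and the data, event rates, and logarithmic curve shape are hypothetical. Uses statsmodels:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)

# Hypothetical data: adverse-event indicators for consecutive cases
# (case_seq) of each physician using a new device.
n_phys, cases = 30, 40
df = pd.DataFrame({
    "physician": np.repeat(np.arange(n_phys), cases),
    "case_seq": np.tile(np.arange(1, cases + 1), n_phys),
})
# Event risk decays with experience: an exponential learning curve.
risk = 0.15 * np.exp(-0.05 * df["case_seq"]) + 0.02
df["event"] = rng.binomial(1, risk)

# GEE logistic regression clustered on physician; log(case_seq) is a
# crude parametric stand-in for the learning-curve shape.
fit = smf.gee("event ~ np.log(case_seq)", groups="physician", data=df,
              family=sm.families.Binomial(),
              cov_struct=sm.cov_struct.Exchangeable()).fit()
print(fit.summary())
```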

  13. Predictive models of procedural human supervisory control behavior

    Science.gov (United States)

    Boussemart, Yves

    Human supervisory control systems are characterized by the computer-mediated nature of the interactions between one or more operators and a given task. Nuclear power plants, air traffic management and unmanned vehicles operations are examples of such systems. In this context, the role of the operators is typically highly proceduralized due to the time and mission-critical nature of the tasks. Therefore, the ability to continuously monitor operator behavior so as to detect and predict anomalous situations is a critical safeguard for proper system operation. In particular, such models can help support the decision making process of a supervisor of a team of operators by providing alerts when likely anomalous behaviors are detected. By exploiting the operator behavioral patterns which are typically reinforced through standard operating procedures, this thesis proposes a methodology that uses statistical learning techniques in order to detect and predict anomalous operator conditions. More specifically, the proposed methodology relies on hidden Markov models (HMMs) and hidden semi-Markov models (HSMMs) to generate predictive models of unmanned vehicle systems operators. Through the exploration of the resulting HMMs in two distinct single operator scenarios, the methodology presented in this thesis is validated and shown to provide models capable of reliably predicting operator behavior. In addition, the use of HSMMs on the same data scenarios provides the temporal component of the predictions missing from the HMMs. The final step of this work is to examine how the proposed methodology scales to more complex scenarios involving teams of operators. Adopting a holistic team modeling approach, both HMMs and HSMMs are learned based on two team-based data sets. The results show that the HSMMs can provide valuable timing information in the single operator case, whereas HMMs tend to be more robust to increased team complexity. In addition, this thesis discusses the…
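
To make the HMM-based monitoring idea concrete, here is a minimal sketch (not the thesis models) of the forward algorithm, which computes the likelihood an HMM assigns to a window of discretized operator actions; a low per-action score would flag potentially anomalous behavior. All parameter values are invented for illustration:

```python
import numpy as np

def forward_loglik(obs, pi, A, B):
    """Scaled forward algorithm: log-likelihood of a discrete
    observation sequence under an HMM with initial distribution pi,
    transition matrix A and emission matrix B."""
    alpha = pi * B[:, obs[0]]
    loglik = np.log(alpha.sum())
    alpha = alpha / alpha.sum()
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
        s = alpha.sum()          # rescaling avoids numerical underflow
        loglik += np.log(s)
        alpha = alpha / s
    return loglik

# Two invented "operator modes" emitting three interface actions.
pi = np.array([0.6, 0.4])
A = np.array([[0.9, 0.1],
              [0.2, 0.8]])
B = np.array([[0.7, 0.2, 0.1],
              [0.1, 0.3, 0.6]])
window = np.array([0, 0, 1, 2, 2, 2])   # recent discretized actions
print(forward_loglik(window, pi, A, B) / len(window))  # per-action score
```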

  14. Model selection for radiochromic film dosimetry

    CERN Document Server

    Méndez, Ignasi

    2015-01-01

    The purpose of this study was to find the most accurate model for radiochromic film dosimetry by comparing different channel independent perturbation models. A model selection approach based on (algorithmic) information theory was followed, and the results were validated using gamma-index analysis on a set of benchmark test cases. Several questions were addressed: (a) whether incorporating the information of the non-irradiated film, by scanning prior to irradiation, improves the results; (b) whether lateral corrections are necessary when using multichannel models; (c) whether multichannel dosimetry produces better results than single-channel dosimetry; (d) which multichannel perturbation model provides more accurate film doses. It was found that scanning prior to irradiation and applying lateral corrections improved the accuracy of the results. For some perturbation models, increasing the number of color channels did not result in more accurate film doses. Employing Truncated Normal perturbations was found to...

  15. A competency based selection procedure for Dutch postgraduate GP training: a pilot study on validity and reliability

    NARCIS (Netherlands)

    Vermeulen, M.I.; Tromp, F.; Zuithoff, N.P.; Pieters, R.H.; Damoiseaux, R.A.; Kuyvenhoven, M.M.

    2014-01-01

    Abstract Background: Historically, semi-structured interviews (SSI) have been the core of the Dutch selection for postgraduate general practice (GP) training. This paper describes a pilot study on a newly designed competency-based selection procedure that assesses whether candidates have the competencies…

  17. 23 CFR 636.203 - What are the elements of two-phase selection procedures for competitive proposals?

    Science.gov (United States)

    2010-04-01

    23 CFR 636.203 (Highways; Federal Highway Administration, Department of Transportation; Engineering and Traffic Operations; Design-Build Contracting): What are the elements of two-phase selection procedures for competitive proposals?

  18. Portfolio Selection Model with Derivative Securities

    Institute of Scientific and Technical Information of China (English)

    王春峰; 杨建林; 蒋祥林

    2003-01-01

    Traditional portfolio theory assumes that the return rate of a portfolio is normally distributed. However, this assumption is not true when derivative assets are incorporated. In this paper a portfolio selection model is developed based on a utility function which can capture asymmetries in random variable distributions. Other realistic conditions are also considered, such as liabilities and integer decision variables. Since the resulting model is a complex mixed-integer nonlinear programming problem, a simulated annealing algorithm is applied to solve it. A numerical example is given and sensitivity analysis is conducted for the model.
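
A minimal sketch of the simulated annealing approach to such a mixed-integer portfolio problem; the asset statistics, utility function, and cooling schedule are hypothetical stand-ins for the paper's model (which also handles options and liabilities):

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical inputs: expected returns and covariance of six assets;
# the portfolio must hold exactly `budget` integer lots.
mu = np.array([0.04, 0.06, 0.05, 0.08, 0.03, 0.07])
cov = np.diag([0.02, 0.05, 0.03, 0.09, 0.01, 0.06])
risk_aversion, budget = 3.0, 10

def utility(w):
    f = w / w.sum()                       # lot counts -> weights
    return mu @ f - 0.5 * risk_aversion * f @ cov @ f

def neighbour(w):
    w = w.copy()                          # move one lot between assets
    w[rng.choice(np.flatnonzero(w))] -= 1
    w[rng.integers(len(w))] += 1
    return w

w = rng.multinomial(budget, np.full(6, 1 / 6))
best, T = w, 1.0
for _ in range(5000):
    cand = neighbour(w)
    du = utility(cand) - utility(w)
    if du > 0 or rng.random() < np.exp(du / T):   # Metropolis acceptance
        w = cand
    if utility(w) > utility(best):
        best = w
    T *= 0.999                                    # geometric cooling
print(best, round(utility(best), 4))
```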

  19. A simple procedure for selection and sizing of indirect passive solar heating systems

    Energy Technology Data Exchange (ETDEWEB)

    Bansal, N.K. (Indian Inst. of Tech., New Delhi (India). Centre of Energy Studies); Thomas, P.C. (Tata Energy Research Inst., Bombay (India))

    1991-01-01

    Equivalent solar-air temperatures have been defined for four indirect gain passive solar heating concepts, namely, mass wall, water wall, Trombe wall and solarium. Steady state thermal efficiencies have also been defined as a measure of the ability of each system to deliver heat into the living space. Design curves have been developed which relate the average instantaneous solar radiation incident on the passive element to thermal efficiency for different values of ambient temperature. These curves are useful in selection of an appropriate passive heating concept for a particular location. It is inferred that a solarium is most effective at very low levels of incident radiation and low ambient temperature. Water walls and Trombe walls are most efficient at higher levels of incident radiation. A simple procedure has been developed for a first approximation of sizing the selected system using these design curves and a minimum of meteorological information, namely, monthly average of daily global solar radiation, monthly average maximum and minimum ambient temperatures. (author).

  20. A P-value model for theoretical power analysis and its applications in multiple testing procedures

    Directory of Open Access Journals (Sweden)

    Fengqing Zhang

    2016-10-01

    Full Text Available Abstract Background: Power analysis is a critical aspect of the design of experiments to detect an effect of a given size. When multiple hypotheses are tested simultaneously, multiplicity adjustments to p-values should be taken into account in power analysis. There are a limited number of studies on power analysis in multiple testing procedures. For some methods, the theoretical analysis is difficult and extensive numerical simulations are often needed, while other methods oversimplify the information under the alternative hypothesis. To this end, this paper aims to develop a new statistical model for power analysis in multiple testing procedures. Methods: We propose a step-function-based p-value model under the alternative hypothesis, which is simple enough to perform power analysis without simulations, yet not so simple as to lose the information from the alternative hypothesis. The first step is to transform distributions of different test statistics (e.g., t, chi-square or F) to distributions of corresponding p-values. We then use a step function to approximate each p-value distribution by matching the mean and variance. Lastly, the step-function-based p-value model can be used for theoretical power analysis. Results: The proposed model is applied to problems in multiple testing procedures. We first show how the most powerful critical constants can be chosen using the step-function-based p-value model. Our model is then applied to the field of multiple testing procedures to explain the assumption of monotonicity of the critical constants. Lastly, we apply our model to a behavioral weight loss and maintenance study to select the optimal critical constants. Conclusions: The proposed model is easy to implement and preserves the information from the alternative hypothesis.
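
A sketch of the core construction under stated assumptions: for a one-sided z-test with a hypothetical effect size delta (the paper also treats t, chi-square and F statistics), the exact p-value distribution under the alternative is approximated by a coarse two-level step function matching total mass, mean and second moment, then used for power analysis; the paper's step functions are finer-grained than this:

```python
import numpy as np
from scipy import stats, integrate, optimize

delta = 2.0   # hypothetical standardized effect size, one-sided z-test

# Exact density of the p-value under the alternative:
# p = 1 - Phi(Z) with Z ~ N(delta, 1).
def g(p):
    z = stats.norm.ppf(1 - p)
    return stats.norm.pdf(z - delta) / stats.norm.pdf(z)

m1 = integrate.quad(lambda p: p * g(p), 0, 1)[0]        # mean
m2 = integrate.quad(lambda p: p ** 2 * g(p), 0, 1)[0]   # 2nd moment

# Two-level step function, value a on [0, c) and b on [c, 1],
# matching total mass, mean and second moment.
def eqs(v):
    a, b, c = v
    return [a * c + b * (1 - c) - 1,
            a * c ** 2 / 2 + b * (1 - c ** 2) / 2 - m1,
            a * c ** 3 / 3 + b * (1 - c ** 3) / 3 - m2]

a, b, c = optimize.fsolve(eqs, [5.0, 0.1, 0.2])

alpha = 0.05  # assumed significance threshold
power_step = a * alpha if alpha < c else a * c + b * (alpha - c)
power_exact = stats.norm.sf(stats.norm.ppf(1 - alpha) - delta)
print(round(power_step, 3), round(power_exact, 3))  # coarse vs exact
```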

  1. Aerosol model selection and uncertainty modelling by adaptive MCMC technique

    Directory of Open Access Journals (Sweden)

    M. Laine

    2008-12-01

    Full Text Available We present a new technique for the model selection problem in atmospheric remote sensing. The technique is based on Monte Carlo sampling and it allows model selection, calculation of model posterior probabilities and model averaging in a Bayesian way.

    The algorithm developed here is called the Adaptive Automatic Reversible Jump Markov chain Monte Carlo method (AARJ). It uses the Markov chain Monte Carlo (MCMC) technique and its extension called Reversible Jump MCMC. Both of these techniques have been used extensively in statistical parameter estimation problems in a wide range of applications since the late 1990s. The novel feature of our algorithm is that it is fully automatic and easy to use.

    We show how the AARJ algorithm can be implemented and used for model selection and averaging, and to directly incorporate the model uncertainty. We demonstrate the technique by applying it to the statistical inversion problem of gas profile retrieval for the GOMOS instrument on board the ENVISAT satellite. Four simple models are used simultaneously to describe the dependence of the aerosol cross-sections on wavelength. During the AARJ estimation all the models are used and we obtain a probability distribution characterizing how probable each model is. By using model averaging, the uncertainty related to selecting the aerosol model can be taken into account in assessing the uncertainty of the estimates.
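
The AARJ algorithm samples model posterior probabilities directly; a common lightweight stand-in, sketched below with invented numbers, approximates them from BIC values and uses the resulting weights for model averaging:

```python
import numpy as np

# Invented numbers: maximised log-likelihoods, parameter counts and a
# scalar retrieval from four candidate aerosol cross-section models,
# all fitted to the same n observations.
loglik = np.array([-120.3, -118.9, -118.5, -118.4])
k = np.array([1, 2, 2, 3])
pred = np.array([0.82, 0.85, 0.88, 0.86])
n = 200

bic = -2 * loglik + k * np.log(n)
w = np.exp(-0.5 * (bic - bic.min()))   # Schwarz-weight approximation
w /= w.sum()                           # posterior model probabilities
print(np.round(w, 3))                  # how probable each model is
print(round(float(w @ pred), 3))       # model-averaged estimate
```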

  2. On Model Selection Criteria in Multimodel Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Ye, Ming; Meyer, Philip D.; Neuman, Shlomo P.

    2008-03-21

    Hydrologic systems are open and complex, rendering them prone to multiple conceptualizations and mathematical descriptions. There has been a growing tendency to postulate several alternative hydrologic models for a site and use model selection criteria to (a) rank these models, (b) eliminate some of them and/or (c) weigh and average predictions and statistics generated by multiple models. This has led to some debate among hydrogeologists about the merits and demerits of common model selection (also known as model discrimination or information) criteria such as AIC [Akaike, 1974], AICc [Hurvich and Tsai, 1989], BIC [Schwarz, 1978] and KIC [Kashyap, 1982] and some lack of clarity about the proper interpretation and mathematical representation of each criterion. In particular, whereas we [Neuman, 2003; Ye et al., 2004, 2005; Meyer et al., 2007] have based our approach to multimodel hydrologic ranking and inference on the Bayesian criterion KIC (which reduces asymptotically to BIC), Poeter and Anderson [2005] and Poeter and Hill [2007] have voiced a preference for the information-theoretic criterion AICc (which reduces asymptotically to AIC). Their preference stems in part from a perception that KIC and BIC require a "true" or "quasi-true" model to be in the set of alternatives while AIC and AICc are free of such an unreasonable requirement. We examine the model selection literature to find that (a) all published rigorous derivations of AIC and AICc require that the (true) model having generated the observational data be in the set of candidate models; (b) though BIC and KIC were originally derived by assuming that such a model is in the set, BIC has been rederived by Cavanaugh and Neath [1999] without the need for such an assumption; (c) KIC reduces to BIC as the number of observations becomes large relative to the number of adjustable model parameters, implying that it likewise does not require the existence of a true model in the set of alternatives; (d) if a true…
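
For reference, the first three criteria are simple functions of the maximised log-likelihood, which the short sketch below computes (KIC additionally involves the Fisher information matrix and is omitted here); the two example models are hypothetical:

```python
import numpy as np

def information_criteria(loglik, k, n):
    """AIC, AICc and BIC from a maximised log-likelihood with k
    adjustable parameters and n observations."""
    aic = -2 * loglik + 2 * k
    aicc = aic + 2 * k * (k + 1) / (n - k - 1)   # small-sample correction
    bic = -2 * loglik + k * np.log(n)
    return aic, aicc, bic

# Two hypothetical competing models fitted to the same 50 observations.
for name, ll, k in [("simple", -112.4, 3), ("complex", -108.9, 7)]:
    aic, aicc, bic = information_criteria(ll, k, 50)
    print(f"{name:8s} AIC={aic:7.1f} AICc={aicc:7.1f} BIC={bic:7.1f}")
```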

  3. A Neurodynamical Model for Selective Visual Attention

    Institute of Scientific and Technical Information of China (English)

    QU Jing-Yi; WANG Ru-Bin; ZHANG Yuan; DU Ying

    2011-01-01

    A neurodynamical model for selective visual attention considering orientation preference is proposed. Since orientation preference is one of the most important properties of neurons in the primary visual cortex, it should be fully considered besides external stimuli intensity. By tuning the parameter of orientation preference, the regimes of synchronous dynamics associated with the development of the attention focus are studied. The attention focus is represented by those peripheral neurons that generate spikes synchronously with the central neuron while the activity of other peripheral neurons is suppressed. Such dynamics correspond to the partial synchronization mode. Simulation results show that the model can sequentially select objects with different orientation preferences and has a reliable shift of attention from one object to another, which are consistent with the experimental results that neurons with different orientation preferences are laid out in pinwheel patterns.

  4. Model structure selection in convolutive mixtures

    DEFF Research Database (Denmark)

    Dyrholm, Mads; Makeig, Scott; Hansen, Lars Kai

    2006-01-01

    The CICAAR algorithm (convolutive independent component analysis with an auto-regressive inverse model) allows separation of white (i.i.d.) source signals from convolutive mixtures. We introduce a source color model as a simple extension to the CICAAR which allows for a more parsimonious representation in many practical mixtures. The new filter-CICAAR allows Bayesian model selection and can help answer questions like: 'Are we actually dealing with a convolutive mixture?'. We try to answer this question for EEG data.

  6. Skewed factor models using selection mechanisms

    KAUST Repository

    Kim, Hyoung-Moon

    2015-12-21

    Traditional factor models explicitly or implicitly assume that the factors follow a multivariate normal distribution; that is, only moments up to order two are involved. However, it may happen in real data problems that the first two moments cannot explain the factors. Based on this motivation, here we devise three new skewed factor models, the skew-normal, the skew-t, and the generalized skew-normal factor models, depending on a selection mechanism on the factors. The ECME algorithms are adopted to estimate related parameters for statistical inference. Monte Carlo simulations validate our new models and we demonstrate the need for skewed factor models using the classic open/closed book exam scores dataset.

  7. Behavioral optimization models for multicriteria portfolio selection

    Directory of Open Access Journals (Sweden)

    Mehlawat Mukesh Kumar

    2013-01-01

    Full Text Available In this paper, the behavioral construct of suitability is used to develop a multicriteria decision making framework for portfolio selection. To achieve this purpose, we rely on multiple methodologies. The analytic hierarchy process technique is used to model the suitability considerations with a view to obtaining a suitability performance score for each asset. A fuzzy multiple criteria decision making method is used to obtain the financial quality score of each asset based upon the investor's ratings on the financial criteria. Two optimization models are developed for optimal asset allocation, considering financial and suitability criteria simultaneously. An empirical study is conducted on randomly selected assets from the National Stock Exchange, Mumbai, India, to demonstrate the effectiveness of the proposed methodology.

  8. Multi-dimensional model order selection

    Directory of Open Access Journals (Sweden)

    Roemer Florian

    2011-01-01

    Full Text Available Abstract Multi-dimensional model order selection (MOS) techniques achieve an improved accuracy, reliability, and robustness, since they consider all dimensions jointly during the estimation of parameters. Additionally, from fundamental identifiability results of multi-dimensional decompositions, it is known that the number of main components can be larger when compared to matrix-based decompositions. In this article, we show how to use tensor calculus to extend matrix-based MOS schemes and we also present our proposed multi-dimensional model order selection scheme based on the closed-form PARAFAC algorithm, which is only applicable to multi-dimensional data. In general, as shown by means of simulations, the Probability of correct Detection (PoD) of our proposed multi-dimensional MOS schemes is much better than the PoD of matrix-based schemes.

  9. Model selection and comparison for independent sinusoids

    DEFF Research Database (Denmark)

    Nielsen, Jesper Kjær; Christensen, Mads Græsbøll; Jensen, Søren Holdt

    2014-01-01

    In the signal processing literature, many methods have been proposed for estimating the number of sinusoidal basis functions from a noisy data set. The most popular method is the asymptotic MAP criterion, which is sometimes also referred to as the BIC. In this paper, we extend and improve this method by considering the problem in a full Bayesian framework instead of the approximate formulation, on which the asymptotic MAP criterion is based. This leads to a new model selection and comparison method, the lp-BIC, whose computational complexity is of the same order as the asymptotic MAP criterion. Through simulations, we demonstrate that the lp-BIC outperforms the asymptotic MAP criterion and other state of the art methods in terms of model selection, de-noising and prediction performance. The simulation code is available online.

  10. Developing Physiologic Models for Emergency Medical Procedures Under Microgravity

    Science.gov (United States)

    Parker, Nigel; O'Quinn, Veronica

    2012-01-01

    Several technological enhancements have been made to METI's commercial Emergency Care Simulator (ECS) with regard to how microgravity affects human physiology. The ECS uses both a software-only lung simulation and an integrated mannequin lung that uses a physical lung bag for creating chest excursions, together with a digital simulation of lung mechanics and gas exchange. METI's patient simulators incorporate models of human physiology that simulate lung and chest wall mechanics, as well as pulmonary gas exchange. Microgravity affects how O2 and CO2 are exchanged in the lungs. Procedures were also developed to take into account the Glasgow Coma Scale for determining levels of consciousness, by varying the ECS eye-blinking function to partially indicate the patient's level of consciousness. In addition, the ECS was modified to provide various levels of pulses, from weak and thready to hyper-dynamic, to assist in assessing patient conditions from the femoral, carotid, brachial, and pedal pulse locations.

  11. Tracking Models for Optioned Portfolio Selection

    Science.gov (United States)

    Liang, Jianfeng

    In this paper we study a target tracking problem for the portfolio selection involving options. In particular, the portfolio in question contains a stock index and some European style options on the index. A refined tracking-error-variance methodology is adopted to formulate this problem as a multi-stage optimization model. We derive the optimal solutions based on stochastic programming and optimality conditions. Attention is paid to the structure of the optimal payoff function, which is shown to possess rich properties.

  12. New insights in portfolio selection modeling

    OpenAIRE

    Zareei, Abalfazl

    2016-01-01

    Recent advancements in the field of network theory have opened a new line of development in portfolio selection techniques that stands on the ground of perceiving the financial market as a network, with assets as nodes and links accounting for various types of relationships among financial assets. In the first chapter, we model the shock propagation mechanism among assets via network theory and provide an approach to construct well-diversified portfolios that are resilient to shock propagation and c...

  13. Experiences with a procedure for modeling product knowledge

    DEFF Research Database (Denmark)

    Hansen, Benjamin Loer; Hvam, Lars

    2002-01-01

    This paper presents experiences with a procedure for building configurators. The procedure has been used in an American company producing custom-made precision air conditioning equipment. The paper describes experiences with the use of the procedure and with the project in general.

  14. Model Selection in Data Analysis Competitions

    DEFF Research Database (Denmark)

    Wind, David Kofoed; Winther, Ole

    2014-01-01

    The use of data analysis competitions for selecting the most appropriate model for a problem is a recent innovation in the field of predictive machine learning. Two of the most well-known examples of this trend were the Netflix Competition and, more recently, the competitions hosted on the online platform Kaggle. In this paper, we state and try to verify a set of qualitative hypotheses about predictive modelling, both in general and in the scope of data analysis competitions. To verify our hypotheses we look at previous competitions and their outcomes, and use qualitative interviews with top…

  15. Selection of procedures for inservice inspections; Auswahl der Verfahren fuer wiederkehrende Pruefungen

    Energy Technology Data Exchange (ETDEWEB)

    Brast, G. [Preussische Elektrizitaets-AG (Preussenelektra), Hannover (Germany); Britz, A. [Bayernwerk AG, Muenchen (Germany); Maier, H.J. [Stuttgart Univ. (Germany). Staatliche Materialpruefungsanstalt; Seidenkranz, T. [TUEV Energie- und Systemtechnik GmbH, Mannheim (Germany)

    1998-11-01

    At present, selection of procedures for inservice inspection has to take into account the legal basis, i.e. the existing regulatory codes, and the practical aspects, i.e. experience and information obtained by the general, initial inservice inspection or performance data obtained by the latest, recurrent inspection. However, regulatory codes are being reviewed to a certain extent in order to permit integration of technological progress. Depending on the degree of availability in future, of inspection task-specific, sensitive and qualified NDE techniques for inservice inspections (`risk based ISI`), the framework of defined inspection intervals, sites, and detection limits will be broken up and altered in response to progress made. This opens up new opportunities for an optimization of inservice inspections for proof of component integrity. (orig./CB)

  16. New masking procedure for selective complexometric determination of copper(II).

    Science.gov (United States)

    Singh, R P

    1972-11-01

    A study has been made of a new masking procedure for highly selective complexometric determination of copper(II), based on decomposition of the copper-EDTA complex at pH 5-6. Among the various combinations of masking agents tried, ternary masking mixtures comprising a main complexing agent (thiourea), a reducing agent (ascorbic acid) and an auxiliary complexing agent (thiosemicarbazide or a small amount of 1,10-phenanthroline or 2,2'-dipyridyl) have been found most suitable. An excess of EDTA is added and the surplus EDTA is back-titrated with lead (or zinc) nitrate with Xylenol Orange as indicator (pH 5-6). A masking mixture is then added to decompose the copper-EDTA complex and the liberated EDTA is again back-titrated with lead (or zinc) nitrate. The following cations do not interfere: Ag(+), Hg(2+), Pb(2+), Ni(2+), Bi(3+), As(3+), Al(3+), Sb(3+), Sn(4+), Cd(2+), Co(2+), Cr(3+) and moderate amounts of Fe(3+) and Mn(2+). The notable feature is that consecutive determination of Hg(2+) and Cu(2+) can be conveniently carried out in the presence of other cations.

  17. Procedures for selecting and buying district heating equipment. Sofia district heating. Final report

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1997-11-01

    The aim of this Final Report, prepared for the project `Procedures for Selecting and Buying District Heating Equipment - Sofia District Heating Company`, is to establish an overview of the activities accomplished, the outputs delivered and the general experience gained as a result of the project. The main objective of the project is to enable Sofia District Heating Company to prepare specifications and tender documents, identify possible suppliers, evaluate offers, etc. in connection with the purchase of district heating equipment. This objective has been reached by using rehabilitation of sub-stations as an example requested by Sofia DH. The project was originally planned to be finalized by the end of 1995, but due to extensions of the scope of work, the project was prolonged until the end of 1997. The following main activities were accomplished: Preparation of a detailed work plan; Collection of background information; Discussion and advice about technical specifications and tender documents for sub-station rehabilitation; Input to terms of reference for a master plan study; Input to technical specification for heat meters; Collection of ideas for topics and examples related to dissemination of information to consumers about matters related to district heating consumption. (EG)

  18. HIGH QUALITY ENVIRONMENTAL PRINCIPLES APPLIED TO THE ARCHITECTONIC DESIGN SELECTION PROCEDURE: THE NUTRE LAB CASE

    Directory of Open Access Journals (Sweden)

    Claudia Barroso Krause

    2012-06-01

    Full Text Available The need to produce more sustainable buildings has been influencing design decisions all over the world. This makes it imperative, in Brazil, to develop strategies and methods to aid decision making during the design process, focused on high environmental quality. This paper presents a decision support tool based on the principles of sustainable construction developed by the Project, Architecture and Sustainability Research Group (GPAS) of the Federal University of Rio de Janeiro, Brazil. The methodology was developed for the selection of a preliminary design for a laboratory to be built at the Rio Technology Park on the university campus. The support provided by GPAS occurred in three stages: the elaboration of the Reference Guide for the competitors, the development of a methodology to evaluate the proposed solutions (based on environmental performance criteria), and the assistance of the members of the jury in the trial phase. The theoretical framework was based upon the concepts of bioclimatic architecture, the procedures specified by the HQE® certification (Haute Qualité Environnementale) and the method suggested by the ADDENDA® architecture office. The success of this experience points to the possibility of future application in similar cases.

  19. Inflation model selection meets dark radiation

    Science.gov (United States)

    Tram, Thomas; Vallance, Robert; Vennin, Vincent

    2017-01-01

    We investigate how inflation model selection is affected by the presence of additional free-streaming relativistic degrees of freedom, i.e. dark radiation. We perform a full Bayesian analysis of both inflation parameters and cosmological parameters taking reheating into account self-consistently. We compute the Bayesian evidence for a few representative inflation scenarios in both the standard ΛCDM model and an extension including dark radiation parametrised by its effective number of relativistic species Neff. Using a minimal dataset (Planck low-l polarisation, temperature power spectrum and lensing reconstruction), we find that the observational status of most inflationary models is unchanged. The exceptions are potentials such as power-law inflation that predict large values for the scalar spectral index that can only be realised when Neff is allowed to vary. Adding baryon acoustic oscillations data and the B-mode data from BICEP2/Keck makes power-law inflation disfavoured, while adding local measurements of the Hubble constant H0 makes power-law inflation slightly favoured compared to the best single-field plateau potentials. This illustrates how the dark radiation solution to the H0 tension would have deep consequences for inflation model selection.

  20. Information technology - Telecommunications and information exchange between systems - High-level data link control procedures - Description of the X.25 LAPB-compatible DTE data link procedures; Amendment 1: Modulo 32768 and multi-selective reject option

    CERN Document Server

    International Organization for Standardization. Geneva

    1996-01-01

    Information technology - Telecommunications and information exchange between systems - High-level data link control procedures - Description of the X.25 LAPB-compatible DTE data link procedures; Amendment 1: Modulo 32768 and multi-selective reject option

  1. Modified uterine allotransplantation and immunosuppression procedure in the sheep model.

    Directory of Open Access Journals (Sweden)

    Li Wei

    Full Text Available OBJECTIVE: To develop an orthotopic, allogeneic, uterine transplantation technique and an effective immunosuppressive protocol in the sheep model. METHODS: In this pilot study, 10 sexually mature ewes were subjected to laparotomy and total abdominal hysterectomy with oophorectomy to procure uterus allografts. The cold ischemic time was 60 min. End-to-end vascular anastomosis was performed using continuous, non-interlocking sutures. Complete tissue reperfusion was achieved in all animals within 30 s after the vascular re-anastomosis, without any evidence of arterial or venous thrombosis. The immunosuppressive protocol consisted of tacrolimus, mycophenolate mofetil and methylprednisolone tablets. Graft viability was assessed by transrectal ultrasonography and second-look laparotomy at 2 and 4 weeks, respectively. RESULTS: Viable uterine tissue and vascular patency were observed on transrectal ultrasonography and second-look laparotomy. Histological analysis of the graft tissue (performed in one ewe) revealed normal tissue architecture with a very subtle inflammatory reaction but no edema or stasis. CONCLUSION: We have developed a modified procedure that allowed us to successfully perform orthotopic, allogeneic, uterine transplantation in sheep, whose uterine and vascular anatomy (apart from the bicornuate uterus) is similar to the human anatomy, making the ovine model excellent for human uterine transplant research.

  2. Radiographic implications of procedures involving cardiac implantable electronic devices (CIEDs – Selected aspects

    Directory of Open Access Journals (Sweden)

    Roman Steckiewicz

    2017-06-01

    Full Text Available Background: Some cardiac implantable electronic device (CIED) implantation procedures require the use of X-rays, which is reflected by such parameters as total fluoroscopy time (TFT) and dose-area product (DAP), defined as the absorbed dose multiplied by the area irradiated. Material and Methods: This retrospective study evaluated 522 CIED implantation procedures (424 de novo and 98 device upgrade and new lead placement procedures) in 176 women and 346 men (mean age 75±11 years) over the period 2012–2015. The recorded procedure-related parameters TFT and DAP were evaluated in the subgroups specified below. The group of 424 de novo procedures included 203 pacemaker (PM) and 171 implantable cardioverter-defibrillator (ICD) implantation procedures, separately stratified by single-chamber and dual-chamber systems. Another subgroup of de novo procedures involved 50 cardiac resynchronization therapy (CRT) devices. The evaluated parameters in the group of 98 upgrade procedures were compared between 2 subgroups: CRT only, and combined PM and ICD implantation procedures. Results: We observed differences in TFT and DAP values between procedure types, with PM-related procedures showing the lowest, ICD-related procedures intermediate (with values for single-chamber systems considerably lower than those for dual-chamber systems), and CRT implantation procedures the highest X-ray exposure. Upgrades to CRT were associated with 4 times higher TFT and DAP values in comparison to other upgrade procedures. CRT de novo implantation procedures and upgrades to CRT showed similar mean values of the evaluated parameters. Conclusions: Total fluoroscopy time and DAP values correlated progressively with CIED implantation procedure complexity, with CRT-related procedures showing the highest values of both parameters. Med Pr 2017;68(3):363–374

  3. Efficiently adapting graphical models for selectivity estimation

    DEFF Research Database (Denmark)

    Tzoumas, Kostas; Deshpande, Amol; Jensen, Christian S.

    2013-01-01

    Query optimizers rely on statistical models that succinctly describe the underlying data. Models are used to derive cardinality estimates for intermediate relations, which in turn guide the optimizer to choose the best query execution plan. The quality of the resulting plan is highly dependent on these estimates, which are commonly computed as the product of the selectivities of the constituent predicates. However, this independence assumption is more often than not wrong, and is considered to be the most common cause of sub-optimal query execution plans chosen by modern query optimizers. We take a step towards a principled and practical approach to performing cardinality estimation without making the independence assumption. By carefully using concepts from the field of graphical models, we are able to factor the joint probability distribution over all the attributes in the database into small, usually two-dimensional distributions, without a significant loss of accuracy.

  4. The Markowitz model for portfolio selection

    Directory of Open Access Journals (Sweden)

    MARIAN ZUBIA ZUBIAURRE

    2002-06-01

    Full Text Available Since its first appearance, the Markowitz model for portfolio selection has been a basic theoretical reference, opening several new lines of development. In practice, however, it has hardly been used by portfolio managers and investment analysts despite its success in the theoretical field. With our paper we would like to show how the Markowitz model may be of great help in real stock markets. Through an empirical study we verify the capability of Markowitz's model to produce portfolios with higher profitability and lower risk than the portfolio represented by the IBEX-35 and IGBM indexes. Furthermore, we test the suggested efficiency of these indexes as representatives of the market theoretical portfolio.
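
A minimal sketch of the underlying mean-variance optimization: closed-form minimum-variance weights for a target return (short sales allowed), derived from the Lagrangian first-order conditions. The return and covariance estimates below are invented, not data from the IBEX-35 or IGBM:

```python
import numpy as np

# Invented annualised return and covariance estimates for four stocks.
mu = np.array([0.08, 0.12, 0.10, 0.07])
cov = np.array([[0.10, 0.02, 0.04, 0.00],
                [0.02, 0.16, 0.06, 0.02],
                [0.04, 0.06, 0.12, 0.01],
                [0.00, 0.02, 0.01, 0.08]])

def markowitz_weights(mu, cov, target):
    """Minimum-variance weights with w'mu = target and w'1 = 1
    (short sales allowed), from the Lagrangian first-order conditions:
    w = Sigma^-1 (lam * mu + gam * 1)."""
    inv = np.linalg.inv(cov)
    ones = np.ones(len(mu))
    a, b, c = mu @ inv @ mu, mu @ inv @ ones, ones @ inv @ ones
    lam, gam = np.linalg.solve([[a, b], [b, c]], [target, 1.0])
    return inv @ (lam * mu + gam * ones)

w = markowitz_weights(mu, cov, target=0.10)
print(np.round(w, 3), round(float(np.sqrt(w @ cov @ w)), 4))
```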

  5. Information criteria for astrophysical model selection

    CERN Document Server

    Liddle, A R

    2007-01-01

    Model selection is the problem of distinguishing competing models, perhaps featuring different numbers of parameters. The statistics literature contains two distinct sets of tools, those based on information theory such as the Akaike Information Criterion (AIC), and those based on Bayesian inference such as the Bayesian evidence and Bayesian Information Criterion (BIC). The Deviance Information Criterion combines ideas from both heritages; it is readily computed from Monte Carlo posterior samples and, unlike the AIC and BIC, allows for parameter degeneracy. I describe the properties of the information criteria, and as an example compute them from WMAP3 data for several cosmological models. I find that at present the information theory and Bayesian approaches give significantly different conclusions from that data.

  6. Entropic Priors and Bayesian Model Selection

    CERN Document Server

    Brewer, Brendon J

    2009-01-01

    We demonstrate that the principle of maximum relative entropy (ME), used judiciously, can ease the specification of priors in model selection problems. The resulting effect is that models that make sharp predictions are disfavoured, weakening the usual Bayesian "Occam's Razor". This is illustrated with a simple example involving what Jaynes called a "sure thing" hypothesis. Jaynes' resolution of the situation involved introducing a large number of alternative "sure thing" hypotheses that were possible before we observed the data. However, in more complex situations, it may not be possible to explicitly enumerate large numbers of alternatives. The entropic priors formalism produces the desired result without modifying the hypothesis space or requiring explicit enumeration of alternatives; all that is required is a good model for the prior predictive distribution for the data. This idea is illustrated with a simple rigged-lottery example, and we outline how this idea may help to resolve a recent debate amongst ...

  7. Appropriate model selection methods for nonstationary generalized extreme value models

    Science.gov (United States)

    Kim, Hanbeen; Kim, Sooyoung; Shin, Hongjoon; Heo, Jun-Haeng

    2017-04-01

    Considerable evidence has accumulated that hydrologic data series are nonstationary in nature. This has led to many studies on nonstationary frequency analysis. Nonstationary probability distribution models involve parameters that vary over time. Therefore, it is not a straightforward process to apply conventional goodness-of-fit tests to the selection of an appropriate nonstationary probability distribution model. Tests that are generally recommended for such a selection include the Akaike information criterion (AIC), corrected Akaike information criterion (AICc), Bayesian information criterion (BIC), and likelihood ratio test (LRT). In this study, a Monte Carlo simulation was performed to compare the performances of these four tests, with regard to nonstationary as well as stationary generalized extreme value (GEV) distributions. Proper model selection ratios and sample sizes were taken into account in evaluating the performance of all four tests. The BIC demonstrated the best performance with regard to stationary GEV models. In the case of nonstationary GEV models, the AIC proved to be better than the other three methods when relatively small sample sizes were considered. With larger sample sizes, the AIC, BIC, and LRT presented the best performances for GEV models which have nonstationary location and/or scale parameters, respectively. The simulation results were then evaluated by applying all four tests to annual maximum rainfall data of selected sites, as observed by the Korea Meteorological Administration.
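
A sketch of the kind of comparison involved, under simplifying assumptions: a stationary GEV versus a GEV with a linear trend in the location parameter, fitted by maximum likelihood on synthetic annual maxima and compared with AIC and the LRT. The trend, parameter values, and sample size are invented:

```python
import numpy as np
from scipy import stats, optimize

rng = np.random.default_rng(3)

# Synthetic annual maxima with a linear trend in the location parameter
# (scipy's shape c is minus the usual GEV shape parameter xi).
years = np.arange(50)
x = stats.genextreme.rvs(c=-0.1, loc=40 + 0.3 * years, scale=8,
                         random_state=rng)

# Stationary GEV fit (3 parameters).
c0, loc0, sc0 = stats.genextreme.fit(x)
ll0 = stats.genextreme.logpdf(x, c0, loc0, sc0).sum()

# Nonstationary fit with mu(t) = mu0 + mu1 * t (4 parameters).
def nll(theta):
    c, mu0, mu1, sc = theta
    if sc <= 0:
        return np.inf
    return -stats.genextreme.logpdf(x, c, mu0 + mu1 * years, sc).sum()

res = optimize.minimize(nll, [c0, loc0, 0.0, sc0], method="Nelder-Mead")
ll1 = -res.fun

aic0, aic1 = -2 * ll0 + 2 * 3, -2 * ll1 + 2 * 4
p_lrt = stats.chi2.sf(2 * (ll1 - ll0), df=1)   # nested-model LRT
print(round(aic0, 1), round(aic1, 1), f"LRT p = {p_lrt:.4f}")
```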

  8. Efficiency of model selection criteria in flood frequency analysis

    Science.gov (United States)

    Calenda, G.; Volpi, E.

    2009-04-01

    The estimation of high flood quantiles requires the extrapolation of the probability distributions far beyond the usual sample length, involving high estimation uncertainties. The choice of the probability law, traditionally based on hypothesis testing, is critical on this point. In this study the efficiency of different model selection criteria, seldom applied in flood frequency analysis, is investigated. The efficiency of each criterion in identifying the probability distribution of the hydrological extremes is evaluated by numerical simulations for different parent distributions, coefficients of variation and skewness, and sample sizes. The compared model selection procedures are the Akaike Information Criterion (AIC), the Bayesian Information Criterion (BIC), the Anderson Darling Criterion (ADC) recently discussed by Di Baldassarre et al. (2008) and the Sample Quantile Criterion (SQC), recently proposed by the authors (Calenda et al., 2009). The SQC is based on the principle of maximising the probability density of the elements of the sample that are considered relevant to the problem, and takes into account both the accuracy and the uncertainty of the estimate. Since the stress is mainly on extreme events, the SQC involves upper-tail probabilities, where the effect of the model assumption is more critical. The proposed index is equal to the sum of logarithms of the inverse of the sample probability density of the observed quantiles. The definition of this index is based on the principle that the more centred the sample value is with respect to its density distribution (accuracy of the estimate) and the less spread this distribution is (uncertainty of the estimate), the greater the probability density of the sample quantile. Thus, lower values of the index indicate a better performance of the distribution law. This criterion can operate the selection of the optimum distribution among competing probability models that are estimated using different samples. The…
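
One plausible reading of the SQC definition, sketched below with synthetic data: the index is computed as minus the summed log densities of the largest observed order statistics, using the standard order-statistic density under each fitted candidate distribution; lower values indicate the better model. This is an illustration, not the authors' exact implementation:

```python
import numpy as np
from scipy import stats
from scipy.special import betaln

def sqc(sample, dist, upper=5):
    """Sum over the `upper` largest order statistics of
    -log f_(i)(x_(i)), with f_(i) the order-statistic density under
    the candidate distribution; lower values favour the model."""
    x = np.sort(sample)
    n = len(x)
    total = 0.0
    for i in range(n - upper + 1, n + 1):     # 1-based ranks of top values
        xi = x[i - 1]
        log_fi = (-betaln(i, n - i + 1)        # log n!/((i-1)!(n-i)!)
                  + (i - 1) * dist.logcdf(xi)
                  + (n - i) * dist.logsf(xi)
                  + dist.logpdf(xi))
        total -= log_fi
    return total

peaks = stats.gumbel_r.rvs(loc=30, scale=10, size=60, random_state=4)
for name, frozen in [("gumbel", stats.gumbel_r(*stats.gumbel_r.fit(peaks))),
                     ("normal", stats.norm(*stats.norm.fit(peaks)))]:
    print(name, round(sqc(peaks, frozen), 2))
```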

  9. Ancestral process and diffusion model with selection

    CERN Document Server

    Mano, Shuhei

    2008-01-01

    The ancestral selection graph in population genetics introduced by Krone and Neuhauser (1997) is an analogue to the coalescent genealogy. The number of ancestral particles, backward in time, of a sample of genes is an ancestral process, which is a birth and death process with quadratic death and linear birth rates. In this paper an explicit form for the number of ancestral particles is obtained, by using the density of the allele frequency in the corresponding diffusion model obtained by Kimura (1955). It is shown that fixation corresponds to convergence of the ancestral process to its stationary measure. The time to fixation of an allele is studied in terms of the ancestral process.

  10. River water quality model no. 1 (RWQM1): III. Biochemical submodel selection

    DEFF Research Database (Denmark)

    Vanrolleghem, P.; Borchardt, D.; Henze, Mogens

    2001-01-01

    The new River Water Quality Model no. 1 introduced in the two accompanying papers by Shanahan et al. and Reichert et al. is comprehensive. Shanahan et al. introduced a six-step decision procedure to select the necessary model features for a certain application. This paper specifically addresses one…

  12. Procedure for identifying models for the heat dynamics of buildings

    DEFF Research Database (Denmark)

    Bacher, Peder; Madsen, Henrik

    This report describes a new method for obtaining detailed information about the heat dynamics of a building using frequent readings of the heat consumption. Such a procedure is considered to be of utmost importance as a key procedure for using readings from smart meters, which are expected to be installed in almost all buildings in the coming years.

  13. Improving randomness characterization through Bayesian model selection

    CERN Document Server

    R., Rafael Díaz-H; Martínez, Alí M Angulo; U'Ren, Alfred B; Hirsch, Jorge G; Marsili, Matteo; Castillo, Isaac Pérez

    2016-01-01

    Nowadays random number generation plays an essential role in technology with important applications in areas ranging from cryptography, which lies at the core of current communication protocols, to Monte Carlo methods, and other probabilistic algorithms. In this context, a crucial scientific endeavour is to develop effective methods that allow the characterization of random number generators. However, commonly employed methods either lack formality (e.g. the NIST test suite), or are inapplicable in principle (e.g. the characterization derived from the Algorithmic Theory of Information (ATI)). In this letter we present a novel method based on Bayesian model selection, which is both rigorous and effective, for characterizing randomness in a bit sequence. We derive analytic expressions for a model's likelihood which is then used to compute its posterior probability distribution. Our method proves to be more rigorous than NIST's suite and the Borel-Normality criterion and its implementation is straightforward. We...
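
A minimal sketch in the spirit of such Bayesian model comparison (not the authors' construction): the posterior probability that a bit sequence was produced by a biased coin rather than a fair one, using the closed-form Beta-Bernoulli marginal likelihood. The prior odds and sequence lengths are arbitrary:

```python
import numpy as np
from scipy.special import betaln

def posterior_biased(bits, prior_biased=0.5):
    """Posterior probability that a bit sequence comes from a biased
    coin (unknown p, uniform prior) rather than a fair coin."""
    n, k = len(bits), int(np.sum(bits))
    log_m_fair = n * np.log(0.5)               # P(data | p = 1/2)
    log_m_biased = betaln(k + 1, n - k + 1)    # integral p^k (1-p)^(n-k) dp
    log_odds = (np.log(prior_biased) - np.log(1 - prior_biased)
                + log_m_biased - log_m_fair)
    return 1.0 / (1.0 + np.exp(-log_odds))

rng = np.random.default_rng(5)
fair = rng.integers(0, 2, 1000)
biased = (rng.random(1000) < 0.55).astype(int)
print(round(posterior_biased(fair), 3), round(posterior_biased(biased), 3))
```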

  14. 34 CFR 79.6 - What procedures apply to the selection of programs and activities under these regulations?

    Science.gov (United States)

    2010-07-01

    34 CFR 79.6 (Education; Office of the Secretary, Department of Education; Intergovernmental Review of Department of Education Programs and Activities): What procedures apply to the selection of programs and activities under these regulations?

  15. Role of maturity timing in selection procedures and in the specialisation of playing positions in youth basketball

    NARCIS (Netherlands)

    te Wierike, Sanne Cornelia Maria; Elferink-Gemser, Marije Titia; Tromp, Eveline Jenny Yvonne; Vaeyens, Roel; Visscher, Chris

    2015-01-01

    This study investigated the role of maturity timing in selection procedures and in the specialisation of playing positions in youth male basketball. Forty-three talented Dutch players (14.66 ± 1.09 years) participated in this study. Maturity timing (age at peak height velocity), anthropometric, phy…

  17. Mimic expert judgement through automated procedure for selecting rainfall events responsible for shallow landslide: A statistical approach to validation

    Science.gov (United States)

    Vessia, Giovanna; Pisano, Luca; Vennari, Carmela; Rossi, Mauro; Parise, Mario

    2016-01-01

    This paper proposes an automated method for the selection of the rainfall duration, D, and cumulated rainfall, E, responsible for shallow landslide initiation. The method mimics an expert identifying D and E from rainfall records through a manual procedure whose rules are applied according to his or her judgement. The comparison between the two methods is based on 300 D-E pairs drawn from temporal rainfall data series recorded in a 30-day time lag before each landslide occurrence. Statistical tests, applied to the D and E samples treated both as paired and as independent values to verify whether they belong to the same population, show that the automated procedure is able to replicate the expert pairs drawn by expert judgement. Furthermore, a criterion based on cumulative distribution functions (CDFs) is proposed to select, among the six pairs drawn by the coded procedure, the D-E pair most closely related to the expert one for tracing the empirical rainfall threshold line.
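
A minimal sketch of how such an automated selection rule might look; the six-hour dry-gap rule that terminates an event is an assumed stand-in for the expert's actual criteria, and the rainfall record is invented:

```python
import numpy as np

def event_before(rain_mm, t_slide, dry_gap=6):
    """Walk backwards from the landslide hour and return (D, E) for the
    triggering rainfall event; `dry_gap` consecutive dry hours end the
    event (an assumed stand-in for the expert's rules)."""
    start, dry = t_slide, 0
    for t in range(t_slide, -1, -1):
        if rain_mm[t] > 0:
            start, dry = t, 0
        else:
            dry += 1
            if dry >= dry_gap:
                break
    D = t_slide - start + 1                        # duration (h)
    E = float(np.sum(rain_mm[start:t_slide + 1]))  # cumulated rainfall (mm)
    return D, E

rain = np.array([0, 0, 2, 5, 0, 8, 12, 3, 0, 1, 4, 0])  # hourly record
print(event_before(rain, t_slide=11))   # -> (10, 35.0)
```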

  18. A recruitment and selection process model: the case of the Department of Justice and Constitutional Development

    OpenAIRE

    Thebe, T P; 12330841 - Van der Waldt, Gerrit

    2014-01-01

    The purpose of this article is to report on findings of an empirical investigation conducted at the Department of Justice and Constitutional Development. The aim of the investigation was to ascertain the status of current practices and challenges regarding the processes and procedures utilised for recruitment and selection. Based on these findings the article further outlines the design of a comprehensive process model for human resource recruitment and selection for the Department. The model...

  19. Inflation Model Selection meets Dark Radiation

    CERN Document Server

    Tram, Thomas; Vennin, Vincent

    2016-01-01

    We investigate how inflation model selection is affected by the presence of additional free-streaming relativistic degrees of freedom, i.e. dark radiation. We perform a full Bayesian analysis of both inflation parameters and cosmological parameters taking reheating into account self-consistently. We compute the Bayesian evidence for a few representative inflation scenarios in both the standard $\\Lambda\\mathrm{CDM}$ model and an extension including dark radiation parametrised by its effective number of relativistic species $N_\\mathrm{eff}$. We find that the observational status of most inflationary models is unchanged, with the exception of potentials such as power-law inflation that predict a value for the scalar spectral index that is too large in $\\Lambda\\mathrm{CDM}$ but which can be accommodated when $N_\\mathrm{eff}$ is allowed to vary. In this case, cosmic microwave background data indicate that power-law inflation is one of the best models together with plateau potentials. However, contrary to plateau p...

  20. [The effect of selected physical procedures on mobility in women with rheumatoid arthritis].

    Science.gov (United States)

    Leśniewicz, Joanna; Pieszyński, Ireneusz; Zboralski, Krzysztof; Florkowski, Antoni

    2014-12-01

    Electrotherapy, including iontophoresis, and magnetic field therapy are among the most commonly used physical procedures in the treatment of rheumatoid arthritis (RA). The aim was to evaluate the effect of iontophoresis and magnetic field procedures on the intensity and frequency of pain sensation, the use of analgesics and the limitation of knee joint mobility, and to compare the analgesic effects of the applied procedures. The study included a group of 60 female patients affected by RA with knee joint pain. Patients were randomly assigned to three equally sized groups. Group I was subjected to 20 iontophoresis procedures, group II underwent 20 procedures with magnetic field, and group III was treated with 20 procedures combining both iontophoresis and magnetic field. Each iontophoresis procedure lasted 20 minutes, whereas the magnetic field procedure took 30 minutes. All study participants were evaluated for pain sensation before and after the treatment with the VAS (Visual Analogue Scale) and the Laitinen scale. After the 4-week therapy, all three groups showed a statistically significant decrease in pain perception on the VAS scale and on all domains of the Laitinen scale, excluding the limitation-of-physical-activity criterion. The comparative evaluation of statistically significant differences between the groups after the therapy revealed a marked decrease in pain perception in groups I and III compared with group II. There were no significant differences between groups I and III. Iontophoresis and magnetic field treatments demonstrate effective analgesic properties in female patients with rheumatoid arthritis. The conducted studies showed the highest analgesic effects for both treatments used.

  1. Using GOMS models and hypertext to create representations of medical procedures for online display

    Science.gov (United States)

    Gugerty, Leo; Halgren, Shannon; Gosbee, John; Rudisill, Marianne

    1991-01-01

    This study investigated two methods to improve the organization and presentation of computer-based medical procedures. A literature review suggested that the GOMS (goals, operators, methods, and selection rules) model can assist in rigorous task analysis, which can then help generate initial design ideas for the human-computer interface. GOMS models are hierarchical in nature, so this study also investigated the effect of hierarchical, hypertext interfaces. We used a 2 x 2 between-subjects design with the following independent variables: procedure organization - GOMS-model based vs. medical-textbook based; navigation type - hierarchical vs. linear (booklike). After naive subjects studied the online procedures, measures were taken of their memory for the content and the organization of the procedures. This design was repeated for two medical procedures. For one procedure, subjects who studied GOMS-based and hierarchical procedures remembered more about the procedures than other subjects. The results for the other procedure were less clear. However, data for both procedures showed a 'GOMSification effect': when asked to do a free recall of a procedure, subjects who had studied a textbook procedure often recalled key information in a location inconsistent with the procedure they actually studied, but consistent with the GOMS-based procedure.

  2. Forecasting macroeconomic variables using neural network models and three automated model selection techniques

    DEFF Research Database (Denmark)

    Kock, Anders Bredahl; Teräsvirta, Timo

    2016-01-01

    When forecasting with neural network models one faces several problems, all of which influence the accuracy of the forecasts. First, neural networks are often hard to estimate due to their highly nonlinear structure. To alleviate the problem, White (2006) presented a solution (QuickNet) that converts the specification and nonlinear estimation problem into a linear model selection and estimation problem. We shall compare its performance to that of two other procedures building on the linearization idea: the Marginal Bridge Estimator and Autometrics. Second, one must decide whether forecasting...

  3. High-dimensional model estimation and model selection

    CERN Document Server

    CERN. Geneva

    2015-01-01

    I will review concepts and algorithms from high-dimensional statistics for linear model estimation and model selection. I will particularly focus on the so-called p>>n setting where the number of variables p is much larger than the number of samples n. I will focus mostly on regularized statistical estimators that produce sparse models. Important examples include the LASSO and its matrix extension, the Graphical LASSO, and more recent non-convex methods such as the TREX. I will show the applicability of these estimators in a diverse range of scientific applications, such as sparse interaction graph recovery and high-dimensional classification and regression problems in genomics.
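
    A minimal, hedged example of the p >> n setting the talk describes, using scikit-learn's cross-validated LASSO on synthetic data (all sizes and coefficients below are invented for illustration):

    ```python
    import numpy as np
    from sklearn.linear_model import LassoCV

    # p >> n toy problem: 50 samples, 500 predictors, 5 truly active ones.
    rng = np.random.default_rng(0)
    n, p = 50, 500
    X = rng.standard_normal((n, p))
    beta = np.zeros(p)
    beta[:5] = [3.0, -2.0, 1.5, -1.0, 2.5]
    y = X @ beta + 0.5 * rng.standard_normal(n)

    # Cross-validated LASSO picks the regularization strength and a sparse model.
    model = LassoCV(cv=5).fit(X, y)
    selected = np.flatnonzero(model.coef_)
    print(f"alpha = {model.alpha_:.4f}, selected predictors: {selected}")
    ```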

  4. Fuzzy modelling for selecting headgear types.

    Science.gov (United States)

    Akçam, M Okan; Takada, Kenji

    2002-02-01

    The purpose of this study was to develop a computer-assisted inference model for selecting appropriate types of headgear appliance for orthodontic patients and to investigate its clinical versatility as a decision-making aid for inexperienced clinicians. Fuzzy rule bases were created for degrees of overjet, overbite, and mandibular plane angle variables, respectively, according to subjective criteria based on the clinical experience and knowledge of the authors. The rules were then transformed into membership functions, and geometric mean aggregation was performed to develop the inference model. The resultant fuzzy logic was then tested on 85 cases in which the patients had been diagnosed as requiring headgear appliances. Eight experienced orthodontists judged each of the cases and decided whether they 'agreed', 'accepted', or 'disagreed' with the recommendations of the computer system. Intra-examiner agreement was investigated using repeated judgements of a set of 30 orthodontic cases and the kappa statistic. All of the examiners exceeded a kappa score of 0.7, allowing them to participate in the test run of the validity of the proposed inference model. The examiners' agreement with the system's recommendations was evaluated statistically. The average satisfaction rate of the examiners was 95.6 per cent, and in 83 of the 85 cases (97.6 per cent) the majority of the examiners (i.e. six or more of the eight) were satisfied with the recommendations of the system. Thus, the usefulness of the proposed inference logic was confirmed.
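
    A toy flavour of membership functions plus geometric-mean aggregation (the membership ranges below are invented; the paper's actual rule bases encode the authors' clinical judgement and are not reproduced here):

    ```python
    def tri(x, a, b, c):
        """Triangular membership function peaking at b, zero outside [a, c]."""
        return max(0.0, min((x - a) / (b - a), (c - x) / (c - b)))

    # Hypothetical membership definitions for three cephalometric variables.
    def headgear_suitability(overjet_mm, overbite_mm, mp_angle_deg):
        m_overjet = tri(overjet_mm, 3, 7, 12)      # "large overjet"
        m_overbite = tri(overbite_mm, 2, 5, 9)     # "deep overbite"
        m_mpangle = tri(mp_angle_deg, 15, 25, 35)  # "average mandibular plane angle"
        # Geometric mean aggregation combines the three degrees into one score.
        return (m_overjet * m_overbite * m_mpangle) ** (1.0 / 3.0)

    print(f"suitability = {headgear_suitability(6.5, 4.0, 27.0):.2f}")
    ```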

  5. SLAM: A Connectionist Model for Attention in Visual Selection Tasks.

    Science.gov (United States)

    Phaf, R. Hans; And Others

    1990-01-01

    The SeLective Attention Model (SLAM) performs visual selective attention tasks and demonstrates that object selection and attribute selection are both necessary and sufficient for visual selection. The SLAM is described, particularly with regard to its ability to represent an individual subject performing filtering tasks. (TJH)

  6. Analysis of routine and novel sperm selection methods for the treatment of infertile patients undergoing ICSI procedure

    Directory of Open Access Journals (Sweden)

    Leila Azadi

    2014-06-01

    Full Text Available Background: Successful outcome of Assisted Reproductive Techniques (ART) depends on many factors, such as sperm preparation methods, embryo quality, and factors that may affect implantation. In Intra-cytoplasmic Sperm Injection (ICSI), sperm selection is mainly based on sperm motility and morphology; however, studies have revealed that these parameters cannot guarantee genomic health. Recently, researchers have focused on new sperm selection methods based on cellular and molecular properties. Therefore, the aim of this review article was to introduce the routine and novel sperm selection methods, the advantages and disadvantages of these methods, and their clinical analysis. Methods: Papers related to routine and novel sperm selection methods based on function tests and clinical outcomes were retrieved from PubMed and Entrez databases and other ISI-related databases. Results: Novel sperm selection methods which are based on selection of a single sperm (like IMSI) and on the ability of sperm to bind to the zona are time-consuming and costly. In addition, assessment of DNA fragmentation is difficult for the selected sperm. However, methods that select a population of spermatozoa, like Zeta, are less time-consuming and suitable for assessment of sperm chromatin integrity. Conclusion: In clinical applications, simultaneous use of traditional and novel approaches may improve ICSI outcome. However, further studies are needed to select an appropriate sperm selection procedure.

  7. Estimation and Model Selection for Model-Based Clustering with the Conditional Classification Likelihood

    CERN Document Server

    Baudry, Jean-Patrick

    2012-01-01

    The Integrated Completed Likelihood (ICL) criterion was proposed by Biernacki et al. (2000) in the model-based clustering framework to select a relevant number of classes, and has been used by statisticians in various application areas. A theoretical study of this criterion is proposed. A contrast related to the clustering objective is introduced: the conditional classification likelihood. This yields an estimator and a class of model selection criteria. The properties of these new procedures are studied, and ICL is proved to be an approximation of one of these criteria. We contrast these results with the currently prevailing view of ICL, namely that it is not consistent. Moreover, these results give insights into the class notion underlying ICL and feed a reflection on the class notion in clustering. General results on penalized minimum-contrast criteria and on mixture models are derived, which are interesting in their own right.
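
    A common computable form of ICL, sketched with scikit-learn Gaussian mixtures on toy data (the paper's theoretical treatment is far more general): in scikit-learn's "lower is better" convention, ICL equals BIC plus twice the entropy of the soft class assignments, which penalizes poorly separated classes.

    ```python
    import numpy as np
    from sklearn.mixture import GaussianMixture

    # Toy data: two well-separated Gaussian clusters.
    rng = np.random.default_rng(1)
    X = np.vstack([rng.normal(-3, 1, (150, 2)), rng.normal(3, 1, (150, 2))])

    def icl(gm, X):
        """ICL = BIC + 2 * entropy of the posterior class probabilities."""
        tau = gm.predict_proba(X)
        entropy = -np.sum(tau * np.log(np.clip(tau, 1e-12, None)))
        return gm.bic(X) + 2.0 * entropy

    for k in range(1, 5):
        gm = GaussianMixture(n_components=k, random_state=0).fit(X)
        print(f"K={k}: BIC={gm.bic(X):.1f}, ICL={icl(gm, X):.1f}")
    ```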

  8. Assessing the accuracy and stability of variable selection methods for random forest modeling in ecology.

    Science.gov (United States)

    Fox, Eric W; Hill, Ryan A; Leibowitz, Scott G; Olsen, Anthony R; Thornbrugh, Darren J; Weber, Marc H

    2017-07-01

    Random forest (RF) modeling has emerged as an important statistical learning method in ecology due to its exceptional predictive performance. However, for large and complex ecological data sets, there is limited guidance on variable selection methods for RF modeling. Typically, either a preselected set of predictor variables is used, or stepwise procedures are employed which iteratively remove variables according to their importance measures. This paper investigates the application of variable selection methods to RF models for predicting probable biological stream condition. Our motivating data set consists of the good/poor condition of n = 1365 stream survey sites from the 2008/2009 National Rivers and Streams Assessment, and a large set (p = 212) of landscape features from the StreamCat data set as potential predictors. We compare two types of RF models: a full variable set model with all 212 predictors, and a reduced variable set model selected using a backward elimination approach. We assess model accuracy using RF's internal out-of-bag estimate and a cross-validation procedure with validation folds external to the variable selection process. We also assess the stability of the spatial predictions generated by the RF models to changes in the number of predictors, and argue that model selection needs to consider both accuracy and stability. The results suggest that RF modeling is robust to the inclusion of many variables of moderate to low importance. We found no substantial improvement in cross-validated accuracy as a result of variable reduction. Moreover, the backward elimination procedure tended to select too few variables and exhibited numerous issues, such as upwardly biased out-of-bag accuracy estimates and instabilities in the spatial predictions. We use simulations to further support and generalize the results from the analysis of real data. A main purpose of this work is to elucidate issues of model selection bias and instability to ecologists interested in
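
    A minimal sketch of the backward-elimination mechanics the abstract refers to (function name, drop fraction and data are invented; note the abstract's own caveat that the OOB score shown here becomes optimistically biased when reused inside the selection loop, which is why external cross-validation is preferable):

    ```python
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier

    # Toy stand-in for the stream-condition data: n sites, p candidate predictors.
    X, y = make_classification(n_samples=500, n_features=50, n_informative=8,
                               random_state=0)

    def backward_elimination(X, y, drop_frac=0.2, min_vars=5):
        """Iteratively drop the least important fraction of variables,
        refitting the forest each round (one common RF selection scheme)."""
        keep = np.arange(X.shape[1])
        while len(keep) > min_vars:
            rf = RandomForestClassifier(n_estimators=300, oob_score=True,
                                        random_state=0).fit(X[:, keep], y)
            print(f"{len(keep):3d} vars, OOB accuracy = {rf.oob_score_:.3f}")
            order = np.argsort(rf.feature_importances_)  # ascending importance
            n_drop = max(1, int(drop_frac * len(keep)))
            keep = keep[np.sort(order[n_drop:])]
        return keep

    print("retained:", backward_elimination(X, y))
    ```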

  9. A rapid and robust selection procedure for generating drug-selectable marker-free recombinant malaria parasites.

    Science.gov (United States)

    Manzoni, Giulia; Briquet, Sylvie; Risco-Castillo, Veronica; Gaultier, Charlotte; Topçu, Selma; Ivănescu, Maria Larisa; Franetich, Jean-François; Hoareau-Coudert, Bénédicte; Mazier, Dominique; Silvie, Olivier

    2014-04-23

    Experimental genetics have been widely used to explore the biology of the malaria parasites. The rodent parasites Plasmodium berghei and less frequently P. yoelii are commonly utilised, as their complete life cycle can be reproduced in the laboratory and because they are genetically tractable via homologous recombination. However, due to the limited number of drug-selectable markers, multiple modifications of the parasite genome are difficult to achieve and require large numbers of mice. Here we describe a novel strategy that combines positive-negative drug selection and flow cytometry-assisted sorting of fluorescent parasites for the rapid generation of drug-selectable marker-free P. berghei and P. yoelii mutant parasites expressing a GFP or a GFP-luciferase cassette, using minimal numbers of mice. We further illustrate how this new strategy facilitates phenotypic analysis of genetically modified parasites by fluorescence and bioluminescence imaging of P. berghei mutants arrested during liver stage development.

  10. A competency based selection procedure for Dutch postgraduate GP training: a pilot study on validity and reliability.

    Science.gov (United States)

    Vermeulen, Margit I; Tromp, Fred; Zuithoff, Nicolaas P A; Pieters, Ron H M; Damoiseaux, Roger A M J; Kuyvenhoven, Marijke M

    2014-12-01

    Background: Historically, semi-structured interviews (SSI) have been the core of the Dutch selection for postgraduate general practice (GP) training. This paper describes a pilot study on a newly designed competency-based selection procedure that assesses whether candidates have the competencies that are required to complete GP training. The objective was to explore reliability and validity aspects of the instruments developed. The new selection procedure, comprising the National GP Knowledge Test (LHK), a situational judgement test (SJT), a patterned behaviour descriptive interview (PBDI) and a simulated encounter (SIM), was piloted alongside the current procedure. Forty-seven candidates volunteered in both procedures. The admission decision was based on the results of the current procedure. Study participants hardly differed from the other candidates. The mean scores of the candidates on the LHK and SJT were 21.9% (SD 8.7) and 83.8% (SD 3.1), respectively. The mean self-reported competency scores (PBDI) were higher than the observed competencies (SIM): 3.7 (SD 0.5) and 2.9 (SD 0.6), respectively. Content-related competencies showed low correlations with one another when measured with different instruments, whereas more diverse competencies measured by a single instrument showed strong to moderate correlations. Moreover, a moderate correlation between LHK and SJT was found. The internal consistencies (intraclass correlation, ICC) of LHK and SJT were poor, while the ICC of PBDI and SIM showed acceptable levels of reliability. Findings on content validity and reliability of these new instruments are promising for realizing a competency-based procedure. Further development of the instruments and research on predictive validity should be pursued.

  11. A visual graphic/haptic rendering model for hysteroscopic procedures.

    Science.gov (United States)

    Lim, Fabian; Brown, Ian; McColl, Ryan; Seligman, Cory; Alsaraira, Amer

    2006-03-01

    Hysteroscopy is a widely used option for evaluating and treating women with infertility. The procedure utilises an endoscope, inserted through the vagina and cervix, to examine the intra-uterine cavity via a monitor. The difficulty of hysteroscopy from the surgeon's perspective lies in the visual-spatial perception needed to interpret 3D images on a 2D monitor, and in the associated psychomotor skills needed to overcome the fulcrum effect. Despite the widespread use of this procedure, currently qualified hysteroscopy surgeons have not been taught the fundamentals through an organised curriculum. The emergence of virtual reality as an educational tool for this procedure, and for other endoscopic procedures, has undoubtedly raised interest. The ultimate objective is the inclusion of virtual reality training as a mandatory component of gynaecologic endoscopy training. Part of this process involves the design of a simulator encompassing the technical difficulties and complications associated with the procedure. The proposed research examines fundamental hysteroscopy factors and current training and accreditation, and proposes a hysteroscopic simulator design that is suitable for educating and training.

  12. Full-Wave Analysis of Stable Cross Fractal Frequency Selective Surfaces Using an Iterative Procedure Based on Wave Concept

    Directory of Open Access Journals (Sweden)

    V. P. Silva Neto

    2015-01-01

    Full Text Available This work presents a full-wave analysis of stable frequency selective surfaces (FSSs) composed of periodic arrays of cross fractal patch elements. The shapes of these patch elements are defined according to a fractal concept, where the generator fractal geometry is successively subdivided into parts which are smaller copies of the previous ones (defined as fractal levels). The main objective of this work is to investigate the performance of FSSs with cross fractal patch element geometries, including their frequency response and stability with respect to both the angle of incidence and the polarization of the plane wave. The frequency response of the FSS structures is obtained using the wave concept iterative procedure (WCIP). This method is based on a wave concept formulation and the boundary conditions for the FSS structure. Prototypes were manufactured and measured to verify the WCIP model accuracy. A good agreement between WCIP and measured results was observed for the proposed cross fractal FSSs. In addition, these FSSs exhibited good angular stability.

  13. A Comparison of Exposure Control Procedures in CATs Using the 3PL Model

    Science.gov (United States)

    Leroux, Audrey J.; Lopez, Myriam; Hembry, Ian; Dodd, Barbara G.

    2013-01-01

    This study compares the progressive-restricted standard error (PR-SE) exposure control procedure to three commonly used procedures in computerized adaptive testing, the randomesque, Sympson-Hetter (SH), and no exposure control methods. The performance of these four procedures is evaluated using the three-parameter logistic model under the…

  14. Hidden Markov Model for Stock Selection

    Directory of Open Access Journals (Sweden)

    Nguyet Nguyen

    2015-10-01

    Full Text Available The hidden Markov model (HMM) is typically used to predict the hidden regimes of observation data. Therefore, this model finds applications in many different areas, such as speech recognition systems, computational molecular biology and financial market predictions. In this paper, we use the HMM for stock selection. We first use the HMM to make monthly regime predictions for four macroeconomic variables: inflation (consumer price index, CPI), the industrial production index (INDPRO), the stock market index (S&P 500) and market volatility (VIX). At the end of each month, we calibrate the HMM's parameters for each of these economic variables and predict its regimes for the next month. We then look back into historical data to find the time periods for which the four variables had regimes similar to the forecasted regimes. Within those similar periods, we analyze all of the S&P 500 stocks to identify which stock characteristics were well rewarded during those time periods, and assign scores and corresponding weights for each of the stock characteristics. A composite score for each stock is calculated based on the scores and weights of its features. Based on this algorithm, we choose the 50 top-ranking stocks to buy. We compare the performance of the portfolio with the benchmark index, the S&P 500. With an initial investment of $100 in December 1999, over 15 years, in December 2014, our portfolio had an average gain per annum of 14.9% versus 2.3% for the S&P 500.
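
    A minimal sketch of the regime-calibration step for one variable, assuming the third-party hmmlearn package (the series and regime count are invented; the paper calibrates four macro variables monthly):

    ```python
    import numpy as np
    from hmmlearn.hmm import GaussianHMM  # assumes hmmlearn is installed

    # Toy monthly series standing in for one macro variable (e.g. VIX changes).
    rng = np.random.default_rng(2)
    x = np.concatenate([rng.normal(0.0, 1.0, 120),   # calm regime
                        rng.normal(0.0, 3.0, 60)])   # volatile regime
    X = x.reshape(-1, 1)

    # Calibrate a two-regime HMM and read off the regime of the latest month.
    hmm = GaussianHMM(n_components=2, covariance_type="diag", n_iter=200,
                      random_state=0).fit(X)
    regimes = hmm.predict(X)
    print("current regime:", regimes[-1])
    print("regime variances:", hmm.covars_.ravel())
    ```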

  15. Procedure to Determine Coefficients for the Sandia Array Performance Model (SAPM)

    Energy Technology Data Exchange (ETDEWEB)

    King, Bruce Hardison; Hansen, Clifford; Riley, Daniel; Robinson, Charles David; Pratt, Larry

    2016-06-01

    The Sandia Array Performance Model (SAPM), a semi-empirical model for predicting PV system power, has been in use for more than a decade. While several studies have presented comparisons of measurements and analysis results among laboratories, detailed procedures for determining model coefficients have not yet been published. Independent test laboratories must develop in-house procedures to determine SAPM coefficients, which contributes to uncertainty in the resulting models. Here we present a standard procedure for calibrating the SAPM using outdoor electrical and meteorological measurements. Analysis procedures are illustrated with data measured outdoors for a 36-cell silicon photovoltaic module.
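
    One small ingredient of such a calibration, sketched with synthetic outdoor data (the full SAPM procedure fits many more coefficients; the short-circuit-current relation below follows the published SAPM form, but every number here is invented):

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    # SAPM short-circuit current: Isc = Isc0 * Ee * (1 + alpha * (Tc - 25)).
    def sapm_isc(X, isc0, alpha):
        ee, tc = X
        return isc0 * ee * (1.0 + alpha * (tc - 25.0))

    rng = np.random.default_rng(3)
    ee = rng.uniform(0.2, 1.1, 200)    # effective irradiance (suns)
    tc = rng.uniform(15.0, 65.0, 200)  # cell temperature (deg C)
    isc = sapm_isc((ee, tc), 9.1, 0.0005) + rng.normal(0, 0.02, 200)

    (isc0_hat, alpha_hat), _ = curve_fit(sapm_isc, (ee, tc), isc, p0=[9.0, 0.0])
    print(f"Isc0 = {isc0_hat:.3f} A, alpha = {alpha_hat:.5f} 1/degC")
    ```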

  16. A rapid and robust selection procedure for generating drug-selectable marker-free recombinant malaria parasites

    OpenAIRE

    2014-01-01

    Experimental genetics have been widely used to explore the biology of the malaria parasites. The rodent parasites Plasmodium berghei and less frequently P. yoelii are commonly utilised, as their complete life cycle can be reproduced in the laboratory and because they are genetically tractable via homologous recombination. However, due to the limited number of drug-selectable markers, multiple modifications of the parasite genome are difficult to achieve and require lar...

  17. General model selection estimation of a periodic regression with a Gaussian noise

    CERN Document Server

    Konev, Victor; 10.1007/s10463-008-0193-1

    2010-01-01

    This paper considers the problem of estimating a periodic function in a continuous-time regression model with an additive stationary Gaussian noise having an unknown correlation function. A general model selection procedure on the basis of arbitrary projective estimates, which does not require knowledge of the noise correlation function, is proposed. A non-asymptotic upper bound for the quadratic risk (oracle inequality) has been derived under mild conditions on the noise. For the Ornstein-Uhlenbeck noise, the risk upper bound is shown to be uniform in the nuisance parameter. In the case of Gaussian white noise, the constructed procedure has some advantages over the procedure based on least squares estimates (LSE). The asymptotic minimaxity of the estimates has been proved. The proposed model selection scheme is also extended to the estimation problem based on discrete data, applicable to situations where high-frequency sampling cannot be provided.

  18. A Survey of Procedural Methods for Terrain Modelling

    NARCIS (Netherlands)

    Smelik, R.M.; Kraker, J.K. de; Groenewegen, S.A.; Tutenel, T.; Bidarra, R.

    2009-01-01

    Procedural methods are a promising but underused alternative to manual content creation. Commonly heard drawbacks are the randomness of and the lack of control over the output and the absence of integrated solutions, although more recent publications increasingly address these issues. This paper sur

  19. Automated Neuropsychological Assessment Metrics, Version 4 (ANAM4): Examination of Select Psychometric Properties and Administration Procedures

    Science.gov (United States)

    2016-12-01

    Award Number: W81XWH-08-1-0021. Title: Automated Neuropsychological Assessment Metrics, Version 4 (ANAM4): Examination of Select Psychometric Properties and Administration Procedures. The primary objective of this project is to examine select psychometric and administration properties of the ANAM4. Four studies were

  20. Procedure for assessing the performance of a rockfall fragmentation model

    Science.gov (United States)

    Matas, Gerard; Lantada, Nieves; Corominas, Jordi; Gili, Josep Antoni; Ruiz-Carulla, Roger; Prades, Albert

    2017-04-01

    Rockfall is a mass instability process frequently observed in road cuts, open pit mines and quarries, steep slopes and cliffs. It is frequently observed that the detached rock mass becomes fragmented when it impacts the slope surface. Consideration of the fragmentation of the rockfall mass is critical for the calculation of block trajectories and their impact energies, to further assess their potential to cause damage and to design adequate preventive structures. We present here the performance of the RockGIS model, a GIS-based tool that stochastically simulates the fragmentation of rockfalls, based on a lumped-mass approach. In RockGIS, fragmentation is initiated by the disaggregation of the detached rock mass through the pre-existing discontinuities just before the impact with the ground. An energy threshold is defined in order to determine whether the impacting blocks break or not. The distribution of the initial mass between a set of newly generated rock fragments is carried out stochastically following a power law. The trajectories of the new rock fragments are distributed within a cone. The model requires the calibration of both the runout of the resultant blocks and the spatial distribution of the volumes of fragments generated by breakage during their propagation. As this is a coupled process controlled by several parameters, a set of performance criteria to be met by the simulation has been defined. The criteria include: the position of the centre of gravity of the whole block distribution, the histogram of the runout of the blocks, the extent and boundaries of the young debris cover over the slope surface, the lateral dispersion of trajectories, the total number of blocks generated after fragmentation, the volume distribution of the generated fragments, the number of blocks and volume passages past a reference line, and the maximum runout distance. Since the number of parameters to fit increases significantly when considering fragmentation, the
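
    A toy sketch of the stochastic power-law splitting step (the exponent and sizes are invented; RockGIS calibrates such parameters against field fragment-size distributions):

    ```python
    import numpy as np

    def fragment_volumes(v_block, n_frags, exponent=1.7, rng=None):
        """Stochastically split a block of volume v_block into n_frags pieces
        whose sizes follow a power law, then rescale so volume is conserved."""
        rng = rng or np.random.default_rng()
        # Inverse-CDF draw from a Pareto-type law, then normalize to v_block.
        raw = (1.0 - rng.random(n_frags)) ** (-1.0 / (exponent - 1.0))
        return v_block * raw / raw.sum()

    v = fragment_volumes(v_block=2.5, n_frags=8, rng=np.random.default_rng(4))
    print(np.round(np.sort(v)[::-1], 3), "sum =", round(v.sum(), 3))
    ```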

  1. The Tabu Search Procedure: An Alternative to the Variable Selection Methods

    Science.gov (United States)

    Mills, Jamie D.; Olejnik, Stephen F.; Marcoulides, George A.

    2005-01-01

    The effectiveness of the Tabu variable selection algorithm, to identify predictor variables related to a criterion variable, is compared with the stepwise variable selection method and the all possible regression approach. Considering results obtained from previous research, Tabu is more successful in identifying relevant variables than the…
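
    A minimal sketch of tabu-style subset search for regression variable selection (toy data; the actual Tabu algorithm studied in the paper uses richer memory structures): each move flips one variable in or out of the model, and recently flipped variables are temporarily forbidden, letting the search escape local optima.

    ```python
    import numpy as np
    from sklearn.datasets import make_regression
    from sklearn.linear_model import LinearRegression

    X, y = make_regression(n_samples=200, n_features=12, n_informative=4,
                           noise=10.0, random_state=0)

    def adj_r2(subset):
        """Adjusted R^2 of an OLS fit on the given predictor subset."""
        if not subset:
            return -np.inf
        Xs = X[:, sorted(subset)]
        r2 = LinearRegression().fit(Xs, y).score(Xs, y)
        n, k = X.shape[0], len(subset)
        return 1 - (1 - r2) * (n - 1) / (n - k - 1)

    def tabu_select(n_iter=50, tenure=5):
        current, best, best_score, tabu = set(), set(), -np.inf, {}
        for it in range(n_iter):
            moves = [(j, adj_r2(current ^ {j})) for j in range(X.shape[1])
                     if tabu.get(j, -1) < it]          # skip tabu moves
            j, score = max(moves, key=lambda m: m[1])  # best admissible move
            current ^= {j}
            tabu[j] = it + tenure                      # forbid reversing soon
            if score > best_score:
                best, best_score = set(current), score
        return sorted(best), best_score

    subset, score = tabu_select()
    print("selected:", subset, f"adjusted R^2 = {score:.3f}")
    ```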

  2. Selecting informative food items for compiling food-frequency questionnaires: Comparison of procedures

    NARCIS (Netherlands)

    Molag, M.L.; Vries, J.H.M. de; Duif, N.; Ocké, M.C.; Dagnelie, P.C.; Goldbohm, R.A.; Veer, P. van 't

    2010-01-01

    The authors automated the selection of foods in a computer system that compiles and processes tailored FFQ. For the selection of food items, several methods are available. The aim of the present study was to compare food lists made by MOM2, which identifies food items with highest between-person var

  3. On dynamic selection of households for direct marketing based on Markov chain models with memory

    NARCIS (Netherlands)

    Otter, Pieter W.

    2007-01-01

    A simple, dynamic selection procedure is proposed, based on conditional expected profits using Markov chain models with memory. The method is easy to apply; only frequencies and mean values have to be calculated or estimated. The method is empirically illustrated using a data set from a charitable
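
    The expected-profit selection rule in miniature (states, transition matrix, gift values and mailing cost are all invented for illustration): a household is selected when its conditional expected profit for the next period is positive.

    ```python
    import numpy as np

    # Hypothetical donor-behaviour states and one-step transition matrix:
    # state 0 = recently active, 1 = lapsing, 2 = inactive.
    P = np.array([[0.70, 0.25, 0.05],
                  [0.30, 0.50, 0.20],
                  [0.05, 0.25, 0.70]])
    mean_gift = np.array([40.0, 15.0, 2.0])  # expected revenue per state
    cost = 1.5                               # cost of one mailing

    expected_profit = P @ mean_gift - cost
    for s, ep in enumerate(expected_profit):
        print(f"state {s}: expected profit = {ep:6.2f} -> "
              f"{'mail' if ep > 0 else 'skip'}")
    ```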

  4. A bootstrap procedure to select hyperspectral wavebands related to tannin content

    NARCIS (Netherlands)

    Ferwerda, J.G.; Skidmore, A.K.; Stein, A.

    2006-01-01

    Detection of hydrocarbons in plants with hyperspectral remote sensing is hampered by overlapping absorption pits, while the `optimal' wavebands for detecting some surface characteristics (e.g. chlorophyll, lignin, tannin) may shift. We combined a phased regression with a bootstrap procedure to find

  5. Comparative studies on hearing aid selection and fitting procedures : a review of the literature

    NARCIS (Netherlands)

    Metselaar, Mick; Maat, Bert; Verschuure, Hans; Dreschler, Wouter A; Feenstra, Louw

    2008-01-01

    Although a large number of fitting procedures have been developed and are nowadays generally applied in modern hearing aid fitting technology, little is known about their effectiveness in comparison with each other. This paper argues the need for comparative validation studies on hearing aid fitting

  6. Evaluating procedural modelling for 3D models of informal settlements in urban design activities

    Directory of Open Access Journals (Sweden)

    Victoria Rautenbach

    2015-11-01

    Full Text Available Three-dimensional (3D) modelling and visualisation is one of the fastest growing application fields in geographic information science. 3D city models are being researched extensively for a variety of purposes and in various domains, including urban design, disaster management, education and computer gaming. These models typically depict urban business districts (downtown) or suburban residential areas. Despite informal settlements being a prevailing feature of many cities in developing countries, 3D models of informal settlements are virtually non-existent. 3D models of informal settlements could be useful in various ways, e.g. to gather information about the current environment in the informal settlements, to design upgrades, to communicate these and to educate inhabitants about environmental challenges. In this article, we describe the development of a 3D model of the Slovo Park informal settlement in the City of Johannesburg Metropolitan Municipality, South Africa. Instead of using time-consuming traditional manual methods, we followed the procedural modelling technique. Visualisation characteristics of 3D models of informal settlements are described, and the importance of each characteristic in urban design activities for informal settlement upgrades is assessed. Next, the visualisation characteristics of the Slovo Park model are evaluated. The results of the evaluation showed that the 3D model produced by the procedural modelling technique is suitable for urban design activities in informal settlements. The visualisation characteristics and their assessment are also useful as guidelines for developing 3D models of informal settlements. In future, we plan to empirically test the use of such 3D models in urban design projects in informal settlements.

  7. The detection of observations possibly influential for model selection

    NARCIS (Netherlands)

    Ph.H.B.F. Franses (Philip Hans)

    1991-01-01

    Model selection can involve several variables and selection criteria. A simple method to detect observations possibly influential for model selection is proposed. The potentials of this method are illustrated with three examples, each of which is taken from related studies.

  8. Model selection on solid ground: Rigorous comparison of nine ways to evaluate Bayesian model evidence.

    Science.gov (United States)

    Schöniger, Anneli; Wöhling, Thomas; Samaniego, Luis; Nowak, Wolfgang

    2014-12-01

    Bayesian model selection or averaging objectively ranks a number of plausible, competing conceptual models based on Bayes' theorem. It implicitly performs an optimal trade-off between performance in fitting available data and minimum model complexity. The procedure requires determining Bayesian model evidence (BME), which is the likelihood of the observed data integrated over each model's parameter space. The computation of this integral is highly challenging because it is as high-dimensional as the number of model parameters. Three classes of techniques to compute BME are available, each with its own challenges and limitations: (1) Exact and fast analytical solutions are limited by strong assumptions. (2) Numerical evaluation quickly becomes unfeasible for expensive models. (3) Approximations known as information criteria (ICs) such as the AIC, BIC, or KIC (Akaike, Bayesian, or Kashyap information criterion, respectively) yield contradicting results with regard to model ranking. Our study features a theory-based intercomparison of these techniques. We further assess their accuracy in a simplistic synthetic example where for some scenarios an exact analytical solution exists. In more challenging scenarios, we use a brute-force Monte Carlo integration method as reference. We continue this analysis with a real-world application of hydrological model selection. This is a first-time benchmarking of the various methods for BME evaluation against true solutions. Results show that BME values from ICs are often heavily biased and that the choice of approximation method substantially influences the accuracy of model ranking. For reliable model selection, bias-free numerical methods should be preferred over ICs whenever computationally feasible.
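
    A toy version of the brute-force Monte Carlo reference method mentioned above (synthetic data; real hydrological likelihoods are far more expensive): the evidence is the prior-weighted average likelihood, estimated by averaging the likelihood over draws from the prior, with a log-sum-exp trick for stability.

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(5)
    data = rng.normal(0.8, 1.0, 20)  # observed data (toy)

    def log_bme_mc(loglik, prior_sampler, n_draws=100_000):
        """Monte Carlo estimate of log BME = log E_prior[ p(data | theta) ]."""
        thetas = prior_sampler(n_draws)
        ll = np.array([loglik(t) for t in thetas])
        m = ll.max()
        return m + np.log(np.mean(np.exp(ll - m)))

    # Model 1: unknown mean with N(0, 2^2) prior; known unit variance.
    log_bme1 = log_bme_mc(lambda mu: np.sum(stats.norm.logpdf(data, mu, 1.0)),
                          lambda n: rng.normal(0.0, 2.0, n))
    # Model 0: fixed zero mean (no free parameter -> evidence is the likelihood).
    log_bme0 = np.sum(stats.norm.logpdf(data, 0.0, 1.0))
    print(f"log BME M1 = {log_bme1:.2f}, log BME M0 = {log_bme0:.2f}")
    ```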

  10. New Inference Procedures for Semiparametric Varying-Coefficient Partially Linear Cox Models

    Directory of Open Access Journals (Sweden)

    Yunbei Ma

    2014-01-01

    Full Text Available In biomedical research, one major objective is to identify risk factors and study their risk impacts, as this identification can help clinicians both to make proper decisions and to increase the efficiency of treatments and resource allocation. A two-step penalization-based procedure is proposed to select linear regression coefficients for the linear components and to identify significant nonparametric varying-coefficient functions for semiparametric varying-coefficient partially linear Cox models. It is shown that the resulting penalized estimators of the linear regression coefficients are asymptotically normal and have oracle properties, and that the resulting estimators of the varying-coefficient functions have optimal convergence rates. A simulation study and an empirical example are presented for illustration.

  11. A Long-Term Memory Competitive Process Model of a Common Procedural Error

    Science.gov (United States)

    2013-08-01

    A novel computational cognitive model explains human procedural error in terms of declarative memory processes. This is an early version of a process model intended to predict and explain multiple classes of procedural error a priori. We begin with postcompletion error (PCE), a type of systematic

  12. A Procedure for Building Product Models in Intelligent Agent-based OperationsManagement

    DEFF Research Database (Denmark)

    Hvam, Lars; Riis, Jesper; Malis, Martin

    2003-01-01

    This article presents a procedure for building product models to support the specification processes dealing with sales, design of product variants and production preparation. The procedure includes, as the first phase, an analysis and redesign of the business processes that are to be supported by product models. The next phase includes an analysis of the product assortment, and the set-up of a so-called product master. Finally, the product model is designed and implemented by using object-oriented modelling. The procedure is developed in order to ensure that the product models constructed are fit...

  13. A fast atlas pre-selection procedure for multi-atlas based brain segmentation.

    Science.gov (United States)

    Ma, Jingbo; Ma, Heather T; Li, Hengtong; Ye, Chenfei; Wu, Dan; Tang, Xiaoying; Miller, Michael; Mori, Susumu

    2015-01-01

    Multi-atlas based MR image segmentation has been recognized as a quantitative analysis approach for the brain. For this purpose, atlas databases keep increasing in size to cover the various anatomical characteristics of the human brain. Atlas pre-selection therefore becomes a necessary step for efficient and accurate automated segmentation of human brain images. In this study, we propose a method of atlas pre-selection for target image segmentation on the MRICloud platform, a state-of-the-art multi-atlas based segmentation tool. In the MRICloud pipeline, a segmentation of the lateral ventricle (LV) label is generated as an additional input to the segmentation pipeline. Under this circumstance, the similarity of the LV label between the target image and the atlases was adopted as the atlas ranking scheme. The Dice overlap coefficient was calculated and taken as the quantitative measure for atlas ranking. Segmentation results based on the proposed method were compared with those based on atlas pre-selection by mutual information (MI) between images. The final segmentation results showed that the accuracy of the proposed method was comparable to that of MI-based atlas pre-selection. However, the computation for atlas pre-selection was sped up about 20-fold compared to MI-based pre-selection. The proposed method provides promising assistance for the quantitative analysis of brain images.
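
    The ranking measure itself is simple; a minimal sketch with toy 1-D "masks" standing in for 3-D LV label volumes (atlas names invented):

    ```python
    import numpy as np

    def dice(a, b):
        """Dice overlap between two binary masks: 2|A & B| / (|A| + |B|)."""
        a, b = a.astype(bool), b.astype(bool)
        return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

    # Rank atlases by LV-label overlap with the target.
    target_lv = np.array([0, 1, 1, 1, 0, 0])
    atlas_lvs = {"atlas_A": np.array([0, 1, 1, 0, 0, 0]),
                 "atlas_B": np.array([0, 1, 1, 1, 1, 0]),
                 "atlas_C": np.array([1, 0, 0, 0, 1, 1])}
    ranking = sorted(atlas_lvs, key=lambda k: dice(target_lv, atlas_lvs[k]),
                     reverse=True)
    print(ranking)  # best-matching atlases first
    ```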

  14. New procedure of selected biogenic amines determination in wine samples by HPLC

    Energy Technology Data Exchange (ETDEWEB)

    Piasta, Anna M.; Jastrzębska, Aneta, E-mail: aj@chem.uni.torun.pl; Krzemiński, Marek P.; Muzioł, Tadeusz M.; Szłyk, Edward

    2014-06-27

    Highlights: • We proposed a new procedure for the derivatization of biogenic amines. • NMR and XRD analysis confirmed the purity and uniqueness of the derivatives. • Concentrations of biogenic amines in wine samples were analyzed by RP-HPLC. • Sample contamination and derivatization reaction interferences were minimized. - Abstract: A new procedure for the determination of the biogenic amines (BA) histamine, phenethylamine, tyramine and tryptamine, based on the derivatization reaction with 2-chloro-1,3-dinitro-5-(trifluoromethyl)benzene (CNBF), is proposed. The amine derivatives with CNBF were isolated and characterized by X-ray crystallography and ¹H, ¹³C, ¹⁹F NMR spectroscopy in solution. The novelty of the procedure lies in the pure and well-characterized products of the amine derivatization reaction. The method was applied to the simultaneous analysis of the above-mentioned biogenic amines in wine samples by reversed-phase high-performance liquid chromatography. The procedure showed correlation coefficients (R²) between 0.9997 and 0.9999, and linear ranges of 0.10–9.00 mg L⁻¹ (histamine), 0.10–9.36 mg L⁻¹ (tyramine), 0.09–8.64 mg L⁻¹ (tryptamine) and 0.10–8.64 mg L⁻¹ (phenethylamine), whereas accuracy was 97%–102% (recovery test). The detection limit of biogenic amines in wine samples was 0.02–0.03 mg L⁻¹, whereas the quantification limit ranged from 0.05 to 0.10 mg L⁻¹. The coefficients of variation for the analyzed amines ranged between 0.49% and 3.92%. The obtained BA derivatives enhanced separation of the analytes on chromatograms due to the inhibition of the hydrolysis reaction and the reduction of by-product formation.

  15. METHOD OF PHYSIOTHERAPY MEDICAL PROCEDURES FOR THERMAL IMPACT ON SELECTED AREAS WITH HUMAN HANDS THERMOELECTRIC DEVICES

    Directory of Open Access Journals (Sweden)

    A. B. Sulin

    2015-01-01

    Full Text Available A device for applying thermal stimulation to selected areas of the human hand, built on thermoelectric energy converters, is considered. Its advantages are noted: high environmental friendliness, silent operation, reliability, functionality and versatility. A technique for carrying out therapeutic (preventive) physiotherapeutic procedures on the human hand is described, consisting of contrasting thermal stimulation of a site with various levels of heating and cooling and with varying exposure durations.

  16. Comparison of Real World Energy Consumption to Models and Department of Energy Test Procedures

    Energy Technology Data Exchange (ETDEWEB)

    Goetzler, William [Navigant Consulting, Inc., Burlington, MA (United States); Sutherland, Timothy [Navigant Consulting, Inc., Burlington, MA (United States); Kar, Rahul [Navigant Consulting, Inc., Burlington, MA (United States); Foley, Kevin [Navigant Consulting, Inc., Burlington, MA (United States)

    2011-09-01

    This study investigated the real-world energy performance of appliances and equipment as it compared with models and test procedures. The study looked to determine whether the U.S. Department of Energy and industry test procedures actually replicate real world conditions, whether performance degrades over time, and whether installation patterns and procedures differ from the ideal procedures. The study first identified and prioritized appliances to be evaluated. Then, the study determined whether real world energy consumption differed substantially from predictions and also assessed whether performance degrades over time. Finally, the study recommended test procedure modifications and areas for future research.

  17. Selective experimental review of the Standard Model

    Energy Technology Data Exchange (ETDEWEB)

    Bloom, E.D.

    1985-02-01

    Before discussing experimental comparisons with the Standard Model (S-M), it is probably wise to define more completely what is commonly meant by this popular term. This model is a gauge theory of SU(3)_f x SU(2)_L x U(1) with 18 parameters. The parameters are α_s, α_QED, θ_W, M_W (M_Z = M_W/cos θ_W, and thus is not an independent parameter), M_Higgs; the lepton masses M_e, M_μ, M_τ; the quark masses M_d, M_s, M_b and M_u, M_c, M_t; and finally the quark mixing angles θ_1, θ_2, θ_3 and the CP-violating phase δ. The latter four parameters appear in the quark mixing matrix for the Kobayashi-Maskawa and Maiani forms. Clearly, the present S-M covers an enormous range of physics topics, and the author can only lightly cover a few such topics in this report. The measurement of R_hadron is fundamental as a test of the running coupling constant α_s in QCD. The author will discuss a selection of recent precision measurements of R_hadron, as well as some other techniques for measuring α_s. QCD also requires the self-interaction of gluons. The search for the three-gluon vertex may be practically realized in the clear identification of gluonic mesons. The author will present a limited review of recent progress in the attempt to untangle such mesons from the plethora of q-qbar states of the same quantum numbers which exist in the same mass range. The electroweak interactions provide some of the strongest evidence supporting the S-M that exists. Given the recent progress in this subfield, and particularly with the discovery of the W and Z bosons at CERN, many recent reviews obviate the need for further discussion in this report. In attempting to validate a theory, one frequently searches for new phenomena which would clearly invalidate it. 49 references, 28 figures.

  18. An integrated model for supplier selection process

    Institute of Scientific and Technical Information of China (English)

    2003-01-01

    In today's highly competitive manufacturing environment, the supplier selection process is one of the crucial activities in supply chain management. In order to select the best supplier(s), it is necessary not only to continuously track and benchmark the performance of suppliers but also to make a trade-off between tangible and intangible factors, some of which may conflict. In this paper an integration of case-based reasoning (CBR), analytic network process (ANP) and linear programming (LP) is proposed to solve the supplier selection problem.

  19. Dealing with selection bias in educational transition models

    DEFF Research Database (Denmark)

    Holm, Anders; Jæger, Mads Meier

    2011-01-01

    This paper proposes the bivariate probit selection model (BPSM) as an alternative to the traditional Mare model for analyzing educational transitions. The BPSM accounts for selection on unobserved variables by allowing unobserved variables which affect the probability of making educational transitions to be correlated across transitions. We use simulated and real data to illustrate how the BPSM improves on the traditional Mare model in terms of correcting for selection bias and providing credible estimates of the effect of family background on educational success. We conclude that models which account for selection on unobserved variables and high-quality data are both required in order to estimate credible educational transition models.

  20. In Vitro and Ex Vivo Selection Procedures for Identifying Potentially Therapeutic DNA and RNA Molecules

    Directory of Open Access Journals (Sweden)

    Soledad Marton

    2010-06-01

    Full Text Available It was only relatively recently discovered that nucleic acids participate in a variety of biological functions, besides the storage and transmission of genetic information. Quite apart from the nucleotide sequence, it is now clear that the structure of a nucleic acid plays an essential role in its functionality, enabling catalysis and specific binding reactions. In vitro selection and evolution strategies have been extremely useful in the analysis of functional RNA and DNA molecules, helping to expand our knowledge of their functional repertoire and to identify and optimize DNA and RNA molecules with potential therapeutic and diagnostic applications. The great progress made in this field has prompted the development of ex vivo methods for selecting functional nucleic acids in the cellular environment. This review summarizes the most important and most recent applications of in vitro and ex vivo selection strategies aimed at exploring the therapeutic potential of nucleic acids.

  1. Computer-Based Procedures for Field Workers in Nuclear Power Plants: Development of a Model of Procedure Usage and Identification of Requirements

    Energy Technology Data Exchange (ETDEWEB)

    Katya Le Blanc; Johanna Oxstrand

    2012-04-01

    The nuclear industry is constantly trying to find ways to decrease the human error rate, especially the human errors associated with procedure use. As a step toward the goal of improving procedure use performance, researchers, together with the nuclear industry, have been looking at replacing the current paper-based procedures with computer-based procedure systems. The concept of computer-based procedures is not new by any means; however, most research has focused on procedures used in the main control room. Procedures reviewed in these efforts are mainly emergency operating procedures and normal operating procedures. Based on lessons learned from these previous efforts, we are now exploring a less familiar application of computer-based procedures - field procedures, i.e. procedures used by nuclear equipment operators and maintenance technicians. The Idaho National Laboratory and participants from the U.S. commercial nuclear industry are collaborating in an applied research effort with the objective of developing requirements and specifications for a computer-based procedure system to be used by field workers. The goal is to identify the types of human errors that can be mitigated by using computer-based procedures and how best to design the computer-based procedures to do so. This paper describes the development of a Model of Procedure Use and the qualitative study on which the model is based. The study was conducted in collaboration with four nuclear utilities and five research institutes. During the qualitative study and the model development, requirements for computer-based procedures were identified.

  2. Model for personal computer system selection.

    Science.gov (United States)

    Blide, L

    1987-12-01

    Successful computer software and hardware selection is best accomplished by following an organized approach such as the one described in this article. The first step is to decide what you want to be able to do with the computer. Secondly, select software that is user-friendly, well documented, bug-free, and that does what you want done. Next, select the computer, printer and other needed equipment from the group of machines on which the software will run. Key factors here are reliability and compatibility with other microcomputers in your facility. Lastly, select a reliable vendor who will provide good, dependable service in a reasonable time. The ability to correctly select computer software and hardware is a key skill needed by medical record professionals today and in the future. Professionals can make quality computer decisions by selecting software and systems that are compatible with other computers in their facility and that allow for future networking, ease of use, and adaptability for expansion as new applications are identified. The key to success is not only to provide for your present needs, but to be prepared for future rapid expansion and change in your computer usage as technology and your skills grow.

  3. Decision support model for selecting and evaluating suppliers in the construction industry

    Directory of Open Access Journals (Sweden)

    Fernando Schramm

    2012-12-01

    Full Text Available A structured evaluation of the construction industry's suppliers, considering aspects which make their quality and credibility evident, can be a strategic tool for managing this specific supply chain. This study proposes a multi-criteria decision model for supplier selection in the construction industry, as well as an efficient evaluation procedure for the selected suppliers. The model is based on the SMARTER (Simple Multi-Attribute Rating Technique Exploiting Ranks) method, and its main contribution is a new approach to structuring the process of supplier selection, establishing explicit strategic policies on which the company management system relies to make the supplier selection. The model was applied to a civil construction company in Brazil, and the main results demonstrate the efficiency of the proposed model. This study allowed the development of an approach for the construction industry which was able to provide a better relationship among its managers, suppliers and partners.
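
    SMARTER's hallmark is that criterion weights come from importance ranks alone, via rank-order-centroid weights. A minimal sketch (the supplier criteria named below are hypothetical, not taken from the study):

    ```python
    def roc_weights(n_criteria):
        """Rank-order-centroid weights used by SMARTER: the k-th ranked
        criterion gets w_k = (1/K) * sum_{i=k}^{K} 1/i."""
        K = n_criteria
        return [sum(1.0 / i for i in range(k, K + 1)) / K
                for k in range(1, K + 1)]

    # Hypothetical supplier criteria, ranked by importance:
    criteria = ["quality", "delivery reliability", "price", "financial health"]
    for name, w in zip(criteria, roc_weights(len(criteria))):
        print(f"{name:21s} w = {w:.3f}")
    # A supplier's overall score is then the weighted sum of its criterion ratings.
    ```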

  4. Closed Adaptive Sequential Procedures for Selecting the Best of k or = 2 Bernoulli Populations.

    Science.gov (United States)

    1981-07-01

  5. Assessing Model Selection Uncertainty Using a Bootstrap Approach: An Update

    NARCIS (Netherlands)

    Lubke, Gitta H.; Campbell, Ian; McArtor, Dan; Miller, Patrick; Luningham, Justin; van den Berg, Stéphanie Martine

    2017-01-01

    Model comparisons in the behavioral sciences often aim at selecting the model that best describes the structure in the population. Model selection is usually based on fit indexes such as Akaike’s information criterion (AIC) or Bayesian information criterion (BIC), and inference is done based on the
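
    A compact illustration of the bootstrap idea (synthetic regression data with BIC as the fit index, not the SEM models of the paper): refit and reselect on resamples, then report how often each candidate model wins, which quantifies selection uncertainty.

    ```python
    import numpy as np
    from sklearn.linear_model import LinearRegression

    rng = np.random.default_rng(6)
    n = 120
    x = rng.uniform(-2, 2, n)
    y = 1.0 + 0.8 * x + 0.3 * x**2 + rng.normal(0, 1.0, n)

    def bic(X, y):
        """Gaussian BIC: n*log(RSS/n) + k*log(n); lower is better."""
        fit = LinearRegression().fit(X, y)
        rss = np.sum((y - fit.predict(X)) ** 2)
        k = X.shape[1] + 1  # slopes plus intercept
        return n * np.log(rss / n) + k * np.log(n)

    designs = {"linear": lambda x: np.c_[x],
               "quadratic": lambda x: np.c_[x, x**2],
               "cubic": lambda x: np.c_[x, x**2, x**3]}
    counts = {m: 0 for m in designs}
    for _ in range(500):
        idx = rng.integers(0, n, n)              # bootstrap resample
        xb, yb = x[idx], y[idx]
        winner = min(designs, key=lambda m: bic(designs[m](xb), yb))
        counts[winner] += 1
    print({m: c / 500 for m, c in counts.items()})  # selection rates
    ```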

  8. Procedures for adjusting regional regression models of urban-runoff quality using local data

    Science.gov (United States)

    Hoos, A.B.; Sisolak, J.K.

    1993-01-01

    Statistical operations termed model-adjustment procedures (MAPs) can be used to incorporate local data into existing regression models to improve the prediction of urban-runoff quality. Each MAP is a form of regression analysis in which the local data base is used as a calibration data set. Regression coefficients are determined from the local data base, and the resulting 'adjusted' regression models can then be used to predict storm-runoff quality at unmonitored sites. The response variable in the regression analyses is the observed load or mean concentration of a constituent in storm runoff for a single storm. The set of explanatory variables used in the regression analyses is different for each MAP, but always includes the predicted value of load or mean concentration from a regional regression model. The four MAPs examined in this study were: single-factor regression against the regional model prediction, P (termed MAP-1F-P), regression against P (termed MAP-R-P), regression against P and additional local variables (termed MAP-R-P+nV), and a weighted combination of P and a local-regression prediction (termed MAP-W). The procedures were tested by means of split-sample analysis, using data from three cities included in the Nationwide Urban Runoff Program: Denver, Colorado; Bellevue, Washington; and Knoxville, Tennessee. The MAP that provided the greatest predictive accuracy for the verification data set differed among the three test data bases and among model types (MAP-W for Denver and Knoxville, MAP-1F-P and MAP-R-P for Bellevue load models, and MAP-R-P+nV for Bellevue concentration models) and, in many cases, was not clearly indicated by the values of standard error of estimate for the calibration data set. A scheme to guide MAP selection, based on exploratory data analysis of the calibration data set, is presented and tested. The MAPs were tested for sensitivity to the size of a calibration data set. As expected, predictive accuracy of all MAPs for
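    As a hedged sketch of the simplest procedure above (single-factor regression of local observations on the regional-model prediction P), with invented numbers and ignoring the log transformations such models often use:

        # MAP-1F-P style adjustment: calibrate obs = b0 + b1 * P on local data.
        import numpy as np

        P = np.array([12.0, 30.0, 8.5, 22.0, 40.0, 15.0])     # regional-model predictions
        obs = np.array([10.0, 26.0, 9.0, 18.0, 33.0, 14.0])   # local observed loads

        X = np.column_stack([np.ones_like(P), P])
        b0, b1 = np.linalg.lstsq(X, obs, rcond=None)[0]       # calibration step

        P_new = 25.0                                          # unmonitored site
        print(f"adjusted prediction: {b0 + b1 * P_new:.2f}")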

  9. Chain-Wise Generalization of Road Networks Using Model Selection

    Science.gov (United States)

    Bulatov, D.; Wenzel, S.; Häufel, G.; Meidow, J.

    2017-05-01

    Streets are essential entities of urban terrain, and their automated extraction from airborne sensor data is cumbersome because of a complex interplay of geometric, topological, and semantic aspects. Given a binary image representing the road class, centerlines of road segments are extracted by means of skeletonization. The focus of this paper lies in a well-reasoned representation of these segments by means of geometric primitives, such as straight line segments as well as circle and ellipse arcs. We propose the fusion of raw segments based on similarity criteria; the output of this process are the so-called chains, which better match the intuitive perception of what a street is. Further, we propose a two-step approach for chain-wise generalization: first, the chain is pre-segmented using circlePeucker, and finally model selection is used to decide whether two neighboring segments should be fused into a new geometric entity. Thereby, we consider both variance-covariance analysis of residuals and model complexity. The results on a complex data set with many traffic roundabouts indicate the benefits of the proposed procedure.

  10. Spatial Statistical Procedures to Validate Input Data in Energy Models

    Energy Technology Data Exchange (ETDEWEB)

    Johannesson, G.; Stewart, J.; Barr, C.; Brady Sabeff, L.; George, R.; Heimiller, D.; Milbrandt, A.

    2006-01-01

    Energy modeling and analysis often rely on data collected for other purposes, such as census counts, atmospheric and air quality observations, economic trends, and other primarily non-energy-related uses. Systematic collection of empirical data solely for regional, national, and global energy modeling has not been established as in the above-mentioned fields. Empirical and modeled data relevant to energy modeling are reported and available at various spatial and temporal scales that might or might not be those needed and used by the energy modeling community. The incorrect representation of the spatial and temporal components of these data sets can result in energy models producing misleading conclusions, especially in cases of newly evolving technologies with spatial and temporal operating characteristics different from the dominant fossil and nuclear technologies that powered the energy economy over the last two hundred years. Increased private and government research and development and public interest in alternative technologies that have a benign effect on the climate and the environment have spurred interest in wind, solar, hydrogen, and other alternative energy sources and energy carriers. Many of these technologies require much finer spatial and temporal detail to determine optimal engineering designs, resource availability, and market potential. This paper presents exploratory and modeling techniques in spatial statistics that can improve the usefulness of empirical and modeled data sets that do not initially meet the spatial and/or temporal requirements of energy models. In particular, we focus on (1) aggregation and disaggregation of spatial data, (2) predicting missing data, and (3) merging spatial data sets. In addition, we introduce relevant statistical software models commonly used in the field for various sizes and types of data sets.

  12. Ultrasound in the diagnosis and treatment of developmental dysplasia of the hip. Evaluation of a selective screening procedure

    DEFF Research Database (Denmark)

    Strandberg, C.; Konradsen, L.A.; Ellitsgaard, N.

    2008-01-01

    INTRODUCTION: With the intention of reducing the treatment frequency of Developmental Dysplasia of the Hip (DDH), two hospitals in Copenhagen implemented a screening and treatment procedure based on selective referral to ultrasonography of the hip (US). This paper describes and evaluates...... 0.03%. No relationship was seen between morphological parameters at the first US and the outcome of hips classified as minor dysplastic or not fully developed (NFD). A statistically significant relationship was seen between the degree of dysplasia and the time until US normalization of the hips (p......= 0.02). There was no relapse of dysplasia after treatment. The median duration of treatment was six, eight and nine weeks for mild, moderate and severe dysplasia respectively. CONCLUSION: The procedure resulted in a low rate of treatment and a small number of late diagnosed cases. Prediction...

  13. Generalizability of a composite student selection procedure at a university-based chiropractic program

    DEFF Research Database (Denmark)

    O'Neill, Lotte Dyhrberg; Korsholm, Lars; Wallstedt, Birgitta;

    2009-01-01

    PURPOSE: Non-cognitive admission criteria are typically used in chiropractic student selection to supplement grades. The reliability of non-cognitive student admission criteria in chiropractic education has not previously been examined. In addition, very few studies have examined the overall test...... generalizability of composites of non-cognitive admission variables in admission to health science programs. The aim of this study was to estimate the generalizability of a composite selection to a chiropractic program, consisting of: application form information, a written motivational essay, a common knowledge...... test, and an admission interview. METHODS: Data from 105 Chiropractic applicants from the 2007 admission at the University of Southern Denmark were available for analysis. Each admission parameter was double scored using two random, blinded, and independent raters. Variance components for applicant...

  14. Quality Quandaries: Time Series Model Selection and Parsimony

    DEFF Research Database (Denmark)

    Bisgaard, Søren; Kulahci, Murat

    2009-01-01

    Some of the issues involved in selecting adequate models for time series data are discussed using an example concerning the number of users of an Internet server. The process of selecting an appropriate model is subjective and requires experience and judgment. The authors believe an important...... consideration in model selection should be parameter parsimony. They favor the use of parsimonious mixed ARMA models, noting that research has shown that a model building strategy that considers only autoregressive representations will lead to non-parsimonious models and to loss of forecasting accuracy....
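    The parsimony point can be made concrete by fitting a mixed ARMA model and progressively longer pure AR models to the same series and comparing an information criterion. The simulated ARMA(1,1) series below is a stand-in for the Internet-server data, and the snippet assumes statsmodels is available.

        import numpy as np
        from statsmodels.tsa.arima.model import ARIMA

        rng = np.random.default_rng(1)
        n = 300
        e = rng.normal(size=n)
        y = np.zeros(n)
        for t in range(1, n):                       # simulate ARMA(1,1)
            y[t] = 0.7 * y[t - 1] + e[t] + 0.4 * e[t - 1]

        for order in [(1, 0, 1), (3, 0, 0), (5, 0, 0)]:
            fit = ARIMA(y, order=order).fit()
            print(order, f"AIC={fit.aic:.1f}")

    The parsimonious ARMA(1,1) fit typically matches or beats the higher-order AR fits while using fewer parameters.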

  16. Using CV-GLUE procedure in analysis of wetland model predictive uncertainty.

    Science.gov (United States)

    Huang, Chun-Wei; Lin, Yu-Pin; Chiang, Li-Chi; Wang, Yung-Chieh

    2014-07-01

    This study develops a procedure related to Generalized Likelihood Uncertainty Estimation (GLUE), called the CV-GLUE procedure, for assessing the predictive uncertainty associated with different model structures of varying degrees of complexity. The proposed procedure comprises model calibration, validation, and predictive uncertainty estimation in terms of a characteristic coefficient of variation (characteristic CV). The procedure first performs two-stage Monte Carlo simulations to ensure predictive accuracy by obtaining behavior parameter sets, and then estimates the CV-values of the model outcomes, which represent the predictive uncertainties for a model structure of interest with its associated behavior parameter sets. Three commonly used wetland models (the first-order K-C model, the plug flow with dispersion model, and the Wetland Water Quality Model, WWQM) were compared based on data collected from a free-water-surface constructed wetland with paddy cultivation in Taipei, Taiwan. The results show that the first-order K-C model, which is simpler than the other two models, has greater predictive uncertainty. This finding shows that predictive uncertainty does not necessarily increase with the complexity of the model structure, because in this case the more simplistic representation of reality (the first-order K-C model) results in a higher uncertainty in the model's predictions. The CV-GLUE procedure is suggested as a useful tool not only for designing constructed wetlands but also for other aspects of environmental management.
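    A schematic of the CV-GLUE idea under stated assumptions: a toy first-order decay model standing in for the wetland models, a Nash-Sutcliffe threshold defining the behavioural parameter sets, and the coefficient of variation of the behavioural predictions as the uncertainty measure.

        import numpy as np

        rng = np.random.default_rng(2)
        t = np.linspace(0, 10, 20)
        true_k = 0.35
        obs = 100 * np.exp(-true_k * t) + rng.normal(0, 3, t.size)  # synthetic data

        def model(k):
            return 100 * np.exp(-k * t)

        def nse(sim, obs):
            return 1 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

        # Stage 1: Monte Carlo sampling, retain "behavioural" parameter sets.
        ks = rng.uniform(0.1, 0.8, 5000)
        behavioural = [k for k in ks if nse(model(k), obs) > 0.9]

        # Stage 2: characteristic CV of the ensemble prediction.
        preds = np.array([model(k) for k in behavioural])
        cv = preds.std(axis=0) / preds.mean(axis=0)
        print(f"{len(behavioural)} behavioural sets, characteristic CV={cv.mean():.3f}")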

  17. [The emphases and basic procedures of genetic counseling in psychotherapeutic model].

    Science.gov (United States)

    Zhang, Yuan-Zhi; Zhong, Nanbert

    2006-11-01

    The emphases and basic procedures of genetic counseling in the psychotherapeutic model differ from those in older models. In the psychotherapeutic model, genetic counseling focuses not only on counselees' genetic disorders and birth defects but also on their psychological problems. "Client-centered therapy", as termed by Carl Rogers, plays an important role in the genetic counseling process. The basic procedure of the psychotherapeutic model of genetic counseling includes 7 steps: initial contact, introduction, agendas, inquiry of family history, presenting information, closing the session, and follow-up.

  18. Procedural Factors That Affect Psychophysical Measures of Spatial Selectivity in Cochlear Implant Users

    Directory of Open Access Journals (Sweden)

    Stefano Cosentino

    2015-09-01

    Full Text Available Behavioral measures of spatial selectivity in cochlear implants are important both for guiding the programming of individual users' implants and for the evaluation of different stimulation methods. However, the methods used are subject to a number of confounding factors that can contaminate estimates of spatial selectivity. These factors include off-site listening, charge interactions between masker and probe pulses in interleaved masking paradigms, and confusion effects in forward masking. We review the effects of these confounds and discuss methods for minimizing them. We describe one such method in which the level of a 125-pps masker is adjusted so as to mask a 125-pps probe, and where the masker and probe pulses are temporally interleaved. Five experiments describe the method and evaluate the potential roles of the different potential confounding factors. No evidence was obtained for off-site listening of the type observed in acoustic hearing. The choice of the masking paradigm was shown to alter the measured spatial selectivity. For short gaps between masker and probe pulses, both facilitation and refractory mechanisms had an effect on masking; this finding should inform the choice of stimulation rate in interleaved masking experiments. No evidence for confusion effects in forward masking was revealed. It is concluded that the proposed method avoids many potential confounds but that the choice of method should depend on the research question under investigation.

  19. Selection of Yeasts as Starter Cultures for Table Olives: a Step-by-Step Procedure

    Directory of Open Access Journals (Sweden)

    Antonio eBevilacqua

    2012-05-01

    Full Text Available In the past, yeasts were traditionally regarded as spoilage microorganisms for table olives; however, their role and impact on product quality and on the correct course of fermentation have been revised, and nowadays many authors suggest a controlled inoculum of yeasts for both alkali-treated and untreated olives. The selection of a starter is a complex process involving different steps. After strain isolation from raw material, the first step is identification through phenotypic and genotypic methods; then, isolates should be characterized to assess their GRAS (generally recognized as safe) status and technological properties (growth with added salt, at various temperatures and pHs; pectolytic, xylanolytic, and lipolytic activity; resistance to some preservatives; functional impact). After studying these properties, the results can be submitted to data analysis (often a statistical multivariate approach) and strain selection. The number of strains to be selected depends on several factors, above all on the main goal: obtaining a single-strain or a multiple-strain starter. After this step, the starter should be used for a pilot fermentation on a lab scale, highlighting its performance, limits, and benefits, as well as all the issues related to its production, storage, and stability over time. Finally, starter optimization conducted on a lab scale should be verified under real conditions.

  20. Current approach to male infertility treatment: sperm selection procedure based on hyaluronic acid binding ability

    Directory of Open Access Journals (Sweden)

    A. V. Zobova

    2015-01-01

    Full Text Available Intracytoplasmic sperm injection into an oocyte is widely used throughout the world in assisted reproductive technology programs in the presence of a male infertility factor. However, this approach may result in the selection of a single sperm carrying various types of pathologies. Minimizing any potential risks that could entail abnormalities in embryo development (apoptosis, fragmentation of embryos, alterations in gene expression, aneuploidies) is a very important condition for reducing the potential negative consequences of manipulating gametes. Processes that can be influenced by the embryologist must be carried out in as safe and physiological a way as possible. Data from numerous publications reporting positive effects of the technology of sperm selection by hyaluronic acid binding support a conclusion about the high promise of this approach in the treatment of male infertility by in vitro fertilization methods. The selection of sperm with improved characteristics, which determine maturity and genetic integrity, provides an opportunity to improve the parameters of pre-implantation embryogenesis, thus having a positive effect on the clinical outcomes of assisted reproductive technology programs.

  1. A Bidirectional Coupling Procedure Applied to Multiscale Respiratory Modeling.

    Science.gov (United States)

    Kuprat, A P; Kabilan, S; Carson, J P; Corley, R A; Einstein, D R

    2013-07-01

    In this study, we present a novel multiscale computational framework for efficiently linking multiple lower-dimensional models describing the distal lung mechanics to imaging-based 3D computational fluid dynamics (CFD) models of the upper pulmonary airways in order to incorporate physiologically appropriate outlet boundary conditions. The framework is an extension of the Modified Newton's Method with nonlinear Krylov accelerator developed by Carlson and Miller [1, 2, 3]. Our extensions include the retention of subspace information over multiple timesteps, and a special correction at the end of a timestep that allows for corrections to be accepted with verified low residual with as little as a single residual evaluation per timestep on average. In the case of a single residual evaluation per timestep, the method has zero additional computational cost compared to uncoupled or unidirectionally coupled simulations. We expect these enhancements to be generally applicable to other multiscale coupling applications where timestepping occurs. In addition we have developed a "pressure-drop" residual which allows for stable coupling of flows between a 3D incompressible CFD application and another (lower-dimensional) fluid system. We expect this residual to also be useful for coupling non-respiratory incompressible fluid applications, such as multiscale simulations involving blood flow. The lower-dimensional models that are considered in this study are sets of simple ordinary differential equations (ODEs) representing the compliant mechanics of symmetric human pulmonary airway trees. To validate the method, we compare the predictions of hybrid CFD-ODE models against an ODE-only model of pulmonary airflow in an idealized geometry. Subsequently, we couple multiple sets of ODEs describing the distal lung to an imaging-based human lung geometry. Boundary conditions in these models consist of atmospheric pressure at the mouth and intrapleural pressure applied to the multiple sets
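    The pressure-drop residual concept can be illustrated by replacing the 3D CFD side with a simple algebraic resistance and the distal lung with a single compliant compartment, then solving for the interface flow that zeroes the residual at one timestep. All values and the one-compartment model are stand-ins, not the paper's formulation, and the snippet assumes scipy is available.

        from scipy.optimize import brentq

        R3d = 0.8             # effective resistance of the "3D" airway (stand-in)
        C, P_pl = 0.05, -5.0  # distal compliance and intrapleural pressure (assumed)
        V, dt, P_mouth = 2.4, 0.01, 0.0

        def residual(Q):
            # pressure at the interface as seen from the "3D" side...
            p_interface_3d = P_mouth - R3d * Q
            # ...and from the 0D compliant compartment after one implicit step
            V_new = V + dt * Q
            p_interface_0d = P_pl + V_new / C
            return p_interface_3d - p_interface_0d

        Q = brentq(residual, -50.0, 50.0)   # flow that balances both subsystems
        print(f"coupled interface flow Q = {Q:.3f}")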

  3. Cardinality constrained portfolio selection via factor models

    OpenAIRE

    Monge, Juan Francisco

    2017-01-01

    In this paper we propose and discuss different 0-1 linear models for solving the cardinality-constrained portfolio problem by using factor models. Factor models are used to build portfolios that track indexes, among other objectives, and also need a smaller number of parameters to be estimated than the classical Markowitz model. The addition of cardinality constraints limits the number of securities in the portfolio. Restricting the number of securities in the portfolio allows us to o...

  4. Reliability assessment of a manual-based procedure towards learning curve modeling and fmea analysis

    Directory of Open Access Journals (Sweden)

    Gustavo Rech

    2013-03-01

    Full Text Available Separation procedures in drug distribution centers (DCs) are manual activities prone to failures such as shipping exchanged, expired, or broken drugs to the customer. Two interventions seem promising for improving the reliability of the separation procedure: (i) selection and allocation of appropriate operators to the procedure, and (ii) analysis of potential failure modes incurred by the selected operators. This article integrates Learning Curves (LC) and FMEA (Failure Mode and Effect Analysis) with the aim of reducing the occurrence of failures in the manual separation of a drug DC. LC parameters enable generating an index to identify the operators recommended to perform the procedures. The FMEA is then applied to the separation procedure carried out by the selected operators in order to identify failure modes. It also deploys the traditional FMEA severity index into two sub-indexes related to financial issues and damage to the company's image in order to characterize the severity of failures. When applied to a drug DC, the proposed method significantly reduced the frequency and severity of failures in the separation procedure.
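    A sketch combining the two interventions with invented data: a Wright-type learning curve fit for one operator (the article's selection index is built from such fits), and an FMEA risk priority number whose severity is split into financial and image sub-indexes. How the two sub-indexes are recombined here is an assumption.

        import numpy as np

        # (i) times (s) for successive repetitions of the separation task, one operator
        reps = np.arange(1, 9)
        times = np.array([120, 95, 84, 78, 74, 71, 69, 67], dtype=float)

        # Fit log(time) = log(C1) - b*log(rep); faster learning -> better index.
        b, logC1 = np.polyfit(np.log(reps), np.log(times), 1)
        C1, rate = np.exp(logC1), -b
        print(f"learning curve: time ~= {C1:.1f} * rep^(-{rate:.2f})")

        # (ii) FMEA with severity deployed into financial (Sf) and image (Si) sub-indexes
        failure_modes = [
            # (name, occurrence O, detection D, Sf, Si), all scores 1..10, hypothetical
            ("expired drug shipped", 3, 4, 6, 9),
            ("broken drug shipped",  5, 3, 4, 5),
        ]
        for name, O, D, Sf, Si in failure_modes:
            S = max(Sf, Si)        # one simple way to combine the sub-indexes (assumed)
            print(f"{name}: RPN = {S * O * D}")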

  5. A review of mechanisms and modelling procedures for landslide tsunamis

    Science.gov (United States)

    Løvholt, Finn; Harbitz, Carl B.; Glimsdal, Sylfest

    2017-04-01

    Landslides, including volcano flank collapses and volcanically induced flows, constitute the second-most important cause of tsunamis after earthquakes. Compared to earthquakes, landslides are more diverse with respect to how they generate tsunamis. Here, we give an overview of the main generation mechanisms for landslide tsunamis. In the presentation, a mix of results from analytical models, numerical models, laboratory experiments, and case studies is used to illustrate the diversity, but also to point out some common characteristics. Different numerical modelling techniques for the landslide evolution and for tsunami generation and propagation, as well as the effect of frequency dispersion, are also briefly discussed. Basic tsunami generation mechanisms for different types of landslides, from large submarine translational landslides, to impulsive submarine slumps, to violent subaerial landslides and volcano flank collapses, are reviewed. Attention is given to the importance of the landslide kinematics, including the interplay between landslide acceleration, the landslide velocity-to-depth ratio (Froude number), and dimensions. Using numerical simulations, we demonstrate how landslide deformation and retrogressive failure development influence tsunamigenesis. Generation mechanisms for subaerial landslides are reviewed by means of scaling relations from laboratory experiments and numerical modelling. Finally, it is demonstrated how the varying degree of complexity in landslide tsunamigenesis needs to be reflected by increased sophistication in numerical models.

  6. Evidence accumulation as a model for lexical selection

    NARCIS (Netherlands)

    Anders, R.; Riès, S.; van Maanen, L.; Alario, F.-X.

    2015-01-01

    We propose and demonstrate evidence accumulation as a plausible theoretical and/or empirical model for the lexical selection process of lexical retrieval. A number of current psycholinguistic theories consider lexical selection as a process related to selecting a lexical target from a number of

  7. Optimal control of CPR procedure using hemodynamic circulation model

    Science.gov (United States)

    Lenhart, Suzanne M.; Protopopescu, Vladimir A.; Jung, Eunok

    2007-12-25

    A method for determining a chest pressure profile for cardiopulmonary resuscitation (CPR) includes the steps of representing a hemodynamic circulation model based on a plurality of difference equations for a patient, applying an optimal control (OC) algorithm to the circulation model, and determining a chest pressure profile. The chest pressure profile defines a timing pattern of externally applied pressure to a chest of the patient to maximize blood flow through the patient. A CPR device includes a chest compressor, a controller communicably connected to the chest compressor, and a computer communicably connected to the controller. The computer determines the chest pressure profile by applying an OC algorithm to a hemodynamic circulation model based on the plurality of difference equations.

  8. Statistical procedures for evaluating daily and monthly hydrologic model predictions

    Science.gov (United States)

    Coffey, M.E.; Workman, S.R.; Taraba, J.L.; Fogle, A.W.

    2004-01-01

    The overall study objective was to evaluate the applicability of different qualitative and quantitative methods for comparing daily and monthly SWAT computer model hydrologic streamflow predictions to observed data, and to recommend statistical methods for use in future model evaluations. Statistical methods were tested using daily streamflows and monthly equivalent runoff depths. The statistical techniques included linear regression, Nash-Sutcliffe efficiency, nonparametric tests, t-test, objective functions, autocorrelation, and cross-correlation. None of the methods specifically applied to the non-normal distribution and dependence between data points for the daily predicted and observed data. Of the tested methods, median objective functions, sign test, autocorrelation, and cross-correlation were most applicable for the daily data. The robust coefficient of determination (CD*) and robust modeling efficiency (EF*) objective functions were the preferred methods for daily model results due to the ease of comparing these values with a fixed ideal reference value of one. Predicted and observed monthly totals were more normally distributed, and there was less dependence between individual monthly totals than was observed for the corresponding predicted and observed daily values. More statistical methods were available for comparing SWAT model-predicted and observed monthly totals. The 1995 monthly SWAT model predictions and observed data had a regression R² of 0.70, a Nash-Sutcliffe efficiency of 0.41, and the t-test failed to reject the equal data means hypothesis. The Nash-Sutcliffe coefficient and the R² coefficient were the preferred methods for monthly results due to the ability to compare these coefficients to a set ideal value of one.
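    The two statistics preferred for monthly results are short to compute; the predicted and observed depths below are synthetic, not the SWAT data.

        import numpy as np

        obs = np.array([30.2, 45.1, 60.3, 21.0, 15.5, 10.2])   # observed monthly depths
        sim = np.array([28.0, 50.0, 55.1, 25.3, 14.0, 12.8])   # model predictions

        def nash_sutcliffe(sim, obs):
            return 1 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

        r = np.corrcoef(sim, obs)[0, 1]
        print(f"NSE = {nash_sutcliffe(sim, obs):.2f}, R^2 = {r**2:.2f}")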

  9. A Review of Different Estimation Procedures in the Rasch Model. Research Report 87-6.

    Science.gov (United States)

    Engelen, R. J. H.

    A short review of the different estimation procedures that have been used in association with the Rasch model is provided. These procedures include joint, conditional, and marginal maximum likelihood methods; Bayesian methods; minimum chi-square methods; and paired comparison estimation. A comparison of the marginal maximum likelihood estimation…

  10. A Connectionist Model of Stimulus Class Formation with a Yes/No Procedure and Compound Stimuli

    Science.gov (United States)

    Tovar, Angel E.; Chavez, Alvaro Torres

    2012-01-01

    We analyzed stimulus class formation in a human study and in a connectionist model (CM) with a yes/no procedure, using compound stimuli. In the human study, the participants were six female undergraduate students; the CM was a feed-forward back-propagation network. Two 3-member stimulus classes were trained with a similar procedure in both the…

  11. TSCALE: A New Multidimensional Scaling Procedure Based on Tversky's Contrast Model.

    Science.gov (United States)

    DeSarbo, Wayne S.; And Others

    1992-01-01

    TSCALE, a multidimensional scaling procedure based on the contrast model of A. Tversky for asymmetric three-way, two-mode proximity data, is presented. TSCALE conceptualizes a latent dimensional structure to describe the judgmental stimuli. A Monte Carlo analysis and two consumer psychology applications illustrate the procedure. (SLD)

  12. Communication and Procedural Models of the E-Commerce Systems

    OpenAIRE

    Suchánek, Petr

    2009-01-01

    E-commerce systems have become a standard interface between sellers (or suppliers) and customers. One basic condition for an e-commerce system to be efficient is the correct definition and description of all internal and external processes. Everything is targeted at customers' needs and requirements. The optimal and most exact way to find an optimal solution for an e-commerce system and its process structure in companies is modeling and simulation. In this article the author shows basic model...

  14. Literature Evidence on Live Animal Versus Synthetic Models for Training and Assessing Trauma Resuscitation Procedures.

    Science.gov (United States)

    Hart, Danielle; McNeil, Mary Ann; Hegarty, Cullen; Rush, Robert; Chipman, Jeffery; Clinton, Joseph; Reihsen, Troy; Sweet, Robert

    2016-01-01

    There are many models currently used for teaching and assessing performance of trauma-related airway, breathing, and hemorrhage procedures. Although many programs use live animal (live tissue [LT]) models, there is a congressional effort to transition to the use of nonanimal-based methods (i.e., simulators, cadavers) for military trainees. We examined the existing literature and compared the efficacy, acceptability, and validity of available models with a focus on comparing LT models with synthetic systems. Literature and Internet searches were conducted to examine current models for seven core trauma procedures. We identified 185 simulator systems. Evidence on acceptability and validity of models was sparse. We found only one underpowered study comparing the performance of learners after training on LT versus simulator models for tube thoracostomy and cricothyrotomy. There is insufficient data-driven evidence to distinguish superior validity of LT or any other model for training or assessment of critical trauma procedures.

  15. Extraction procedure testing of solid wastes generated at selected metal ore mines and mills

    Science.gov (United States)

    Harty, David M.; Terlecky, P. Michael

    1986-09-01

    Solid waste samples from a reconnaissance study conducted at ore mining and milling sites were subjected to the U.S. Environmental Protection Agency extraction procedure (EP) leaching test. Sites visited included mines and mills extracting ores of antimony (Sb), mercury (Hg), vanadium (V), tungsten (W), and nickel (Ni). Samples analyzed included mine wastes, treatment pond solids, tailings, low-grade ore, and other solid wastes generated at these facilities. Analysis of the leachate from these tests indicates that none of the samples generated leachate in which the concentration of any toxic metal parameter exceeded EPA criteria levels for those metals. By volume, tailings generally constitute the largest amount of solid wastes generated, but these data indicate that with proper management and monitoring, current EPA criteria can be met for tailings and for most solid wastes associated with mining and milling of these metal ores. Long-term studies are needed to determine whether leachate characteristics change with time and to assist in the development of closure plans and post-closure monitoring programs.

  16. A general U-block model-based design procedure for nonlinear polynomial control systems

    Science.gov (United States)

    Zhu, Q. M.; Zhao, D. Y.; Zhang, Jianhua

    2016-10-01

    The proposition of the U-model concept (in terms of 'providing concise and applicable solutions for complex problems') and a corresponding basic U-control design algorithm originated in the first author's PhD thesis. The term U-model appeared (not rigorously defined) for the first time in the first author's other journal paper, which established a framework for using linear polynomial control system design approaches to design nonlinear polynomial control systems (in brief, linear polynomial approaches → nonlinear polynomial plants). This paper represents the next milestone work: using linear state-space approaches to design nonlinear polynomial control systems (in brief, linear state-space approaches → nonlinear polynomial plants). The overall aim of the study is to establish a framework, defined as the U-block model, which provides a generic prototype for using linear state-space-based approaches to design control systems with smooth nonlinear plants/processes described by polynomial models. For analysing feasibility and effectiveness, the sliding mode control design approach is selected as an exemplary case study. Numerical simulation studies provide a user-friendly step-by-step procedure for readers/users interested in their ad hoc applications. In formality, this is the first paper to present the U-model-oriented control system design in a formal way and to study the associated properties and theorems. The previous publications have, in the main, been algorithm-based studies and simulation demonstrations. In some sense, this paper can be treated as a landmark for U-model-based research moving from the intuitive/heuristic stage to rigorous, formal, and comprehensive studies.

  17. WEMo (Wave Exposure Model): Formulation, Procedures and Validation

    OpenAIRE

    Malhotra, Amit; Mark S. Fonseca

    2007-01-01

    This report describes the workings of the National Centers for Coastal Ocean Service (NCCOS) Wave Exposure Model (WEMo), which is capable of predicting the exposure of a site in estuarine and enclosed waters to local wind-generated waves. WEMo works in two different modes: the Representative Wave Energy (RWE) mode calculates exposure using physical parameters like wave energy and wave height, while the Relative Exposure Index (REI) mode empirically calculates exposure as a unitless index. Detailed working of th...

  18. Comparison of Estimation Procedures for Multilevel AR(1) Models

    Directory of Open Access Journals (Sweden)

    Tanja eKrone

    2016-04-01

    Full Text Available To estimate a time series model for multiple individuals, a multilevel model may be used. In this paper we compare two estimation methods for the autocorrelation in multilevel AR(1) models, namely Maximum Likelihood Estimation (MLE) and Bayesian Markov Chain Monte Carlo. Furthermore, we examine the difference between modeling fixed and random individual parameters. To this end, we perform a simulation study with a fully crossed design, in which we vary the length of the time series (10 or 25), the number of individuals per sample (10 or 25), the mean of the autocorrelation (-0.6 to 0.6 inclusive, in steps of 0.3), and the standard deviation of the autocorrelation (0.25 or 0.40). We found that the random estimators of the population autocorrelation show less bias and higher power compared to the fixed estimators. As expected, the random estimators profit strongly from a higher number of individuals, while this effect is small for the fixed estimators. The fixed estimators profit slightly more from a higher number of time points than the random estimators. When possible, random estimation is preferred to fixed estimation. The difference between MLE and Bayesian estimation is nearly negligible. The Bayesian estimation shows a smaller bias, but MLE shows a smaller variability (i.e., standard deviation of the parameter estimates). Finally, better results are found for a higher number of individuals and time points, and for a lower individual variability of the autocorrelation. The effect of the size of the autocorrelation differs between outcome measures.
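    A simulation in the spirit of the study, with person-specific autocorrelations drawn from a population distribution. Here each phi is estimated separately by conditional least squares (a stand-in for the fixed approach), not by the full multilevel MLE or Bayesian estimators compared in the paper.

        import numpy as np

        rng = np.random.default_rng(3)
        n_ind, n_time = 25, 25
        mu_phi, sd_phi = 0.3, 0.25

        est = []
        for _ in range(n_ind):
            phi = np.clip(rng.normal(mu_phi, sd_phi), -0.95, 0.95)
            y = np.zeros(n_time)
            for t in range(1, n_time):
                y[t] = phi * y[t - 1] + rng.normal()
            # conditional least-squares estimate of phi for this individual
            est.append(np.dot(y[:-1], y[1:]) / np.dot(y[:-1], y[:-1]))

        print(f"mean estimated phi = {np.mean(est):.2f} (true mean {mu_phi})")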

  19. A baseline-free procedure for transformation models under interval censorship.

    Science.gov (United States)

    Gu, Ming Gao; Sun, Liuquan; Zuo, Guoxin

    2005-12-01

    An important property of the Cox regression model is that the estimation of regression parameters using the partial likelihood procedure does not depend on its baseline survival function. We call such a procedure baseline-free. Using marginal likelihood, we show that a baseline-free procedure can be derived for a class of general transformation models under the interval censoring framework. The baseline-free procedure results in a simplified and stable computation algorithm for some complicated and important semiparametric models, such as frailty models and heteroscedastic hazard/rank regression models, where the estimation procedures available so far involve estimation of the infinite-dimensional baseline function. A detailed computational algorithm using Markov Chain Monte Carlo stochastic approximation is presented. The proposed procedure is demonstrated through extensive simulation studies, showing the validity of asymptotic consistency and normality. We also illustrate the procedure with a real data set from a study of breast cancer. A heuristic argument showing that the score function is a mean-zero martingale is provided.

  20. The Properties of Model Selection when Retaining Theory Variables

    DEFF Research Database (Denmark)

    Hendry, David F.; Johansen, Søren

    Economic theories are often fitted directly to data to avoid possible model selection biases. We show that embedding a theory model that specifies the correct set of m relevant exogenous variables, x_t, within the larger set of m+k candidate variables, (x_t, w_t), then selection over the second

  1. Role of maturity timing in selection procedures and in the specialisation of playing positions in youth basketball.

    Science.gov (United States)

    te Wierike, Sanne Cornelia Maria; Elferink-Gemser, Marije Titia; Tromp, Eveline Jenny Yvonne; Vaeyens, Roel; Visscher, Chris

    2015-01-01

    This study investigated the role of maturity timing in selection procedures and in the specialisation of playing positions in youth male basketball. Forty-three talented Dutch players (14.66 ± 1.09 years) participated in this study. Maturity timing (age at peak height velocity), anthropometric, physiological, and technical characteristics were measured. Maturity timing and height of the basketball players were compared with a matched Dutch population. One-sample t-tests showed that basketball players were taller and experienced their peak height velocity at an earlier age compared to their peers, which indicates the relation between maturity timing and selection procedures. Multivariate analysis of variance (MANOVA) showed that guards experienced their peak height velocity at a later age compared to forwards and centres (P < .01). In addition, positional differences were found for height, sitting height, leg length, body mass, lean body mass, sprint, lower body explosive strength, and dribble (P < .05). Multivariate analysis of covariance (MANCOVA) (age and age at peak height velocity as covariate) showed only a significant difference regarding the technical characteristic dribbling (P < .05). Coaches and trainers should be aware of the inter-individual differences between boys related to their maturity timing. Since technical characteristics appeared to be least influenced by maturity timing, it is recommended to focus more on technical characteristics rather than anthropometric and physiological characteristics.

  2. Loop electrosurgical excision procedure: an effective, inexpensive, and durable teaching model.

    Science.gov (United States)

    Connor, R Shae; Dizon, A Mitch; Kimball, Kristopher J

    2014-12-01

    The effectiveness of simulation training for enhancing operative skills is well established. Here we describe the construction of a simple, low-cost model for teaching the loop electrosurgical excision procedure. Composed of common materials such as polyvinyl chloride pipe and sausages, the simulation model, shown in the accompanying figure, can be easily reproduced by other training programs. In addition, we also present an instructional video that utilizes this model to review loop electrosurgical excision procedure techniques, highlighting important steps in the procedure and briefly addressing challenging situations and common mistakes as well as strategies to prevent them. The video and model can be used in conjunction with a simulation skills laboratory to teach the procedure to students, residents, and new practitioners.

  3. Continuous time limits of the Utterance Selection Model

    CERN Document Server

    Michaud, Jérôme

    2016-01-01

    In this paper, we derive new continuous time limits of the Utterance Selection Model (USM) for language change (Baxter et al., Phys. Rev. E 73, 046118, 2006). This is motivated by the fact that the Fokker-Planck continuous time limit derived in the original version of the USM is only valid for a small range of parameters. We investigate the consequences of relaxing these constraints on parameters. Using the normal approximation of the multinomial distribution, we derive a new continuous time limit of the USM in the form of a weak-noise stochastic differential equation. We argue that this weak noise, not captured by the Kramers-Moyal expansion, cannot be neglected. We then propose a coarse-graining procedure, which takes the form of a stochastic version of the heterogeneous mean field approximation. This approximation groups the behaviour of nodes of the same degree, reducing the complexity of the problem. With the help of this approximation, we study in detail two simple families of networks:...

  4. Estimating seabed scattering mechanisms via Bayesian model selection.

    Science.gov (United States)

    Steininger, Gavin; Dosso, Stan E; Holland, Charles W; Dettmer, Jan

    2014-10-01

    A quantitative inversion procedure is developed and applied to determine the dominant scattering mechanism (surface roughness and/or volume scattering) from seabed scattering-strength data. The classification system is based on trans-dimensional Bayesian inversion with the deviance information criterion used to select the dominant scattering mechanism. Scattering is modeled using first-order perturbation theory as due to one of three mechanisms: Interface scattering from a rough seafloor, volume scattering from a heterogeneous sediment layer, or mixed scattering combining both interface and volume scattering. The classification system is applied to six simulated test cases where it correctly identifies the true dominant scattering mechanism as having greater support from the data in five cases; the remaining case is indecisive. The approach is also applied to measured backscatter-strength data where volume scattering is determined as the dominant scattering mechanism. Comparison of inversion results with core data indicates the method yields both a reasonable volume heterogeneity size distribution and a good estimate of the sub-bottom depths at which scatterers occur.
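    The deviance information criterion (DIC) step can be illustrated on a toy problem: two candidate models for synthetic scattering-strength data, a bare-bones Metropolis sampler, and DIC computed as 2*mean(D) - D(posterior mean). The data, priors, and sampler are simplified stand-ins for the trans-dimensional machinery in the paper.

        import numpy as np

        rng = np.random.default_rng(4)
        angle = np.linspace(10, 60, 15)
        data = -32.0 + 0.08 * angle + rng.normal(0, 0.5, angle.size)  # synthetic

        def loglik(theta, model):
            pred = theta[0] if model == "flat" else theta[0] + theta[1] * angle
            return -0.5 * np.sum(((data - pred) / 0.5) ** 2)

        def metropolis(model, dim, n=20000):
            th = np.zeros(dim)
            th[0] = data.mean()
            ll = loglik(th, model)
            samples = []
            for _ in range(n):
                prop = th + rng.normal(0, 0.02, dim)
                llp = loglik(prop, model)
                if np.log(rng.uniform()) < llp - ll:
                    th, ll = prop, llp
                samples.append((th.copy(), ll))
            return samples[n // 2:]                      # drop burn-in

        for model, dim in [("flat", 1), ("sloped", 2)]:
            samp = metropolis(model, dim)
            D = np.array([-2 * ll for _, ll in samp])    # deviance samples
            theta_bar = np.mean([t for t, _ in samp], axis=0)
            dic = 2 * D.mean() + 2 * loglik(theta_bar, model)  # = Dbar + pD
            print(model, f"DIC = {dic:.1f}")

    The angle-dependent model should win by a wide DIC margin here, mirroring how the paper's criterion favors the mechanism with greater support from the data.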

  5. Two-Phase Item Selection Procedure for Flexible Content Balancing in CAT

    Science.gov (United States)

    Cheng, Ying; Chang, Hua-Hua; Yi, Qing

    2007-01-01

    Content balancing is an important issue in the design and implementation of computerized adaptive testing (CAT). Content-balancing techniques that have been applied in fixed content balancing, where the number of items from each content area is fixed, include constrained CAT (CCAT), the modified multinomial model (MMM), modified constrained CAT…

  6. Astrophysical Model Selection in Gravitational Wave Astronomy

    Science.gov (United States)

    Adams, Matthew R.; Cornish, Neil J.; Littenberg, Tyson B.

    2012-01-01

    Theoretical studies in gravitational wave astronomy have mostly focused on the information that can be extracted from individual detections, such as the mass of a binary system and its location in space. Here we consider how the information from multiple detections can be used to constrain astrophysical population models. This seemingly simple problem is made challenging by the high dimensionality and high degree of correlation in the parameter spaces that describe the signals, and by the complexity of the astrophysical models, which can also depend on a large number of parameters, some of which might not be directly constrained by the observations. We present a method for constraining population models using a hierarchical Bayesian modeling approach which simultaneously infers the source parameters and population model and provides the joint probability distributions for both. We illustrate this approach by considering the constraints that can be placed on population models for galactic white dwarf binaries using a future space-based gravitational wave detector. We find that a mission that is able to resolve approximately 5000 of the shortest period binaries will be able to constrain the population model parameters, including the chirp mass distribution and a characteristic galaxy disk radius to within a few percent. This compares favorably to existing bounds, where electromagnetic observations of stars in the galaxy constrain disk radii to within 20%.

  7. On Optimal Input Design and Model Selection for Communication Channels

    Energy Technology Data Exchange (ETDEWEB)

    Li, Yanyan (ORNL); Djouadi, Seddik M (ORNL); Olama, Mohammed M (ORNL)

    2013-01-01

    In this paper, the optimal model (structure) selection and input design which minimize the worst-case identification error for communication systems are provided. The problem is formulated using metric complexity theory in a Hilbert space setting. It is pointed out that model selection and input design can be handled independently. The Kolmogorov n-width is used to characterize the representation error introduced by model selection, while the Gel'fand and time n-widths are used to represent the inherent error introduced by input design. After the model is selected, an optimal input which minimizes the worst-case identification error is shown to exist. In particular, it is proven that the optimal model for reducing the representation error is a Finite Impulse Response (FIR) model, and the optimal input is an impulse at the start of the observation interval. FIR models are widely popular in communication systems, for example in Orthogonal Frequency Division Multiplexing (OFDM) systems.
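    The closing conclusion, that an FIR model plus an impulse at the start of the observation interval is optimal, has a very direct reading: with an impulse input, the noisy output samples are themselves estimates of the FIR taps. The channel taps and noise level below are invented.

        import numpy as np

        rng = np.random.default_rng(5)
        true_taps = np.array([0.9, 0.5, 0.2, 0.05])     # unknown channel (FIR)

        # Impulse input at t=0; observe the noisy response over the interval.
        n_obs = 4
        u = np.zeros(n_obs)
        u[0] = 1.0
        y = np.convolve(u, true_taps)[:n_obs] + rng.normal(0, 0.01, n_obs)

        # With an impulse input, the noisy output directly estimates the taps.
        print("estimated taps:", np.round(y, 3))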

  8. Using multilevel models to quantify heterogeneity in resource selection

    Science.gov (United States)

    Wagner, T.; Diefenbach, D.R.; Christensen, S.A.; Norton, A.S.

    2011-01-01

    Models of resource selection are being used increasingly to predict or model the effects of management actions rather than simply quantifying habitat selection. Multilevel, or hierarchical, models are an increasingly popular method to analyze animal resource selection because they impose a relatively weak stochastic constraint to model heterogeneity in habitat use and also account for unequal sample sizes among individuals. However, few studies have used multilevel models to model coefficients as a function of predictors that may influence habitat use at different scales or quantify differences in resource selection among groups. We used an example with white-tailed deer (Odocoileus virginianus) to illustrate how to model resource use as a function of distance to road that varies among deer by road density at the home range scale. We found that deer avoidance of roads decreased as road density increased. Also, we used multilevel models with sika deer (Cervus nippon) and white-tailed deer to examine whether resource selection differed between species. We failed to detect differences in resource use between these two species and showed how information-theoretic and graphical measures can be used to assess how resource use may have differed. Multilevel models can improve our understanding of how resource selection varies among individuals and provides an objective, quantifiable approach to assess differences or changes in resource selection. © The Wildlife Society, 2011.
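    A two-stage sketch of the road-density example (not the authors' fitting code): estimate a crude per-animal selection measure for distance to road, then regress it on home-range road density; a true multilevel model would estimate both levels jointly. All data are simulated.

        import numpy as np

        rng = np.random.default_rng(6)
        n_deer, n_pts = 30, 400

        slopes, densities = [], []
        for _ in range(n_deer):
            road_density = rng.uniform(0.5, 3.0)             # km of road per km^2
            beta = -1.0 + 0.3 * road_density                 # avoidance weakens with density
            dist = rng.uniform(0, 2, n_pts)                  # candidate locations (km to road)
            p_use = 1 / (1 + np.exp(-(beta * (1 - dist))))   # toy use probability
            used = rng.uniform(size=n_pts) < p_use
            # crude per-individual "selection" measure: mean shift in distance
            slopes.append(dist[used].mean() - dist.mean())
            densities.append(road_density)

        b1, b0 = np.polyfit(densities, slopes, 1)
        print(f"selection vs road density: slope = {b1:.3f}")

    A negative fitted slope here reproduces the qualitative pattern in the abstract: road avoidance weakens as road density increases.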

  9. New Procedure to Develop Lumped Kinetic Models for Heavy Fuel Oil Combustion

    KAUST Repository

    Han, Yunqing

    2016-09-20

    A new procedure to develop accurate lumped kinetic models for complex fuels is proposed and applied to experimental thermogravimetry data for a heavy fuel oil. The new procedure is based on pseudocomponents representing different reaction stages, which are determined by a systematic optimization process that ensures the separation of the different reaction stages with the highest accuracy. The procedure was implemented and the model prediction compared against that from a conventional method, yielding significantly improved agreement with the experimental data. © 2016 American Chemical Society.

  10. Python Program to Select HII Region Models

    Science.gov (United States)

    Miller, Clare; Lamarche, Cody; Vishwas, Amit; Stacey, Gordon J.

    2016-01-01

    HII regions are areas of singly ionized hydrogen formed by the ionizing radiation of upper main sequence stars. The infrared fine-structure line emissions, particularly of oxygen, nitrogen, and neon, can give important information about HII regions, including gas temperature and density, elemental abundances, and the effective temperature of the stars that form them. The processes involved in calculating this information from observational data are complex. Models, such as those provided in Rubin (1984) and those produced by Cloudy (Ferland et al., 2013), enable one to extract physical parameters from observational data. However, the multitude of search parameters can make sifting through models tedious. I digitized Rubin's models and wrote a Python program that is able to take observed line ratios and their uncertainties and find the Rubin or Cloudy model that best matches the observational data. By creating a Python script that is user friendly and able to quickly sort through models with a high level of accuracy, this work increases efficiency and reduces human error in matching HII region models to observational data.
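    The matching step such a program performs can be sketched as a chi-square search over a model grid; the grid entries, line ratios, and uncertainties below are placeholders, not Rubin or Cloudy output.

        import numpy as np

        # Each model: (label, predicted ratio 1, predicted ratio 2), hypothetical values.
        grid = [
            ("n_e=100,  T_eff=35kK", 0.9, 1.8),
            ("n_e=300,  T_eff=35kK", 1.6, 1.7),
            ("n_e=300,  T_eff=38kK", 1.7, 2.6),
        ]
        obs = np.array([1.5, 2.4])      # observed line ratios
        err = np.array([0.2, 0.3])      # 1-sigma uncertainties

        chi2 = [np.sum(((obs - np.array(pred)) / err) ** 2) for _, *pred in grid]
        best = int(np.argmin(chi2))
        print(f"best model: {grid[best][0]} (chi^2 = {chi2[best]:.2f})")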

  11. A Multi-objective model for selection of projects to finance new enterprise SMEs in Colombia

    Directory of Open Access Journals (Sweden)

    J.R. Coronado-Hernández

    2011-10-01

    Full Text Available Purpose: This paper presents a multi-objective programming model for the selection of projects for financing new-enterprise SMEs in Colombia with objectivity and transparency in each call. Approach: The model has four social objectives, subject to budget constraints and to the requirements of each call. The resolution procedure for the model is based on principles of goal programming. Findings: Selection of projects according to their impact within the country. Research limitations: The selection of projects is restricted by a legal framework, the terms of reference, and the budget of each call. Practical implications: The projects must be viable according to the characteristics of each call. Originality/value: The proposed model offers an alternative for entities that need to evaluate co-financing projects for the managerial development of SMEs with more objectivity and transparency in the assignment of resources.

  12. Predicting the Best Fit: A Comparison of Response Surface Models for Midazolam and Alfentanil Sedation in Procedures With Varying Stimulation.

    Science.gov (United States)

    Liou, Jing-Yang; Ting, Chien-Kun; Mandell, M Susan; Chang, Kuang-Yi; Teng, Wei-Nung; Huang, Yu-Yin; Tsou, Mei-Yung

    2016-08-01

    Selecting an effective dose of sedative drugs for combined upper and lower gastrointestinal endoscopy is complicated by varying degrees of pain stimulation. We tested the ability of 5 response surface models to predict depth of sedation after administration of midazolam and alfentanil in this complex setting. The procedure was divided into 3 phases: esophagogastroduodenoscopy (EGD), colonoscopy, and the time interval between the 2 (intersession). The depth of sedation in 33 adult patients was monitored by Observer Assessment of Alertness/Sedation scores. A total of 218 combinations of midazolam and alfentanil effect-site concentrations derived from pharmacokinetic models were used to test 5 response surface models in each of the 3 phases of endoscopy. Model fit was evaluated with the objective function value, corrected Akaike Information Criterion (AICc), and Spearman rank correlation. A model was arbitrarily defined as accurate if its predicted probabilities fit the observed data. The reduced Greco model had the lowest objective function value and AICc and thus the best fit. This model was reliable, with acceptable predictive ability based on adequate clinical correlation. We suggest that this model has practical clinical value for patients undergoing procedures with varying degrees of stimulation.
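    For reference, the full Greco response surface has the form sketched below; the paper's reduced variant constrains this form (for example, for an opioid with little sedative effect on its own), and the parameter values here are invented for illustration.

        import numpy as np

        def greco_prob(ca, cb, c50a, c50b, alpha, gamma):
            """Probability of adequate sedation at effect-site concentrations ca, cb."""
            u = ca / c50a + cb / c50b + alpha * (ca / c50a) * (cb / c50b)
            return u**gamma / (1 + u**gamma)

        # Hypothetical parameters: midazolam C50, alfentanil C50, synergy, steepness.
        p = greco_prob(ca=0.05, cb=40.0, c50a=0.08, c50b=150.0, alpha=2.0, gamma=4.0)
        print(f"P(adequate sedation) = {p:.2f}")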

  13. Modelling dental implant extraction by pullout and torque procedures.

    Science.gov (United States)

    Rittel, D; Dorogoy, A; Shemtov-Yona, K

    2017-07-01

    Dental implant extraction, achieved either by applying torque or pullout force, is used to estimate the bone-implant interfacial strength. A detailed description of the mechanical and physical aspects of the extraction process is still missing from the literature. This paper presents 3D nonlinear dynamic finite element simulations of a commercial implant extraction process from the mandible bone. Emphasis is put on the typical load-displacement and torque-angle relationships for various types of cortical and trabecular bone strengths. The simulations also study the influence of the osseointegration level on those relationships. This is done by simulating implant extraction right after insertion, when interfacial frictional contact exists between the implant and bone, and long after insertion, assuming that the implant is fully bonded to the bone. The model does not include a separate representation of the interfacial layer, for which available data are limited. The obtained relationships show that the higher the strength of the trabecular bone, the higher the peak extraction force, while for application of torque it is the cortical bone that might dictate the peak torque value. Information on the relative strength contrast of the cortical and trabecular components, as well as the progressive nature of the damage evolution, can be revealed from the obtained relations. It is shown that full osseointegration might multiply the peak and average load values by a factor of 3-12, although the calculated work of extraction varies only by a factor of 1.5. From a quantitative point of view, it is suggested that, as an alternative to reporting peak load or torque values, an average value derived from the extraction work be used to better characterize the bone-implant interfacial strength.

  14. Methods for model selection in applied science and engineering.

    Energy Technology Data Exchange (ETDEWEB)

    Field, Richard V., Jr.

    2004-10-01

    Mathematical models are developed and used to study the properties of complex systems and/or modify these systems to satisfy some performance requirements in just about every area of applied science and engineering. A particular reason for developing a model, e.g., performance assessment or design, is referred to as the model use. Our objective is the development of a methodology for selecting a model that is sufficiently accurate for an intended use. Information on the system being modeled is, in general, incomplete, so that there may be two or more models consistent with the available information. The collection of these models is called the class of candidate models. Methods are developed for selecting the optimal member from a class of candidate models for the system. The optimal model depends on the available information, the selected class of candidate models, and the model use. Classical methods for model selection, including the method of maximum likelihood and Bayesian methods, as well as a method employing a decision-theoretic approach, are formulated to select the optimal model for numerous applications. There is no requirement that the candidate models be random. Classical methods for model selection ignore model use and require data to be available. Examples are used to show that these methods can be unreliable when data is limited. The decision-theoretic approach to model selection does not have these limitations, and model use is included through an appropriate utility function. This is especially important when modeling high risk systems, where the consequences of using an inappropriate model for the system can be disastrous. The decision-theoretic method for model selection is developed and applied for a series of complex and diverse applications. These include the selection of the: (1) optimal order of the polynomial chaos approximation for non-Gaussian random variables and stationary stochastic processes, (2) optimal pressure load model to be
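
    The abstract is truncated in the source, but its core idea, folding model use into selection through a utility function, can be sketched as expected-utility maximization over candidate models; the probabilities and utilities below are invented for illustration.

```python
import numpy as np

# Hypothetical inputs: posterior model probabilities and a utility matrix
# u[i][j] = utility of acting on model i when model j is actually correct.
p = np.array([0.5, 0.3, 0.2])             # posterior probabilities of 3 candidates
u = np.array([[ 0, -5, -20],              # losses reflect the intended model use:
              [-2,  0,  -8],              # a high-risk use penalizes wrong
              [-4, -3,   0]])             # choices heavily
expected_utility = u @ p                  # expected utility of selecting each model
print("select model", int(np.argmax(expected_utility)))
```

    Unlike maximum likelihood or standard Bayesian selection, the chosen model here can change when only the utility matrix (the model use) changes, which is the point the record emphasizes for high-risk systems.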

  15. MODELING IN MAPLE AS THE RESEARCHING MEANS OF FUNDAMENTAL CONCEPTS AND PROCEDURES IN LINEAR ALGEBRA

    Directory of Open Access Journals (Sweden)

    Vasil Kushnir

    2016-05-01

    -th degree of a square matrix, to calculate the matrix exponential, etc. The author creates four basic forms of canonical models of matrices and shows how to design matrices of similarity transformations to these four forms. We introduce program procedures for constructing square matrices based on the selected canonical matrix models. A large number of different square matrices can then be created from the canonical matrix models, which allows the use of individualized learning technologies. The use of Maple technology makes it possible to automate the cumbersome and complex procedures for finding the transformation matrices to the canonical form of a matrix, the values of matrix functions, etc., which not only saves time but also focuses attention and effort on understanding the above-mentioned fundamental concepts of linear algebra and the procedures for investigating their properties. All of this creates favorable conditions for the use of fundamental concepts of linear algebra in the scientific and research work of students and undergraduates using Maple technology.

  16. Bayesian Model Selection for LISA Pathfinder

    CERN Document Server

    Karnesis, Nikolaos; Sopuerta, Carlos F; Gibert, Ferran; Armano, Michele; Audley, Heather; Congedo, Giuseppe; Diepholz, Ingo; Ferraioli, Luigi; Hewitson, Martin; Hueller, Mauro; Korsakova, Natalia; Plagnol, Eric; Vitale, Stefano

    2013-01-01

    The main goal of the LISA Pathfinder (LPF) mission is to fully characterize the acceleration noise models and to test key technologies for future space-based gravitational-wave observatories similar to the LISA/eLISA concept. The Data Analysis (DA) team has developed complex three-dimensional models of the LISA Technology Package (LTP) experiment on-board LPF. These models are used for simulations, but more importantly, they will be used for parameter estimation purposes during flight operations. One of the tasks of the DA team is to identify the physical effects that contribute significantly to the properties of the instrument noise. One way of approaching this problem is to recover the essential parameters of the LTP which describe the data. Thus, we want to define the simplest model that efficiently explains the observations. To do so, adopting a Bayesian framework, one has to estimate the so-called Bayes factor between two competing models. In our analysis, we use three different methods to estimate...
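
    The record's list of estimation methods is truncated in the source; as a generic stand-in (not necessarily one of the DA team's three methods), the sketch below approximates a log Bayes factor from BIC values via the Schwarz approximation, using hypothetical fits of a simple and an extended noise model.

```python
import numpy as np

def bic(log_likelihood, k, n):
    """Bayesian Information Criterion for a fit with k parameters on n points."""
    return k * np.log(n) - 2 * log_likelihood

# Hypothetical fits of two competing noise models to the same data.
n = 5000
bic_simple = bic(-12450.0, k=3, n=n)
bic_extended = bic(-12441.0, k=6, n=n)

# Schwarz approximation: log BF(simple vs extended) ~ (BIC_extended - BIC_simple) / 2
log_bf = (bic_extended - bic_simple) / 2.0
print(f"approximate log Bayes factor in favour of the simple model: {log_bf:.2f}")
```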

  17. NEW EFFICIENT ESTIMATION AND VARIABLE SELECTION METHODS FOR SEMIPARAMETRIC VARYING-COEFFICIENT PARTIALLY LINEAR MODELS.

    Science.gov (United States)

    Kai, Bo; Li, Runze; Zou, Hui

    2011-02-01

    The complexity of semiparametric models poses new challenges to statistical inference and model selection that frequently arise from real applications. In this work, we propose new estimation and variable selection procedures for the semiparametric varying-coefficient partially linear model. We first study quantile regression estimates for the nonparametric varying-coefficient functions and the parametric regression coefficients. To achieve nice efficiency properties, we further develop a semiparametric composite quantile regression procedure. We establish the asymptotic normality of proposed estimators for both the parametric and nonparametric parts and show that the estimators achieve the best convergence rate. Moreover, we show that the proposed method is much more efficient than the least-squares-based method for many non-normal errors and that it only loses a small amount of efficiency for normal errors. In addition, it is shown that the loss in efficiency is at most 11.1% for estimating varying coefficient functions and is no greater than 13.6% for estimating parametric components. To achieve sparsity with high-dimensional covariates, we propose adaptive penalization methods for variable selection in the semiparametric varying-coefficient partially linear model and prove that the methods possess the oracle property. Extensive Monte Carlo simulation studies are conducted to examine the finite-sample performance of the proposed procedures. Finally, we apply the new methods to analyze the plasma beta-carotene level data.
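
    A minimal sketch of composite quantile regression for a plain linear model follows, assuming the usual formulation with a common slope and quantile-specific intercepts; this is a generic illustration of the CQR idea, not the authors' semiparametric varying-coefficient procedure.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n = 200
x = rng.normal(size=n)
y = 1.5 * x + rng.standard_t(df=3, size=n)   # heavy-tailed, non-normal errors

taus = np.arange(1, 10) / 10.0               # 9 composite quantile levels

def check_loss(u, tau):
    """Quantile (check) loss averaged over the sample."""
    return np.mean(u * (tau - (u < 0)))

def cqr_objective(params):
    # One shared slope, one intercept per quantile level.
    beta, intercepts = params[0], params[1:]
    return sum(check_loss(y - b - beta * x, t) for b, t in zip(intercepts, taus))

fit = minimize(cqr_objective, np.zeros(1 + len(taus)), method="Nelder-Mead",
               options={"maxiter": 20000, "xatol": 1e-6, "fatol": 1e-8})
print("composite quantile slope estimate:", fit.x[0])
```

    Combining several quantile levels is what buys the efficiency gains over least squares under heavy-tailed errors that the abstract quantifies.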

  18. Model selection in kernel ridge regression

    DEFF Research Database (Denmark)

    Exterkate, Peter

    2013-01-01

    Kernel ridge regression is a technique to perform ridge regression with a potentially infinite number of nonlinear transformations of the independent variables as regressors. This method is gaining popularity as a data-rich nonlinear forecasting tool, which is applicable in many different contexts....... The influence of the choice of kernel and the setting of tuning parameters on forecast accuracy is investigated. Several popular kernels are reviewed, including polynomial kernels, the Gaussian kernel, and the Sinc kernel. The latter two kernels are interpreted in terms of their smoothing properties......, and the tuning parameters associated to all these kernels are related to smoothness measures of the prediction function and to the signal-to-noise ratio. Based on these interpretations, guidelines are provided for selecting the tuning parameters from small grids using cross-validation. A Monte Carlo study...

  19. Model Selection in Kernel Ridge Regression

    DEFF Research Database (Denmark)

    Exterkate, Peter

    Kernel ridge regression is gaining popularity as a data-rich nonlinear forecasting tool, which is applicable in many different contexts. This paper investigates the influence of the choice of kernel and the setting of tuning parameters on forecast accuracy. We review several popular kernels......, including polynomial kernels, the Gaussian kernel, and the Sinc kernel. We interpret the latter two kernels in terms of their smoothing properties, and we relate the tuning parameters associated to all these kernels to smoothness measures of the prediction function and to the signal-to-noise ratio. Based...... on these interpretations, we provide guidelines for selecting the tuning parameters from small grids using cross-validation. A Monte Carlo study confirms the practical usefulness of these rules of thumb. Finally, the flexible and smooth functional forms provided by the Gaussian and Sinc kernels makes them widely...
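
    The papers' practical guidance, selecting the penalty and kernel parameters from small grids by cross-validation, can be sketched with scikit-learn; the grid values and toy data below are illustrative only, not the papers' recommended grids.

```python
import numpy as np
from sklearn.kernel_ridge import KernelRidge
from sklearn.model_selection import GridSearchCV

rng = np.random.default_rng(1)
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sinc(X).ravel() + 0.1 * rng.normal(size=200)   # smooth signal plus noise

# Small grids for the ridge penalty and the Gaussian-kernel bandwidth.
grid = {"alpha": [1e-3, 1e-2, 1e-1, 1.0], "gamma": [0.1, 0.5, 1.0, 2.0]}
search = GridSearchCV(KernelRidge(kernel="rbf"), grid, cv=5)
search.fit(X, y)
print(search.best_params_)
```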

  20. Development of SPAWM: selection program for available watershed models.

    Science.gov (United States)

    Cho, Yongdeok; Roesner, Larry A

    2014-01-01

    A selection program for available watershed models (also known as SPAWM) was developed. Thirty-three commonly used watershed models were analyzed in depth and classified in accordance with their attributes. These attributes consist of: (1) land use; (2) event-based or continuous; (3) time steps; (4) water quality; (5) distributed or lumped; (6) subsurface; (7) overland sediment; and (8) best management practices. Each of these attributes was further classified into sub-attributes. Based on user-selected sub-attributes, the most appropriate watershed model is selected from the library of watershed models. SPAWM is implemented using Excel Visual Basic and is designed for use by novices as well as by experts on watershed modeling. It ensures that the necessary sub-attributes required by the user are captured and made available in the selected watershed model.
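
    A miniature of the attribute-matching idea might look like the following; the model names and attribute values are placeholders, not SPAWM's actual library of thirty-three classified models.

```python
# Hypothetical miniature of the SPAWM idea: filter a model library by attributes.
library = [
    {"name": "ModelA", "continuous": True,  "water_quality": True,  "distributed": False},
    {"name": "ModelB", "continuous": True,  "water_quality": False, "distributed": False},
    {"name": "ModelC", "continuous": False, "water_quality": True,  "distributed": True},
]

def select_models(library, **required):
    """Return models whose attributes match every user-selected sub-attribute."""
    return [m["name"] for m in library
            if all(m.get(attr) == val for attr, val in required.items())]

print(select_models(library, continuous=True, water_quality=True))  # ['ModelA']
```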

  1. Boosting model performance and interpretation by entangling preprocessing selection and variable selection.

    Science.gov (United States)

    Gerretzen, Jan; Szymańska, Ewa; Bart, Jacob; Davies, Antony N; van Manen, Henk-Jan; van den Heuvel, Edwin R; Jansen, Jeroen J; Buydens, Lutgarde M C

    2016-09-28

    The aim of data preprocessing is to remove data artifacts, such as a baseline, scatter effects or noise, and to enhance the contextually relevant information. Many preprocessing methods exist to deliver one or more of these benefits, but it is difficult to select which method or combination of methods should be used for the specific data being analyzed. Recently, we have shown that a preprocessing selection approach based on Design of Experiments (DoE) enables correct selection of highly appropriate preprocessing strategies within reasonable time frames. In that approach, the focus was solely on improving the predictive performance of the chemometric model. This is, however, only one of the two relevant criteria in modeling: interpretation of the model results can be just as important. Variable selection is often used to achieve such interpretation. Data artifacts, however, may hamper proper variable selection by masking the truly relevant variables. The choice of preprocessing therefore has a huge impact on the outcome of variable selection methods and may thus hamper an objective interpretation of the final model. To enhance such objective interpretation, we here integrate variable selection into the preprocessing selection approach that is based on DoE. We show that the entanglement of preprocessing selection and variable selection not only improves the interpretation, but also the predictive performance of the model. This is achieved by analyzing several experimental data sets for which the truly relevant variables are available as prior knowledge. We show that a selection of variables is provided that complies more with the truly informative variables compared to individual optimization of both model aspects. Importantly, the approach presented in this work is generic. Different types of models (e.g. PCR, PLS, …) can be incorporated into it, as well as different variable selection methods and different preprocessing methods, according to the taste and experience of

  2. Quantile hydrologic model selection and model structure deficiency assessment: 2. Applications

    NARCIS (Netherlands)

    Pande, S.

    2013-01-01

    Quantile hydrologic model selection and structure deficiency assessment is applied in three case studies. The performance of the quantile model selection approach is rigorously evaluated using a model structure on the French Broad river basin data set. The case study shows that quantile model selection

  3. The genealogy of samples in models with selection.

    Science.gov (United States)

    Neuhauser, C; Krone, S M

    1997-02-01

    We introduce the genealogy of a random sample of genes taken from a large haploid population that evolves according to random reproduction with selection and mutation. Without selection, the genealogy is described by Kingman's well-known coalescent process. In the selective case, the genealogy of the sample is embedded in a graph with a coalescing and branching structure. We describe this graph, called the ancestral selection graph, and point out differences and similarities with Kingman's coalescent. We present simulations for a two-allele model with symmetric mutation in which one of the alleles has a selective advantage over the other. We find that when the allele frequencies in the population are already in equilibrium, then the genealogy does not differ much from the neutral case. This is supported by rigorous results. Furthermore, we describe the ancestral selection graph for other selective models with finitely many selection classes, such as the K-allele models, infinitely-many-alleles models, DNA sequence models, and infinitely-many-sites models, and briefly discuss the diploid case.

  4. Adapting AIC to conditional model selection

    NARCIS (Netherlands)

    M. van Ommen (Matthijs)

    2012-01-01

    In statistical settings such as regression and time series, we can condition on observed information when predicting the data of interest. For example, a regression model explains the dependent variables $y_1, \ldots, y_n$ in terms of the independent variables $x_1, \ldots, x_n$.

  5. Random effect selection in generalised linear models

    DEFF Research Database (Denmark)

    Denwood, Matt; Houe, Hans; Forkman, Björn

    We analysed abattoir recordings of meat inspection codes with possible relevance to on-farm animal welfare in cattle. Random effects logistic regression models were used to describe individual-level data obtained from 461,406 cattle slaughtered in Denmark. Our results demonstrate that the largest

  6. A Decision Model for Selecting Participants in Supply Chain

    Institute of Scientific and Technical Information of China (English)

    2001-01-01

    In order to satisfy the rapidly changing requirements of customers, enterprises must cooperate with each other to form a supply chain. The first and most important stage in forming a supply chain is the selection of participants. The article proposes a two-stage decision model to select partners. The first stage is an inter-company comparison within each business process to select high-efficiency candidates based on internal variables. The second stage is to analyse the combinations of different candidates in order to select the best partners according to a goal-programming model.

  7. Model selection in systems biology depends on experimental design.

    Science.gov (United States)

    Silk, Daniel; Kirk, Paul D W; Barnes, Chris P; Toni, Tina; Stumpf, Michael P H

    2014-06-01

    Experimental design attempts to maximise the information available for modelling tasks. An optimal experiment allows the inferred models or parameters to be chosen with the highest expected degree of confidence. If the true system is faithfully reproduced by one of the models, the merit of this approach is clear: we simply wish to identify it and the true parameters with the most certainty. However, in the more realistic situation where all models are incorrect or incomplete, the interpretation of model selection outcomes and the role of experimental design need to be examined more carefully. Using a novel experimental design and model selection framework for stochastic state-space models, we perform high-throughput in-silico analyses on families of gene regulatory cascade models to show that the selected model can depend on the experiment performed. We observe that experimental design thus makes confidence a criterion for model choice, but that this does not necessarily correlate with a model's predictive power or correctness. Finally, in the special case of linear ordinary differential equation (ODE) models, we explore how wrong a model has to be before it influences the conclusions of a model selection analysis.

  8. Bootstrap rank-ordered conditional mutual information (broCMI): A nonlinear input variable selection method for water resources modeling

    Science.gov (United States)

    Quilty, John; Adamowski, Jan; Khalil, Bahaa; Rathinasamy, Maheswaran

    2016-03-01

    The input variable selection problem has recently garnered much interest in the time series modeling community, especially within water resources applications, demonstrating that information-theoretic (nonlinear) input variable selection algorithms such as partial mutual information (PMI) selection (PMIS) provide an improved representation of the modeled process when compared to linear alternatives such as partial correlation input selection (PCIS). PMIS is a popular algorithm for water resources modeling problems considering nonlinear input variable selection; however, this method requires the specification of two nonlinear regression models, each with parametric settings that greatly influence the selected input variables. Other attempts to develop input variable selection methods using conditional mutual information (CMI) (an analog to PMI) have been formulated under different parametric pretenses such as k nearest-neighbor (KNN) statistics or kernel density estimates (KDE). In this paper, we introduce a new input variable selection method based on CMI that uses a nonparametric multivariate continuous probability estimator based on Edgeworth approximations (EA). We improve the EA method by accounting for the uncertainty in the input variable selection procedure, introducing a bootstrap resampling procedure that uses rank statistics to order the selected input sets; we name our proposed method bootstrap rank-ordered CMI (broCMI). We demonstrate the superior performance of broCMI when compared to CMI-based alternatives (EA, KDE, and KNN), PMIS, and PCIS input variable selection algorithms on a set of seven synthetic test problems and a real-world urban water demand (UWD) forecasting experiment in Ottawa, Canada.
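
    As a loose sketch of the bootstrap rank-ordering idea (using scikit-learn's marginal, KNN-based mutual information as a stand-in for the paper's Edgeworth-based conditional MI), one could bootstrap the MI estimates and average the resulting input ranks:

```python
import numpy as np
from sklearn.feature_selection import mutual_info_regression

rng = np.random.default_rng(4)
n = 500
X = rng.normal(size=(n, 5))                           # five candidate inputs
y = np.sin(X[:, 0]) + 0.5 * X[:, 2] ** 2 + 0.1 * rng.normal(size=n)

ranks = []
for _ in range(50):
    idx = rng.integers(0, n, n)                       # resample with replacement
    mi = mutual_info_regression(X[idx], y[idx], random_state=0)
    ranks.append(np.argsort(np.argsort(-mi)))         # rank 0 = most informative
print("mean rank per input:", np.mean(ranks, axis=0))
```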

  9. Modeling HIV-1 drug resistance as episodic directional selection.

    Directory of Open Access Journals (Sweden)

    Ben Murrell

    Full Text Available The evolution of substitutions conferring drug resistance to HIV-1 is both episodic, occurring when patients are on antiretroviral therapy, and strongly directional, with site-specific resistant residues increasing in frequency over time. While methods exist to detect episodic diversifying selection and continuous directional selection, no evolutionary model combining these two properties has been proposed. We present two models of episodic directional selection (MEDS and EDEPS), which allow the a priori specification of lineages expected to have undergone directional selection. The models infer the sites and target residues that were likely subject to directional selection, using either codon or protein sequences. Compared to its null model of episodic diversifying selection, MEDS provides a superior fit to most sites known to be involved in drug resistance, and neither a test for episodic diversifying selection nor one for constant directional selection is able to detect as many true positives as MEDS and EDEPS while maintaining acceptable levels of false positives. This suggests that episodic directional selection is a better description of the process driving the evolution of drug resistance.

  10. Using Video Modeling with Voiceover Instruction Plus Feedback to Train Staff to Implement Direct Teaching Procedures.

    Science.gov (United States)

    Giannakakos, Antonia R; Vladescu, Jason C; Kisamore, April N; Reeve, Sharon A

    2016-06-01

    Direct teaching procedures are often an important part of early intensive behavioral intervention for consumers with autism spectrum disorder. In the present study, a video model with voiceover (VMVO) instruction plus feedback was evaluated to train three staff trainees to implement a most-to-least direct (MTL) teaching procedure. Probes for generalization were conducted with untrained direct teaching procedures (i.e., least-to-most, prompt delay) and with an actual consumer. The results indicated that VMVO plus feedback was effective in training the staff trainees to implement the MTL procedure. Although additional feedback was required for the staff trainees to show mastery of the untrained direct teaching procedures (i.e., least-to-most and prompt delay) and with an actual consumer, moderate to high levels of generalization were observed.

  11. Asset pricing model selection: Indonesian Stock Exchange

    OpenAIRE

    Pasaribu, Rowland Bismark Fernando

    2010-01-01

    The Capital Asset Pricing Model (CAPM) has dominated finance theory for over thirty years; it suggests that the market beta alone is sufficient to explain stock returns. However, evidence shows that the cross-section of stock returns cannot be described solely by the one-factor CAPM. Therefore, the idea is to add other factors in order to complement the beta in explaining the price movements in the stock exchange. The Arbitrage Pricing Theory (APT) has been proposed as the first multifactor succ...
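
    The contrast between the one-factor CAPM and a multifactor specification amounts to adding columns to a regression design matrix, as in this sketch on synthetic returns; the factor series and loadings are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 250
market = rng.normal(0.0005, 0.01, n)             # hypothetical daily market excess returns
size_factor = rng.normal(0, 0.005, n)            # illustrative additional factor
stock = 0.9 * market + 0.3 * size_factor + rng.normal(0, 0.008, n)

# CAPM: regress the stock's excess return on the market factor alone.
X_capm = np.column_stack([np.ones(n), market])
beta_capm, *_ = np.linalg.lstsq(X_capm, stock, rcond=None)

# Multifactor (APT-style): add further factors to the design matrix.
X_apt = np.column_stack([np.ones(n), market, size_factor])
beta_apt, *_ = np.linalg.lstsq(X_apt, stock, rcond=None)
print("CAPM beta:", beta_capm[1], "multifactor betas:", beta_apt[1:])
```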

  12. A mixed model reduction method for preserving selected physical information

    Science.gov (United States)

    Zhang, Jing; Zheng, Gangtie

    2017-03-01

    A new model reduction method in the frequency domain is presented. By combining model reduction techniques from the time domain and the frequency domain, the dynamic model is condensed to selected physical coordinates, and the contribution of slave degrees of freedom is taken as a modification to the model in the form of the effective modal mass of virtually constrained modes. The reduced model preserves the physical information related to the selected physical coordinates, such as physical parameters and the physical space positions of the corresponding structure components. For cases of non-classical damping, the method is extended to model reduction in the state space, but it still contains only the selected physical coordinates. Numerical results are presented to validate the method and show the effectiveness of the model reduction.

  13. Bias in the Listeria monocytogenes enrichment procedure: Lineage 2 strains outcompete lineage 1 strains in University of Vermont selective enrichments

    DEFF Research Database (Denmark)

    Bruhn, Jesper Bartholin; Vogel, Birte Fonnesbech; Gram, Lone

    2005-01-01

    Listeria monocytogenes can be isolated from a range of food products and may cause food-borne outbreaks or sporadic cases of listeriosis. L. monocytogenes is divided into three genetic lineages and 13 serotypes. Strains of three serotypes (1/2a, 1/2b, and 4b) are associated with most human cases...... of listeriosis. Of these, strains of serotypes 1/2b and 4b belong to lineage 1, whereas strains of serotype 1/2a and many other strains isolated from foods belong to lineage 2. L. monocytogenes is isolated from foods by selective enrichment procedures and from patients by nonselective methods. The aim......, as lineage 1 strains, which are often isolated from clinical cases of listeriosis, may be suppressed during enrichment by other L. monocytogenes lineages present in a food sample...

  14. Usefulness of a PARAFAC decomposition in the fiber selection procedure to determine chlorophenols by means of SPME-GC-MS

    Energy Technology Data Exchange (ETDEWEB)

    Morales, Rocio; Ortiz, M.C. [University of Burgos, Department of Chemistry, Faculty of Sciences, Burgos (Spain)]; Sarabia, Luis A. [University of Burgos, Department of Mathematics and Computation, Faculty of Sciences, Burgos (Spain)]

    2012-05-15

    In this work, a procedure based on solid-phase microextraction and gas chromatography coupled with mass spectrometry is proposed to determine chlorophenols in water without derivatization. The following chlorophenols are studied: 2,4-dichlorophenol; 2,4,6-trichlorophenol; 2,3,4,6-tetrachlorophenol and pentachlorophenol. Three kinds of SPME fibers, polyacrylate, polydimethylsiloxane, and polydimethylsiloxane/divinylbenzene, are compared to identify the most suitable one for the extraction process on the basis of two criteria: (a) to select the equilibrium time by studying the kinetics of the extraction, and (b) to obtain the best values of the figures of merit. In both cases, a three-way PARAllel FACtor analysis (PARAFAC) decomposition is used. For the first step, the three-way experimental data are arranged as follows: if I extraction times are considered, the tensor of data, X, of dimensions I x J x K, is generated by concatenating the I matrices formed by the abundances of the J m/z ions recorded at K elution times around the retention time of each chlorophenol. The second-order property of PARAFAC (or PARAFAC2) ensures the unequivocal identification of each chlorophenol; as a consequence, the loadings in the first mode estimated by the PARAFAC decomposition constitute the kinetic profile. For the second step, a calibration based on a PARAFAC decomposition is used for each fiber. The best figures of merit were obtained with the PDMS/DVB fiber. The values of the decision limit, CCα, achieved are between 0.29 and 0.67 μg L⁻¹ for the four chlorophenols. The accuracy (trueness and precision) of the procedure was assessed. This procedure has been applied to river water samples.
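
    A generic trilinear PARAFAC fit on a synthetic tensor with the same mode structure (extraction times x m/z ions x elution times) might look as follows, assuming a recent version of the tensorly package; the data are random placeholders, not chromatographic measurements.

```python
import numpy as np
import tensorly as tl
from tensorly.decomposition import parafac

rng = np.random.default_rng(6)
# Hypothetical I x J x K tensor: extraction times x m/z ions x elution times.
I, J, K, R = 6, 20, 30, 2
A, B, C = rng.random((I, R)), rng.random((J, R)), rng.random((K, R))
X = np.einsum('ir,jr,kr->ijk', A, B, C) + 0.01 * rng.normal(size=(I, J, K))

weights, factors = parafac(tl.tensor(X), rank=R)  # ALS fit of the trilinear CP model
kinetic_profile = factors[0]   # first-mode loadings play the role of the kinetic profile
print(kinetic_profile.shape)   # (extraction times, rank)
```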

  15. Sensitivity Analysis to Select the Most Influential Risk Factors in a Logistic Regression Model

    Directory of Open Access Journals (Sweden)

    Jassim N. Hussain

    2008-01-01

    Full Text Available The traditional variable selection methods for survival data depend on iterative procedures and on tuning parameters whose control is problematic and time-consuming, especially when the models are complex and have a large number of risk factors. In this paper, we propose a new method based on global sensitivity analysis (GSA) to select the most influential risk factors. This contributes to simplifying the logistic regression model by excluding irrelevant risk factors, thus eliminating the need to fit and evaluate a large number of models. Data from medical trials are used to test the efficiency and capability of this method and to simplify the model. This leads to the construction of an appropriate model. The proposed method ranks the risk factors according to their importance.
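
    The paper's GSA ranking is a variance-based global method; as a simple stand-in illustrating the idea of ranking risk factors by their influence on a fitted logistic model, permutation importance can be used (all data below are synthetic):

```python
import numpy as np
from sklearn.inspection import permutation_importance
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(5)
n = 400
X = rng.normal(size=(n, 6))                      # six candidate risk factors
logit = 1.2 * X[:, 0] - 0.8 * X[:, 3]            # only factors 0 and 3 matter
y = rng.random(n) < 1 / (1 + np.exp(-logit))

clf = LogisticRegression().fit(X, y)
imp = permutation_importance(clf, X, y, n_repeats=30, random_state=0)
order = np.argsort(-imp.importances_mean)
print("risk factors ranked by influence:", order)
```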

  16. Selection of probability based weighting models for Boolean retrieval system

    Energy Technology Data Exchange (ETDEWEB)

    Ebinuma, Y. (Japan Atomic Energy Research Inst., Tokai, Ibaraki. Tokai Research Establishment)

    1981-09-01

    Automatic weighting models based on probability theory were studied to determine whether they can be applied to Boolean search logic including the logical sum. The INIS database was used for searches with one particular search formula. Among sixteen models, three with good ranking performance were selected. These three models were further applied to searches with nine search formulas in the same database. It was found that two of these models show slightly better average ranking performance, while the other model, the simplest one, also seems practical.

  17. Model Selection Through Sparse Maximum Likelihood Estimation

    CERN Document Server

    Banerjee, Onureena; D'Aspremont, Alexandre

    2007-01-01

    We consider the problem of estimating the parameters of a Gaussian or binary distribution in such a way that the resulting undirected graphical model is sparse. Our approach is to solve a maximum likelihood problem with an added l_1-norm penalty term. The problem as formulated is convex, but the memory requirements and complexity of existing interior point methods are prohibitive for problems with more than tens of nodes. We present two new algorithms for solving problems with at least a thousand nodes in the Gaussian case. Our first algorithm uses block coordinate descent, and can be interpreted as recursive l_1-norm penalized regression. Our second algorithm, based on Nesterov's first order method, yields a complexity estimate with a better dependence on problem size than existing interior point methods. Using a log determinant relaxation of the log partition function (Wainwright & Jordan, 2006), we show that these same algorithms can be used to solve an approximate sparse maximum likelihood problem for...
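
    The l_1-penalized Gaussian problem described here is available in scikit-learn under the name graphical lasso (with its own solver, not necessarily the algorithms of this paper); a minimal usage sketch on synthetic data with a sparse dependence structure:

```python
import numpy as np
from sklearn.covariance import GraphicalLassoCV

rng = np.random.default_rng(3)
# Tridiagonal true covariance: each variable depends only on its neighbours.
cov = np.eye(5) + 0.4 * np.eye(5, k=1) + 0.4 * np.eye(5, k=-1)
X = rng.multivariate_normal(np.zeros(5), cov, size=500)

model = GraphicalLassoCV().fit(X)        # l1-penalized Gaussian MLE, CV-chosen penalty
print(np.round(model.precision_, 2))     # zeros in the precision matrix encode the graph
```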

  18. Sensitivity of resource selection and connectivity models to landscape definition

    Science.gov (United States)

    Katherine A. Zeller; Kevin McGarigal; Samuel A. Cushman; Paul Beier; T. Winston Vickers; Walter M. Boyce

    2017-01-01

    Context: The definition of the geospatial landscape is the underlying basis for species-habitat models, yet sensitivity of habitat use inference, predicted probability surfaces, and connectivity models to landscape definition has received little attention. Objectives: We evaluated the sensitivity of resource selection and connectivity models to four landscape...

  19. A Working Model of Natural Selection Illustrated by Table Tennis

    Science.gov (United States)

    Dinc, Muhittin; Kilic, Selda; Aladag, Caner

    2013-01-01

    Natural selection is one of the most important topics in biology and it helps to clarify the variety and complexity of organisms. However, students in almost every stage of education find it difficult to understand the mechanism of natural selection and they can develop misconceptions about it. This article provides an active model of natural…

  20. Elementary Teachers' Selection and Use of Visual Models

    Science.gov (United States)

    Lee, Tammy D.; Gail Jones, M.

    2017-07-01

    As science grows in complexity, science teachers face an increasing challenge of helping students interpret models that represent complex science systems. Little is known about how teachers select and use models when planning lessons. This mixed methods study investigated the pedagogical approaches and visual models used by elementary in-service and preservice teachers in the development of a science lesson about a complex system (e.g., water cycle). Sixty-seven elementary in-service and 69 elementary preservice teachers completed a card sort task designed to document the types of visual models (e.g., images) that teachers choose when planning science instruction. Quantitative and qualitative analyses were conducted to analyze the card sort task. Semistructured interviews were conducted with a subsample of teachers to elicit the rationale for image selection. Results from this study showed that both experienced in-service teachers and novice preservice teachers tended to select similar models and use similar rationales for images to be used in lessons. Teachers tended to select models that were aesthetically pleasing and simple in design and illustrated specific elements of the water cycle. The results also showed that teachers were not likely to select images that represented the less obvious dimensions of the water cycle. Furthermore, teachers selected visual models more as a pedagogical tool to illustrate specific elements of the water cycle and less often as a tool to promote student learning related to complex systems.

  1. Fluctuating selection models and McDonald-Kreitman type analyses.

    Directory of Open Access Journals (Sweden)

    Toni I Gossmann

    Full Text Available It is likely that the strength of selection acting upon a mutation varies through time due to changes in the environment. However, most population genetic theory assumes that the strength of selection remains constant. Here we investigate the consequences of fluctuating selection pressures on the quantification of adaptive evolution using McDonald-Kreitman (MK) style approaches. In agreement with previous work, we show that fluctuating selection can generate evidence of adaptive evolution even when the expected strength of selection on a mutation is zero. However, we also find that the mutations which contribute to both polymorphism and divergence tend, on average, to be positively selected during their lifetime under fluctuating selection models. This is because mutations that fluctuate, by chance, to positively selected values tend to reach higher frequencies in the population than those that fluctuate towards negative values. Hence the evidence of positive adaptive evolution detected under a fluctuating selection model by MK type approaches is genuine, since fixed mutations tend to be advantageous on average during their lifetime. Nevertheless, we show that these methods tend to underestimate the rate of adaptive evolution when selection fluctuates.

  2. The Optimal Portfolio Selection Model under g-Expectation

    National Research Council Canada - National Science Library

    Li Li

    2014-01-01

    This paper solves the optimal portfolio selection model under the framework of the prospect theory proposed by Kahneman and Tversky in the 1970s, with the decision rule replaced by the g-expectation introduced by Peng...

  3. Formulating "Principles of Procedure" for the Foreign Language Classroom: A Framework for Process Model Language Curricula

    Science.gov (United States)

    Villacañas de Castro, Luis S.

    2016-01-01

    This article aims to apply Stenhouse's process model of curriculum to foreign language (FL) education, a model which is characterized by enacting "principles of procedure" which are specific to the discipline which the school subject belongs to. Rather than to replace or dissolve current approaches to FL teaching and curriculum…

  4. Robust Decision-making Applied to Model Selection

    Energy Technology Data Exchange (ETDEWEB)

    Hemez, Francois M. [Los Alamos National Laboratory

    2012-08-06

    The scientific and engineering communities are relying more and more on numerical models to simulate increasingly complex phenomena. Selecting a model, from among a family of models that meets the simulation requirements, presents a challenge to modern-day analysts. To address this concern, a framework anchored in info-gap decision theory is adopted. The framework proposes to select models by examining the trade-offs between prediction accuracy and sensitivity to epistemic uncertainty. The framework is demonstrated on two structural engineering applications by asking the following question: Which model, of several numerical models, approximates the behavior of a structure when the parameters that define each of those models are unknown? One observation is that models that are nominally more accurate are not necessarily more robust, and their accuracy can deteriorate greatly depending upon the assumptions made. It is posited that, as reliance on numerical models increases, establishing robustness will become as important as demonstrating accuracy.

  5. Information-theoretic model selection applied to supernovae data

    CERN Document Server

    Biesiada, M

    2007-01-01

    There are several different theoretical ideas invoked to explain dark energy, with relatively little guidance as to which one of them might be right. Therefore the emphasis of ongoing and forthcoming research in this field shifts from estimating specific parameters of a cosmological model to model selection. In this paper we apply an information-theoretic model selection approach based on the Akaike criterion as an estimator of Kullback-Leibler entropy. In particular, we present the proper way of ranking the competing models based on Akaike weights (in Bayesian language, posterior probabilities of the models). Out of many particular models of dark energy we focus on four: quintessence, quintessence with a time-varying equation of state, brane-world, and generalized Chaplygin gas models, and test them on Riess' Gold sample. As a result we obtain that the best model, in terms of the Akaike criterion, is the quintessence model. The odds suggest that although there exist differences in the support given to specific scenario...
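
    Akaike weights are straightforward to compute from a set of AIC values; the sketch below uses invented AIC numbers for the four dark energy models, purely to show the ranking mechanics, not the paper's actual results.

```python
import numpy as np

# Hypothetical AIC values for four dark energy models fitted to the same SNIa sample.
aic = {"quintessence": 212.4, "var-EoS quintessence": 214.1,
       "brane-world": 215.0, "Chaplygin gas": 213.3}

values = np.array(list(aic.values()))
delta = values - values.min()                       # AIC differences
weights = np.exp(-delta / 2) / np.exp(-delta / 2).sum()
for name, w in zip(aic, weights):
    print(f"{name}: Akaike weight = {w:.3f}")       # weights sum to 1 and give the odds
```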

  6. Variable Selection in the Partially Linear Errors-in-Variables Models for Longitudinal Data

    Institute of Scientific and Technical Information of China (English)

    Yi-ping YANG; Liu-gen XUE; Wei-hu CHENG

    2012-01-01

    This paper proposes a new approach for variable selection in partially linear errors-in-variables (EV) models for longitudinal data by penalizing appropriate estimating functions. We apply the SCAD penalty to simultaneously select significant variables and estimate unknown parameters. The rate of convergence and the asymptotic normality of the resulting estimators are established. Furthermore, with a proper choice of regularization parameters, we show that the proposed estimators perform as well as the oracle procedure. A new algorithm is proposed for solving the penalized estimating equation. The asymptotic results are augmented by a simulation study.

  7. Sensor Optimization Selection Model Based on Testability Constraint

    Institute of Scientific and Technical Information of China (English)

    YANG Shuming; QIU Jing; LIU Guanjun

    2012-01-01

    Sensor selection and optimization is one of the important parts of design for testability. To address the problems that the traditional sensor optimization selection model does not take into account the testability requirements of prognostics and health management, especially fault prognostics, and does not consider the impact of actual sensor attributes on fault detectability, a novel sensor optimization selection model is proposed. Firstly, a universal architecture for sensor selection and optimization is provided. Secondly, a new testability index named the fault predictable rate is defined to describe the fault prognostics requirements for testability. Thirdly, a sensor selection and optimization model for prognostics and health management is constructed, which takes sensor cost as the objective function and the defined testability indexes as constraint conditions. Due to the NP-hard property of the model, a genetic algorithm is designed to obtain the optimal solution. At last, a case study is presented to demonstrate the sensor selection approach for a stable tracking servo platform. The application results and comparison analysis show that the proposed model and algorithm are effective and feasible. This approach can be used to select sensors for prognostics and health management of any system.
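
    A toy version of this constrained sensor selection could be solved with a simple genetic algorithm like the sketch below; the costs, detection matrix, and the 90% fault-coverage constraint standing in for the testability indexes are all invented.

```python
import numpy as np

rng = np.random.default_rng(7)
n_sensors, n_faults = 8, 5
cost = rng.uniform(1, 5, n_sensors)
detects = rng.random((n_sensors, n_faults)) < 0.4    # sensor i detects fault j

def fitness(mask):
    """Total cost plus a penalty when fault coverage falls below 90%."""
    sel = mask.astype(bool)
    covered = detects[sel].any(axis=0).mean() if sel.any() else 0.0
    penalty = 0.0 if covered >= 0.9 else 100 * (0.9 - covered)
    return cost[sel].sum() + penalty                 # lower is better

pop = rng.integers(0, 2, (40, n_sensors))
for _ in range(200):                                 # simple genetic algorithm loop
    scores = np.array([fitness(ind) for ind in pop])
    parents = pop[np.argsort(scores)[:20]]           # truncation selection
    cuts = rng.integers(1, n_sensors, 20)
    children = np.array([np.concatenate([parents[i][:c], parents[(i + 1) % 20][c:]])
                         for i, c in enumerate(cuts)])  # one-point crossover
    mutate = rng.random(children.shape) < 0.05
    children[mutate] ^= 1                            # bit-flip mutation
    pop = np.vstack([parents, children])
best = pop[np.argmin([fitness(ind) for ind in pop])]
print("selected sensors:", np.flatnonzero(best))
```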

  8. SELECTION MOMENTS AND GENERALIZED METHOD OF MOMENTS FOR HETEROSKEDASTIC MODELS

    Directory of Open Access Journals (Sweden)

    Constantin ANGHELACHE

    2016-06-01

    Full Text Available In this paper, the authors describe moment selection methods and the application of the generalized method of moments (GMM) to heteroskedastic models. The utility of GMM estimators is found in the study of financial market models. The moment selection criteria are applied for the efficient GMM estimation of univariate time series with martingale difference errors, similar to those studied so far by Kuersteiner.

  9. Modeling Suspicious Email Detection using Enhanced Feature Selection

    OpenAIRE

    2013-01-01

    The paper presents a suspicious email detection model which incorporates enhanced feature selection. In the paper we propose the use of feature selection strategies along with classification techniques for terrorist email detection. The presented model focuses on the evaluation of machine learning algorithms such as decision tree (ID3), logistic regression, Naïve Bayes (NB), and Support Vector Machine (SVM) for detecting emails containing suspicious content. In the literature, various algo...
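
    One of the evaluated classifiers, Naïve Bayes, combined with a feature selection step, can be sketched with scikit-learn as follows; the four-message corpus, the labels, and the choice of chi-square selection are toy placeholders, not the paper's pipeline.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Tiny hypothetical corpus; real work would use a labeled email dataset.
emails = ["meeting at noon tomorrow", "transfer the funds tonight",
          "lunch plans this week", "attack plan attached"]
labels = [0, 1, 0, 1]                           # 1 = suspicious (illustrative only)

model = make_pipeline(TfidfVectorizer(),
                      SelectKBest(chi2, k=5),   # keep the 5 most informative terms
                      MultinomialNB())
model.fit(emails, labels)
print(model.predict(["transfer plan tonight"]))
```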

  10. RUC at TREC 2014: Select Resources Using Topic Models

    Science.gov (United States)

    2014-11-01

    them being observed (i.e. sampled). To infer the topic... Qiuyue Wang, Shaochen Shi, Wei Cao, School of Information, Renmin University of China, Beijing.

  11. Continuous time limits of the utterance selection model

    Science.gov (United States)

    Michaud, Jérôme

    2017-02-01

    In this paper we derive alternative continuous time limits of the utterance selection model (USM) for language change [G. J. Baxter et al., Phys. Rev. E 73, 046118 (2006), 10.1103/PhysRevE.73.046118]. This is motivated by the fact that the Fokker-Planck continuous time limit derived in the original version of the USM is only valid for a small range of parameters. We investigate the consequences of relaxing these constraints on parameters. Using the normal approximation of the multinomial distribution, we derive a continuous time limit of the USM in the form of a weak-noise stochastic differential equation. We argue that this weak noise, not captured by the Kramers-Moyal expansion, cannot be neglected. We then propose a coarse-graining procedure, which takes the form of a stochastic version of the heterogeneous mean field approximation. This approximation groups the behavior of nodes of the same degree, reducing the complexity of the problem. With the help of this approximation, we study in detail two simple families of networks: the regular networks and the star-shaped networks. The analysis reveals and quantifies a finite-size effect of the dynamics. If we increase the size of the network while keeping all the other parameters constant, we transition from a state where conventions emerge to a state where no convention emerges. Furthermore, we show that the degree of a node acts as a time scale. For heterogeneous networks such as star-shaped networks, the time scale difference can become very large, leading to a noisier behavior of highly connected nodes.

  12. Selection Criteria in Regime Switching Conditional Volatility Models

    Directory of Open Access Journals (Sweden)

    Thomas Chuffart

    2015-05-01

    Full Text Available A large number of nonlinear conditional heteroskedastic models have been proposed in the literature. Model selection is crucial to any statistical data analysis. In this article, we investigate whether the most commonly used selection criteria lead to the choice of the right specification in a regime-switching framework. We focus on two types of models: the Logistic Smooth Transition GARCH and the Markov-Switching GARCH models. Simulation experiments reveal that information criteria and loss functions can lead to misspecification; BIC sometimes indicates the wrong regime-switching framework. Depending on the data generating process used in the experiments, great care is needed when choosing a criterion.

  13. A guide to Bayesian model selection for ecologists

    Science.gov (United States)

    Hooten, Mevin B.; Hobbs, N.T.

    2015-01-01

    The steady upward trend in the use of model selection and Bayesian methods in ecological research has made it clear that both approaches to inference are important for modern analysis of models and data. However, in teaching Bayesian methods and in working with our research colleagues, we have noticed a general dissatisfaction with the available literature on Bayesian model selection and multimodel inference. Students and researchers new to Bayesian methods quickly find that the published advice on model selection is often preferential in its treatment of options for analysis, frequently advocating one particular method above others. The recent appearance of many articles and textbooks on Bayesian modeling has provided welcome background on relevant approaches to model selection in the Bayesian framework, but most of these are either very narrowly focused in scope or inaccessible to ecologists. Moreover, the methodological details of Bayesian model selection approaches are spread thinly throughout the literature, appearing in journals from many different fields. Our aim with this guide is to condense the large body of literature on Bayesian approaches to model selection and multimodel inference and present it specifically for quantitative ecologists as neutrally as possible. We also bring to light a few important and fundamental concepts relating directly to model selection that seem to have gone unnoticed in the ecological literature. Throughout, we provide only a minimal discussion of philosophy, preferring instead to examine the breadth of approaches as well as their practical advantages and disadvantages. This guide serves as a reference for ecologists using Bayesian methods, so that they can better understand their options and can make an informed choice that is best aligned with their goals for inference.

  14. The Use of Evolution in a Central Action Selection Model

    Directory of Open Access Journals (Sweden)

    F. Montes-Gonzalez

    2007-01-01

    Full Text Available The use of effective central selection provides flexibility in design by offering modularity and extensibility. In earlier papers we have focused on the development of a simple centralized selection mechanism. Our current goal is to integrate evolutionary methods in the design of non-sequential behaviours and the tuning of specific parameters of the selection model. The foraging behaviour of an animal robot (animat) has been modelled in order to integrate the sensory information from the robot to perform selection that is nearly optimized by the use of genetic algorithms. In this paper we present how selection through optimization finally arranges the pattern of presented behaviours for the foraging task. Hence, the execution of specific parts in a behavioural pattern may be ruled out by the tuning of these parameters. Furthermore, the intensive use of colour segmentation from a colour camera for locating a cylinder places a burden on the calculations carried out by the genetic algorithm.

  15. Partner Selection Optimization Model of Agricultural Enterprises in Supply Chain

    Directory of Open Access Journals (Sweden)

    Feipeng Guo

    2013-10-01

    Full Text Available As correctly selecting partners in the supply chains of agricultural enterprises becomes more and more important, a large number of partner evaluation techniques are widely used in the field of agricultural science research. This study established a partner selection model to address the problem of agricultural supply chain partner selection. Firstly, a comprehensive evaluation index system was constructed after analyzing the real characteristics of the agricultural supply chain. Secondly, a heuristic method for attribute reduction based on rough set theory and principal component analysis was proposed, which can reduce multiple attributes to a few principal components while retaining the effective evaluation information. Finally, an improved BP neural network, which has a self-learning capability, was used to select partners. An empirical analysis of an agricultural enterprise shows that this model is effective and feasible for practical partner selection.

  16. A Hybrid Multiple Criteria Decision Making Model for Supplier Selection

    Directory of Open Access Journals (Sweden)

    Chung-Min Wu

    2013-01-01

    Full Text Available Sustainable supplier selection is a vital part of the management of a sustainable supply chain. In this study, a hybrid multiple criteria decision making (MCDM) model is applied to select the optimal supplier. The fuzzy Delphi method, which can lead to better criteria selection, is used to modify the criteria. Considering the interdependence among the selection criteria, the analytic network process (ANP) is then used to obtain their weights. To avoid the calculations and additional pairwise comparisons of ANP, a technique for order preference by similarity to ideal solution (TOPSIS) is used to rank the alternatives. The use of a combination of the fuzzy Delphi method, ANP, and TOPSIS, proposing an MCDM model for supplier selection, and applying these to a real case are the unique features of this study.
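
    TOPSIS itself is compact enough to sketch directly: normalize the decision matrix, weight it (here with weights that could come from ANP), and rank suppliers by closeness to the ideal solution. The decision matrix, weights, and criterion directions below are invented for illustration.

```python
import numpy as np

def topsis(matrix, weights, benefit):
    """Rank alternatives: rows = suppliers, columns = criteria.
    benefit[j] is True if higher is better on criterion j."""
    m = matrix / np.linalg.norm(matrix, axis=0)          # vector normalization
    v = m * weights                                      # weighted normalized matrix
    ideal = np.where(benefit, v.max(axis=0), v.min(axis=0))
    worst = np.where(benefit, v.min(axis=0), v.max(axis=0))
    d_pos = np.linalg.norm(v - ideal, axis=1)
    d_neg = np.linalg.norm(v - worst, axis=1)
    return d_neg / (d_pos + d_neg)                       # closeness coefficient

scores = topsis(np.array([[7, 9, 4], [8, 7, 6], [6, 8, 9]], float),
                weights=np.array([0.5, 0.3, 0.2]),       # e.g., obtained from ANP
                benefit=np.array([True, True, False]))   # third criterion is a cost
print(scores)  # higher = closer to the ideal supplier
```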

  17. Modeling and Solving the Liner Shipping Service Selection Problem

    DEFF Research Database (Denmark)

    Karsten, Christian Vad; Balakrishnan, Anant

    served less shipping costs. We propose a new hop-constrained multi-commodity arc flow model for the LSSSP that is based on an augmented network containing, for each candidate route, an arc (representing a sub-path) between every pair of ports that the route visits. This sub-path construct permits us...... to accurately model transshipment costs and incorporate routing policies such as maximum transit time, maritime cabotage rules, and operational alliances. Our hop-indexed arc flow model is smaller and easier to solve than path flow models. We outline a preprocessing procedure that exploits both the routing...

  18. Procedural learning as a measure of functional impairment in a mouse model of ischemic stroke.

    Science.gov (United States)

    Linden, Jérôme; Van de Beeck, Lise; Plumier, Jean-Christophe; Ferrara, André

    2016-07-01

    Basal ganglia stroke is often associated with functional deficits in patients, including difficulties in learning and executing new motor skills (procedural learning). To measure procedural learning in a murine model of stroke (30 min right MCAO), we submitted C57Bl/6J mice to various sensorimotor tests and then to an operant procedure (Serial Order Learning) specifically assessing the ability to learn a simple motor sequence. Results showed that MCAO affected performance in some of the sensorimotor tests (accelerated rotating rod and amphetamine rotation test) and the way animals learned a motor sequence. The latter finding seems to be caused by difficulties in chunking operant actions into a coherent motor sequence; the appeal of food rewards and the ability to press levers appeared unaffected by MCAO. We conclude that assessment of motor learning in rodent models of stroke might improve the translational value of such models.

  19. Selection Bias in Educational Transition Models: Theory and Empirical Evidence

    DEFF Research Database (Denmark)

    Holm, Anders; Jæger, Mads

    Most studies using Mare’s (1980, 1981) seminal model of educational transitions find that the effect of family background decreases across transitions. Recently, Cameron and Heckman (1998, 2001) have argued that the “waning coefficients” in the Mare model are driven by selection on unobserved...... the United States, United Kingdom, Denmark, and the Netherlands shows that when we take selection into account the effect of family background variables on educational transitions is largely constant across transitions. We also discuss several difficulties in estimating educational transition models which...... variables. This paper, first, explains theoretically how selection on unobserved variables leads to waning coefficients and, second, illustrates empirically how selection leads to biased estimates of the effect of family background on educational transitions. Our empirical analysis using data from...

  20. Multicriteria framework for selecting a process modelling language

    Science.gov (United States)

    Scanavachi Moreira Campos, Ana Carolina; Teixeira de Almeida, Adiel

    2016-01-01

    The choice of process modelling language can affect business process management (BPM), since each modelling language shows different features of a given process and may limit the ways in which a process can be described and analysed. However, choosing the appropriate modelling language for process modelling has become a difficult task because of the availability of a large number of modelling languages and also due to the lack of guidelines on evaluating and comparing languages so as to assist in selecting the most appropriate one. This paper proposes a framework for selecting a modelling language in accordance with the purposes of modelling. This framework is based on the semiotic quality framework (SEQUAL) for evaluating process modelling languages and a multicriteria decision aid (MCDA) approach in order to select the most appropriate language for BPM. This study does not attempt to set out new forms of assessment and evaluation criteria, but does attempt to demonstrate how two existing approaches can be combined so as to solve the problem of selecting a modelling language. The framework is described in this paper and then demonstrated by means of an example. Finally, the advantages and disadvantages of using SEQUAL and MCDA in an integrated manner are discussed.

  1. Comparison of climate envelope models developed using expert-selected variables versus statistical selection

    Science.gov (United States)

    Brandt, Laura A.; Benscoter, Allison; Harvey, Rebecca G.; Speroterra, Carolina; Bucklin, David N.; Romanach, Stephanie; Watling, James I.; Mazzotti, Frank J.

    2017-01-01

    Climate envelope models are widely used to describe the potential future distribution of species under different climate change scenarios. It is broadly recognized that there are both strengths and limitations to using climate envelope models and that outcomes are sensitive to initial assumptions, inputs, and modeling methods. Selection of predictor variables, a central step in modeling, is one of the areas where different techniques can yield varying results. Selection of climate variables to use as predictors is often done using statistical approaches that develop correlations between occurrences and climate data. These approaches have received criticism in that they rely on the statistical properties of the data rather than directly incorporating biological information about species responses to temperature and precipitation. We evaluated and compared models and prediction maps for 15 threatened or endangered species in Florida based on two variable selection techniques: expert opinion and a statistical method. We compared model performance between these two approaches for contemporary predictions, and the spatial correlation, spatial overlap, and area predicted for contemporary and future climate predictions. In general, experts identified more variables as being important than the statistical method did, and there was low overlap in the variable sets. Models had high performance metrics (>0.9 for area under the curve (AUC) and >0.7 for true skill statistic (TSS)). Spatial overlap, which compares the spatial configuration between maps constructed using the different variable selection techniques, was only moderate overall (about 60%), with a great deal of variability across species. Differences in spatial overlap were even greater under future climate projections, indicating additional divergence of model outputs from different variable selection techniques. Our work is in agreement with other studies which have found that for broad-scale species distribution modeling, using

  2. The Chain-Link Fence Model: A Framework for Creating Security Procedures

    OpenAIRE

    Houghton, Robert F.

    2013-01-01

    A long standing problem in information technology security is how to help reduce the security footprint. Many specific proposals exist to address specific problems in information technology security. Most information technology solutions need to be repeatable throughout the course of an information systems lifecycle. The Chain-Link Fence Model is a new model for creating and implementing information technology procedures. This model was validated by two different methods: the first being int...

  3. A network society communicative model for optimizing the Refugee Status Determination (RSD) procedures

    Directory of Open Access Journals (Sweden)

    Andrea Pacheco Pacífico

    2013-01-01

    Full Text Available This article recommends a new way to improve Refugee Status Determination (RSD) procedures by proposing a network society communicative model based on active involvement and dialogue among all implementing partners. This model, drawing on proposals from Castells, Habermas, Apel, Chimni, and Betts, would be mediated by the United Nations High Commissioner for Refugees (UNHCR), whose role would be modeled after the practice of the International Committee of the Red Cross (ICRC).

  4. Models of microbiome evolution incorporating host and microbial selection.

    Science.gov (United States)

    Zeng, Qinglong; Wu, Steven; Sukumaran, Jeet; Rodrigo, Allen

    2017-09-25

    Numerous empirical studies suggest that hosts and microbes exert reciprocal selective effects on their ecological partners. Nonetheless, we still lack an explicit framework to model the dynamics of both hosts and microbes under selection. In a previous study, we developed an agent-based forward-time computational framework to simulate the neutral evolution of host-associated microbial communities in a constant-sized, unstructured population of hosts. These neutral models allowed offspring to sample microbes randomly from parents and/or from the environment. Additionally, the environmental pool of available microbes was constituted by fixed and persistent microbial OTUs and by contributions from host individuals in the preceding generation. In this paper, we extend our neutral models to allow selection to operate on both hosts and microbes. We do this by constructing a phenome for each microbial OTU consisting of a sample of traits that influence host and microbial fitnesses independently. Microbial traits can influence the fitness of hosts ("host selection") and the fitness of microbes ("trait-mediated microbial selection"). Additionally, the fitness effects of traits on microbes can be modified by their hosts ("host-mediated microbial selection"). We simulate the effects of these three types of selection, individually or in combination, on microbiome diversities and the fitnesses of hosts and microbes over several thousand generations of hosts. We show that microbiome diversity is strongly influenced by selection acting on microbes. Selection acting on hosts only influences microbiome diversity when there is near-complete direct or indirect parental contribution to the microbiomes of offspring. Unsurprisingly, microbial fitness increases under microbial selection. Interestingly, when host selection operates, host fitness only increases under two conditions: (1) when there is a strong parental contribution to microbial communities or (2) in the absence of a strong

  5. Testing exclusion restrictions and additive separability in sample selection models

    DEFF Research Database (Denmark)

    Huber, Martin; Mellace, Giovanni

    2014-01-01

    Standard sample selection models with non-randomly censored outcomes assume (i) an exclusion restriction (i.e., a variable affecting selection, but not the outcome) and (ii) additive separability of the errors in the selection process. This paper proposes tests for the joint satisfaction of these assumptions by applying the approach of Huber and Mellace (Testing instrument validity for LATE identification based on inequality moment constraints, 2011), developed for testing instrument validity under treatment endogeneity, to the sample selection framework. We show that the exclusion restriction and additive separability imply two testable inequality constraints that come from both point identifying and bounding the outcome distribution of the subpopulation that is always selected/observed. We apply the tests to two variables for which the exclusion restriction is frequently invoked in female wage regressions: non…

  6. Periodic Integration: Further Results on Model Selection and Forecasting

    NARCIS (Netherlands)

    Ph.H.B.F. Franses (Philip Hans); R. Paap (Richard)

    1996-01-01

    This paper considers model selection and forecasting issues in two closely related models for nonstationary periodic autoregressive time series [PAR]. Periodically integrated seasonal time series [PIAR] need a periodic differencing filter to remove the stochastic trend. On the other…

  7. Selective and eco-friendly procedures for the synthesis of benzimidazole derivatives. The role of the Er(OTf)3 catalyst in the reaction selectivity

    Directory of Open Access Journals (Sweden)

    Natividad Herrera Cano

    2016-11-01

    Full Text Available An improved and greener protocol for the synthesis of benzimidazole derivatives, starting from o-phenylenediamine, with different aldehydes is reported. Double-condensation products were selectively obtained when Er(OTf)3 was used as the catalyst in the presence of electron-rich aldehydes. Conversely, the formation of mono-condensation products was the preferred path in the absence of this catalyst. One of the major advantages of these reactions was the formation of a single product, avoiding the extensive isolation and purification of products frequently associated with these reactions. Theoretical calculations helped to understand the different reactivity established for these reactions. Thus, we found that the charge density on the oxygen of the carbonyl group has a significant impact on the reaction pathway. For instance, electron-rich aldehydes coordinate better to the catalyst, which favours the addition of the amine group to the carbonyl group, therefore facilitating the formation of double-condensation products. Reactions with aliphatic or aromatic aldehydes were possible, without using organic solvents and in a one-pot procedure with short reaction times (2–5 min), affording single products in excellent yields (75–99%). This convenient and eco-friendly methodology offers numerous benefits with respect to other protocols reported for similar compounds.

  8. Selection of effective maximal expiratory parameters to differentiate asthmatic patients from healthy adults by discriminant analysis using all possible selection procedure.

    Directory of Open Access Journals (Sweden)

    Meguro, Tadamichi

    1986-08-01

    Full Text Available Maximal expiratory volume-time and flow-volume (MEVT and MEFV) curves were drawn for young male nonsmoking healthy adults and for young male nonsmoking asthmatic patients. Eleven parameters, two MEVT (%FVC and FEV1.0%), six MEFV (PFR, V75, V50, V25, V10 and V50/V25), and three MTC parameters (MTC75-50, MTC50-25 and MTC25-RV), were used for the multivariate analysis. The multivariate analysis in this study consisted of correlation coefficient matrix computation, the test for mean values in the multivariates, and linear discriminant analysis using the all possible selection procedure (APSP). Correlation coefficients among flow rate parameters and flow-rate-related parameters at high lung volumes differed between the two groups. In the eleven-parameter discriminant analysis by APSP using single parameters, PFR, V75 (flow rate at 75% of forced vital capacity), and FEV1.0% were considered to be the effective parameters. In the seven-parameter discriminant analysis using the parameter groups, the group of all parameters and the %FVC and flow-rate-related parameter group were considered to be the effective numerical alternatives to MEFV curves for discriminating between healthy adults and asthmatic patients.
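
    The "all possible selection procedure" is essentially an exhaustive subset search. Below is a minimal sketch of that idea in Python: every subset of the eleven parameters is scored with a cross-validated linear discriminant. The parameter names mirror the abstract, but the data and scoring details are placeholders, not the original study's procedure.

```python
# Sketch of an APSP-style exhaustive subset search with a linear discriminant.
from itertools import combinations
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

params = ["FVCpct", "FEV1pct", "PFR", "V75", "V50", "V25", "V10",
          "V50_V25", "MTC75_50", "MTC50_25", "MTC25_RV"]
rng = np.random.default_rng(1)
X = rng.normal(size=(60, len(params)))   # placeholder spirometry measurements
y = np.repeat([0, 1], 30)                # healthy vs. asthmatic labels

results = []
for k in range(1, len(params) + 1):
    for subset in combinations(range(len(params)), k):
        cols = list(subset)
        score = cross_val_score(LinearDiscriminantAnalysis(),
                                X[:, cols], y, cv=5).mean()
        results.append((score, [params[i] for i in cols]))

# Report the three best-discriminating parameter subsets
for score, subset in sorted(results, reverse=True)[:3]:
    print(f"{score:.2f}  {subset}")
```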

  9. Selective and eco-friendly procedures for the synthesis of benzimidazole derivatives. The role of the Er(OTf)3 catalyst in the reaction selectivity.

    Science.gov (United States)

    Herrera Cano, Natividad; Uranga, Jorge G; Nardi, Mónica; Procopio, Antonio; Wunderlin, Daniel A; Santiago, Ana N

    2016-01-01

    An improved and greener protocol for the synthesis of benzimidazole derivatives, starting from o-phenylenediamine, with different aldehydes is reported. Double-condensation products were selectively obtained when Er(OTf)3 was used as the catalyst in the presence of electron-rich aldehydes. Conversely, the formation of mono-condensation products was the preferred path in the absence of this catalyst. One of the major advantages of these reactions was the formation of a single product, avoiding the extensive isolation and purification of products frequently associated with these reactions. Theoretical calculations helped to understand the different reactivity established for these reactions. Thus, we found that the charge density on the oxygen of the carbonyl group has a significant impact on the reaction pathway. For instance, electron-rich aldehydes coordinate better to the catalyst, which favours the addition of the amine group to the carbonyl group, therefore facilitating the formation of double-condensation products. Reactions with aliphatic or aromatic aldehydes were possible, without using organic solvents and in a one-pot procedure with short reaction times (2-5 min), affording single products in excellent yields (75-99%). This convenient and eco-friendly methodology offers numerous benefits with respect to other protocols reported for similar compounds.

  10. Quantile hydrologic model selection and model structure deficiency assessment: 1. Theory

    NARCIS (Netherlands)

    Pande, S.

    2013-01-01

    A theory for quantile based hydrologic model selection and model structure deficiency assessment is presented. The paper demonstrates that the degree to which a model selection problem is constrained by the model structure (measured by the Lagrange multipliers of the constraints) quantifies

  12. AN EXPERT SYSTEM MODEL FOR THE SELECTION OF TECHNICAL PERSONNEL

    Directory of Open Access Journals (Sweden)

    Emine COŞGUN

    2005-03-01

    Full Text Available In this study, a model has been developed for the selection of technical personnel. In the model, Visual Basic has been used as the user interface, Microsoft Access as the database system, and the CLIPS program as the expert system program. The proposed model has been developed by utilizing expert system technology. In the personnel selection process, only the pre-evaluation of the applicants has been taken into consideration. Instead of replacing the expert himself, a decision support program has been developed to analyze the data gathered from the job application forms. The study will assist the expert in making faster and more accurate decisions.

  13. Novel web service selection model based on discrete group search.

    Science.gov (United States)

    Zhai, Jie; Shao, Zhiqing; Guo, Yi; Zhang, Haiteng

    2014-01-01

    In our earlier work, we present a novel formal method for the semiautomatic verification of specifications and for describing web service composition components by using abstract concepts. After verification, the instantiations of components were selected to satisfy the complex service performance constraints. However, selecting an optimal instantiation, which comprises different candidate services for each generic service, from a large number of instantiations is difficult. Therefore, we present a new evolutionary approach on the basis of the discrete group search service (D-GSS) model. With regard to obtaining the optimal multiconstraint instantiation of the complex component, the D-GSS model has competitive performance compared with other service selection models in terms of accuracy, efficiency, and ability to solve high-dimensional service composition component problems. We propose the cost function and the discrete group search optimizer (D-GSO) algorithm and study the convergence of the D-GSS model through verification and test cases.

  14. Analyzing longitudinal data with the linear mixed models procedure in SPSS.

    Science.gov (United States)

    West, Brady T

    2009-09-01

    Many applied researchers analyzing longitudinal data share a common misconception: that specialized statistical software is necessary to fit hierarchical linear models (also known as linear mixed models [LMMs], or multilevel models) to longitudinal data sets. Although several specialized statistical software programs of high quality are available that allow researchers to fit these models to longitudinal data sets (e.g., HLM), rapid advances in general purpose statistical software packages have recently enabled analysts to fit these same models when using preferred packages that also enable other more common analyses. One of these general purpose statistical packages is SPSS, which includes a very flexible and powerful procedure for fitting LMMs to longitudinal data sets with continuous outcomes. This article aims to present readers with a practical discussion of how to analyze longitudinal data using the LMMs procedure in the SPSS statistical software package.
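
    For readers who prefer open-source tooling, the same class of models can be fitted outside SPSS. The sketch below is a rough analogue of a random-intercept-and-slope specification in SPSS's MIXED procedure, written with Python's statsmodels; the data frame and column names are invented for illustration.

```python
# Sketch: a linear mixed model for longitudinal data (random intercept and
# random slope on time, grouped by subject), analogous to SPSS MIXED.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n_subj, n_obs = 30, 5
df = pd.DataFrame({
    "id": np.repeat(np.arange(n_subj), n_obs),
    "time": np.tile(np.arange(n_obs), n_subj),
})
u = rng.normal(0, 1, n_subj)                    # subject random intercepts
df["y"] = 2 + 0.5 * df["time"] + u[df["id"]] + rng.normal(0, 0.5, len(df))

model = smf.mixedlm("y ~ time", df, groups=df["id"], re_formula="~time")
fit = model.fit(reml=True)                      # REML, as is common for LMMs
print(fit.summary())
```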

  15. Surgical procedures for a rat model of partial orthotopic liver transplantation with hepatic arterial reconstruction.

    Science.gov (United States)

    Nagai, Kazuyuki; Yagi, Shintaro; Uemoto, Shinji; Tolba, Rene H

    2013-03-07

    Orthotopic liver transplantation (OLT) in rats using a whole or partial graft is an indispensable experimental model for transplantation research, such as studies on graft preservation and ischemia-reperfusion injury, immunological responses, hemodynamics, and small-for-size syndrome. The rat OLT is among the most difficult animal models in experimental surgery and demands advanced microsurgical skills that take a long time to learn. Consequently, the use of this model has been limited. Since the reliability and reproducibility of results are key components of the experiments in which such complex animal models are used, it is essential for surgeons who are involved in rat OLT to be trained in well-standardized and sophisticated procedures for this model. While various techniques and modifications of OLT in rats have been reported since the first model was described by Lee et al. in 1973, the elimination of the hepatic arterial reconstruction and the introduction of the cuff anastomosis technique by Kamada et al. were a major advancement in this model, because they simplified the reconstruction procedures to a great degree. In the model by Kamada et al., the hepatic rearterialization was also eliminated. Since rats could survive without hepatic arterial flow after liver transplantation, there was considerable controversy over the value of hepatic arterialization. However, the physiological superiority of the arterialized model has been increasingly acknowledged, especially in terms of preserving the bile duct system and the liver integrity. In this article, we present detailed surgical procedures for a rat model of OLT with hepatic arterial reconstruction using a 50% partial graft after ex vivo liver resection. The reconstruction procedures for each vessel and the bile duct are performed by the following methods: a 7-0 polypropylene continuous suture for the supra- and infrahepatic vena cava; a cuff technique for the portal vein; and a stent technique for the

  16. An incremental procedure model for e-learning projects at universities

    Directory of Open Access Journals (Sweden)

    Pahlke, Friedrich

    2006-11-01

    Full Text Available E-learning projects at universities are produced under different conditions than in industry. The main characteristic of many university projects is that they are realized as a quasi solo effort. In contrast, in private industry the different interdisciplinary skills necessary for the development of e-learning are typically supplied by a multimedia agency. A specific procedure tailored for use at universities is therefore required to facilitate mastering the amount and complexity of the tasks. In this paper an incremental procedure model is presented, which describes the procedure in every phase of the project. It allows a high degree of flexibility and emphasizes the didactical concept rather than the technical implementation. In the second part, we illustrate the practical use of the theoretical procedure model based on the project "Online training in Genetic Epidemiology".

  17. Factor selection and structural identification in the interaction ANOVA model.

    Science.gov (United States)

    Post, Justin B; Bondell, Howard D

    2013-03-01

    When faced with categorical predictors and a continuous response, the objective of an analysis often consists of two tasks: finding which factors are important and determining which levels of the factors differ significantly from one another. Often, these tasks are done separately using Analysis of Variance (ANOVA) followed by a post hoc hypothesis testing procedure such as Tukey's Honestly Significant Difference test. When interactions between factors are included in the model, the collapsing of levels of a factor becomes a more difficult problem. When testing for differences between two levels of a factor, claiming no difference would refer not only to equality of main effects, but also to equality of each interaction involving those levels. This structure between the main effects and interactions in a model is similar to the idea of heredity used in regression models. This article introduces a new method for accomplishing both of the common analysis tasks simultaneously in an interaction model while also adhering to the heredity-type constraint on the model. An appropriate penalization is constructed that encourages levels of factors to collapse and entire factors to be set to zero. It is shown that the procedure has the oracle property, implying that asymptotically it performs as well as if the exact structure were known beforehand. We also discuss the application to estimating interactions in the unreplicated case. Simulation studies show the procedure outperforms post hoc hypothesis testing procedures as well as similar methods that do not include a structural constraint. The method is also illustrated using a real data example.

  18. Factor Selection and Structural Identification in the Interaction ANOVA Model

    Science.gov (United States)

    Post, Justin B.; Bondell, Howard D.

    2013-01-01

    When faced with categorical predictors and a continuous response, the objective of analysis often consists of two tasks: finding which factors are important and determining which levels of the factors differ significantly from one another. Often, these tasks are done separately using Analysis of Variance (ANOVA) followed by a post-hoc hypothesis testing procedure such as Tukey's Honestly Significant Difference test. When interactions between factors are included in the model, the collapsing of levels of a factor becomes a more difficult problem. When testing for differences between two levels of a factor, claiming no difference would refer not only to equality of main effects, but also to equality of each interaction involving those levels. This structure between the main effects and interactions in a model is similar to the idea of heredity used in regression models. This paper introduces a new method for accomplishing both of the common analysis tasks simultaneously in an interaction model while also adhering to the heredity-type constraint on the model. An appropriate penalization is constructed that encourages levels of factors to collapse and entire factors to be set to zero. It is shown that the procedure has the oracle property, implying that asymptotically it performs as well as if the exact structure were known beforehand. We also discuss the application to estimating interactions in the unreplicated case. Simulation studies show the procedure outperforms post hoc hypothesis testing procedures as well as similar methods that do not include a structural constraint. The method is also illustrated using a real data example. PMID:23323643

  19. Input variable selection for data-driven models of Coriolis flowmeters for two-phase flow measurement

    Science.gov (United States)

    Wang, Lijuan; Yan, Yong; Wang, Xue; Wang, Tao

    2017-03-01

    Input variable selection is an essential step in the development of data-driven models for environmental, biological and industrial applications. Through input variable selection to eliminate the irrelevant or redundant variables, a suitable subset of variables is identified as the input of a model. Meanwhile, through input variable selection the complexity of the model structure is simplified and the computational efficiency is improved. This paper describes the procedures of the input variable selection for the data-driven models for the measurement of liquid mass flowrate and gas volume fraction under two-phase flow conditions using Coriolis flowmeters. Three advanced input variable selection methods, including partial mutual information (PMI), genetic algorithm-artificial neural network (GA-ANN) and tree-based iterative input selection (IIS) are applied in this study. Typical data-driven models incorporating support vector machine (SVM) are established individually based on the input candidates resulting from the selection methods. The validity of the selection outcomes is assessed through an output performance comparison of the SVM based data-driven models and sensitivity analysis. The validation and analysis results suggest that the input variables selected from the PMI algorithm provide more effective information for the models to measure liquid mass flowrate while the IIS algorithm provides a fewer but more effective variables for the models to predict gas volume fraction.
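
    To make the pipeline concrete, here is a minimal select-then-model sketch in Python: plain mutual information stands in for the partial mutual information (PMI) criterion described above, and a support vector regressor is trained on the selected inputs. The data, the number of retained variables, and the SVR settings are all illustrative assumptions.

```python
# Sketch: mutual-information-based input variable selection followed by SVM
# regression, loosely following the PMI-then-SVM pipeline in the abstract.
import numpy as np
from sklearn.feature_selection import mutual_info_regression
from sklearn.svm import SVR
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)
X = rng.normal(size=(300, 10))                  # candidate flowmeter signals
y = X[:, 0] + 0.5 * X[:, 1] ** 2 + 0.1 * rng.normal(size=300)

mi = mutual_info_regression(X, y, random_state=0)
selected = np.argsort(mi)[::-1][:3]             # keep the 3 most informative
print("selected inputs:", selected)

score = cross_val_score(SVR(C=10.0), X[:, selected], y, cv=5,
                        scoring="r2").mean()
print(f"cross-validated R^2 with selected inputs: {score:.2f}")
```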

  20. Selection of climate change scenario data for impact modelling

    DEFF Research Database (Denmark)

    Sloth Madsen, M; Fox Maule, C; MacKellar, N

    2012-01-01

    Impact models investigating climate change effects on food safety often need detailed climate data. The aim of this study was to select climate change projection data for selected crop phenology and mycotoxin impact models. Using the ENSEMBLES database of climate model output, this study illustrates how the projected climate change signal of important variables such as temperature, precipitation and relative humidity depends on the choice of the climate model. Using climate change projections from at least two different climate models is recommended to account for model uncertainty. To make the climate projections suitable for impact analysis at the local scale, a weather generator approach was adopted. As the weather generator did not treat all the necessary variables, an ad-hoc statistical method was developed to synthesise realistic values of missing variables. The method is presented…

  1. A Generic Procedure for the Assessment of the Effect of Concrete Admixtures on the Sorption of Radionuclides on Cement: Concept and Selected Results

    Energy Technology Data Exchange (ETDEWEB)

    Glaus, M.A.; Laube, A.; Van Loon, L.R

    2004-03-01

    A screening procedure is proposed for the assessment of the effect of concrete admixtures on the sorption of radionuclides by cement. The procedure is both broad and generic, and can thus be used as input for the assessment of concrete admixtures which might be used in the future. The experimental feasibility and significance of the screening procedure are tested using selected concrete admixtures: i.e. sulfonated naphthalene-formaldehyde condensates, lignosulfonates, and a plasticiser used at PSI for waste conditioning. The effect of these on the sorption properties of Ni(II), Eu(III) and Th(IV) in cement is investigated using crushed Hardened Cement Paste (HCP), as well as cement pastes prepared in the presence of these admixtures. Strongly adverse effects on the sorption of the radionuclides tested are observed only in single cases, and under extreme conditions: i.e. at high ratios of concrete admixtures to HCP, and at low ratios of HCP to cement pore water. Under realistic conditions, both radionuclide sorption and the sorption of isosaccharinic acid (a strong complexant produced in cement-conditioned wastes containing cellulose) remain unaffected by the presence of concrete admixtures, which can be explained by the sorption of them onto the HCP. The pore-water concentrations of the concrete admixtures tested are thereby reduced to levels at which the formation of radionuclide complexes is no longer of importance. Further, the Langmuir sorption model, proposed for the sorption of concrete admixtures on HCP, suggests that the HCP surface does not become saturated, at least for those concrete admixtures tested. (author)
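
    The Langmuir model invoked in the last sentence has a standard closed form, q = q_max·K·c/(1 + K·c). As a hedged illustration of how such an isotherm might be fitted to sorption data, here is a small Python sketch; the concentration and uptake values are invented, not the report's measurements.

```python
# Sketch: fitting a Langmuir sorption isotherm to hypothetical admixture
# sorption data, e.g. to check whether the HCP surface approaches saturation.
import numpy as np
from scipy.optimize import curve_fit

def langmuir(c, q_max, K):
    return q_max * K * c / (1.0 + K * c)

c = np.array([0.1, 0.3, 1.0, 3.0, 10.0, 30.0])   # equilibrium concentration
q = np.array([0.9, 2.3, 5.8, 10.2, 13.8, 15.1])  # sorbed amount (invented)

popt, pcov = curve_fit(langmuir, c, q, p0=(20.0, 0.5))
q_max, K = popt
print(f"q_max = {q_max:.1f}, K = {K:.2f}")
# Fraction of capacity occupied at a low concentration, as a saturation check
print("fraction of q_max at c=1.0:", round(langmuir(1.0, *popt) / q_max, 2))
```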

  2. Fuzzy MCDM Model for Risk Factor Selection in Construction Projects

    Directory of Open Access Journals (Sweden)

    Pejman Rezakhani

    2012-11-01

    Full Text Available Risk factor selection is an important step in a successful risk management plan. There are many risk factors in a construction project, and an effective and systematic risk selection process allows the most critical risks to be distinguished for closer attention. In this paper, through a comprehensive literature survey, the most significant risk factors in a construction project are classified in a hierarchical structure. For effective risk factor selection, a modified rational multicriteria decision making (MCDM) model is developed. This model is a consensus rule based model and has the optimization property of rational models. By applying fuzzy logic to this model, uncertainty factors in group decision making, such as experts' influence weights and their preferences and judgments for the risk selection criteria, are assessed. In addition, an intelligent checking process for the logical consistency of experts' preferences is implemented during the decision making process. The solution inferred from this method has the highest degree of acceptance among group members, and the consistency of individual preferences is checked by inference rules. This is an efficient and effective approach to prioritize and select risks based on decisions made by a group of experts in construction projects. The applicability of the presented method is assessed through a case study.

  3. A Hybrid Program Projects Selection Model for Nonprofit TV Stations

    Directory of Open Access Journals (Sweden)

    Kuei-Lun Chang

    2015-01-01

    Full Text Available This study develops a hybrid multiple criteria decision making (MCDM) model to select program projects for nonprofit TV stations on the basis of managers' perceptions. Based on the concepts of the balanced scorecard (BSC) and corporate social responsibility (CSR), we collect criteria for selecting the best program project. The fuzzy Delphi method, which can lead to better criteria selection, is used to modify the criteria. Next, considering the interdependence among the selection criteria, the analytic network process (ANP) is used to obtain their weights. To avoid the additional calculations and pairwise comparisons of ANP, the technique for order preference by similarity to ideal solution (TOPSIS) is used to rank the alternatives. A case study is presented to demonstrate the applicability of the proposed model.
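
    Since the abstract leans on TOPSIS for the final ranking, a compact sketch of the standard TOPSIS steps may help: vector normalization, weighting, distances to the ideal and anti-ideal solutions, and closeness scores. The decision matrix and weights below are invented, and all criteria are treated as benefit criteria for simplicity.

```python
# Minimal TOPSIS sketch for ranking candidate program projects once
# criterion weights (e.g., from ANP) are available.
import numpy as np

scores = np.array([[7.0, 5.0, 8.0],    # project A on 3 criteria
                   [6.0, 8.0, 6.0],    # project B
                   [9.0, 4.0, 7.0]])   # project C
weights = np.array([0.5, 0.3, 0.2])    # e.g., ANP-derived weights

norm = scores / np.linalg.norm(scores, axis=0)   # vector normalization
v = norm * weights
ideal, anti = v.max(axis=0), v.min(axis=0)       # all criteria are benefits
d_plus = np.linalg.norm(v - ideal, axis=1)       # distance to ideal
d_minus = np.linalg.norm(v - anti, axis=1)       # distance to anti-ideal
closeness = d_minus / (d_plus + d_minus)
print("ranking (best first):", np.argsort(closeness)[::-1])
```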

  4. A SUPPLIER SELECTION MODEL FOR SOFTWARE DEVELOPMENT OUTSOURCING

    Directory of Open Access Journals (Sweden)

    Hancu Lucian-Viorel

    2010-12-01

    Full Text Available This paper presents a multi-criteria decision making model used for supplier selection for software development outsourcing on e-marketplaces. This model can be used in auctions. The supplier selection process has become complex and difficult over the last twenty years, as the Internet plays an increasingly important role in business management. Companies have to concentrate their efforts on their core activities, and the other activities should be realized by outsourcing. They can achieve significant cost reductions by using e-marketplaces in their purchasing process and by using decision support systems for supplier selection. Many approaches for the supplier evaluation and selection process have been proposed in the literature. The performance of potential suppliers is evaluated using multi-criteria decision making methods rather than considering a single factor such as cost.

  5. Adverse Selection Models with Three States of Nature

    Directory of Open Access Journals (Sweden)

    Daniela MARINESCU

    2011-02-01

    Full Text Available In the paper we analyze an adverse selection model with three states of nature, where both the Principal and the Agent are risk neutral. When solving the model, we use the informational rents and the efforts as variables. We derive the optimal contract in the situation of asymmetric information. The paper ends with the characteristics of the optimal contract and the main conclusions of the model.

  6. Comparing uncertainty resulting from two-step and global regression procedures applied to microbial growth models.

    Science.gov (United States)

    Martino, K G; Marks, B P

    2007-12-01

    Two different microbial modeling procedures were compared and validated against independent data for Listeria monocytogenes growth. The most generally used method is two consecutive regressions: growth parameters are estimated from a primary regression of microbial counts, and a secondary regression relates the growth parameters to experimental conditions. A global regression is an alternative method in which the primary and secondary models are combined, giving a direct relationship between experimental factors and microbial counts. The Gompertz equation was the primary model, and a response surface model was the secondary model. Independent data from meat and poultry products were used to validate the modeling procedures. The global regression yielded the lower standard errors of calibration, 0.95 log CFU/ml for aerobic and 1.21 log CFU/ml for anaerobic conditions. The two-step procedure yielded errors of 1.35 log CFU/ml for aerobic and 1.62 log CFU/ml for anaerobic conditions. For food products, the global regression was more robust than the two-step procedure for 65% of the cases studied. The robustness index for the global regression ranged from 0.27 (performed better than expected) to 2.60. For the two-step method, the robustness index ranged from 0.42 to 3.88. The predictions were overestimated (fail safe) in more than 50% of the cases using the global regression and in more than 70% of the cases using the two-step regression. Overall, the global regression performed better than the two-step procedure for this specific application.
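
    As a sketch of the "global regression" idea, where the secondary model is substituted into the primary Gompertz model and all coefficients are estimated in one pass, consider the following Python example. The functional form of the secondary model, the parameter values, and the simulated data are assumptions for illustration, not the study's actual models.

```python
# Sketch: a global regression embedding a secondary model (growth rate as a
# linear function of temperature) inside a primary Gompertz model.
import numpy as np
from scipy.optimize import curve_fit

def global_gompertz(X, A, C, b0, b1, M):
    t, temp = X
    mu = b0 + b1 * temp                  # secondary model: rate vs. temperature
    return A + C * np.exp(-np.exp(-mu * (t - M)))

rng = np.random.default_rng(4)
t = np.tile(np.linspace(0, 48, 20), 3)            # hours
temp = np.repeat([5.0, 10.0, 15.0], 20)           # storage temperature (C)
true = global_gompertz((t, temp), 3.0, 5.0, 0.01, 0.01, 20.0)
logN = true + rng.normal(0, 0.2, t.size)          # log CFU/ml with noise

popt, _ = curve_fit(global_gompertz, (t, temp), logN,
                    p0=(3.0, 5.0, 0.01, 0.01, 20.0))
print("fitted parameters:", np.round(popt, 3))
```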

  7. Bayesian model selection for constrained multivariate normal linear models

    NARCIS (Netherlands)

    Mulder, J.

    2010-01-01

    The expectations that researchers have about the structure in the data can often be formulated in terms of equality constraints and/or inequality constraints on the parameters in the model that is used. In a (M)AN(C)OVA model, researchers have expectations about the differences between the

  8. Model selection for marginal regression analysis of longitudinal data with missing observations and covariate measurement error.

    Science.gov (United States)

    Shen, Chung-Wei; Chen, Yi-Hau

    2015-10-01

    Missing observations and covariate measurement error commonly arise in longitudinal data. However, existing methods for model selection in marginal regression analysis of longitudinal data fail to address the potential bias resulting from these issues. To tackle this problem, we propose a new model selection criterion, the Generalized Longitudinal Information Criterion, which is based on an approximately unbiased estimator for the expected quadratic error of a considered marginal model accounting for both data missingness and covariate measurement error. The simulation results reveal that the proposed method performs quite well in the presence of missing data and covariate measurement error. In contrast, naive procedures that do not take such complexity in the data into account may perform quite poorly. The proposed method is applied to data from the Taiwan Longitudinal Study on Aging to assess the relationship of depression with health and social status in the elderly, accommodating measurement error in the covariate as well as missing observations.

  9. A new procedure to built a model covariance matrix: first results

    Science.gov (United States)

    Barzaghi, R.; Marotta, A. M.; Splendore, R.; Borghi, A.

    2012-04-01

    In order to validate the results of geophysical models, a common procedure is to compare model predictions with observations by means of statistical tests. A limitation of this approach is the lack of a covariance matrix associated with the model results, which may frustrate the achievement of confident statistical significance. Trying to overcome this limit, we have implemented a new procedure to build a model covariance matrix that allows a more reliable statistical analysis. This procedure has been developed in the frame of the thermo-mechanical model described in Splendore et al. (2010), which predicts the present-day crustal velocity field in the Tyrrhenian due to Africa-Eurasia convergence and to lateral rheological heterogeneities of the lithosphere. The modelled tectonic velocity field has been compared to the available surface velocity field based on GPS observations, determining the best-fit model and the degree of fitting through the use of a χ2 test. Once we had identified the key model parameters and defined their appropriate ranges of variability, we ran 100 different models for 100 sets of parameter values randomly extracted within the corresponding intervals, obtaining a stack of 100 velocity fields. We then calculated the variance and empirical covariance for the stack of results, taking cross-correlation into account, obtaining a positive-definite, diagonal matrix that represents the covariance matrix of the model. This empirical approach allows us to define a more robust statistical analysis with respect to the classic approach. Reference: Splendore, Marotta, Barzaghi, Borghi and Cannizzaro, 2010. Block model versus thermomechanical model: new insights on the present-day regional deformation in the surroundings of the Calabrian Arc. In: Spalla, Marotta and Gosso (Eds), Advances in Interpretation of Geological Processes: Refinement of Multi scale Data and Integration in Numerical Modelling. Geological Society, London, Special…
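
    A hedged sketch of this ensemble recipe in Python: draw parameter sets, run a (here, stand-in) model for each draw, stack the outputs, and take the empirical covariance across runs. The stand-in model and parameter ranges are invented; only the overall procedure follows the abstract.

```python
# Sketch: building an empirical model covariance matrix from an ensemble of
# model runs with randomly drawn parameters.
import numpy as np

rng = np.random.default_rng(5)
n_runs, n_sites = 100, 12

def model(params, n_sites):
    # Placeholder for the thermo-mechanical model's predicted velocities
    base = np.linspace(1.0, 2.0, n_sites)
    return params[0] * base + params[1]

draws = rng.uniform([0.8, -0.1], [1.2, 0.1], size=(n_runs, 2))
stack = np.array([model(p, n_sites) for p in draws])   # (runs, sites)

cov_model = np.cov(stack, rowvar=False)    # empirical model covariance
print("covariance matrix shape:", cov_model.shape)
# cov_model can then enter a chi-square test against GPS velocities
```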

  10. A new experimental procedure for incorporation of model contaminants in polymer hosts

    NARCIS (Netherlands)

    Papaspyrides, C.D.; Voultzatis, Y.; Pavlidou, S.; Tsenoglou, C.; Dole, P.; Feigenbaum, A.; Paseiro, P.; Pastorelli, S.; Cruz Garcia, C. de la; Hankemeier, T.; Aucejo, S.

    2005-01-01

    A new experimental procedure for incorporation of model contaminants in polymers was developed as part of a general scheme for testing the efficiency of functional barriers in food packaging. The aim was to progressively pollute polymers in a controlled fashion up to a high level in the range of 100

  12. Studying Differential Item Functioning via Latent Variable Modeling: A Note on a Multiple-Testing Procedure

    Science.gov (United States)

    Raykov, Tenko; Marcoulides, George A.; Lee, Chun-Lung; Chang, Chi

    2013-01-01

    This note is concerned with a latent variable modeling approach for the study of differential item functioning in a multigroup setting. A multiple-testing procedure that can be used to evaluate group differences in response probabilities on individual items is discussed. The method is readily employed when the aim is also to locate possible…

  13. User Acceptance of YouTube for Procedural Learning: An Extension of the Technology Acceptance Model

    Science.gov (United States)

    Lee, Doo Young; Lehto, Mark R.

    2013-01-01

    The present study was framed using the Technology Acceptance Model (TAM) to identify determinants affecting behavioral intention to use YouTube. Most importantly, this research emphasizes the motives for using YouTube, which is notable given its extrinsic task goal of being used for procedural learning tasks. Our conceptual framework included two…

  14. The role of motor memory in action selection and procedural learning: insights from children with typical and atypical development

    Directory of Open Access Journals (Sweden)

    Jessica Tallet

    2015-07-01

    Full Text Available Motor memory is the process by which humans adopt both persistent and flexible motor behaviours. Persistence and flexibility can be assessed by examining the cooperation/competition between new and old motor routines in the motor memory repertoire. Two paradigms seem particularly relevant for examining this competition/cooperation. The first is a manual search task for hidden objects, namely the C-not-B task, which allows examining how a motor routine may influence the selection of action in toddlers. The second paradigm is procedural learning, and more precisely its consolidation stage, which allows assessing how a previously learnt motor routine becomes resistant to the subsequent programming or learning of a new, competing motor routine. The present article defends the idea that results from both paradigms give precious information for understanding the evolution of motor routines in healthy children. Moreover, these findings echo clinical observations in developmental neuropsychology, particularly in children with Developmental Coordination Disorder. Such studies suggest that the level of equilibrium between persistence and flexibility of motor routines is an index of the maturity of the motor system.

  15. Genetic signatures of natural selection in a model invasive ascidian

    Science.gov (United States)

    Lin, Yaping; Chen, Yiyong; Yi, Changho; Fong, Jonathan J.; Kim, Won; Rius, Marc; Zhan, Aibin

    2017-01-01

    Invasive species represent promising models to study species’ responses to rapidly changing environments. Although local adaptation frequently occurs during contemporary range expansion, the associated genetic signatures at both population and genomic levels remain largely unknown. Here, we use genome-wide gene-associated microsatellites to investigate genetic signatures of natural selection in a model invasive ascidian, Ciona robusta. Population genetic analyses of 150 individuals sampled in Korea, New Zealand, South Africa and Spain showed significant genetic differentiation among populations. Based on outlier tests, we found a high incidence of signatures of directional selection at 19 loci. Hitchhiking mapping analyses identified 12 directional selective sweep regions, and all selective sweep windows on chromosomes were narrow (~8.9 kb). Further analyses identified 132 candidate genes under selection. When we compared our genetic data and six crucial environmental variables, 16 putatively selected loci showed significant correlation with these environmental variables. This suggests that the local environmental conditions have left significant signatures of selection at both population and genomic levels. Finally, we identified “plastic” genomic regions and genes that are promising regions to investigate evolutionary responses to rapid environmental change in C. robusta. PMID:28266616

  16. IT vendor selection model by using structural equation model & analytical hierarchy process

    Science.gov (United States)

    Maitra, Sarit; Dominic, P. D. D.

    2012-11-01

    Selecting and evaluating the right vendors is imperative for an organization's competitiveness in the global marketplace. Improper selection and evaluation of potential vendors can dwarf an organization's supply chain performance. Numerous studies have demonstrated that firms consider multiple criteria when selecting key vendors. This research develops a new hybrid model for the vendor selection process with better decision making. The proposed model provides a suitable tool for assisting decision makers and managers to make the right decisions and select the most suitable vendor. This paper proposes a hybrid model based on the Structural Equation Model (SEM) and the Analytical Hierarchy Process (AHP) for long-term strategic vendor selection problems. The five-step framework of the model has been designed after a thorough literature study. The proposed hybrid model will be applied to a real-life case study to assess its effectiveness. In addition, the what-if analysis technique will be used for model validation purposes.
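
    The AHP step in such hybrid models usually reduces to extracting priority weights from a pairwise comparison matrix. A minimal sketch of that computation, with an invented Saaty-style comparison matrix, follows; it illustrates standard AHP, not the paper's specific implementation.

```python
# Sketch: AHP criterion weights as the principal eigenvector of a pairwise
# comparison matrix, with a simple consistency index.
import numpy as np

# Saaty-style pairwise comparisons for 3 vendor-selection criteria (invented)
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)                      # principal eigenvalue
w = np.abs(eigvecs[:, k].real)
w /= w.sum()                                     # normalized priority weights
ci = (eigvals.real[k] - len(A)) / (len(A) - 1)   # consistency index
print("weights:", np.round(w, 3), " CI:", round(ci, 3))
```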

  17. A Numerical Procedure for Model Identifiability Analysis Applied to Enzyme Kinetics

    DEFF Research Database (Denmark)

    Van Daele, Timothy; Van Hoey, Stijn; Gernaey, Krist;

    2015-01-01

    The proper calibration of models describing enzyme kinetics can be quite challenging. In the literature, different procedures are available to calibrate these enzymatic models in an efficient way. However, in most cases the model structure is already decided on prior to the actual calibration exercise, thereby bypassing the challenging task of model structure determination and identification. Parameter identification problems can thus lead to ill-calibrated models with low predictive power and large model uncertainty. Every calibration exercise should therefore be preceded by a proper model identifiability analysis, such as that of Walter and Pronzato (1997), which can be easily set up for any type of model. In this paper the proposed approach is applied to the forward reaction rate of the enzyme kinetics proposed by Shin and Kim (1998). Structural identifiability analysis showed that no local structural model problems were occurring…

  18. Experimental testing procedures and dynamic model validation for vanadium redox flow battery storage system

    Science.gov (United States)

    Baccino, Francesco; Marinelli, Mattia; Nørgård, Per; Silvestro, Federico

    2014-05-01

    The paper aims at characterizing the electrochemical and thermal parameters of a 15 kW/320 kWh vanadium redox flow battery (VRB) installed in the SYSLAB test facility of the DTU Risø Campus and experimentally validating the proposed dynamic model realized in Matlab-Simulink. The adopted testing procedure consists of analyzing the voltage and current values during a power reference step-response and evaluating the relevant electrochemical parameters such as the internal resistance. The results of different tests are presented and used to define the electrical characteristics and the overall efficiency of the battery system. The test procedure has general validity and could also be used for other storage technologies. The storage model proposed and described is suitable for electrical studies and can represent a general model in terms of validity. Finally, the model simulation outputs are compared with experimental measurements during a discharge-charge sequence.
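
    A toy version of the step-response analysis described above: given sampled voltage and current around a power step, the instantaneous voltage change divided by the current change gives a first estimate of internal resistance. The numbers below are invented, and a real procedure would average over many samples and account for state of charge.

```python
# Sketch: estimating internal resistance from a current step response.
import numpy as np

t = np.array([0.0, 0.5, 1.0, 1.5, 2.0])             # s
current = np.array([0.0, 0.0, 40.0, 40.0, 40.0])    # A, step at t = 1.0 s
voltage = np.array([48.0, 48.0, 45.6, 45.4, 45.3])  # V (invented samples)

i_step = np.argmax(np.diff(current) > 0) + 1        # index of the step
r_int = (voltage[i_step - 1] - voltage[i_step]) / \
        (current[i_step] - current[i_step - 1])     # R = dV / dI
print(f"estimated internal resistance: {r_int * 1000:.1f} mOhm")
```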

  19. An innovative 3-D numerical modelling procedure for simulating repository-scale excavations in rock - SAFETI

    Energy Technology Data Exchange (ETDEWEB)

    Young, R. P.; Collins, D.; Hazzard, J.; Heath, A. [Department of Earth Sciences, Liverpool University, 4 Brownlow street, UK-0 L69 3GP Liverpool (United Kingdom); Pettitt, W.; Baker, C. [Applied Seismology Consultants LTD, 10 Belmont, Shropshire, UK-S41 ITE Shrewsbury (United Kingdom); Billaux, D.; Cundall, P.; Potyondy, D.; Dedecker, F. [Itasca Consultants S.A., Centre Scientifique A. Moiroux, 64, chemin des Mouilles, F69130 Ecully (France); Svemar, C. [Svensk Karnbranslemantering AB, SKB, Aspo Hard Rock Laboratory, PL 300, S-57295 Figeholm (Sweden); Lebon, P. [ANDRA, Parc de la Croix Blanche, 7, rue Jean Monnet, F-92298 Chatenay-Malabry (France)

    2004-07-01

    This paper presents current results from work performed within the European Commission project SAFETI. The main objective of SAFETI is to develop and test an innovative 3-D numerical modelling procedure that will enable the 3-D simulation of nuclear waste repositories in rock. The modelling code is called AC/DC (Adaptive Continuum/Dis-Continuum) and is partially based on Itasca Consulting Group's Particle Flow Code (PFC). Results are presented from the laboratory validation study where algorithms and procedures have been developed and tested to allow accurate 'Models for Rock' to be produced. Preliminary results are also presented on the use of AC/DC with parallel processors and adaptive logic. During the final year of the project a detailed model of the Prototype Repository Experiment at SKB's Hard Rock Laboratory will be produced using up to 128 processors on the parallel super computing facility at Liverpool University. (authors)

  20. Vertex Color Parameters in Procedural Animation of 3D Models of Musaceae Vegetation

    Directory of Open Access Journals (Sweden)

    I Gede Ngurah Arya Indrayasa

    2017-02-01

    Full Text Available The use of vegetation in the film, video game, simulation, and architectural visualization industries is an important factor in producing more lifelike natural scenery. This study aims to determine the effect of vertex color on the wind effect in procedural animation of 3D models of Musaceae vegetation, as well as the vertex color parameters appropriate for producing realistic 3D animations of Musaceae vegetation. The final goal is to examine whether changes in the vertex color parameters can affect the form of the procedural 3D animation of Musaceae vegetation, and what influence vertex color has on the wind effect in such animation. Based on observation and comparison in tests of five vertex color samples, the results show that changes in the vertex color parameters do affect the procedural 3D animation of Musaceae vegetation, and it is concluded that Sample No. 5 provides the appropriate vertex color parameters for producing realistic 3D animation of Musaceae vegetation. Keywords: 3D, procedural animation, vegetation

  1. Selecting Optimal Subset of Features for Student Performance Model

    Directory of Open Access Journals (Sweden)

    Hany M. Harb

    2012-09-01

    Full Text Available Educational data mining (EDM) is a new and growing research area in which data mining concepts are applied in the educational field for the purpose of extracting useful information about student behavior in the learning process. Classification methods, like decision trees, rule mining, and Bayesian networks, can be applied to educational data for predicting student behavior, such as performance in an examination. This prediction may help in student evaluation. As feature selection influences the predictive accuracy of any performance model, it is essential to study the effectiveness of student performance models in connection with feature selection techniques. The main objective of this work is to achieve high predictive performance by adopting various feature selection techniques, increasing predictive accuracy with the least number of features. The outcomes show a reduction in computational time and construction cost in both the training and classification phases of the student performance model.
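
    A minimal sketch of this select-then-classify idea in Python: a filter-style feature scorer keeps the top-k predictors, and a small decision tree is cross-validated on the reduced set. The synthetic data, the value of k, and the choice of scorer are assumptions, not the paper's exact setup.

```python
# Sketch: feature selection plus a compact classifier for a student
# performance model.
import numpy as np
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(6)
X = rng.normal(size=(400, 15))                 # e.g., attendance, grades, ...
y = (X[:, 0] + X[:, 3] + rng.normal(0, 0.5, 400) > 0).astype(int)

pipe = make_pipeline(
    SelectKBest(mutual_info_classif, k=4),     # keep the 4 best predictors
    DecisionTreeClassifier(max_depth=4, random_state=0),
)
acc = cross_val_score(pipe, X, y, cv=5).mean()
print(f"cross-validated accuracy with 4 features: {acc:.2f}")
```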

  2. Short-Run Asset Selection using a Logistic Model

    Directory of Open Access Journals (Sweden)

    Walter Gonçalves Junior

    2011-06-01

    Full Text Available Investors constantly look for significant predictors and accurate models to forecast future results, whose occasional efficacy ends up being neutralized by market efficiency. Regardless, such predictors are widely used in the search for better (and more unique) perceptions. This paper aims to investigate to what extent some of the most notorious indicators have discriminatory power to select stocks, and whether it is feasible to build models with such variables that could anticipate stocks with good performance. To that end, logistic regressions were conducted on stocks traded at Bovespa, using the selected indicators as explanatory variables. Of the indicators investigated in this study, the Bovespa Index output, liquidity, the Sharpe ratio, ROE, MB, size, and age proved to be significant predictors. Also examined were half-year logistic models, which were adjusted in order to check for potentially acceptable discriminatory power for asset selection.

  3. Sample selection and taste correlation in discrete choice transport modelling

    DEFF Research Database (Denmark)

    Mabit, Stefan Lindhard

    2008-01-01

    … the question for a broader class of models. It is shown that the original result may be somewhat generalised. Another question investigated is whether mode choice operates as a self-selection mechanism in the estimation of the value of travel time. The results show that self-selection can at least partly explain counterintuitive results in value-of-travel-time estimation. However, the results also point to the difficulty of finding suitable instruments for the selection mechanism. Taste heterogeneity is another important aspect of discrete choice modelling. Mixed logit models are designed to capture … of taste correlation in willingness-to-pay estimation are presented. The first contribution addresses how to incorporate taste correlation in the estimation of the value of travel time for public transport. Given a limited dataset, the approach taken is to use theory on the value of travel time as guidance…

  4. From Reactionary to Responsive: Applying the Internal Environmental Scan Protocol to Lifelong Learning Strategic Planning and Operational Model Selection

    Science.gov (United States)

    Downing, David L.

    2009-01-01

    This study describes and implements a necessary preliminary strategic planning procedure, the Internal Environmental Scanning (IES), and discusses its relevance to strategic planning and university-sponsored lifelong learning program model selection. Employing a qualitative research methodology, a proposed lifelong learning-centric IES process…

  6. A Review of Models and Procedures for Synthetic Validation for Entry-Level Army Jobs

    Science.gov (United States)

    1988-12-01

    ARI Research Note 88-107. A Review of Models and Procedures for Synthetic Validation for Entry-Level Army Jobs. Crafts, Jennifer L.; Szenas, Philip L.; Chia, Wei… as well as ability. Project A validity results: Campbell (1986) and McHenry, Hough, Toquam, Hanson, and Ashworth (1987) have conducted extensive…

  8. How can selection of biologically inspired features improve the performance of a robust object recognition model?

    Science.gov (United States)

    Ghodrati, Masoud; Khaligh-Razavi, Seyed-Mahdi; Ebrahimpour, Reza; Rajaei, Karim; Pooyan, Mohammad

    2012-01-01

    Humans can effectively and swiftly recognize objects in complex natural scenes. This outstanding ability has motivated many computational object recognition models. Most of these models try to emulate the behavior of this remarkable system. The human visual system hierarchically recognizes objects in several processing stages, along which a set of features with increasing complexity is extracted by different parts of the visual system. Elementary features like bars and edges are processed in earlier levels of the visual pathway, and the further one goes up this pathway, the more complex the features that are detected. An important question in the field of visual processing is which features of an object are selected and represented by the visual cortex. To address this issue, we extended a hierarchical model, which is motivated by biology, for different object recognition tasks. In this model, a set of object parts, named patches, is extracted in the intermediate stages. These object parts are used in the training procedure of the model and have an important role in object recognition. These patches are selected indiscriminately from different positions in an image, which can lead to the extraction of non-discriminating patches that may eventually reduce performance. In the proposed model we used an evolutionary algorithm approach to select a set of informative patches. Our results indicate that these patches are more informative than the usual random patches. We demonstrate the strength of the proposed model on a range of object recognition tasks, where it outperforms the original model. The experiments show that the selected features are generally particular parts of the target images. Our results suggest that selected features which are parts of target objects provide an efficient set for robust object recognition.

  9. Renormalization procedure for random tensor networks and the canonical tensor model

    CERN Document Server

    Sasakura, Naoki

    2015-01-01

    We discuss a renormalization procedure for random tensor networks, and show that the corresponding renormalization-group flow is given by the Hamiltonian vector flow of the canonical tensor model, which is a discretized model of quantum gravity. The result is the generalization of the previous one concerning the relation between the Ising model on random networks and the canonical tensor model with N=2. We also prove a general theorem which relates discontinuity of the renormalization-group flow and the phase transitions of random tensor networks.

  10. The Properties of Model Selection when Retaining Theory Variables

    DEFF Research Database (Denmark)

    Hendry, David F.; Johansen, Søren

    Economic theories are often fitted directly to data to avoid possible model selection biases. We show that when a theory model specifying the correct set of m relevant exogenous variables, x_t, is embedded within a larger set of m+k candidate variables, (x_t, w_t), selection over the second set by statistical significance can be undertaken without affecting the estimator distribution of the theory parameters. This strategy returns the theory-parameter estimates when the theory is correct, yet protects against the theory being under-specified because some w_t are relevant.

  11. Combining Decision Diagrams and SAT Procedures for Efficient Symbolic Model Checking

    DEFF Research Database (Denmark)

    Williams, Poul Frederick; Biere, Armin; Clarke, Edmund M.

    2000-01-01

    ... in the specification of a 16 bit multiplier. As opposed to Bounded Model Checking (BMC), our method is complete in practice. Our technique is based on a quantification procedure that allows us to eliminate quantifiers in Quantified Boolean Formulas (QBF). The basic step of this procedure is the up-one operation for BEDs. In addition we list a number of important optimizations to reduce the number of basic steps. In particular the optimization rule of quantification-by-substitution turned out to be very useful: exists x : (g /\ (x <-> f)) = g[f/x]. The rule is used (1) during fixed point iterations, (2) for deciding ...

  12. Detection Procedure for a Single Additive Outlier and Innovational Outlier in a Bilinear Model

    Directory of Open Access Journals (Sweden)

    Azami Zaharim

    2007-01-01

    Full Text Available A single outlier detection procedure for data generated from BL(1,1,1,1) models is developed. It is carried out in three stages. Firstly, the measures of impact of an IO and an AO, denoted ω_IO and ω_AO respectively, are derived based on the least squares method. Secondly, test statistics and test criteria are defined for classifying an observation as an outlier of its respective type. Finally, a general single outlier detection procedure is presented to distinguish a particular type of outlier at a time point t.

  13. A Modeling Procedure by Means of Multicriteria Analysis: Application in the Case of Specific Surface Estimation of Anodized Aluminium

    Science.gov (United States)

    Fotilas, Panayiotis; Batzias, Athanasios F.

    2009-08-01

    A methodological framework designed/developed in the form of an algorithmic procedure (including 20 activity stages and 10 decision nodes) has been applied to the multicriteria ranking of models. The criteria used are: fitting to experimental data, agreement with theoretical aspects, model simplicity, experimental falsifiability, progressiveness, and relation to other ideal structures (ISs), as proved by a common path/rationale of deduction. An implementation is presented referring to the selection of the pore ideal structure of anodized aluminium among the alternatives: cylindrical (A1), truncated-cone-like (A2), trumpet-like (A3), vesica-like (A4), multiple-base (A5), and tilted-cylinder-like (A6). The alternative A2 (implying a corresponding specific surface estimation of the anodic film) was ranked first and the solution was proved to be robust.

  14. Glaucoma-inducing Procedure in an In Vivo Rat Model and Whole-mount Retina Preparation.

    Science.gov (United States)

    Gossman, Cynthia A; Linn, David M; Linn, Cindy

    2016-01-01

    Glaucoma is a disease of the central nervous system affecting retinal ganglion cells (RGCs). RGC axons making up the optic nerve carry visual input to the brain for visual perception. Damage to RGCs and their axons leads to vision loss and/or blindness. Although the specific cause of glaucoma is unknown, the primary risk factor for the disease is an elevated intraocular pressure. Glaucoma-inducing procedures in animal models are a valuable tool for researchers studying the mechanism of RGC death. Such information can lead to the development of effective neuroprotective treatments that could aid in the prevention of vision loss. The protocol in this paper describes a method of inducing glaucoma-like conditions in an in vivo rat model in which 50 µl of 2 M hypertonic saline is injected into the episcleral venous plexus. Blanching of the vessels indicates successful injection. This procedure causes loss of RGCs to simulate glaucoma. One month following injection, animals are sacrificed and the eyes are removed. Next, the cornea, lens, and vitreous are removed to make an eyecup. The retina is then peeled from the back of the eye and pinned onto Sylgard dishes using cactus needles. At this point, neurons in the retina can be stained for analysis. Results from this lab show that approximately 25% of RGCs are lost within one month of the procedure when compared to internal controls. This procedure allows for quantitative analysis of retinal ganglion cell death in an in vivo rat glaucoma model.

  15. Partner Selection in a Virtual Enterprise: A Group Multiattribute Decision Model with Weighted Possibilistic Mean Values

    Directory of Open Access Journals (Sweden)

    Fei Ye

    2013-01-01

    Full Text Available This paper proposes an extended technique for order preference by similarity to ideal solution (TOPSIS) for partner selection in a virtual enterprise (VE). The imprecise and fuzzy information of the partner candidates and the risk preferences of decision makers are both considered in the group multiattribute decision-making model. Weighted possibilistic mean values are used to handle triangular fuzzy numbers in the fuzzy environment. A ranking procedure for partner candidates is developed to help decision makers with varying risk preferences select the most suitable partners. Numerical examples are presented to demonstrate the feasibility and efficiency of the proposed TOPSIS. Results show that the varying risk preferences of decision makers play a significant role in the partner selection process in a VE under a fuzzy environment.
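
    A compressed sketch of the kind of computation involved: triangular fuzzy ratings are first reduced to crisp scores with a possibilistic-mean-style defuzzification, after which the ordinary TOPSIS closeness coefficient ranks the candidates. The (l, m, u) ratings, the weights, and the defuzzification formula (l + 4m + u)/6 below are illustrative stand-ins for the paper's weighted possibilistic mean values, not its actual method.

        import numpy as np

        # Triangular fuzzy ratings (l, m, u) for 3 partner candidates on 2 criteria.
        fuzzy = np.array([[[5, 7, 9], [3, 5, 7]],
                          [[7, 9, 9], [1, 3, 5]],
                          [[3, 5, 7], [5, 7, 9]]], dtype=float)
        w = np.array([0.6, 0.4])                      # illustrative criterion weights

        # Defuzzify each (l, m, u) with a possibilistic-mean-style formula (assumed).
        crisp = (fuzzy[..., 0] + 4 * fuzzy[..., 1] + fuzzy[..., 2]) / 6.0

        # Standard TOPSIS on the crisp matrix (both criteria treated as benefit-type).
        norm = crisp / np.linalg.norm(crisp, axis=0)  # vector normalization
        v = norm * w                                  # weighted normalized matrix
        ideal, anti = v.max(axis=0), v.min(axis=0)
        d_plus = np.linalg.norm(v - ideal, axis=1)
        d_minus = np.linalg.norm(v - anti, axis=1)
        closeness = d_minus / (d_plus + d_minus)      # higher = better partner

        print("ranking (best first):", np.argsort(-closeness) + 1)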

  16. A Long-Term Memory Competitive Process Model of a Common Procedural Error. Part II: Working Memory Load and Capacity

    Science.gov (United States)

    2013-07-01

    A Long-Term Memory Competitive Process Model of a Common Procedural Error, Part II: Working Memory Load and Capacity. Franklin P. Tamborello, II. [Only report-form metadata survives in this record; cited work: Tamborello, F. P., & Trafton, J. G. (2013). A long-term competitive process model of a common procedural error. In Proceedings of the 35th ...]

  17. Bayesian selection of nucleotide substitution models and their site assignments.

    Science.gov (United States)

    Wu, Chieh-Hsi; Suchard, Marc A; Drummond, Alexei J

    2013-03-01

    Probabilistic inference of a phylogenetic tree from molecular sequence data is predicated on a substitution model describing the relative rates of change between character states along the tree for each site in the multiple sequence alignment. Commonly, one assumes that the substitution model is homogeneous across sites within large partitions of the alignment, assigns these partitions a priori, and then fixes their underlying substitution model to the best-fitting model from a hierarchy of named models. Here, we introduce an automatic model selection and model averaging approach within a Bayesian framework that simultaneously estimates the number of partitions, the assignment of sites to partitions, the substitution model for each partition, and the uncertainty in these selections. This new approach is implemented as an add-on to the BEAST 2 software platform. We find that this approach dramatically improves the fit of the nucleotide substitution model compared with existing approaches, and we show, using a number of example data sets, that as many as nine partitions are required to explain the heterogeneity in nucleotide substitution process across sites in a single gene analysis. In some instances, this improved modeling of the substitution process can have a measurable effect on downstream inference, including the estimated phylogeny, relative divergence times, and effective population size histories.

  18. An Integrated Model For Online shopping, Using Selective Models

    Directory of Open Access Journals (Sweden)

    Fereshteh Rabiei Dastjerdi

    Full Text Available As in traditional shopping, customer acquisition and retention are critical issues in the success of an online store. Many factors impact how, and if, customers accept online shopping. Models presented in recent years only focus on behavioral or technological ...

  19. Longitudinal data analyses using linear mixed models in SPSS: concepts, procedures and illustrations.

    Science.gov (United States)

    Shek, Daniel T L; Ma, Cecilia M S

    2011-01-05

    Although different methods are available for the analyses of longitudinal data, analyses based on generalized linear models (GLM) are criticized as violating the assumption of independence of observations. Alternatively, linear mixed models (LMM) are commonly used to understand changes in human behavior over time. In this paper, the basic concepts surrounding LMM (or hierarchical linear models) are outlined. Although SPSS is a statistical analysis package commonly used by researchers, documentation on LMM procedures in SPSS is neither thorough nor user friendly. With reference to this limitation, the related procedures for performing analyses based on LMM in SPSS are described. To demonstrate the application of LMM analyses in SPSS, findings based on six waves of data collected in the Project P.A.T.H.S. (Positive Adolescent Training through Holistic Social Programmes) in Hong Kong are presented.
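
    The procedures described in the record are specific to SPSS; as a rough cross-check, a random-intercept linear mixed model of the same kind can be fitted in a few lines of Python with the statsmodels package. This is a minimal sketch, not the Project P.A.T.H.S. analysis itself; the column names (subject, wave, score) and data are hypothetical.

        import pandas as pd
        import statsmodels.formula.api as smf

        # Hypothetical long-format longitudinal data: one row per subject per wave.
        data = pd.DataFrame({
            "subject": [1, 1, 1, 2, 2, 2, 3, 3, 3],
            "wave":    [0, 1, 2, 0, 1, 2, 0, 1, 2],
            "score":   [3.1, 3.4, 3.9, 2.8, 3.0, 3.2, 3.5, 3.9, 4.4],
        })

        # Random-intercept growth model: score ~ wave, intercepts vary by subject.
        model = smf.mixedlm("score ~ wave", data, groups=data["subject"])
        result = model.fit()
        print(result.summary())  # fixed effects and random-effect variance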

  20. Selecting global climate models for regional climate change studies

    OpenAIRE

    Pierce, David W.; Barnett, Tim P.; Santer, Benjamin D.; Gleckler, Peter J.

    2009-01-01

    Regional or local climate change modeling studies currently require starting with a global climate model, then downscaling to the region of interest. How should global models be chosen for such studies, and what effect do such choices have? This question is addressed in the context of a regional climate detection and attribution (D&A) study of January-February-March (JFM) temperature over the western U.S. Models are often selected for a regional D&A analysis based on the quality of the simula...

  1. A step-by-step procedure for pH model construction in aquatic systems

    Directory of Open Access Journals (Sweden)

    A. F. Hofmann

    2007-10-01

    Full Text Available We present, by means of a simple example, a comprehensive step-by-step procedure to consistently derive a pH model of aquatic systems. As pH modeling is inherently complex, we make every step of the model generation process explicit, thus ensuring conceptual, mathematical, and chemical correctness. Summed quantities, such as total inorganic carbon and total alkalinity, and the influences of modeled processes on them are consistently derived. The model is subsequently reformulated until numerically and computationally simple dynamical solutions, like a variation of the operator splitting approach (OSA) and the direct substitution approach (DSA), are obtained. As several solution methods are pointed out, connections between previous pH modelling approaches are established. The final reformulation of the system according to the DSA allows for quantification of the influences of kinetic processes on the rate of change of proton concentration in models containing multiple biogeochemical processes. These influences are calculated including the effect of re-equilibration of the system due to a set of acid-base reactions in local equilibrium. This possibility of quantifying influences of modeled processes on the pH makes the end-product of the described model generation procedure a powerful tool for understanding the internal pH dynamics of aquatic systems.
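
    To make the flavor of the underlying algebraic problem concrete, the sketch below solves its simplest version: given dissolved inorganic carbon (DIC) and total alkalinity (TA), and assuming only the carbonate and water acid-base systems, find [H+] by bisection. The equilibrium constants are illustrative round numbers, not the values used in the paper.

        import math

        # Minimal carbonate-system pH solver: find [H+] such that the alkalinity
        # implied by DIC and [H+] matches the prescribed total alkalinity (TA).
        K1, K2, KW = 1.0e-6, 1.0e-9, 1.0e-14   # illustrative constants (mol/L)

        def alkalinity(h, dic):
            """TA = [HCO3-] + 2[CO3--] + [OH-] - [H+] for given [H+] and DIC."""
            denom = h * h + K1 * h + K1 * K2
            hco3 = dic * K1 * h / denom
            co3 = dic * K1 * K2 / denom
            return hco3 + 2.0 * co3 + KW / h - h

        def solve_ph(dic, ta, lo=1e-12, hi=1e-2, tol=1e-16):
            """Bisection on [H+]; alkalinity(h) decreases monotonically in h."""
            while hi - lo > tol:
                mid = 0.5 * (lo + hi)
                if alkalinity(mid, dic) > ta:
                    lo = mid          # too little acid: increase [H+]
                else:
                    hi = mid
            return 0.5 * (lo + hi)

        h = solve_ph(dic=2.0e-3, ta=2.2e-3)
        print("pH ~", -math.log10(h))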

  2. Spatial Fleming-Viot models with selection and mutation

    CERN Document Server

    Dawson, Donald A

    2014-01-01

    This book constructs a rigorous framework for analysing selected phenomena in the evolutionary theory of populations that arise from the combined effects of migration, selection and mutation in a spatial stochastic population model, namely the evolution towards fitter and fitter types through punctuated equilibria. The discussion is based on a number of new methods, in particular multiple scale analysis, nonlinear Markov processes and their entrance laws, atomic measure-valued evolutions, and new forms of duality (for state-dependent mutation and multitype selection), which are used to prove ergodic theorems in this context and are applicable to many other questions, as well as renormalization analysis for a variety of phenomena (stasis, punctuated equilibrium, failure of naive branching approximations, biodiversity) which occur due to the combination of rare mutation, resampling, migration and selection and which make it necessary to mathematically bridge the gap (in the limit) between time and space scales.

  3. Model selection and inference a practical information-theoretic approach

    CERN Document Server

    Burnham, Kenneth P

    1998-01-01

    This book is unique in that it covers the philosophy of model-based data analysis and an omnibus strategy for the analysis of empirical data. The book introduces information-theoretic approaches and focuses critical attention on a priori modeling and the selection of a good approximating model that best represents the inference supported by the data. Kullback-Leibler information represents a fundamental quantity in science and is Hirotugu Akaike's basis for model selection. The maximized log-likelihood function can be bias-corrected to provide an estimate of expected, relative Kullback-Leibler information. This leads to Akaike's Information Criterion (AIC) and various extensions; these are relatively simple and easy to use in practice, but little taught in statistics classes and far less understood in the applied sciences than should be the case. The information-theoretic approaches provide a unified and rigorous theory, an extension of likelihood theory, an important application of information theory, and are ...
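
    As a concrete illustration of the information-theoretic approach the book describes, the sketch below computes AIC and the derived Akaike weights for a set of candidate models, given their maximized log-likelihoods and parameter counts; the numbers are invented for illustration.

        import math

        # (maximized log-likelihood, number of estimated parameters) per model;
        # the values are illustrative, not from any real analysis.
        models = {"M1": (-102.3, 3), "M2": (-100.9, 5), "M3": (-100.7, 8)}

        aic = {name: 2 * k - 2 * ll for name, (ll, k) in models.items()}
        best = min(aic.values())
        delta = {name: a - best for name, a in aic.items()}

        # Akaike weights: relative likelihood of each model, normalized to sum to 1.
        rel = {name: math.exp(-0.5 * d) for name, d in delta.items()}
        total = sum(rel.values())
        weights = {name: r / total for name, r in rel.items()}

        for name in sorted(aic, key=aic.get):
            print(name, "AIC=%.2f" % aic[name], "delta=%.2f" % delta[name],
                  "weight=%.3f" % weights[name])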

  4. Selecting an optimal mixed products using grey relationship model

    Directory of Open Access Journals (Sweden)

    Farshad Faezy Razi

    2013-06-01

    Full Text Available This paper presents an integrated supplier selection and inventory management approach using a grey relationship model (GRM) together with a multi-objective decision-making process. The proposed model first ranks different suppliers based on the GRM technique and then determines the optimum level of inventory by considering different objectives. To show the implementation of the proposed model, we use benchmark data presented by Talluri and Baker [Talluri, S., & Baker, R. C. (2002). A multi-phase mathematical programming approach for effective supply chain design. European Journal of Operational Research, 141(3), 544-558.]. The preliminary results indicate that the proposed model is capable of handling different criteria for supplier selection.
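
    Grey relational ranking of suppliers, as used in the first phase of such models, can be sketched as follows: normalize each criterion, compute grey relational coefficients against an ideal reference series, and average them into a grade per supplier. The data and weights below are invented; they are not the Talluri and Baker benchmark.

        import numpy as np

        # Rows = suppliers, columns = criteria (all benefit-type: larger is better).
        X = np.array([[0.7, 80.0, 3.2],
                      [0.9, 60.0, 4.1],
                      [0.6, 95.0, 2.8]])
        w = np.array([0.5, 0.3, 0.2])    # illustrative criterion weights, sum to 1

        # 1. Normalize each criterion to [0, 1] (benefit form).
        Z = (X - X.min(axis=0)) / (X.max(axis=0) - X.min(axis=0))

        # 2. Deviations from the ideal reference series (all ones after normalization).
        delta = np.abs(1.0 - Z)
        rho = 0.5                        # conventional distinguishing coefficient

        # 3. Grey relational coefficients and weighted grades.
        xi = (delta.min() + rho * delta.max()) / (delta + rho * delta.max())
        grade = xi @ w

        for i, g in enumerate(grade, 1):
            print("supplier %d: grey relational grade = %.3f" % (i, g))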

  5. Approximation of skewed interfaces with tensor-based model reduction procedures: Application to the reduced basis hierarchical model reduction approach

    Science.gov (United States)

    Ohlberger, Mario; Smetana, Kathrin

    2016-09-01

    In this article we introduce a procedure which makes it possible to recover the potentially very good approximation properties of tensor-based model reduction procedures for the solution of partial differential equations in the presence of interfaces or strong gradients in the solution which are skewed with respect to the coordinate axes. The two key ideas are the location of the interface, either by solving a lower-dimensional partial differential equation or by using data functions, and the subsequent removal of the interface from the solution by choosing the determined interface as the lifting function of the Dirichlet boundary conditions. We demonstrate in numerical experiments for linear elliptic equations and the reduced basis hierarchical model reduction approach that the proposed procedure locates the interface well and yields a significantly improved convergence behavior, even in the case when we only consider an approximation of the interface.

  6. A topic evolution model with sentiment and selective attention

    Science.gov (United States)

    Si, Xia-Meng; Wang, Wen-Dong; Zhai, Chun-Qing; Ma, Yan

    2017-04-01

    Topic evolution is a hybrid dynamics of information propagation and opinion interaction. The dynamics of opinion interaction is inherently interwoven with the dynamics of information propagation in the network, owing to the bidirectional influences between interaction and diffusion. The degree of sentiment determines whether the topic can continue to spread from a node, and selective attention determines the direction of information flow and the selection of communicatees. To this end, we put forward a sentiment-based mixed dynamics model with selective attention and apply Bayesian updating rules to it. Our model can also describe users who remain seemingly isolated from a topic, for various reasons, even when everybody around them has heard about it. Numerical simulations show that more insiders initially and fewer simultaneous spreaders can lessen extremism. To promote topic diffusion or restrain the prevailing of extremism, fewer agents with constructive motivation and more agents with no involving motivation are encouraged.

  7. Evidence accumulation as a model for lexical selection.

    Science.gov (United States)

    Anders, R; Riès, S; van Maanen, L; Alario, F X

    2015-11-01

    We propose and demonstrate evidence accumulation as a plausible theoretical and/or empirical model for the lexical selection process of lexical retrieval. A number of current psycholinguistic theories consider lexical selection as a process of selecting a lexical target from a number of alternatives, each of which has a varying activation (or signal support) that largely results from initial stimulus recognition. We present a thorough case for how such a process may be theoretically explained by the evidence accumulation paradigm, and we demonstrate how this paradigm can be directly related to, or combined with, conventional psycholinguistic theory and its simulatory instantiations (generally, neural network models). Then, with a demonstrative application on a large new real data set, we establish how the empirical evidence accumulation approach is able to provide parameter results that are informative to leading psycholinguistic theory and that motivate future theoretical development. Copyright © 2015 Elsevier Inc. All rights reserved.
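
    The core idea of an evidence accumulation (race) account of lexical selection can be simulated in a few lines: each lexical candidate accumulates noisy evidence at its own drift rate, and the first accumulator to reach threshold determines the response and the response time. The drift rates, threshold, and noise level below are arbitrary, not fitted values from the paper.

        import numpy as np

        rng = np.random.default_rng(0)

        def race_trial(drifts, threshold=1.0, noise=0.1, dt=0.001):
            """One trial of a noisy race between lexical candidates.

            Returns (winner index, response time in seconds)."""
            d = np.asarray(drifts, dtype=float)
            x = np.zeros(len(d))
            t = 0.0
            while x.max() < threshold:
                x += d * dt + noise * np.sqrt(dt) * rng.standard_normal(len(x))
                t += dt
            return int(np.argmax(x)), t

        # Target word has higher drift (more activation) than two competitors.
        wins, rts = np.zeros(3), []
        for _ in range(1000):
            w, t = race_trial([1.2, 0.8, 0.6])
            wins[w] += 1
            rts.append(t)

        print("selection frequencies:", wins / wins.sum())
        print("mean RT: %.3f s" % np.mean(rts))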

  8. Second-order model selection in mixture experiments

    Energy Technology Data Exchange (ETDEWEB)

    Redgate, P.E.; Piepel, G.F.; Hrma, P.R.

    1992-07-01

    Full second-order models for q-component mixture experiments contain q(q+1)/2 terms, a number which increases rapidly as q increases. Fitting full second-order models for larger q may involve problems with ill-conditioning and overfitting. These problems can be remedied by transforming the mixture components and/or fitting reduced forms of the full second-order mixture model. Various component transformation and model reduction approaches are discussed. Data from a 10-component nuclear waste glass study are used to illustrate ill-conditioning and overfitting problems that can be encountered when fitting a full second-order mixture model. Component transformation, model term selection, and model evaluation/validation techniques are discussed and illustrated for the waste glass example.
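
    For reference, a full second-order mixture model of the Scheffé form has no intercept and contains the q linear terms plus the q(q-1)/2 cross-products, giving q(q+1)/2 terms in total. The sketch below builds that design matrix and fits it by least squares for a toy three-component mixture; the data are random stand-ins, not the waste glass measurements.

        import numpy as np
        from itertools import combinations

        rng = np.random.default_rng(1)

        # Toy mixture data: each row of X sums to 1 (three components).
        X = rng.dirichlet(alpha=[1, 1, 1], size=30)
        y = (2 * X[:, 0] + 3 * X[:, 1] + 1 * X[:, 2]
             + 4 * X[:, 0] * X[:, 1] + rng.normal(0, 0.05, 30))

        def scheffe_design(X):
            """Linear terms x_i plus cross-products x_i * x_j; no intercept."""
            cross = [X[:, i] * X[:, j]
                     for i, j in combinations(range(X.shape[1]), 2)]
            return np.column_stack([X] + cross)

        D = scheffe_design(X)               # 3 + 3 = 6 columns for q = 3
        beta, *_ = np.linalg.lstsq(D, y, rcond=None)
        print("fitted coefficients:", np.round(beta, 2))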

  9. Measuring balance and model selection in propensity score methods

    NARCIS (Netherlands)

    Belitser, S.; Martens, Edwin P.; Pestman, Wiebe R.; Groenwold, Rolf H.H.; De Boer, Anthonius; Klungel, Olaf H.

    2011-01-01

    Background: Propensity score (PS) methods focus on balancing confounders between groups to estimate an unbiased treatment or exposure effect. However, there is a lack of attention to actually measuring, reporting and using the information on balance, for instance for model selection. Objectives: To de
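
    One common way to actually measure the balance this record calls for is the standardized mean difference (SMD) of each covariate between groups, before and after inverse-probability-of-treatment weighting. The sketch below assumes a simple simulated data layout and is not tied to the study's own data or balance metric.

        import numpy as np

        def smd(x, treat, weights=None):
            """Standardized mean difference of covariate x between groups."""
            w = np.ones_like(x, dtype=float) if weights is None else weights
            t, c = treat == 1, treat == 0
            m1 = np.average(x[t], weights=w[t])
            m0 = np.average(x[c], weights=w[c])
            v1 = np.average((x[t] - m1) ** 2, weights=w[t])
            v0 = np.average((x[c] - m0) ** 2, weights=w[c])
            return (m1 - m0) / np.sqrt(0.5 * (v1 + v0))

        rng = np.random.default_rng(2)
        age = rng.normal(55, 10, 500)
        ps = 1 / (1 + np.exp(-(age - 55) / 10))   # true propensity, for illustration
        treat = (rng.random(500) < ps).astype(int)
        iptw = np.where(treat == 1, 1 / ps, 1 / (1 - ps))

        print("SMD before weighting: %.3f" % smd(age, treat))
        print("SMD after IPTW:       %.3f" % smd(age, treat, iptw))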

  10. Selecting crop models for decision making in wheat insurance

    NARCIS (Netherlands)

    Castaneda Vera, A.; Leffelaar, P.A.; Alvaro-Fuentes, J.; Cantero-Martinez, C.; Minguez, M.I.

    2015-01-01

    In crop insurance, the accuracy with which the insurer quantifies the actual risk is highly dependent on the availability of actual yield data. Crop models might be valuable tools for generating data on expected yields for risk assessment when no historical records are available. However, selecting a c

  11. Cross-validation criteria for SETAR model selection

    NARCIS (Netherlands)

    de Gooijer, J.G.

    2001-01-01

    Three cross-validation criteria, denoted C, C_c, and C_u, are proposed for selecting the orders of a self-exciting threshold autoregressive (SETAR) model when both the delay and the threshold value are unknown. The derivation of C is within a natural cross-validation framework. The criterion C_c is si

  12. Lightweight Graphical Models for Selectivity Estimation Without Independence Assumptions

    DEFF Research Database (Denmark)

    Tzoumas, Kostas; Deshpande, Amol; Jensen, Christian S.

    2011-01-01

    ... Estimation errors in today's optimizers are frequently caused by missed correlations between attributes. We present a selectivity estimation approach that does not make the independence assumptions. By carefully using concepts from the field of graphical models, we are able to factor the joint probability distribution of all...

  13. Selecting crop models for decision making in wheat insurance

    NARCIS (Netherlands)

    Castaneda Vera, A.; Leffelaar, P.A.; Alvaro-Fuentes, J.; Cantero-Martinez, C.; Minguez, M.I.

    2015-01-01

    In crop insurance, the accuracy with which the insurer quantifies the actual risk is highly dependent on the availability of actual yield data. Crop models might be valuable tools for generating data on expected yields for risk assessment when no historical records are available. However, selecting a

  14. Accurate model selection of relaxed molecular clocks in bayesian phylogenetics.

    Science.gov (United States)

    Baele, Guy; Li, Wai Lok Sibon; Drummond, Alexei J; Suchard, Marc A; Lemey, Philippe

    2013-02-01

    Recent implementations of path sampling (PS) and stepping-stone sampling (SS) have been shown to outperform the harmonic mean estimator (HME) and a posterior simulation-based analog of Akaike's information criterion through Markov chain Monte Carlo (AICM) in Bayesian model selection of demographic and molecular clock models. Almost simultaneously, a Bayesian model averaging approach was developed that avoids conditioning on a single model but averages over a set of relaxed clock models. This approach returns estimates of the posterior probability of each clock model, through which one can estimate the Bayes factor in favor of the maximum a posteriori (MAP) clock model; however, this Bayes factor estimate may suffer when the posterior probability of the MAP model approaches 1. Here, we compare these two recent developments with the HME, stabilized/smoothed HME (sHME), and AICM, using both synthetic and empirical data. Our comparison shows reassuringly that MAP identification and its Bayes factor provide similar performance to PS and SS, and that these approaches considerably outperform HME, sHME, and AICM in selecting the correct underlying clock model. We also illustrate the importance of using proper priors on a large set of empirical data sets.
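
    The stepping-stone estimator compared above is easy to reproduce on a toy problem where the marginal likelihood is known exactly. The sketch below uses a one-parameter normal model with a conjugate prior and a simple random-walk sampler for each power posterior; all settings (rungs, chain lengths, step size) are illustrative and have nothing to do with BEAST or phylogenetic models.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(3)
        y = rng.normal(1.0, 1.0, 20)             # data: N(theta, 1), theta unknown

        loglik = lambda th: stats.norm.logpdf(y, th, 1.0).sum()
        logprior = lambda th: stats.norm.logpdf(th, 0.0, 1.0)

        def power_posterior_samples(beta, n=2000, step=0.5):
            """Random-walk Metropolis targeting prior * likelihood^beta."""
            th, out = 0.0, []
            lp = beta * loglik(th) + logprior(th)
            for _ in range(n):
                prop = th + step * rng.standard_normal()
                lp_prop = beta * loglik(prop) + logprior(prop)
                if np.log(rng.random()) < lp_prop - lp:
                    th, lp = prop, lp_prop
                out.append(th)
            return np.array(out[n // 2:])        # discard burn-in

        # Stepping-stone: log z = sum_k log E_{beta_k}[ L^(beta_{k+1} - beta_k) ].
        betas = np.linspace(0.0, 1.0, 11)
        logz = 0.0
        for b0, b1 in zip(betas[:-1], betas[1:]):
            ll = np.array([loglik(t) for t in power_posterior_samples(b0)])
            m = ll.max()
            logz += np.log(np.mean(np.exp((b1 - b0) * (ll - m)))) + (b1 - b0) * m

        # Exact log marginal likelihood: y ~ N(0, I + 11'), available in closed form.
        n = len(y)
        cov = np.eye(n) + np.ones((n, n))
        exact = stats.multivariate_normal.logpdf(y, np.zeros(n), cov)
        print("stepping-stone: %.3f   exact: %.3f" % (logz, exact))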

  15. Rank-based model selection for multiple ions quantum tomography

    Science.gov (United States)

    Guţă, Mădălin; Kypraios, Theodore; Dryden, Ian

    2012-10-01

    The statistical analysis of measurement data has become a key component of many quantum engineering experiments. As standard full state tomography becomes unfeasible for large-dimensional quantum systems, one needs to exploit prior information and the 'sparsity' properties of the experimental state in order to reduce the dimensionality of the estimation problem. In this paper we propose model selection as a general principle for finding the simplest, or most parsimonious, explanation of the data, by fitting different models and choosing the estimator with the best trade-off between likelihood fit and model complexity. We apply two well-established model selection methods, the Akaike information criterion (AIC) and the Bayesian information criterion (BIC), to models consisting of states of fixed rank and to datasets such as are currently produced in multiple-ion experiments. We test the performance of AIC and BIC on randomly chosen low-rank states of four ions, and study the dependence of the selected rank on the number of measurement repetitions for one-ion states. We then apply the methods to real data from a four-ion experiment aimed at creating a Smolin state of rank 4. By applying the two methods together with the Pearson χ2 test we conclude that the data can be suitably described with a model whose rank is between 7 and 9. Additionally we find that the mean square error of the maximum likelihood estimator for pure states is close to that of the optimal over all possible measurements.

  16. A double-step truncation procedure for large-scale shell-model calculations

    CERN Document Server

    Coraggio, L; Itaco, N

    2016-01-01

    We present a procedure that helps to reduce the computational complexity of large-scale shell-model calculations, by preserving as much as possible the role of the rejected degrees of freedom in an effective approach. Our truncation is driven first by the analysis of the effective single-particle energies of the original large-scale shell-model Hamiltonian, so as to locate the relevant degrees of freedom for describing a class of isotopes or isotones, namely the single-particle orbitals that will constitute a new truncated model space. The second step is to perform a unitary transformation of the original Hamiltonian from its model space into the truncated one. This transformation generates a new shell-model Hamiltonian, defined in a smaller model space, that effectively retains the role of the excluded single-particle orbitals. As an application of this procedure, we have chosen a realistic shell-model Hamiltonian defined in a large model space, set up by seven and five proton and neutron single-particle orb...

  17. A Two-Dimensional Modeling Procedure to Estimate the Loss Equivalent Resistance Including the Saturation Effect

    Directory of Open Access Journals (Sweden)

    Rosa Ana Salas

    2013-11-01

    Full Text Available We propose a modeling procedure specifically designed for a ferrite inductor excited by a waveform in the time domain. We estimate the loss resistance in the core (a parameter of the electrical model of the inductor) by means of a Finite Element Method in 2D, which leads to significant computational advantages over the 3D model. The methodology is validated for an RM (rectangular modulus) ferrite core working in the linear and saturation regions. Excellent agreement is found between the experimental data and the computational results.

  18. Selective large-eddy simulation of hypersonic flows. Procedure to activate the filtering in unresolved regions only

    CERN Document Server

    Tordella, D; Massaglia, S; Mignone, A

    2012-01-01

    A new method for the localization of the regions where the turbulent fluctuations are unresolved is applied to the large-eddy simulation (LES) of a compressible turbulent jet with an initial Mach number equal to 5. The localization method used is called selective LES and is based on the exploitation of a scalar probe function f which represents the magnitude of the stretching-tilting term of the vorticity equation normalized with the enstrophy (Tordella et al. 2007). For a fully developed turbulent field of fluctuations, statistical analysis shows that the probability that f is larger than 2 is almost zero, and, for any given threshold, it is larger if the flow is under-resolved. By computing the spatial field of f in each instantaneous realization of the simulation it is possible to locate the regions where the magnitude of the normalized vortical stretching-tilting is anomalously high. The sub-grid model is then introduced into the governing equations in such regions only. The results of the selective LES s...

  19. Selective refinement and selection of near-native models in protein structure prediction.

    Science.gov (United States)

    Zhang, Jiong; Barz, Bogdan; Zhang, Jingfen; Xu, Dong; Kosztin, Ioan

    2015-10-01

    In recent years in silico protein structure prediction has reached a level where fully automated servers can generate large pools of near-native structures. However, the identification and further refinement of the best structures from the pool of models remain problematic. To address these issues, we have developed (i) a target-specific selective refinement (SR) protocol; and (ii) a molecular dynamics (MD) simulation-based ranking (SMDR) method. In SR the all-atom refinement of structures is accomplished via the Rosetta Relax protocol, subject to specific constraints determined by the size and complexity of the target. The best refined models are selected with SMDR by testing their relative stability against gradual heating through all-atom MD simulations. Through extensive testing we have found that Mufold-MD, our fully automated protein structure prediction server updated with the SR and SMDR modules, consistently outperformed its previous versions.

  20. A model selection approach to analysis of variance and covariance.

    Science.gov (United States)

    Alber, Susan A; Weiss, Robert E

    2009-06-15

    An alternative to analysis of variance is a model selection approach where every partition of the treatment means into clusters with equal value is treated as a separate model. The null hypothesis that all treatments are equal corresponds to the partition with all means in a single cluster. The alternative hypothesis corresponds to the set of all other partitions of treatment means. A model selection approach can also be used for a treatment-by-covariate interaction, where the null hypothesis and each alternative correspond to a partition of treatments into clusters with equal covariate effects. We extend the partition-as-model approach to simultaneous inference for both the treatment main effect and the treatment interaction with a continuous covariate, with separate partitions for the intercepts and the treatment-specific slopes. The model space is the Cartesian product of the intercept partition and the slope partition, and we develop five joint priors for this model space. In four of these priors the intercept and slope partitions are dependent. We advise on setting priors over models, and we use the model to analyze an orthodontic data set that compares the frictional resistance created by orthodontic fixtures. Copyright (c) 2009 John Wiley & Sons, Ltd.
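
    The partition-as-model idea is straightforward to prototype for a small number of treatments: enumerate all set partitions of the treatment labels, fit each partition as a model in which treatments in the same block share a mean, and score it. The sketch below scores partitions with BIC rather than the paper's joint Bayesian priors, and the data are simulated.

        import numpy as np

        def partitions(items):
            """All set partitions of a list (fine for small treatment counts)."""
            if not items:
                yield []
                return
            first, rest = items[0], items[1:]
            for part in partitions(rest):
                for i in range(len(part)):
                    yield part[:i] + [part[i] + [first]] + part[i + 1:]
                yield part + [[first]]

        rng = np.random.default_rng(4)
        treatments = [0, 1, 2, 3]
        # Simulated data: treatments 0/1 share a mean, 2/3 share another.
        y = {t: rng.normal(0.0 if t < 2 else 1.0, 1.0, 15) for t in treatments}
        n = sum(len(v) for v in y.values())

        def bic(part):
            rss = 0.0
            for block in part:
                vals = np.concatenate([y[t] for t in block])
                rss += ((vals - vals.mean()) ** 2).sum()
            k = len(part)                # one mean per block
            return n * np.log(rss / n) + k * np.log(n)

        best = min(partitions(treatments), key=bic)
        print("best partition by BIC:", best)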

  1. Model selection for the extraction of movement primitives.

    Science.gov (United States)

    Endres, Dominik M; Chiovetto, Enrico; Giese, Martin A

    2013-01-01

    A wide range of blind source separation methods have been used in motor control research for the extraction of movement primitives from EMG and kinematic data. Popular examples are principal component analysis (PCA), independent component analysis (ICA), anechoic demixing, and the time-varying synergy model (d'Avella and Tresch, 2002). However, choosing the parameters of these models, or indeed choosing the type of model, is often done in a heuristic fashion, driven by result expectations as much as by the data. We propose an objective criterion which allows one to select the model type, the number of primitives and the temporal smoothness prior. Our approach is based on a Laplace approximation to the posterior distribution of the parameters of a given blind source separation model, re-formulated as a Bayesian generative model. We first validate our criterion on ground truth data, showing that it performs at least as well as traditional model selection criteria [the Bayesian information criterion, BIC (Schwarz, 1978), and the Akaike information criterion, AIC (Akaike, 1974)]. Then, we analyze human gait data, finding that an anechoic mixture model with a temporal smoothness constraint on the sources can best account for the data.
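
    The Laplace approximation at the heart of such a criterion can be written down compactly: with the posterior mode theta_hat and the Hessian H of the negative log posterior at the mode, log p(y|M) is approximated by log p(y|theta_hat) + log p(theta_hat) + (d/2) log(2 pi) - (1/2) log|H|. The toy sketch below applies this to a one-parameter normal model (an assumption for illustration, not the authors' source separation models).

        import numpy as np
        from scipy import optimize, stats

        rng = np.random.default_rng(5)
        y = rng.normal(0.5, 1.0, 50)

        def neg_log_post(theta):
            # Model: y ~ N(theta, 1), prior theta ~ N(0, 10^2).
            t = float(np.asarray(theta).reshape(-1)[0])
            return -(stats.norm.logpdf(y, t, 1.0).sum()
                     + stats.norm.logpdf(t, 0.0, 10.0))

        res = optimize.minimize(neg_log_post, x0=np.array([0.0]))
        theta_hat = float(res.x[0])
        d = 1                                    # one parameter in this toy model

        # Finite-difference Hessian of the negative log posterior at the mode.
        eps = 1e-5
        h = (neg_log_post(theta_hat + eps) - 2 * neg_log_post(theta_hat)
             + neg_log_post(theta_hat - eps)) / eps**2

        log_evidence = (-neg_log_post(theta_hat)
                        + 0.5 * d * np.log(2 * np.pi) - 0.5 * np.log(h))
        print("Laplace log evidence: %.3f" % log_evidence)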

  2. Model selection for the extraction of movement primitives

    Directory of Open Access Journals (Sweden)

    Dominik M Endres

    2013-12-01

    Full Text Available A wide range of blind source separation methods have been used in motor control research for the extraction of movement primitives from EMG and kinematic data. Popular examples are principal component analysis (PCA), independent component analysis (ICA), anechoic demixing, and the time-varying synergy model. However, choosing the parameters of these models, or indeed choosing the type of model, is often done in a heuristic fashion, driven by result expectations as much as by the data. We propose an objective criterion which allows one to select the model type, the number of primitives and the temporal smoothness prior. Our approach is based on a Laplace approximation to the posterior distribution of the parameters of a given blind source separation model, re-formulated as a Bayesian generative model. We first validate our criterion on ground truth data, showing that it performs at least as well as traditional model selection criteria (the Bayesian information criterion, BIC, and the Akaike information criterion, AIC). Then, we analyze human gait data, finding that an anechoic mixture model with a temporal smoothness constraint on the sources can best account for the data.

  3. How many separable sources? Model selection in independent components analysis.

    Science.gov (United States)

    Woods, Roger P; Hansen, Lars Kai; Strother, Stephen

    2015-01-01

    Unlike mixtures consisting solely of non-Gaussian sources, mixtures including two or more Gaussian components cannot be separated using standard independent components analysis methods that are based on higher order statistics and independent observations. The mixed Independent Components Analysis/Principal Components Analysis (mixed ICA/PCA) model described here accommodates one or more Gaussian components in the independent components analysis model and uses principal components analysis to characterize contributions from this inseparable Gaussian subspace. Information theory can then be used to select from among potential model categories with differing numbers of Gaussian components. Based on simulation studies, the assumptions and approximations underlying the Akaike Information Criterion do not hold in this setting, even with a very large number of observations. Cross-validation is a suitable, though computationally intensive, alternative for model selection. Application of the algorithm is illustrated using Fisher's iris data set and Howells' craniometric data set. Mixed ICA/PCA is of potential interest in any field of scientific investigation where the authenticity of blindly separated non-Gaussian sources might otherwise be questionable. Failure of the Akaike Information Criterion in model selection also has relevance in traditional independent components analysis where all sources are assumed non-Gaussian.

  4. Statistical modelling in biostatistics and bioinformatics selected papers

    CERN Document Server

    Peng, Defen

    2014-01-01

    This book presents selected papers on statistical model development related mainly to the fields of Biostatistics and Bioinformatics. The coverage of the material falls squarely into the following categories: (a) Survival analysis and multivariate survival analysis, (b) Time series and longitudinal data analysis, (c) Statistical model development and (d) Applied statistical modelling. Innovations in statistical modelling are presented throughout each of the four areas, with some intriguing new ideas on hierarchical generalized non-linear models and on frailty models with structural dispersion, just to mention two examples. The contributors include distinguished international statisticians such as Philip Hougaard, John Hinde, Il Do Ha, Roger Payne and Alessandra Durio, among others, as well as promising newcomers. Some of the contributions have come from researchers working in the BIO-SI research programme on Biostatistics and Bioinformatics, centred on the Universities of Limerick and Galway in Ireland and fu...

  5. How Many Separable Sources? Model Selection In Independent Components Analysis

    DEFF Research Database (Denmark)

    Woods, Roger P.; Hansen, Lars Kai; Strother, Stephen

    2015-01-01

    Unlike mixtures consisting solely of non-Gaussian sources, mixtures including two or more Gaussian components cannot be separated using standard independent components analysis methods that are based on higher order statistics and independent observations. The mixed Independent Components Analysis/Principal Components Analysis (mixed ICA/PCA) model described here accommodates one or more Gaussian components in the independent components analysis model and uses principal components analysis to characterize contributions from this inseparable Gaussian subspace. Information theory can then be used to select from among potential model categories with differing numbers of Gaussian components. Mixed ICA/PCA is of potential interest in any field of scientific investigation where the authenticity of blindly separated non-Gaussian sources might otherwise be questionable. Failure of the Akaike Information Criterion in model selection also has relevance in traditional independent components analysis where all sources are assumed non-Gaussian.

  6. A performance weighting procedure for GCMs based on explicit probabilistic models and accounting for observation uncertainty

    Science.gov (United States)

    Renard, Benjamin; Vidal, Jean-Philippe

    2016-04-01

    In recent years, the climate modeling community has put a lot of effort into releasing the outputs of multimodel experiments for use by the wider scientific community. In such experiments, several structurally distinct GCMs are run using the same observed forcings (for the historical period) or the same projected forcings (for the future period). In addition, several members are produced for a single given model structure, by running each GCM with slightly different initial conditions. This multiplicity of GCM outputs offers many opportunities in terms of uncertainty quantification or GCM comparisons. In this presentation, we propose a new procedure to weight GCMs according to their ability to reproduce the observed climate. Such weights can be used to combine the outputs of several models in a way that rewards good-performing models and discards poorly-performing ones. The proposed procedure has the following main properties: 1. It is based on explicit probabilistic models describing the time series produced by the GCMs and the corresponding historical observations, 2. It can use several members whenever available, 3. It accounts for the uncertainty in observations, 4. It assigns a weight to each GCM (all weights summing up to one), 5. It can also assign a weight to the "H0 hypothesis" that all GCMs in the multimodel ensemble are not compatible with observations. The application of the weighting procedure is illustrated with several case studies including synthetic experiments, simple cases where the target GCM output is a simple univariate variable and more realistic cases where the target GCM output is a multivariate and/or a spatial variable. These case studies illustrate the generality of the procedure which can be applied in a wide range of situations, as long as the analyst is prepared to make an explicit probabilistic assumption on the target variable. Moreover, these case studies highlight several interesting properties of the weighting procedure. In

  7. A Robbins-Monro procedure for a class of models of deformation

    CERN Document Server

    Fraysse, Philippe

    2012-01-01

    The paper deals with the statistical analysis of several data sets associated with shape invariant models with different translation, height and scaling parameters. We propose to estimate these parameters together with the common shape function. Our approach extends the recent work of Bercu and Fraysse to multivariate shape invariant models. We propose a very efficient Robbins-Monro procedure for the estimation of the translation parameters and we use these estimates in order to evaluate scale parameters. The main pattern is estimated by a weighted Nadaraya-Watson estimator. We provide almost sure convergence and asymptotic normality for all estimators. Finally, we illustrate the convergence of our estimation procedure on simulated data as well as on real ECG data.

  8. A fast and systematic procedure to develop dynamic models of bioprocesses: application to microalgae cultures

    Directory of Open Access Journals (Sweden)

    J. Mailier

    2010-09-01

    Full Text Available The purpose of this paper is to report on the development of a procedure for inferring black-box, yet biologically interpretable, dynamic models of bioprocesses based on sets of measurements of a few external components (biomass, substrates, and products of interest). The procedure has three main steps: (a) the determination of the number of macroscopic biological reactions linking the measured components; (b) the estimation of a first reaction scheme, which has interesting mathematical properties but might lack a biological interpretation; and (c) the "projection" (or transformation) of this reaction scheme onto a biologically consistent scheme. The advantage of the method is that it allows fast prototyping of models for cultures of microorganisms that are not well documented. The good performance of the third step of the method is demonstrated by application to an example of microalgal culture.

  9. PROPOSAL OF AN EMPIRICAL MODEL FOR SUPPLIERS SELECTION

    Directory of Open Access Journals (Sweden)

    Paulo Ávila

    2015-03-01

    Full Text Available The problem of selecting suppliers/partners is a crucial and important part of the decision-making process for companies that intend to perform competitively in their area of activity. The selection of a supplier/partner is a time- and resource-consuming task that involves data collection and a careful analysis of the factors that can positively or negatively influence the choice. Nevertheless, it is a critical process that significantly affects the operational performance of each company. In this work, through a literature review, five broad supplier selection criteria were identified: Quality, Financial, Synergies, Cost, and Production System. Within these criteria, five sub-criteria were also included. Thereafter, a survey was developed and companies were contacted in order to answer which factors have more relevance in their decisions to choose suppliers. After interpreting the results and processing the data, a linear weighting model was adopted to reflect the importance of each factor. The model has a hierarchical structure and can be applied with the Analytic Hierarchy Process (AHP) method or the Simple Multi-Attribute Rating Technique (SMART). The result of the research undertaken by the authors is a reference model that provides decision-making support for the supplier/partner selection process.
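
    The adopted linear weighting scheme amounts to a simple additive value model of the SMART type. A minimal sketch follows; the criteria names match the paper's five broad criteria, but the weights and vendor scores are invented for illustration.

        # SMART-style additive weighting; weights and scores are invented.
        weights = {"Quality": 0.30, "Financial": 0.20, "Synergies": 0.15,
                   "Cost": 0.20, "Production System": 0.15}

        suppliers = {
            "Supplier A": {"Quality": 8, "Financial": 6, "Synergies": 7,
                           "Cost": 5, "Production System": 6},
            "Supplier B": {"Quality": 6, "Financial": 8, "Synergies": 5,
                           "Cost": 8, "Production System": 7},
        }

        # Weighted sum per supplier; higher is better.
        scores = {name: sum(weights[c] * s[c] for c in weights)
                  for name, s in suppliers.items()}

        for name, score in sorted(scores.items(), key=lambda kv: -kv[1]):
            print("%s: %.2f" % (name, score))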

  10. Application of Decision Alternatives Evaluation Models to the Selection of Computer Systems

    Directory of Open Access Journals (Sweden)

    Jackson AKPOJARO

    2013-03-01

    Full Text Available The selection of a computer system is a process dependent on many factors; irrespective of how the process proceeds, monetary factors will ultimately play a major role. It is important to recognize the initial costs of acquiring the new hardware and the accompanying software, and also the continuing costs associated with maintaining hardware and software and upgrading devices, which must be budgeted for to continue the infusion of viable applications. However, selecting a given computer system from a choice set is becoming a difficult task following the proliferation of computer brands by various computer manufacturers. This paper reviews different computer system selection methodologies, draws from this background, and provides alternative models with illustrative examples to assist organizations, individual consumers or prospective buyers in arriving at specifications or configurations that meet their established needs or requirements. The paper helps to educate consumers or prospective buyers on the selection criteria and evaluation procedures for analyzing proposals submitted by vendors. The selection models adopted in this paper are evaluated using the weighted values of the different attributes submitted by vendors.

  11. A QFD-based decision making model for computer-aided design software selection

    Directory of Open Access Journals (Sweden)

    Kanika Prasad

    2016-03-01

    Full Text Available With the progress in technology and innovation in product development, the contribution of computer-aided design (CAD) software to the design and manufacture of parts/products is growing significantly. Selection of appropriate CAD software is not a trifling task, as it involves analyzing the suitability of the available software packages to the unique requirements of the organization. The existence of a large number of CAD software vendors, incompatibilities among different hardware and software systems, and the limited technical knowledge and experience of the decision makers further complicate the selection procedure. Moreover, there are very few published research papers related to CAD software selection, and the majority of them have either employed criteria weights computed from subjective judgements of the end users or failed to incorporate the voice of customers in the decision-making process. Quality function deployment (QFD) is a well-known technique for determining the relative importance of customer-defined criteria for the selection of any product or service. Therefore, this paper deals with the design and development of a QFD-based decision-making model in Visual BASIC 6.0 for the selection of CAD software for manufacturing organizations. In order to demonstrate the applicability and potential of the developed model in the form of a software prototype, two illustrative examples are also provided.

  12. Supplier Selection in Virtual Enterprise Model of Manufacturing Supply Network

    Science.gov (United States)

    Kaihara, Toshiya; Opadiji, Jayeola F.

    The market-based approach to manufacturing supply network planning focuses on the competitive attitudes of various enterprises in the network to generate plans that seek to maximize the throughput of the network. It is this competitive behaviour of the member units that we explore in proposing a solution model for a supplier selection problem in convergent manufacturing supply networks. We present a formulation of autonomous units of the network as trading agents in a virtual enterprise network interacting to deliver value to market consumers and discuss the effect of internal and external trading parameters on the selection of suppliers by enterprise units.

  13. A model-based approach to selection of tag SNPs

    Directory of Open Access Journals (Sweden)

    Sun Fengzhu

    2006-06-01

    Full Text Available Abstract Background Single nucleotide polymorphisms (SNPs) are the most common type of polymorphism found in the human genome. Effective genetic association studies require the identification of sets of tag SNPs that capture as much haplotype information as possible. Tag SNP selection is analogous to the problem of data compression in information theory. According to Shannon's framework, the optimal tag set maximizes the entropy of the tag SNPs subject to constraints on the number of SNPs. This approach requires an appropriate probabilistic model. Compared to simple measures of linkage disequilibrium (LD), a good model of haplotype sequences can more accurately account for LD structure. It also provides a machinery for the prediction of tagged SNPs and thereby a way to assess the performance of tag sets through their ability to predict larger SNP sets. Results Here, we compute the description code-lengths of SNP data for an array of models and we develop tag SNP selection methods based on these models and the strategy of entropy maximization. Using data sets from the HapMap and ENCODE projects, we show that the hidden Markov model introduced by Li and Stephens outperforms the other models in several aspects: description code-length of SNP data, information content of tag sets, and prediction of tagged SNPs. This is the first use of this model in the context of tag SNP selection. Conclusion Our study provides strong evidence that the tag sets selected by our best method, based on the Li and Stephens model, outperform those chosen by several existing methods. The results also suggest that information content evaluated with a good model is more sensitive for assessing the quality of a tagging set than the correct prediction rate of tagged SNPs. Besides, we show that haplotype phase uncertainty has an almost negligible impact on the ability of good tag sets to predict tagged SNPs. This justifies the selection of tag SNPs on the basis of haplotype
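
    The entropy-maximization strategy in its simplest, model-free form is a greedy search: at each step, add the SNP whose inclusion maximizes the joint entropy of the selected set over the observed haplotypes. The sketch below uses a toy 0/1 haplotype matrix and the empirical pattern distribution; the paper's method instead evaluates entropy under the Li and Stephens model.

        import numpy as np
        from collections import Counter

        def joint_entropy(H, cols):
            """Empirical entropy (bits) of haplotype patterns at the given SNPs."""
            patterns = Counter(tuple(row) for row in H[:, cols])
            n = H.shape[0]
            p = np.array([c / n for c in patterns.values()])
            return float(-(p * np.log2(p)).sum())

        def greedy_tags(H, k):
            """Greedily pick k tag SNPs maximizing joint entropy."""
            chosen = []
            remaining = list(range(H.shape[1]))
            for _ in range(k):
                best = max(remaining,
                           key=lambda j: joint_entropy(H, chosen + [j]))
                chosen.append(best)
                remaining.remove(best)
            return chosen

        rng = np.random.default_rng(6)
        H = rng.integers(0, 2, size=(40, 10))   # toy: 40 haplotypes x 10 SNPs
        print("selected tag SNPs:", greedy_tags(H, 3))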

  14. Integration of Multiple Genomic Data Sources in a Bayesian Cox Model for Variable Selection and Prediction.

    Science.gov (United States)

    Treppmann, Tabea; Ickstadt, Katja; Zucknick, Manuela

    2017-01-01

    Bayesian variable selection is becoming more and more important in statistical analyses, in particular when performing variable selection in high dimensions. For survival time models in the presence of genomic data, the state of the art is still relatively undeveloped. One of the more recent approaches suggests a Bayesian semiparametric proportional hazards model for right-censored time-to-event data. We extend this model to directly include variable selection, based on a stochastic search procedure within a Markov chain Monte Carlo sampler for inference. This equips us with an intuitive and flexible approach and provides a way of integrating additional data sources and further extensions. We make use of the possibility of implementing parallel tempering to help improve the mixing of the Markov chains. In our examples, we use this Bayesian approach to integrate copy number variation data into a gene-expression-based survival prediction model. This is achieved by formulating an informed prior based on copy number variation. We perform a simulation study to investigate the model's behavior and prediction performance in different situations before applying it to a dataset of glioblastoma patients and evaluating the biological relevance of the findings.

  15. Manual ventilation and open suction procedures contribute to negative pressures in a mechanical lung model

    Science.gov (United States)

    Nakstad, Espen Rostrup; Opdahl, Helge; Heyerdahl, Fridtjof; Borchsenius, Fredrik; Skjønsberg, Ole Henning

    2017-01-01

    Introduction Removal of pulmonary secretions in mechanically ventilated patients usually requires suction with closed catheter systems or flexible bronchoscopes. Manual ventilation is occasionally performed during such procedures if clinicians suspect inadequate ventilation. Suctioning can also be performed with the ventilator entirely disconnected from the endotracheal tube (ETT). The aim of this study was to investigate if these two procedures generate negative airway pressures, which may contribute to atelectasis. Methods The effects of device insertion and suctioning in ETTs were examined in a mechanical lung model with a pressure transducer inserted distal to ETTs of 9 mm, 8 mm and 7 mm internal diameter (ID). A 16 Fr bronchoscope and 12, 14 and 16 Fr suction catheters were used at two different vacuum levels during manual ventilation and with the ETTs disconnected. Results During manual ventilation with ETTs of 9 mm, 8 mm and 7 mm ID, and bronchoscopic suctioning at moderate suction level, peak pressure (PPEAK) dropped from 23, 22 and 24.5 cm H2O to 16, 16 and 15 cm H2O, respectively. Maximum suction reduced PPEAK to 20, 17 and 11 cm H2O, respectively, and the end-expiratory pressure fell from 5, 5.5 and 4.5 cm H2O to –2, –6 and –17 cm H2O. Suctioning through disconnected ETTs (open suction procedure) gave negative model airway pressures throughout the duration of the procedures. Conclusions Manual ventilation and open suction procedures induce negative end-expiratory pressure during endotracheal suctioning, which may have clinical implications in patients who need high PEEP (positive end-expiratory pressure). PMID:28725445

  16. Geostatistical Procedures for Developing Three-Dimensional Aquifer Models from Drillers' Logs

    Science.gov (United States)

    Bohling, G.; Helm, C.

    2013-12-01

    The Hydrostratigraphic Drilling Record Assessment (HyDRA) project is developing procedures for employing the vast but highly qualitative hydrostratigraphic information contained in drillers' logs in the development of quantitative three-dimensional (3D) depictions of subsurface properties for use in flow and transport models to support groundwater management practices. One of the project's objectives is to develop protocols for 3D interpolation of lithological data from drillers' logs, properly accounting for the categorical nature of these data. This poster describes the geostatistical procedures developed to accomplish this objective. Using a translation table currently containing over 62,000 unique sediment descriptions encountered during the transcription of over 15,000 logs in the Kansas High Plains aquifer, the sediment descriptions are translated into 71 standardized terms, which are then mapped into a small number of categories associated with different representative property (e.g., hydraulic conductivity [K]) values. Each log is partitioned into regular intervals and the proportion of each K category within each interval is computed. To properly account for their compositional nature, a logratio transform is applied to the proportions. The transformed values are then kriged to the 3D model grid and backtransformed to determine the proportion of each category within each model cell. Various summary measures can then be computed from the proportions, including a proportion-weighted average K and an entropy measure representing the degree of mixing of categories within each cell. We also describe a related cross-validation procedure for assessing log quality.
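
    The compositional step of this workflow, transforming K-category proportions before interpolation and back afterwards, can be sketched with the additive logratio (alr) transform and its inverse. The alr is an assumed stand-in for whichever logratio the project uses, and the interpolation is stubbed out below where the real workflow would krige the transformed values in 3D.

        import numpy as np

        EPS = 1e-6  # guard against zero proportions before taking logs

        def alr(p):
            """Additive logratio: K proportions -> K-1 unconstrained values."""
            p = np.clip(p, EPS, None)
            p = p / p.sum(axis=-1, keepdims=True)
            return np.log(p[..., :-1] / p[..., -1:])

        def alr_inv(z):
            """Inverse alr: back to proportions, positive and summing to one."""
            e = np.exp(np.concatenate([z, np.zeros(z.shape[:-1] + (1,))], axis=-1))
            return e / e.sum(axis=-1, keepdims=True)

        # Toy: proportions of 3 lithological K-categories at two logged intervals.
        p1 = np.array([0.7, 0.2, 0.1])
        p2 = np.array([0.2, 0.3, 0.5])

        # Stand-in for kriging: linear interpolation of the alr-transformed values.
        z_mid = 0.5 * (alr(p1) + alr(p2))
        print("interpolated proportions:", np.round(alr_inv(z_mid), 3))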

  17. Detecting temporal trends in species assemblages with bootstrapping procedures and hierarchical models

    Science.gov (United States)

    Gotelli, Nicholas J.; Dorazio, Robert M.; Ellison, Aaron M.; Grossman, Gary D.

    2010-01-01

    Quantifying patterns of temporal trends in species assemblages is an important analytical challenge in community ecology. We describe methods of analysis that can be applied to a matrix of counts of individuals that is organized by species (rows) and time-ordered sampling periods (columns). We first developed a bootstrapping procedure to test the null hypothesis of random sampling from a stationary species abundance distribution with temporally varying sampling probabilities. This procedure can be modified to account for undetected species. We next developed a hierarchical model to estimate species-specific trends in abundance while accounting for species-specific probabilities of detection. We analysed two long-term datasets on stream fishes and grassland insects to demonstrate these methods. For both assemblages, the bootstrap test indicated that temporal trends in abundance were more heterogeneous than expected under the null model. We used the hierarchical model to estimate trends in abundance and identified sets of species in each assemblage that were steadily increasing, decreasing or remaining constant in abundance over more than a decade of standardized annual surveys. Our methods of analysis are broadly applicable to other ecological datasets, and they represent an advance over most existing procedures, which do not incorporate effects of incomplete sampling and imperfect detection.
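
    The null model described, random sampling from a stationary species abundance distribution with time-varying sampling effort, can be sketched as a parametric bootstrap: hold the pooled relative abundances fixed, resample each period's total count multinomially, and compare an observed heterogeneity statistic to its null distribution. The statistic below and the handling of detection are simplified stand-ins, not the paper's exact procedure.

        import numpy as np

        rng = np.random.default_rng(7)

        def trend_heterogeneity(M):
            """Toy statistic: variance across species of log-count trend slopes."""
            years = np.arange(M.shape[1])
            slopes = [np.polyfit(years, np.log(row + 1), 1)[0] for row in M]
            return np.var(slopes)

        def bootstrap_test(M, n_boot=999):
            pooled = M.sum(axis=1).astype(float)
            probs = pooled / pooled.sum()      # stationary abundance distribution
            totals = M.sum(axis=0)             # time-varying sampling effort
            obs = trend_heterogeneity(M)
            null = [trend_heterogeneity(np.column_stack(
                        [rng.multinomial(t, probs) for t in totals]))
                    for _ in range(n_boot)]
            return (1 + sum(s >= obs for s in null)) / (n_boot + 1)

        # Toy matrix: 5 species (rows) x 10 annual samples (columns).
        M = rng.poisson(lam=10, size=(5, 10))
        print("bootstrap p-value:", bootstrap_test(M))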

  18. The evolutionary forces maintaining a wild polymorphism of Littorina saxatilis: model selection by computer simulations.

    Science.gov (United States)

    Pérez-Figueroa, A; Cruz, F; Carvajal-Rodríguez, A; Rolán-Alvarez, E; Caballero, A

    2005-01-01

    Two rocky shore ecotypes of Littorina saxatilis from north-west Spain live at different shore levels and habitats and have developed an incomplete reproductive isolation through size assortative mating. The system is regarded as an example of sympatric ecological speciation. Several experiments have indicated that different evolutionary forces (migration, assortative mating and habitat-dependent selection) play a role in maintaining the polymorphism. However, an assessment of the combined contributions of these forces supporting the observed pattern in the wild is absent. A model selection procedure using computer simulations was applied to investigate the contribution of the different evolutionary forces towards the maintenance of the polymorphism. The agreement between alternative models and experimental estimates for a number of parameters was quantified by a least-squares method. The results of the analysis show that the fittest evolutionary model for the observed polymorphism is characterized by high gene flow, intermediate-to-high reproductive isolation between ecotypes, and moderate to strong selection against the nonresident ecotypes on each shore level. In addition, a substantial number of additive loci contributing to the selected trait and a narrow hybrid definition with respect to the phenotype are scenarios that better explain the polymorphism, whereas the ecotype fitnesses at the mid-shore, the level of phenotypic plasticity, and environmental effects are not key parameters.

  19. Models of cultural niche construction with selection and assortative mating.

    Science.gov (United States)

    Creanza, Nicole; Fogarty, Laurel; Feldman, Marcus W

    2012-01-01

    Niche construction is a process through which organisms modify their environment and, as a result, alter the selection pressures on themselves and other species. In cultural niche construction, one or more cultural traits can influence the evolution of other cultural or biological traits by affecting the social environment in which the latter traits may evolve. Cultural niche construction may include either gene-culture or culture-culture interactions. Here we develop a model of this process and suggest some applications of this model. We examine the interactions between cultural transmission, selection, and assorting, paying particular attention to the complexities that arise when selection and assorting are both present, in which case stable polymorphisms of all cultural phenotypes are possible. We compare our model to a recent model for the joint evolution of religion and fertility and discuss other potential applications of cultural niche construction theory, including the evolution and maintenance of large-scale human conflict and the relationship between sex ratio bias and marriage customs. The evolutionary framework we introduce begins to address complexities that arise in the quantitative analysis of multiple interacting cultural traits.

  1. Bayesian nonparametric centered random effects models with variable selection.

    Science.gov (United States)

    Yang, Mingan

    2013-03-01

    In a linear mixed effects model, it is common practice to assume that the random effects follow a parametric distribution such as a normal distribution with mean zero. However, in the case of variable selection, substantial violation of the normality assumption can potentially impact the subset selection and result in poor interpretation and even incorrect results. In nonparametric random effects models, the random effects generally have a nonzero mean, which causes an identifiability problem for the fixed effects that are paired with the random effects. In this article, we focus on a Bayesian method for variable selection. We characterize the subject-specific random effects nonparametrically with a Dirichlet process and resolve the bias simultaneously. In particular, we propose flexible modeling of the conditional distribution of the random effects with changes across the predictor space. The approach is implemented using a stochastic search Gibbs sampler to identify subsets of fixed effects and random effects to be included in the model. Simulations are provided to evaluate and compare the performance of our approach to the existing ones. We then apply the new approach to a real data example, cross-country and interlaboratory rodent uterotrophic bioassay.

  2. QOS Aware Formalized Model for Semantic Web Service Selection

    Directory of Open Access Journals (Sweden)

    Divya Sachan

    2014-10-01

    Full Text Available Selecting the most relevant Web service according to a client requirement is an onerous task, as innumerable functionally equivalent Web services (WS) are listed in the UDDI registry. These services are functionally the same, but their quality and performance vary across service providers. A Web service selection process involves two major points: recommending the pertinent Web service and avoiding unjustifiable ones. The deficiency of keyword-based searching is that it does not handle client requests accurately, since a keyword may have ambiguous meanings in different scenarios. UDDI and search engines are all based on keyword search, and therefore lag behind in pertinent Web service selection. The search mechanism must thus incorporate the semantic behavior of Web services. To strengthen this approach, the proposed model incorporates Quality of Service (QoS) based ranking of semantic Web services.

  3. Modelling autophagy selectivity by receptor clustering on peroxisomes

    CERN Document Server

    Brown, Aidan I

    2016-01-01

    When subcellular organelles are degraded by autophagy, typically some, but not all, of each targeted organelle type are degraded. Autophagy selectivity must not only select the correct type of organelle, but must discriminate between individual organelles of the same kind. In the context of peroxisomes, we use computational models to explore the hypothesis that physical clustering of autophagy receptor proteins on the surface of each organelle provides an appropriate all-or-none signal for degradation. The pexophagy receptor proteins NBR1 and p62 are well characterized, though only NBR1 is essential for pexophagy (Deosaran et al., 2013). Extending earlier work by addressing the initial nucleation of NBR1 clusters on individual peroxisomes, we find that larger peroxisomes nucleate NBR1 clusters first and lose them due to competitive coarsening last, resulting in significant size-selectivity favouring large peroxisomes. This effect can explain the increased catalase signal that results from experimental s...

  4. Numerical Model based Reliability Estimation of Selective Laser Melting Process

    DEFF Research Database (Denmark)

    Mohanty, Sankhya; Hattel, Jesper Henri

    2014-01-01

    Selective laser melting is developing into a standard manufacturing technology with applications in various sectors. However, the process is still far from being on par with conventional processes such as welding and casting, the primary reason of which is the unreliability of the process. While...... of the selective laser melting process. A validated 3D finite-volume alternating-direction-implicit numerical technique is used to model the selective laser melting process, and is calibrated against results from single track formation experiments. Correlation coefficients are determined for process input...... parameters such as laser power, speed, beam profile, etc. Subsequently, uncertainties in the processing parameters are utilized to predict a range for the various outputs, using a Monte Carlo method based uncertainty analysis methodology, and the reliability of the process is established....
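
    The Monte Carlo step described here — drawing processing parameters from assumed uncertainty distributions and propagating them through the process model to obtain an output range — follows a generic pattern. In the sketch below, a placeholder melt-pool-depth function stands in for the calibrated finite-volume model, and all distributions and specification limits are illustrative assumptions.

```python
import numpy as np

def melt_pool_depth(power, speed, beam_radius):
    """Placeholder surrogate for the calibrated thermal model (not the real one)."""
    return 0.02 * power / (speed**0.5 * beam_radius)

rng = np.random.default_rng(42)
n = 10_000
power = rng.normal(200.0, 5.0, n)       # laser power, W (assumed uncertainty)
speed = rng.normal(800.0, 20.0, n)      # scan speed, mm/s
radius = rng.normal(0.05, 0.002, n)     # beam radius, mm

depth = melt_pool_depth(power, speed, radius)
lo, hi = np.percentile(depth, [2.5, 97.5])            # predicted output range
reliability = np.mean((depth > 0.08) & (depth < 0.2)) # fraction within spec
```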

  5. ASYMMETRIC PRICE TRANSMISSION MODELING: THE IMPORTANCE OF MODEL COMPLEXITY AND THE PERFORMANCE OF THE SELECTION CRITERIA

    Directory of Open Access Journals (Sweden)

    Henry de-Graft Acquah

    2013-01-01

    Full Text Available Information criteria provide an attractive basis for selecting the best model from a set of competing asymmetric price transmission models or theories. However, little is understood about the sensitivity of the model selection methods to model complexity. This study therefore fits competing asymmetric price transmission models that differ in complexity to simulated data and evaluates the ability of the model selection methods to recover the true model. The results of Monte Carlo experimentation suggest that, in general, BIC, CAIC and DIC were superior to AIC when the true data generating process was the standard error correction model, whereas AIC was more successful when the true model was the complex error correction model. It is also shown that the model selection methods performed better in large samples for a complex asymmetric data generating process than for a standard asymmetric data generating process. Except for complex models, AIC's performance did not show substantial gains in recovery rates as sample size increased. The research findings demonstrate the influence of model complexity in asymmetric price transmission model comparison and selection.
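
    The recovery-rate experiment described here can be reproduced in miniature: simulate from a known data generating process, fit a simple and a complex candidate by least squares, and count how often each criterion picks the truth. The linear models below are generic stand-ins, not the paper's asymmetric price transmission specifications.

```python
import numpy as np

def fit_ols(X, y):
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    n, k = X.shape
    ll = -0.5 * n * (np.log(2 * np.pi * resid.var()) + 1)  # Gaussian log-likelihood
    return ll, k + 1                                        # +1 for the variance

def criteria(ll, k, n):
    return {"AIC": -2 * ll + 2 * k, "BIC": -2 * ll + k * np.log(n)}

rng = np.random.default_rng(0)
n, reps, wins = 100, 500, {"AIC": 0, "BIC": 0}
for _ in range(reps):
    x = rng.normal(size=n)
    y = 1.0 + 2.0 * x + rng.normal(size=n)           # true DGP: the simple model
    X_simple = np.column_stack([np.ones(n), x])
    X_complex = np.column_stack([np.ones(n), x, x**2, x**3])
    for name in wins:
        ic = [criteria(*fit_ols(X, y), n)[name] for X in (X_simple, X_complex)]
        wins[name] += ic[0] < ic[1]                  # truth recovered?
print({k: v / reps for k, v in wins.items()})        # recovery rates per criterion
```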

  6. TWO-PROCEDURE OF MODEL RELIABILITY-BASED OPTIMIZATION FOR WATER DISTRIBUTION SYSTEMS

    Institute of Scientific and Technical Information of China (English)

    2000-01-01

    Recently, considerable emphasis has been laid on reliability-based optimization models for water distribution systems. But considerable computational effort is needed to determine the reliability-based optimal design of large networks, or even of mid-sized networks. In this paper, a new methodology is presented for the reliability analysis of water distribution systems. This methodology consists of two procedures. In the first, the optimal design is constrained only by the pressure heads at demand nodes, and is carried out in GRG2. Because the reliability constraints are removed from the optimization problem, numerous simulations need not be conducted, so the computation time is greatly decreased. The second procedure is a linear optimal search, in which the optimal results obtained by GRG2 are adjusted to satisfy the reliability constraints. The results are a group of commercial pipe diameters for which the constraints on pressure heads and reliability at nodes are satisfied. The computational burden is thus significantly decreased, and the reliability-based optimization is of more practical use.

  7. Stationary solutions for metapopulation Moran models with mutation and selection

    Science.gov (United States)

    Constable, George W. A.; McKane, Alan J.

    2015-03-01

    We construct an individual-based metapopulation model of population genetics featuring migration, mutation, selection, and genetic drift. In the case of a single "island," the model reduces to the Moran model. Using the diffusion approximation and time-scale separation arguments, an effective one-variable description of the model is developed. The effective description bears similarities to the well-mixed Moran model with effective parameters that depend on the network structure and island sizes, and it is amenable to analysis. Predictions from the reduced theory match the results from stochastic simulations across a range of parameters. The nature of the fast-variable elimination technique we adopt is further studied by applying it to a linear system, where it provides a precise description of the slow dynamics in the limit of large time-scale separation.
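
    For the single-island case mentioned here, one Moran step picks a parent to reproduce (weighted by fitness), mutates the offspring with some probability, and replaces a uniformly chosen individual. A minimal two-allele sketch, with parameter values chosen only for illustration:

```python
import numpy as np

def moran_trajectory(n=100, steps=20_000, s=0.02, mu=1e-3, seed=0):
    """Count of A alleles over time in a Moran model with selection and mutation."""
    rng = np.random.default_rng(seed)
    i = n // 2                                  # initial count of allele A
    traj = np.empty(steps, dtype=int)
    for t in range(steps):
        w_a = (1 + s) * i                       # selection: A has advantage s
        p_a = w_a / (w_a + (n - i))             # prob. the parent carries A
        child_is_a = rng.random() < p_a
        if rng.random() < mu:                   # symmetric mutation of the offspring
            child_is_a = not child_is_a
        dies_is_a = rng.random() < i / n        # uniformly chosen death
        i += child_is_a - dies_is_a
        traj[t] = i
    return traj

traj = moran_trajectory()   # drift, selection and mutation all visible in the path
```

    The metapopulation version couples many such islands by migration; the paper's contribution is reducing that system to an effective one-variable description.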

  8. Predicting artificially drained areas by means of selective model ensemble

    DEFF Research Database (Denmark)

    Møller, Anders Bjørn; Beucher, Amélie; Iversen, Bo Vangsø

    ...out since the mid-19th century, and it has been estimated that half of the cultivated area is artificially drained (Olesen, 2009). A number of machine learning approaches can be used to predict artificially drained areas in geographic space. However, instead of choosing the most accurate model.... The study aims firstly to train a large number of models to predict the extent of artificially drained areas using various machine learning approaches. Secondly, the study will develop a method for selecting the models which give a good prediction of artificially drained areas, when used in conjunction.... The approaches employed include decision trees, discriminant analysis, regression models, neural networks and support vector machines, amongst others. Several models are trained with each method, using variously the original soil covariates and principal components of the covariates. With a large ensemble....

  9. Model Selection Framework for Graph-based data

    CERN Document Server

    Caceres, Rajmonda S; Schmidt, Matthew C; Miller, Benjamin A; Campbell, William M

    2016-01-01

    Graphs are powerful abstractions for capturing complex relationships in diverse application settings. An active area of research focuses on theoretical models that define the generative mechanism of a graph. Yet given the complexity and inherent noise in real datasets, it is still very challenging to identify the best model for a given observed graph. We discuss a framework for graph model selection that leverages a long list of graph topological properties and a random forest classifier to learn and classify different graph instances. We fully characterize the discriminative power of our approach as we sweep through the parameter space of two generative models, the Erdos-Renyi and the stochastic block model. We show that our approach gets very close to known theoretical bounds and we provide insight on which topological features play a critical discriminating role.

  10. Feature selection and survival modeling in The Cancer Genome Atlas

    Directory of Open Access Journals (Sweden)

    Kim H

    2013-09-01

    Full Text Available Hyunsoo Kim,1 Markus Bredel2 1Department of Pathology, The University of Alabama at Birmingham, Birmingham, AL, USA; 2Department of Radiation Oncology, and Comprehensive Cancer Center, The University of Alabama at Birmingham, Birmingham, AL, USA Purpose: Personalized medicine is predicated on the concept of identifying subgroups of a common disease for better treatment. Identifying biomarkers that predict disease subtypes has been a major focus of biomedical science. In the era of genome-wide profiling, there is controversy as to the optimal number of genes as an input of a feature selection algorithm for survival modeling. Patients and methods: The expression profiles and outcomes of 544 patients were retrieved from The Cancer Genome Atlas. We compared four different survival prediction methods: (1) 1-nearest neighbor (1-NN) survival prediction method; (2) random patient selection method and a Cox-based regression method with nested cross-validation; (3) least absolute shrinkage and selection operator (LASSO) optimization using whole-genome gene expression profiles; or (4) gene expression profiles of cancer pathway genes. Results: The 1-NN method performed better than the random patient selection method in terms of survival predictions, although it does not include a feature selection step. The Cox-based regression method with LASSO optimization using whole-genome gene expression data demonstrated higher survival prediction power than the 1-NN method, but was outperformed by the same method when using gene expression profiles of cancer pathway genes alone. Conclusion: The 1-NN survival prediction method may require more patients for better performance, even when omitting censored data. Using preexisting biological knowledge for survival prediction is reasonable as a means to understand the biological system of a cancer, unless the analysis goal is to identify completely unknown genes relevant to cancer biology. Keywords: brain, feature selection
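
    The 1-NN survival prediction method compared here can be stated very compactly: predict each test patient's survival as that of the training patient with the most similar expression profile. A hedged numpy sketch — the correlation distance and the toy dimensions are assumptions for illustration, since the abstract does not specify the metric:

```python
import numpy as np

def one_nn_survival(X_train, t_train, X_test):
    """Predict survival as that of the nearest training profile (1 - Pearson r)."""
    Xc = X_train - X_train.mean(axis=1, keepdims=True)
    Yc = X_test - X_test.mean(axis=1, keepdims=True)
    corr = (Yc @ Xc.T) / (np.linalg.norm(Yc, axis=1)[:, None]
                          * np.linalg.norm(Xc, axis=1)[None, :])
    nearest = np.argmax(corr, axis=1)        # max correlation = min distance
    return t_train[nearest]

rng = np.random.default_rng(0)
X_train = rng.normal(size=(400, 2000))       # patients x genes
t_train = rng.exponential(24.0, size=400)    # survival times, e.g. months
X_test = rng.normal(size=(50, 2000))
predicted = one_nn_survival(X_train, t_train, X_test)
```

    Restricting the columns of X to a curated set of cancer pathway genes is the "preexisting biological knowledge" variant the conclusion favours.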

  11. Ensemble feature selection integrating elitist roles and quantum game model

    Institute of Scientific and Technical Information of China (English)

    Weiping Ding; Jiandong Wang; Zhijin Guan; Quan Shi

    2015-01-01

    To accelerate the selection process of feature subsets in rough set theory (RST), an ensemble elitist roles based quantum game (EERQG) algorithm is proposed for feature selection. Firstly, the multilevel elitist roles based dynamics equilibrium strategy is established, and both immigration and emigration of elitists are able to self-adapt to balance exploration and exploitation in feature selection. Secondly, the utility matrix of trust margins is introduced into the model of multilevel elitist roles to enhance the performance of the various elitist roles in searching for optimal feature subsets, and win-win utility solutions for feature selection can be attained. Meanwhile, a novel ensemble quantum game strategy is designed as an intriguing exhibiting structure to perfect the dynamics equilibrium of multilevel elitist roles. Finally, the ensemble manner of multilevel elitist roles is employed to achieve the global minimal feature subset, which will greatly improve feasibility and effectiveness. Experimental results show that the proposed EERQG algorithm has superiority compared to the existing feature selection algorithms.

  12. Transitions in a genotype selection model driven by coloured noises

    Institute of Scientific and Technical Information of China (English)

    Wang Can-Jun; Mei Dong-Cheng

    2008-01-01

    This paper investigates a genotype selection model subjected to both a multiplicative coloured noise and an additive coloured noise with different correlation times T1 and T2 by means of numerical techniques. By directly simulating the Langevin equation, the following results are obtained. (1) The multiplicative coloured noise dominates; however, the effect of the additive coloured noise cannot be neglected in the practical gene selection process. The selection rate μ decides whether the selection favours gene A haploids or gene B haploids. (2) The additive coloured noise intensity α and the correlation time T2 play opposite roles. It is noted that α and T2 cannot separate the single peak, while α can make the peak disappear and T2 can make the peak sharp. (3) The multiplicative coloured noise intensity D and the correlation time T1 can induce phase transitions; at the same time they play opposite roles, and a reentrance phenomenon appears. In this case, it is easy to select one type of haploid from the group by increasing D and decreasing T1.
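
    The numerical technique referred to — direct Euler–Maruyama simulation of a Langevin equation driven by two Ornstein–Uhlenbeck (coloured) noise sources — can be sketched as below. The logistic-type drift f(x) = x(1−x)(μ−x) is a generic stand-in, not the paper's exact genotype selection drift, and all parameter values are illustrative.

```python
import numpy as np

def simulate(steps=200_000, dt=1e-3, mu=0.1, D=0.3, alpha=0.05,
             tau1=0.5, tau2=0.5, seed=0):
    """dx = f(x) dt + x(1-x) eta1 dt + eta2 dt, with eta1, eta2 OU noises
    of intensities D and alpha and correlation times tau1 and tau2."""
    rng = np.random.default_rng(seed)
    x, eta1, eta2 = 0.5, 0.0, 0.0
    xs = np.empty(steps)
    for t in range(steps):
        f = x * (1 - x) * (mu - x)              # placeholder drift (assumption)
        x += f * dt + x * (1 - x) * eta1 * dt + eta2 * dt
        x = min(max(x, 0.0), 1.0)               # keep the gene fraction in [0, 1]
        # OU updates: the correlation times set the colour of each noise
        eta1 += -eta1 / tau1 * dt + np.sqrt(2 * D) / tau1 * np.sqrt(dt) * rng.normal()
        eta2 += -eta2 / tau2 * dt + np.sqrt(2 * alpha) / tau2 * np.sqrt(dt) * rng.normal()
        xs[t] = x
    return xs

xs = simulate()
hist, edges = np.histogram(xs[50_000:], bins=60, range=(0, 1), density=True)
```

    The stationary histogram plays the role of the steady-state probability distribution whose peaks (single, split, or vanishing) the abstract discusses as D, α, T1 and T2 vary.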

  13. Forecasting house prices in the 50 states using Dynamic Model Averaging and Dynamic Model Selection

    DEFF Research Database (Denmark)

    Bork, Lasse; Møller, Stig Vinther

    2015-01-01

    We examine house price forecastability across the 50 states using Dynamic Model Averaging and Dynamic Model Selection, which allow for model change and parameter shifts. By allowing the entire forecasting model to change over time and across locations, the forecasting accuracy improves...
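
    The recursive weight update at the heart of Dynamic Model Averaging (DMA) and Dynamic Model Selection (DMS) can be sketched compactly. This is a hedged illustration assuming each candidate model supplies Gaussian one-step-ahead predictive means and standard deviations; the forgetting value is illustrative, not the paper's.

```python
import numpy as np
from scipy.stats import norm

def dma_weights(pred_means, pred_sds, y, forgetting=0.99):
    """Recursive model probabilities; pred_* arrays have shape (T, n_models)."""
    T, M = pred_means.shape
    w = np.full(M, 1.0 / M)
    avg_forecast, top_model = np.empty(T), np.empty(T, dtype=int)
    for t in range(T):
        w = w**forgetting                      # flatten weights: allows model change
        w /= w.sum()
        avg_forecast[t] = w @ pred_means[t]    # DMA: probability-weighted forecast
        top_model[t] = w.argmax()              # DMS: single best model at time t
        like = norm.pdf(y[t], pred_means[t], pred_sds[t])
        w = w * like / (w @ like)              # Bayes update with the new observation
    return avg_forecast, top_model
```

    With forgetting = 1 this collapses to static Bayesian model averaging; values below 1 let the best model drift over time and across locations.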

  14. Selection between Linear Factor Models and Latent Profile Models Using Conditional Covariances

    Science.gov (United States)

    Halpin, Peter F.; Maraun, Michael D.

    2010-01-01

    A method for selecting between K-dimensional linear factor models and (K + 1)-class latent profile models is proposed. In particular, it is shown that the conditional covariances of observed variables are constant under factor models but nonlinear functions of the conditioning variable under latent profile models. The performance of a convenient…

  16. A Form—Correcting System of Chinese Characters Using a Model of Correcting Procedures of Calligraphists

    Institute of Scientific and Technical Information of China (English)

    Zeng, Jianchao; Sanada, Hidehiko; et al.

    1995-01-01

    A support system for form-correction of Chinese characters is developed based upon the generation model SAM, and its feasibility is evaluated. SAM is excellent as a model for generating Chinese characters, but it is difficult to determine appropriate parameters because calligraphic knowledge is needed. Noticing that the calligraphic knowledge of calligraphists is embodied in their corrective actions, we adopt a strategy of acquiring calligraphic knowledge by monitoring, recording and analyzing the corrective actions of calligraphists, and try to realize an environment in which calligraphists can easily make corrections to character forms and which can record their corrective actions without interfering with them. In this paper, we first construct a model of the correcting procedures of calligraphists, composed of typical correcting procedures acquired by extensively observing their corrective actions and interviewing them, and develop a form-correcting system for brush-written Chinese characters using this model. Secondly, through actual correcting experiments, we demonstrate that parameters within SAM can be easily corrected at the level of character patterns by our system, and show that the system is effective and easy for calligraphists to use by evaluating the effectiveness of the correcting model, the sufficiency of its functions and its execution speed.

  17. Fusion of range camera and photogrammetry: a systematic procedure for improving 3-D models metric accuracy.

    Science.gov (United States)

    Guidi, G; Beraldin, J A; Ciofi, S; Atzeni, C

    2003-01-01

    The generation of three-dimensional (3-D) digital models produced by optical technologies in some cases involves metric errors. This happens when small high-resolution 3-D images are assembled together in order to model a large object. In some applications, as for example 3-D modeling of Cultural Heritage, the problem of metric accuracy is a major issue and no methods are currently available for enhancing it. The authors present a procedure by which the metric reliability of the 3-D model, obtained through iterative alignments of many range maps, can be guaranteed to a known acceptable level. The goal is the integration of the 3-D range camera system with a close range digital photogrammetry technique. The basic idea is to generate a global coordinate system determined by the digital photogrammetric procedure, measuring the spatial coordinates of optical targets placed around the object to be modeled. Such coordinates, set as reference points, allow the proper rigid motion of few key range maps, including a portion of the targets, in the global reference system defined by photogrammetry. The other 3-D images are normally aligned around these locked images with usual iterative algorithms. Experimental results on an anthropomorphic test object, comparing the conventional and the proposed alignment method, are finally reported.
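
    The "proper rigid motion" that locks a key range map onto the photogrammetric target coordinates is a standard absolute-orientation problem with a closed-form SVD solution (Kabsch/Procrustes). A minimal sketch, assuming at least three matched target points per range map; the synthetic data below is purely illustrative.

```python
import numpy as np

def rigid_align(src, dst):
    """Least-squares rotation R and translation t with R @ src + t ~= dst."""
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)          # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))       # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = c_dst - R @ c_src
    return R, t

# Targets measured by photogrammetry (dst) and seen in one range map (src).
rng = np.random.default_rng(0)
src = rng.uniform(-1, 1, size=(6, 3))
true_R, _ = np.linalg.qr(rng.normal(size=(3, 3)))
true_R *= np.sign(np.linalg.det(true_R))         # ensure a proper rotation
dst = src @ true_R.T + np.array([0.1, -0.2, 0.3]) + rng.normal(0, 1e-4, (6, 3))
R, t = rigid_align(src, dst)
aligned = src @ R.T + t                          # range map in the global frame
```

    Once the key range maps are locked this way, the remaining maps are aligned to them with the usual iterative (ICP-style) algorithms, so alignment errors no longer accumulate across the whole model.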

  18. Modeling selective attention using a neuromorphic analog VLSI device.

    Science.gov (United States)

    Indiveri, G

    2000-12-01

    Attentional mechanisms are required to overcome the problem of flooding a limited processing capacity system with information. They are present in biological sensory systems and can be a useful engineering tool for artificial visual systems. In this article we present a hardware model of a selective attention mechanism implemented on a very large-scale integration (VLSI) chip, using analog neuromorphic circuits. The chip exploits a spike-based representation to receive, process, and transmit signals. It can be used as a transceiver module for building multichip neuromorphic vision systems. We describe the circuits that carry out the main processing stages of the selective attention mechanism and provide experimental data for each circuit. We demonstrate the expected behavior of the model at the system level by stimulating the chip with both artificially generated control signals and signals obtained from a saliency map, computed from an image containing several salient features.

  19. Model Order Selection Rules for Covariance Structure Classification in Radar

    Science.gov (United States)

    Carotenuto, Vincenzo; De Maio, Antonio; Orlando, Danilo; Stoica, Petre

    2017-10-01

    The adaptive classification of the interference covariance matrix structure for radar signal processing applications is addressed in this paper. This represents a key issue because many detection architectures are synthesized assuming a specific covariance structure which may not necessarily coincide with the actual one due to the joint action of the system and environment uncertainties. The considered classification problem is cast in terms of a multiple hypotheses test with some nested alternatives and the theory of Model Order Selection (MOS) is exploited to devise suitable decision rules. Several MOS techniques, such as the Akaike, Takeuchi, and Bayesian information criteria are adopted and the corresponding merits and drawbacks are discussed. At the analysis stage, illustrating examples for the probability of correct model selection are presented showing the effectiveness of the proposed rules.

  20. Parameter estimation and model selection in computational biology.

    Directory of Open Access Journals (Sweden)

    Gabriele Lillacci

    2010-03-01

    Full Text Available A central challenge in computational modeling of biological systems is the determination of the model parameters. Typically, only a fraction of the parameters (such as kinetic rate constants) are experimentally measured, while the rest are often fitted. The fitting process is usually based on experimental time course measurements of observables, which are used to assign parameter values that minimize some measure of the error between these measurements and the corresponding model prediction. The measurements, which can come from immunoblotting assays, fluorescent markers, etc., tend to be very noisy and taken at a limited number of time points. In this work we present a new approach to the problem of parameter selection of biological models. We show how one can use a dynamic recursive estimator, known as the extended Kalman filter, to arrive at estimates of the model parameters. The proposed method proceeds as follows. First, we use a variation of the Kalman filter that is particularly well suited to biological applications to obtain a first guess for the unknown parameters. Second, we employ an a posteriori identifiability test to check the reliability of the estimates. Finally, we solve an optimization problem to refine the first guess in case it is not accurate enough. The final estimates are guaranteed to be statistically consistent with the measurements. Furthermore, we show how the same tools can be used to discriminate among alternate models of the same biological process. We demonstrate these ideas by applying our methods to two examples, namely a model of the heat shock response in E. coli, and a model of a synthetic gene regulation system. The methods presented are quite general and may be applied to a wide class of biological systems where noisy measurements are used for parameter estimation or model selection.
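
    The core trick in Kalman-filter-based parameter estimation is state augmentation: the unknown rate constants are appended to the state vector and estimated jointly with the states. A minimal sketch for one species with an unknown decay rate k; the toy model dx/dt = −kx and all noise settings are illustrative assumptions, not the paper's heat-shock model.

```python
import numpy as np

def ekf_estimate(y, dt=0.1, r=0.05**2, q_k=1e-6, x0=1.0, k0=0.5):
    """Joint EKF estimate of state x and parameter k for x' = -k*x, y = x + noise."""
    z = np.array([x0, k0])                  # augmented state [x, k]
    P = np.diag([0.1, 1.0])                 # initial uncertainty
    Q = np.diag([1e-8, q_k])                # tiny process noise keeps k adaptable
    H = np.array([[1.0, 0.0]])              # we observe x only
    ks = np.empty(len(y))
    for i, yi in enumerate(y):
        x, k = z
        z = np.array([x - k * x * dt, k])   # Euler-discretized prediction
        F = np.array([[1 - k * dt, -x * dt],
                      [0.0, 1.0]])          # Jacobian of the prediction step
        P = F @ P @ F.T + Q
        S = H @ P @ H.T + r                 # innovation covariance (1x1)
        K = P @ H.T / S                     # Kalman gain
        z = z + (K * (yi - z[0])).ravel()   # measurement update
        P = (np.eye(2) - K @ H) @ P
        ks[i] = z[1]
    return ks

rng = np.random.default_rng(0)
t = np.arange(200) * 0.1
y = np.exp(-0.8 * t) + rng.normal(0, 0.05, t.size)  # noisy data, true k = 0.8
k_hat = ekf_estimate(y)                             # estimate drifts toward 0.8
```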

  1. Antecedents and Negative Spillover Effect of Selection Procedural Justice

    Institute of Scientific and Technical Information of China (English)

    朱其权; 龙立荣

    2012-01-01

    It is a critical practical issue to explore how to promote applicants' justice perceptions and reduce the negative spillover effects of rejection. This study explored the predictors of selection procedural justice and of the retaliatory intention of rejected applicants, as well as the mediating effect of selection procedural justice. Data were collected from 416 graduating job seekers with a longitudinal questionnaire design at three time points: before selection, after selection but before the result was known, and after the result was known. Hierarchical regression analysis revealed that job attractiveness and hiring expectancy significantly and positively predicted job seekers' perceptions of selection procedural justice, whereas negative affect did not; job attractiveness and selection procedural justice negatively predicted the retaliatory intention of rejected applicants, while hiring expectancy and negative affect had a positive influence; and the mediating effect of selection procedural justice was not supported. The results imply that appropriate job design and a sound, fair selection process can reduce the negative spillover effects of recruitment.

  2. Structure and selection in an autocatalytic binary polymer model

    DEFF Research Database (Denmark)

    Tanaka, Shinpei; Fellermann, Harold; Rasmussen, Steen

    2014-01-01

    An autocatalytic binary polymer system is studied as an abstract model for a chemical reaction network capable of evolving. Due to autocatalysis, long polymers appear spontaneously and their concentration is shown to be maintained at the same level as that of monomers. When the reaction starts from.... Stability, fluctuations, and dynamic selection mechanisms are investigated for the involved self-organizing processes. Copyright (C) EPLA, 2014

  3. Velocity selection in the symmetric model of dendritic crystal growth

    Science.gov (United States)

    Barbieri, Angelo; Hong, Daniel C.; Langer, J. S.

    1987-01-01

    An analytic solution of the problem of velocity selection in a fully nonlocal model of dendritic crystal growth is presented. The analysis uses a WKB technique to derive and evaluate a solvability condition for the existence of steady-state needle-like solidification fronts in the limit of small undercooling Delta. For the two-dimensional symmetric model with a capillary anisotropy of strength alpha, it is found that the velocity is proportional to Delta^4 alpha^(7/4). The application of the method in three dimensions is also described.

  4. A simple application of FIC to model selection

    CERN Document Server

    Wiggins, Paul A

    2015-01-01

    We have recently proposed a new information-based approach to model selection, the Frequentist Information Criterion (FIC), that reconciles information-based and frequentist inference. The purpose of this current paper is to provide a simple example of the application of this criterion and a demonstration of the natural emergence of model complexities with both AIC-like ($N^0$) and BIC-like ($\\log N$) scaling with observation number $N$. The application developed is deliberately simplified to make the analysis analytically tractable.

  5. Small populations corrections for selection-mutation models

    CERN Document Server

    Jabin, Pierre-Emmanuel

    2012-01-01

    We consider integro-differential models describing the evolution of a population structured by a quantitative trait. Individuals interact competitively, creating a strong selection pressure on the population. On the other hand, mutations are assumed to be small. Following the formalism of Diekmann, Jabin, Mischler, and Perthame, this creates concentration phenomena, typically consisting in a sum of Dirac masses slowly evolving in time. We propose a modification to those classical models that takes the effect of small populations into account and corrects some abnormal behaviours.

  6. Process chain modeling and selection in an additive manufacturing context

    DEFF Research Database (Denmark)

    Thompson, Mary Kathryn; Stolfi, Alessandro; Mischkot, Michael

    2016-01-01

    This paper introduces a new two-dimensional approach to modeling manufacturing process chains. This approach is used to consider the role of additive manufacturing technologies in process chains for a part with micro scale features and no internal geometry. It is shown that additive manufacturing.... can compete with traditional process chains for small production runs. Combining both types of technology added cost but no benefit in this case. The new process chain model can be used to explain the results and support process selection, but process chain prototyping is still important for rapidly....

  7. Selecting, weeding, and weighting biased climate model ensembles

    Science.gov (United States)

    Jackson, C. S.; Picton, J.; Huerta, G.; Nosedal Sanchez, A.

    2012-12-01

    In the Bayesian formulation, the "log-likelihood" is a test statistic for selecting, weeding, or weighting climate model ensembles with observational data. This statistic has the potential to synthesize the physical and data constraints on quantities of interest. One of the thorny issues for formulating the log-likelihood is how one should account for biases. While in the past we have included a generic discrepancy term, not all biases affect predictions of quantities of interest. We make use of a 165-member ensemble CAM3.1/slab ocean climate models with different parameter settings to think through the issues that are involved with predicting each model's sensitivity to greenhouse gas forcing given what can be observed from the base state. In particular we use multivariate empirical orthogonal functions to decompose the differences that exist among this ensemble to discover what fields and regions matter to the model's sensitivity. We find that the differences that matter are a small fraction of the total discrepancy. Moreover, weighting members of the ensemble using this knowledge does a relatively poor job of adjusting the ensemble mean toward the known answer. This points out the shortcomings of using weights to correct for biases in climate model ensembles created by a selection process that does not emphasize the priorities of your log-likelihood.
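
    The weighting step itself is straightforward once a log-likelihood has been formulated: score each ensemble member against the observations and normalize. The sketch below assumes a Gaussian independent-error log-likelihood; choosing the error variance and any discrepancy term is exactly the thorny issue the abstract describes.

```python
import numpy as np

def ensemble_weights(sim, obs, obs_var):
    """sim: (n_members, n_obs); returns weights from Gaussian log-likelihoods."""
    loglik = -0.5 * np.sum((sim - obs) ** 2 / obs_var, axis=1)
    w = np.exp(loglik - loglik.max())   # subtract the max for numerical stability
    return w / w.sum()

rng = np.random.default_rng(0)
sim = rng.normal(15.0, 2.0, size=(165, 50))  # 165 members, 50 observed quantities
obs = rng.normal(15.0, 0.5, size=50)
w = ensemble_weights(sim, obs, obs_var=1.0)
weighted_mean = w @ sim                      # weighted ensemble estimate
n_eff = 1.0 / np.sum(w**2)                   # effective number of members retained
```

    A small effective ensemble size signals that the likelihood has effectively "weeded" the ensemble down to a few members, which is when the bias issues raised here matter most.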

  8. A comparison of nonlinear mixed models and response to selection of tick-infestation on lambs

    Science.gov (United States)

    2017-01-01

    Tick-borne fever (TBF) is regarded as one of the main disease challenges in Norwegian sheep farming during the grazing season. TBF is caused by the bacterium Anaplasma phagocytophilum, which is transmitted by the tick Ixodes ricinus. A sustainable strategy to control tick infestation is to breed for genetically robust animals. In order to use selection to genetically improve traits we need reliable estimates of genetic parameters. The standard procedures for estimating variance components assume a Gaussian distribution of the data. However, tick count is a discrete variable and, thus, standard procedures using linear models may not be appropriate. The objectives of this study were therefore twofold: 1) to compare four alternative non-linear models — Poisson, negative binomial, zero-inflated Poisson and zero-inflated negative binomial — based on their goodness of fit for quantifying genetic variation, as well as heritability, for tick count; and 2) to investigate the potential response to selection against tick count based on truncation selection, given the estimated genetic parameters from the best-fitting model. Our results showed that the zero-inflated Poisson was the most parsimonious model for the analysis of tick count data. The resulting estimates of variance components and the high heritability (0.32) led us to conclude that genetic determinism is relevant for tick count. A reduction of the breeding values for tick count by one sire-dam genetic standard deviation on the liability scale will reduce the average tick count to below 1. An appropriate breeding scheme could control tick count and, as a consequence, probably reduce TBF in sheep. PMID:28257433
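
    The model comparison at the heart of objective 1 can be illustrated without the genetic component: fit a Poisson and a zero-inflated Poisson (ZIP) to count data by maximum likelihood and compare information criteria. A sketch with scipy, using intercept-only models; the study's actual models also include fixed and random genetic effects.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import gammaln, expit

def nll_poisson(params, y):
    lam = np.exp(params[0])
    return -np.sum(y * np.log(lam) - lam - gammaln(y + 1))

def nll_zip(params, y):
    lam, pi = np.exp(params[0]), expit(params[1])   # pi = P(structural zero)
    ll_zero = np.log(pi + (1 - pi) * np.exp(-lam))
    ll_pos = np.log(1 - pi) + y * np.log(lam) - lam - gammaln(y + 1)
    return -np.sum(np.where(y == 0, ll_zero, ll_pos))

rng = np.random.default_rng(0)
y = rng.poisson(4.0, 500) * (rng.random(500) > 0.4)  # ~40% structural zeros

fit_p = minimize(nll_poisson, [0.0], args=(y,))
fit_z = minimize(nll_zip, [0.0, 0.0], args=(y,))
aic_p = 2 * fit_p.fun + 2 * 1
aic_z = 2 * fit_z.fun + 2 * 2   # ZIP should win on zero-inflated counts
```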

  9. Bayesian Model Selection with Network Based Diffusion Analysis.

    Science.gov (United States)

    Whalen, Andrew; Hoppitt, William J E

    2016-01-01

    A number of recent studies have used Network Based Diffusion Analysis (NBDA) to detect the role of social transmission in the spread of a novel behavior through a population. In this paper we present a unified framework for performing NBDA in a Bayesian setting, and demonstrate how the Watanabe-Akaike Information Criterion (WAIC) can be used for model selection. We present a specific example of applying this method to Time to Acquisition Diffusion Analysis (TADA). To examine the robustness of this technique, we performed a large-scale simulation study and found that NBDA using WAIC could recover the correct model of social transmission in a wide range of cases, including in the presence of random effects, individual-level variables, and alternative models of social transmission. This work suggests that NBDA is an effective and widely applicable tool for uncovering whether social transmission underpins the spread of a novel behavior, and may still provide accurate results even when key model assumptions are relaxed.
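
    WAIC itself is easy to compute once one has a matrix of pointwise log-likelihoods evaluated over posterior samples; the candidate with the lower WAIC (e.g., social transmission versus purely asocial learning) is preferred. A generic sketch of the standard formula:

```python
import numpy as np
from scipy.special import logsumexp

def waic(log_lik):
    """log_lik: (n_posterior_samples, n_observations) matrix of log-likelihoods."""
    S = log_lik.shape[0]
    lppd = np.sum(logsumexp(log_lik, axis=0) - np.log(S))  # log pointwise density
    p_waic = np.sum(log_lik.var(axis=0, ddof=1))           # effective n. parameters
    return -2 * (lppd - p_waic)

# Comparing candidate models by WAIC (smaller is better), e.g.:
# waic(log_lik_social) < waic(log_lik_asocial) favors social transmission.
```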

  10. Selection of productivity improvement techniques via mathematical modeling

    Directory of Open Access Journals (Sweden)

    Mahassan M. Khater

    2011-07-01

    Full Text Available This paper presents a new mathematical model to select an optimal combination of productivity improvement techniques. The proposed model considers a four-stage productivity cycle, and productivity is assumed to be a linear function of fifty-four improvement techniques. The model is implemented for a real-world case study of a manufacturing plant. The resulting problem is formulated as a mixed integer program, which can be solved to optimality using traditional methods. The preliminary results of the implementation indicate that productivity can be improved through changes to equipment, and the model can easily be applied in both manufacturing and service industries.

  11. An Introduction to Model Selection: Tools and Algorithms

    Directory of Open Access Journals (Sweden)

    Sébastien Hélie

    2006-03-01

    Full Text Available Model selection is a complicated matter in science, and psychology is no exception. In particular, the high variance in the object of study (i.e., humans) prevents the use of Popper's falsification principle (which is the norm in other sciences). Therefore, the desirability of quantitative psychological models must be assessed by measuring the capacity of the model to fit empirical data. In the present paper, an error measure (likelihood), as well as five methods to compare model fits (the likelihood ratio test, Akaike's information criterion, the Bayesian information criterion, bootstrapping and cross-validation), are presented. The use of each method is illustrated by an example, and the advantages and weaknesses of each method are also discussed.

  12. Selection of key terrain attributes for SOC model

    DEFF Research Database (Denmark)

    Greve, Mogens Humlekrog; Adhikari, Kabindra; Chellasamy, Menaka

    As an important component of the global carbon pool, soil organic carbon (SOC) plays an important role in the global carbon cycle. The SOC pool is basic information for carrying out global warming research and for the sustainable use of land resources. Digital terrain attributes are often used.... was selected; in total, 2,514,820 data mining models were constructed across 71 different grid resolutions, from 12 m to 2304 m, and 22 attributes (21 attributes derived from the DTM, plus the original elevation). The relative importance and usage of each attribute in every model were calculated, as were the comprehensive impact rates of each attribute.... (standh) are the first three key terrain attributes in the 5-attribute models at all resolutions; the remaining two of the five attributes are Normal Height (NormalH) and Valley Depth (Vall_depth) at resolutions finer than 40 m, and Elevation and Channel Base (Chnl_base) at resolutions coarser than 40 m. The models at pixel sizes of 88 m....

  13. Regression Test-Selection Technique Using Component Model Based Modification: Code to Test Traceability

    Directory of Open Access Journals (Sweden)

    Ahmad A. Saifan

    2016-04-01

    Full Text Available Regression testing is a safeguarding procedure to validate and verify adapted software and guarantee that no errors have emerged. However, regression testing is very costly when testers need to re-execute all test cases against the modified software. This paper proposes a new approach in the regression test selection domain. The approach is based on meta-models (test models and structured models) to decrease the number of test cases to be used in the regression testing process. The approach has been evaluated using three Java applications. To measure the effectiveness of the proposed approach, we compare its results with those of the retest-all approach. The results show that our approach reduces the size of the test suite without a negative impact on the effectiveness of fault detection.

  14. Unifying models for X-ray selected and Radio selected BL Lac Objects

    CERN Document Server

    Fossati, G; Ghisellini, G; Maraschi, L; Brera-Merate, O A

    1997-01-01

    We discuss alternative interpretations of the differences in the Spectral Energy Distributions (SEDs) of BL Lacs found in complete radio or X-ray surveys. A large body of observations in different bands suggests that the SEDs of BL Lac objects appearing in X-ray surveys differ from those appearing in radio surveys mainly in having a (synchrotron) spectral cut-off (or break) at much higher frequency. In order to explain the different properties of radio- and X-ray-selected BL Lacs, Giommi and Padovani proposed a model based on a common radio luminosity function. At each radio luminosity, objects with high-frequency spectral cut-offs are assumed to be a minority. Nevertheless they dominate the X-ray selected population due to their larger X-ray-to-radio flux ratio. An alternative model explored here (reminiscent of the orientation models previously proposed) is that the X-ray luminosity function is "primary" and that at each X-ray luminosity a minority of objects has a larger radio-to-X-ray flux ratio. The prediction...

  15. Equation of State Selection for Organic Rankine Cycle Modeling Under Uncertainty

    DEFF Research Database (Denmark)

    Frutiger, Jerome; O'Connell, John; Abildskov, Jens

    In recent years there has been great interest in the design and selection of working fluids for low-temperature Organic Rankine Cycles (ORC), to efficiently produce electrical power from waste heat from chemical engineering applications, as well as from renewable energy sources such as biomass.... cycle, all influence the model output uncertainty. The procedure is highlighted for an ORC with a low-temperature heat source of exhaust gas from a marine diesel engine. [1] Saleh B, Koglbauer G, Wendland M, Fischer J. Working fluids for low-temperature organic Rankine cycles. Energy 2007;32:1210–21. [2] Frutiger J, Andreasen JG, Liu W, Spliethoff H, Haglind F, Abildskov J, Sin G. Working fluid selection for organic Rankine cycles - impact of uncertainty of fluid properties. Energy (accepted subject to revision).

  16. Bayesian model selection applied to artificial neural networks used for water resources modeling

    Science.gov (United States)

    Kingston, Greer B.; Maier, Holger R.; Lambert, Martin F.

    2008-04-01

    Artificial neural networks (ANNs) have proven to be extremely valuable tools in the field of water resources engineering. However, one of the most difficult tasks in developing an ANN is determining the optimum level of complexity required to model a given problem, as there is no formal systematic model selection method. This paper presents a Bayesian model selection (BMS) method for ANNs that provides an objective approach for comparing models of varying complexity in order to select the most appropriate ANN structure. The approach uses Markov Chain Monte Carlo posterior simulations to estimate the evidence in favor of competing models and, in this study, three known methods for doing this are compared in terms of their suitability for being incorporated into the proposed BMS framework for ANNs. However, it is acknowledged that it can be particularly difficult to accurately estimate the evidence of ANN models. Therefore, the proposed BMS approach for ANNs incorporates a further check of the evidence results by inspecting the marginal posterior distributions of the hidden-to-output layer weights, which unambiguously indicate any redundancies in the hidden layer nodes. The fact that this check is available is one of the greatest advantages of the proposed approach over conventional model selection methods, which do not provide such a test and instead rely on the modeler's subjective choice of selection criterion. The advantages of a total Bayesian approach to ANN development, including training and model selection, are demonstrated on two synthetic and one real world water resources case study.

  17. The Impact of Varied Discrimination Parameters on Mixed-Format Item Response Theory Model Selection

    Science.gov (United States)

    Whittaker, Tiffany A.; Chang, Wanchen; Dodd, Barbara G.

    2013-01-01

    Whittaker, Chang, and Dodd compared the performance of model selection criteria when selecting among mixed-format IRT models and found that the criteria did not perform adequately when selecting the more parameterized models. It was suggested by M. S. Johnson that the problems when selecting the more parameterized models may be because of the low…

  19. The hierarchical sparse selection model of visual crowding.

    Science.gov (United States)

    Chaney, Wesley; Fischer, Jason; Whitney, David

    2014-01-01

    Because the environment is cluttered, objects rarely appear in isolation. The visual system must therefore attentionally select behaviorally relevant objects from among many irrelevant ones. A limit on our ability to select individual objects is revealed by the phenomenon of visual crowding: an object seen in the periphery, easily recognized in isolation, can become impossible to identify when surrounded by other, similar objects. The neural basis of crowding is hotly debated: while prevailing theories hold that crowded information is irrecoverable - destroyed due to over-integration in early stage visual processing - recent evidence demonstrates otherwise. Crowding can occur between high-level, configural object representations, and crowded objects can contribute with high precision to judgments about the "gist" of a group of objects, even when they are individually unrecognizable. While existing models can account for the basic diagnostic criteria of crowding (e.g., specific critical spacing, spatial anisotropies, and temporal tuning), no present model explains how crowding can operate simultaneously at multiple levels in the visual processing hierarchy, including at the level of whole objects. Here, we present a new model of visual crowding-the hierarchical sparse selection (HSS) model, which accounts for object-level crowding, as well as a number of puzzling findings in the recent literature. Counter to existing theories, we posit that crowding occurs not due to degraded visual representations in the brain, but due to impoverished sampling of visual representations for the sake of perception. The HSS model unifies findings from a disparate array of visual crowding studies and makes testable predictions about how information in crowded scenes can be accessed.

  20. Finite element model selection using Particle Swarm Optimization

    CERN Document Server

    Mthembu, Linda; Friswell, Michael I; Adhikari, Sondipon

    2009-01-01

    This paper proposes the application of particle swarm optimization (PSO) to the problem of finite element model (FEM) selection. This problem arises when a choice of the best model for a system has to be made from a set of competing models, each developed a priori from engineering judgment. PSO is a population-based stochastic search algorithm inspired by the behaviour of biological entities in nature when they are foraging for resources. Each potentially correct model is represented as a particle that exhibits both individualistic and group behaviour. Each particle moves within the model search space looking for the best solution by updating the parameter values that define it. The most important step in the particle swarm algorithm is the method of representing models, which should take into account the number, location and variables of parameters to be updated. One example structural system is used to show the applicability of PSO in finding an optimal FEM. An optimal model is defined as the model that has t...
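
    A minimal PSO loop of the kind described — particles as candidate parameter vectors, velocities updated with inertia, cognitive, and social terms — is sketched below. The sphere function is a stand-in for the FEM-updating objective (e.g., weighted modal residuals), and all coefficient values are conventional defaults rather than the paper's settings.

```python
import numpy as np

def pso(objective, bounds, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, size=(n_particles, lo.size))   # candidate models
    v = np.zeros_like(x)
    pbest, pbest_f = x.copy(), np.apply_along_axis(objective, 1, x)
    gbest = pbest[pbest_f.argmin()]
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        # inertia + cognitive (own best) + social (swarm best) velocity update
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        f = np.apply_along_axis(objective, 1, x)
        better = f < pbest_f
        pbest[better], pbest_f[better] = x[better], f[better]
        gbest = pbest[pbest_f.argmin()]
    return gbest, pbest_f.min()

sphere = lambda p: float(np.sum(p**2))      # stand-in for the model-updating error
best, fbest = pso(sphere, (np.full(4, -5.0), np.full(4, 5.0)))
```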