WorldWideScience

Sample records for model-selection process comparing

  1. A Comparative Investigation of the Combined Effects of Pre-Processing, Wavelength Selection, and Regression Methods on Near-Infrared Calibration Model Performance.

    Science.gov (United States)

    Wan, Jian; Chen, Yi-Chieh; Morris, A Julian; Thennadil, Suresh N

    2017-07-01

    Near-infrared (NIR) spectroscopy is being widely used in various fields ranging from pharmaceutics to the food industry for analyzing chemical and physical properties of the substances concerned. Its advantages over other analytical techniques include available physical interpretation of spectral data, nondestructive nature and high speed of measurements, and little or no need for sample preparation. The successful application of NIR spectroscopy relies on three main aspects: pre-processing of spectral data to eliminate nonlinear variations due to temperature, light scattering effects and many others; selection of those wavelengths that contribute useful information; and identification of suitable calibration models using linear/nonlinear regression. Several methods have been developed for each of these three aspects and many comparative studies of different methods exist for an individual aspect or some combinations. However, there is still a lack of comparative studies of the interactions among these three aspects, which can shed light on what role each aspect plays in the calibration and how to combine the various methods of each aspect to obtain the best calibration model. This paper aims to provide such a comparative study based on four benchmark data sets using three typical pre-processing methods, namely, orthogonal signal correction (OSC), extended multiplicative signal correction (EMSC) and optical path-length estimation and correction (OPLEC); two existing wavelength selection methods, namely, stepwise forward selection (SFS) and genetic algorithm optimization combined with partial least squares regression for spectral data (GAPLSSP); and four popular regression methods, namely, partial least squares (PLS), least absolute shrinkage and selection operator (LASSO), least squares support vector machine (LS-SVM), and Gaussian process regression (GPR). The comparative study indicates that, in general, pre-processing of spectral data can play a significant
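
    A minimal sketch of the kind of combinatorial comparison described, assuming scikit-learn and synthetic stand-in data (the paper's benchmark sets, the OSC/EMSC/OPLEC pre-processing methods and GAPLSSP are not reproduced here; row-wise standard normal variate stands in for the pre-processing step):

        # Cross each pre-processing option with each regression method and
        # score every combination by cross-validation; synthetic "spectra"
        # stand in for a real NIR benchmark set.
        import numpy as np
        from sklearn.cross_decomposition import PLSRegression
        from sklearn.linear_model import Lasso
        from sklearn.gaussian_process import GaussianProcessRegressor
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(0)
        X = rng.normal(size=(80, 50))            # 80 spectra x 50 wavelengths
        y = X[:, :5].sum(axis=1) + 0.1 * rng.normal(size=80)

        def snv(X):                              # row-wise standard normal variate
            return (X - X.mean(axis=1, keepdims=True)) / X.std(axis=1, keepdims=True)

        preprocessors = {"none": lambda X: X, "snv": snv}
        models = {"PLS": PLSRegression(n_components=5),
                  "LASSO": Lasso(alpha=0.01),
                  "GPR": GaussianProcessRegressor()}

        for pname, prep in preprocessors.items():
            for mname, model in models.items():
                score = cross_val_score(model, prep(X), y, cv=5, scoring="r2").mean()
                print(f"{pname:>4s} + {mname:<5s}: mean CV R^2 = {score:.3f}")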

  2. A Comparative of business process modelling techniques

    Science.gov (United States)

    Tangkawarow, I. R. H. T.; Waworuntu, J.

    2016-04-01

    In this era, there are many business process modeling techniques. This article reports research on the differences between business process modeling techniques. For each technique, the definition and the structure are explained. This paper presents a comparative analysis of some popular business process modelling techniques. The comparative framework is based on two criteria: notation and how each technique works when implemented in Somerleyton Animal Park. The discussion of each technique ends with its advantages and disadvantages. The final conclusion recommends business process modeling techniques that are easy to use and serves as the basis for evaluating further modelling techniques.

  3. IT vendor selection model by using structural equation model & analytical hierarchy process

    Science.gov (United States)

    Maitra, Sarit; Dominic, P. D. D.

    2012-11-01

    Selecting and evaluating the right vendors is imperative for an organization's global marketplace competitiveness. Improper selection and evaluation of potential vendors can dwarf an organization's supply chain performance. Numerous studies have demonstrated that firms consider multiple criteria when selecting key vendors. This research intends to develop a new hybrid model for the vendor selection process with better decision making. The proposed model provides a suitable tool for assisting decision makers and managers to make the right decisions and select the most suitable vendor. This paper proposes a hybrid model based on the Structural Equation Model (SEM) and the Analytical Hierarchy Process (AHP) for long-term strategic vendor selection problems. The five-step framework of the model has been designed after a thorough literature study. The proposed hybrid model will be applied in a real-life case study to assess its effectiveness. In addition, the what-if analysis technique will be used for model validation purposes.
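
    The AHP half of the proposed hybrid is standard enough to sketch: criterion priorities are taken from the principal eigenvector of a pairwise comparison matrix and checked with Saaty's consistency ratio (CR < 0.1 is the usual acceptance rule). The 3x3 judgment matrix below is hypothetical, not from the paper:

        import numpy as np

        A = np.array([[1.0, 3.0, 5.0],        # hypothetical pairwise judgments
                      [1/3, 1.0, 3.0],        # for, say, cost, quality, delivery
                      [1/5, 1/3, 1.0]])

        eigvals, eigvecs = np.linalg.eig(A)
        k = np.argmax(eigvals.real)
        w = np.abs(eigvecs[:, k].real)
        w /= w.sum()                          # priority weights, summing to 1

        n = A.shape[0]
        ci = (eigvals.real[k] - n) / (n - 1)  # consistency index
        ri = {3: 0.58, 4: 0.90, 5: 1.12}[n]   # Saaty's random index
        print("weights:", w.round(3), "CR:", round(ci / ri, 3))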

  4. Comparative analysis of business rules and business process modeling languages

    Directory of Open Access Journals (Sweden)

    Audrius Rima

    2013-03-01

    When developing an information system, it is important to create clear models and to choose suitable modeling languages. The article analyzes the SRML, SBVR, PRR, SWRL and OCL rule specification languages and the UML, DFD, CPN, EPC, IDEF3 and BPMN business process modeling languages. The article presents a theoretical comparison of business rules and business process modeling languages. According to selected modeling aspects, the article compares sets of different business process modeling languages and business rules representation languages. It also selects the best-fitting language set for a three-layer framework for business rule based software modeling.

  5. On a Robust MaxEnt Process Regression Model with Sample-Selection

    Directory of Open Access Journals (Sweden)

    Hea-Jung Kim

    2018-04-01

    In a regression analysis, a sample-selection bias arises when a dependent variable is partially observed as a result of the sample selection. This study introduces a Maximum Entropy (MaxEnt) process regression model that assumes a MaxEnt prior distribution for its nonparametric regression function and finds that the MaxEnt process regression model includes the well-known Gaussian process regression (GPR) model as a special case. Then, this special MaxEnt process regression model, i.e., the GPR model, is generalized to obtain a robust sample-selection Gaussian process regression (RSGPR) model that deals with non-normal data in the sample selection. Various properties of the RSGPR model are established, including the stochastic representation, distributional hierarchy, and magnitude of the sample-selection bias. These properties are used in the paper to develop a hierarchical Bayesian methodology to estimate the model. This involves a simple and computationally feasible Markov chain Monte Carlo algorithm that avoids analytical or numerical derivatives of the log-likelihood function of the model. The performance of the RSGPR model in terms of the sample-selection bias correction, robustness to non-normality, and prediction, is demonstrated through results in simulations that attest to its good finite-sample performance.

  6. Comparing the staffing models of outsourcing in selected companies

    OpenAIRE

    Chaloupková, Věra

    2010-01-01

    This thesis deals with problems of the takeover of employees in outsourcing. The principal purpose is to compare the staffing models of outsourcing in selected companies. To compare the selected companies, I chose multi-criteria analysis. This thesis is divided into six chapters. The first chapter is devoted to the theoretical part. This chapter describes basic concepts such as outsourcing, personal aspects, phases of outsourcing projects, communications and culture. The rest of the thesis is devote...

  7. Variable Selection for Nonparametric Gaussian Process Priors: Models and Computational Strategies.

    Science.gov (United States)

    Savitsky, Terrance; Vannucci, Marina; Sha, Naijun

    2011-02-01

    This paper presents a unified treatment of Gaussian process models that extends to data from the exponential dispersion family and to survival data. Our specific interest is in the analysis of data sets with predictors that have an a priori unknown form of possibly nonlinear associations to the response. The modeling approach we describe incorporates Gaussian processes in a generalized linear model framework to obtain a class of nonparametric regression models where the covariance matrix depends on the predictors. We consider, in particular, continuous, categorical and count responses. We also look into models that account for survival outcomes. We explore alternative covariance formulations for the Gaussian process prior and demonstrate the flexibility of the construction. Next, we focus on the important problem of selecting variables from the set of possible predictors and describe a general framework that employs mixture priors. We compare alternative MCMC strategies for posterior inference and achieve a computationally efficient and practical approach. We demonstrate performances on simulated and benchmark data sets.
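
    One concrete form of GP-based variable selection can be sketched with an anisotropic (ARD) kernel instead of the paper's mixture-prior MCMC machinery: a separate length-scale is fitted per predictor, and predictors whose fitted length-scales drift large are effectively dropped. Data are synthetic:

        import numpy as np
        from sklearn.gaussian_process import GaussianProcessRegressor
        from sklearn.gaussian_process.kernels import RBF

        rng = np.random.default_rng(1)
        X = rng.uniform(-2, 2, size=(100, 4))
        y = np.sin(X[:, 0]) + 0.5 * X[:, 1] ** 2 + 0.05 * rng.normal(size=100)
        # predictors 2 and 3 are pure noise by construction

        kernel = RBF(length_scale=np.ones(4))    # one length-scale per predictor
        gp = GaussianProcessRegressor(kernel=kernel, alpha=1e-2).fit(X, y)
        print("fitted length-scales:", gp.kernel_.length_scale.round(2))
        # relevant predictors end up with short length-scales; irrelevant
        # ones drift to large values and barely influence the covariance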

  8. The application of feature selection to the development of Gaussian process models for percutaneous absorption.

    Science.gov (United States)

    Lam, Lun Tak; Sun, Yi; Davey, Neil; Adams, Rod; Prapopoulou, Maria; Brown, Marc B; Moss, Gary P

    2010-06-01

    The aim was to employ Gaussian processes to assess mathematically the nature of a skin permeability dataset and to employ these methods, particularly feature selection, to determine the key physicochemical descriptors which exert the most significant influence on percutaneous absorption, and to compare such models with established existing models. Gaussian processes, including automatic relevance detection (GPRARD) methods, were employed to develop models of percutaneous absorption that identified key physicochemical descriptors of percutaneous absorption. Using MatLab software, the statistical performance of these models was compared with single linear networks (SLN) and quantitative structure-permeability relationships (QSPRs). Feature selection methods were used to examine in more detail the physicochemical parameters used in this study. A range of statistical measures to determine model quality were used. The inherently nonlinear nature of the skin data set was confirmed. The Gaussian process regression (GPR) methods yielded predictive models that offered statistically significant improvements over SLN and QSPR models with regard to predictivity (where the rank order was: GPR > SLN > QSPR). Feature selection analysis determined that the best GPR models were those that contained log P, melting point and the number of hydrogen bond donor groups as significant descriptors. Further statistical analysis also found that great synergy existed between certain parameters. It suggested that a number of the descriptors employed were effectively interchangeable, thus questioning the use of models where discrete variables are output, usually in the form of an equation. The use of a nonlinear GPR method produced models with significantly improved predictivity, compared with SLN or QSPR models. Feature selection methods were able to provide important mechanistic information. However, it was also shown that significant synergy existed between certain parameters, and as such it

  9. Numerical Model based Reliability Estimation of Selective Laser Melting Process

    DEFF Research Database (Denmark)

    Mohanty, Sankhya; Hattel, Jesper Henri

    2014-01-01

    Selective laser melting is developing into a standard manufacturing technology with applications in various sectors. However, the process is still far from being at par with conventional processes such as welding and casting, the primary reason of which is the unreliability of the process. While...... of the selective laser melting process. A validated 3D finite-volume alternating-direction-implicit numerical technique is used to model the selective laser melting process, and is calibrated against results from single track formation experiments. Correlation coefficients are determined for process input parameters such as laser power, speed, beam profile, etc. Subsequently, uncertainties in the processing parameters are utilized to predict a range for the various outputs, using a Monte Carlo method based uncertainty analysis methodology, and the reliability of the process is established....
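
    The Monte Carlo step can be sketched independently of the calibrated finite-volume model: sample the uncertain process inputs, push them through a process model (a made-up melt-depth surrogate below, not the paper's), and read off an output range and a reliability estimate.

        import numpy as np

        rng = np.random.default_rng(42)
        n = 100_000
        power = rng.normal(200.0, 5.0, n)     # laser power [W], assumed spread
        speed = rng.normal(1000.0, 30.0, n)   # scan speed [mm/s], assumed spread

        melt = 0.4 * power / np.sqrt(speed)   # hypothetical surrogate response

        lo, hi = 2.45, 2.60                   # assumed specification limits
        reliability = np.mean((melt > lo) & (melt < hi))
        print(f"output: {melt.mean():.3f} +/- {melt.std():.3f}, "
              f"P(in spec) = {reliability:.3f}")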

  10. Mental health courts and their selection processes: modeling variation for consistency.

    Science.gov (United States)

    Wolff, Nancy; Fabrikant, Nicole; Belenko, Steven

    2011-10-01

    Admission into mental health courts is based on a complicated and often variable decision-making process that involves multiple parties representing different expertise and interests. To the extent that eligibility criteria of mental health courts are more suggestive than deterministic, selection bias can be expected. Very little research has focused on the selection processes underpinning problem-solving courts even though such processes may dominate the performance of these interventions. This article describes a qualitative study designed to deconstruct the selection and admission processes of mental health courts. In this article, we describe a multi-stage, complex process for screening and admitting clients into mental health courts. The selection filtering model that is described has three eligibility screening stages: initial, assessment, and evaluation. The results of this study suggest that clients selected by mental health courts are shaped by the formal and informal selection criteria, as well as by the local treatment system.

  11. Solvent selection methodology for pharmaceutical processes: Solvent swap

    DEFF Research Database (Denmark)

    Papadakis, Emmanouil; Kumar Tula, Anjan; Gani, Rafiqul

    2016-01-01

    A method for the selection of appropriate solvents for the solvent swap task in pharmaceutical processes has been developed. This solvent swap method is based on the solvent selection method of Gani et al. (2006) and considers additional selection criteria such as boiling point difference...... in pharmaceutical processes as well as new solvent swap alternatives. The method takes into account process considerations such as batch distillation and crystallization to achieve the swap task. Rigorous model based simulations of the swap operation are performed to evaluate and compare the performance...

  12. Fault diagnosis and comparing risk for the steel coil manufacturing process using statistical models for binary data

    International Nuclear Information System (INIS)

    Debón, A.; Carlos Garcia-Díaz, J.

    2012-01-01

    Advanced statistical models can help industry to design more economical and rational investment plans. Fault detection and diagnosis is an important problem in continuous hot dip galvanizing. Increasingly stringent quality requirements in the automotive industry also require ongoing efforts in process control to make processes more robust. Robust methods for estimating the quality of galvanized steel coils are an important tool for the comprehensive monitoring of the performance of the manufacturing process. This study applies different statistical regression models: generalized linear models, generalized additive models and classification trees to estimate the quality of galvanized steel coils on the basis of short time histories. The data, consisting of 48 galvanized steel coils, was divided into sets of conforming and nonconforming coils. Five variables were selected for monitoring the process: steel strip velocity and four bath temperatures. The present paper reports a comparative evaluation of statistical models for binary data using Receiver Operating Characteristic (ROC) curves. A ROC curve is a graph or a technique for visualizing, organizing and selecting classifiers based on their performance. The purpose of this paper is to examine their use in research to obtain the best model to predict defective steel coil probability. In relation to the work of other authors who only propose goodness of fit statistics, we should highlight one distinctive feature of the methodology presented here, which is the possibility of comparing the different models with ROC graphs which are based on model classification performance. Finally, the results are validated by bootstrap procedures.
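
    A sketch of the ROC-based comparison on synthetic stand-in data (the 48-coil data set is not reproduced here): fit two of the candidate model families and compare them by area under the ROC curve, with a bootstrap spread over the test set.

        import numpy as np
        from sklearn.datasets import make_classification
        from sklearn.linear_model import LogisticRegression
        from sklearn.tree import DecisionTreeClassifier
        from sklearn.metrics import roc_auc_score
        from sklearn.model_selection import train_test_split

        X, y = make_classification(n_samples=300, n_features=5, random_state=0)
        Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)

        models = {"GLM": LogisticRegression(max_iter=1000),
                  "tree": DecisionTreeClassifier(max_depth=3, random_state=0)}
        rng = np.random.default_rng(0)
        for name, m in models.items():
            p = m.fit(Xtr, ytr).predict_proba(Xte)[:, 1]
            aucs = []
            for _ in range(500):                   # bootstrap over the test set
                idx = rng.integers(0, len(yte), len(yte))
                if len(np.unique(yte[idx])) == 2:  # need both classes present
                    aucs.append(roc_auc_score(yte[idx], p[idx]))
            print(f"{name}: AUC = {np.mean(aucs):.3f} +/- {np.std(aucs):.3f}")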

  13. Unraveling the sub-processes of selective attention: insights from dynamic modeling and continuous behavior.

    Science.gov (United States)

    Frisch, Simon; Dshemuchadse, Maja; Görner, Max; Goschke, Thomas; Scherbaum, Stefan

    2015-11-01

    Selective attention biases information processing toward stimuli that are relevant for achieving our goals. However, the nature of this bias is under debate: Does it solely rely on the amplification of goal-relevant information or is there a need for additional inhibitory processes that selectively suppress currently distracting information? Here, we explored the processes underlying selective attention with a dynamic, modeling-based approach that focuses on the continuous evolution of behavior over time. We present two dynamic neural field models incorporating the diverging theoretical assumptions. Simulations with both models showed that they make similar predictions with regard to response times but differ markedly with regard to their continuous behavior. Human data observed via mouse tracking as a continuous measure of performance revealed evidence for the model solely based on amplification but no indication of persisting selective distracter inhibition.

  14. Modeling and Experimental Validation of the Electron Beam Selective Melting Process

    Directory of Open Access Journals (Sweden)

    Wentao Yan

    2017-10-01

    Electron beam selective melting (EBSM) is a promising additive manufacturing (AM) technology. The EBSM process consists of three major procedures: ① spreading a powder layer, ② preheating to slightly sinter the powder, and ③ selectively melting the powder bed. The highly transient multi-physics phenomena involved in these procedures pose a significant challenge for in situ experimental observation and measurement. To advance the understanding of the physical mechanisms in each procedure, we leverage high-fidelity modeling and post-process experiments. The models resemble the actual fabrication procedures, including ① a powder-spreading model using the discrete element method (DEM), ② a phase field (PF) model of powder sintering (solid-state sintering), and ③ a powder-melting (liquid-state sintering) model using the finite volume method (FVM). Comprehensive insights into all the major procedures are provided, which have rarely been reported. Preliminary simulation results (including powder particle packing within the powder bed, sintering neck formation between particles, and single-track defects) agree qualitatively with experiments, demonstrating the ability to understand the mechanisms and to guide the design and optimization of the experimental setup and manufacturing process.

  15. An integrated model for supplier selection process

    Institute of Scientific and Technical Information of China (English)

    2003-01-01

    In today's highly competitive manufacturing environment, the supplier selection process becomes one of the crucial activities in supply chain management. In order to select the best supplier(s), it is necessary not only to continuously track and benchmark the performance of suppliers but also to make a trade-off between tangible and intangible factors, some of which may conflict. In this paper an integration of case-based reasoning (CBR), analytical network process (ANP) and linear programming (LP) is proposed to solve the supplier selection problem.

  16. Multiattribute Supplier Selection Using Fuzzy Analytic Hierarchy Process

    Directory of Open Access Journals (Sweden)

    Serhat Aydin

    2010-11-01

    Supplier selection is a multiattribute decision making (MADM) problem which contains both qualitative and quantitative factors. Supplier selection has vital importance for most companies. The aim of this paper is to provide an AHP-based analytical tool for decision support, enabling an effective multicriteria supplier selection process in an air conditioner seller firm under fuzziness. In this article, the Analytic Hierarchy Process (AHP) under fuzziness is employed because it permits an evaluation scale including linguistic expressions, crisp numerical values, fuzzy numbers and range numerical values. This scale provides a more flexible evaluation compared with other fuzzy AHP methods. In this study, the modified AHP was used for supplier selection in an air conditioner firm. Three experts evaluated the suppliers according to the proposed model and the most appropriate supplier was selected. The proposed model enables decision makers to select the best supplier among supplier firms effectively. We confirm that the modified fuzzy AHP is appropriate for group decision making in supplier selection problems.

  17. Detecting selection needs comparative data

    DEFF Research Database (Denmark)

    Nielsen, Rasmus; Hubisz, Melissa J.

    2005-01-01

    Positive selection at the molecular level is usually indicated by an increase in the ratio of non-synonymous to synonymous substitutions (dN/dS) in comparative data. However, Plotkin et al. [1] describe a new method for detecting positive selection based on a single nucleotide sequence. We show here that this method is particularly sensitive to assumptions regarding the underlying mutational processes and does not provide a reliable way to identify positive selection....

  18. Probabilistic wind power forecasting with online model selection and warped gaussian process

    International Nuclear Information System (INIS)

    Kou, Peng; Liang, Deliang; Gao, Feng; Gao, Lin

    2014-01-01

    Highlights: • A new online ensemble model for the probabilistic wind power forecasting. • Quantifying the non-Gaussian uncertainties in wind power. • Online model selection that tracks the time-varying characteristic of wind generation. • Dynamically altering the input features. • Recursive update of base models. - Abstract: Based on the online model selection and the warped Gaussian process (WGP), this paper presents an ensemble model for the probabilistic wind power forecasting. This model provides the non-Gaussian predictive distributions, which quantify the non-Gaussian uncertainties associated with wind power. In order to follow the time-varying characteristics of wind generation, multiple time dependent base forecasting models and an online model selection strategy are established, thus adaptively selecting the most probable base model for each prediction. WGP is employed as the base model, which handles the non-Gaussian uncertainties in wind power series. Furthermore, a regime switch strategy is designed to modify the input feature set dynamically, thereby enhancing the adaptiveness of the model. In an online learning framework, the base models should also be time adaptive. To achieve this, a recursive algorithm is introduced, thus permitting the online updating of WGP base models. The proposed model has been tested on the actual data collected from both single and aggregated wind farms
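
    The warping idea can be sketched with a fixed logit warp standing in for the learned warp of a full WGP: model the transformed wind power with a GP, then map the predictive interval back through the inverse warp, which gives the bounded, asymmetric bands a plain GP cannot produce. Data and kernel below are illustrative only:

        import numpy as np
        from sklearn.gaussian_process import GaussianProcessRegressor
        from sklearn.gaussian_process.kernels import RBF, WhiteKernel

        rng = np.random.default_rng(3)
        X = rng.uniform(0, 25, size=(150, 1))        # wind speed [m/s]
        p = 1.0 / (1.0 + np.exp(-(X[:, 0] - 10.0)))  # normalized power in (0, 1)
        p = np.clip(p + 0.05 * rng.normal(size=150), 1e-3, 1 - 1e-3)

        z = np.log(p / (1 - p))                      # logit warp of the target
        gp = GaussianProcessRegressor(RBF(5.0) + WhiteKernel(0.1)).fit(X, z)

        mu, sd = gp.predict(np.array([[8.0], [12.0]]), return_std=True)
        lo = 1 / (1 + np.exp(-(mu - 2 * sd)))        # back-transform the band
        hi = 1 / (1 + np.exp(-(mu + 2 * sd)))
        for v, l, h in zip((8.0, 12.0), lo, hi):
            print(f"v = {v:4.1f} m/s: ~95% power band = [{l:.2f}, {h:.2f}]")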

  19. Social-cognitive processes in preschoolers' selective trust: three cultures compared.

    Science.gov (United States)

    Lucas, Amanda J; Lewis, Charlie; Pala, F Cansu; Wong, Katie; Berridge, Damon

    2013-03-01

    Research on preschoolers' selective learning has mostly been conducted in English-speaking countries. We compared the performance of Turkish preschoolers (who are exposed to a language with evidential markers), Chinese preschoolers (known to be advanced in executive skills), and English preschoolers on an extended selective trust task (N = 144). We also measured children's executive function skills and their ability to attribute false belief. Overall we found a Turkish (rather than a Chinese) advantage in selective trust and a relationship between selective trust and false belief (rather than executive function). This is the first evidence that exposure to a language that obliges speakers to state the sources of their knowledge may sensitize preschoolers to informant reliability. It is also the first demonstration of an association between false belief and selective trust. Together these findings suggest that effective selective learning may progress alongside children's developing capacity to assess the knowledge of others.

  20. From information processing to decisions: Formalizing and comparing psychologically plausible choice models.

    Science.gov (United States)

    Heck, Daniel W; Hilbig, Benjamin E; Moshagen, Morten

    2017-08-01

    Decision strategies explain how people integrate multiple sources of information to make probabilistic inferences. In the past decade, increasingly sophisticated methods have been developed to determine which strategy explains decision behavior best. We extend these efforts to test psychologically more plausible models (i.e., strategies), including a new, probabilistic version of the take-the-best (TTB) heuristic that implements a rank order of error probabilities based on sequential processing. Within a coherent statistical framework, deterministic and probabilistic versions of TTB and other strategies can directly be compared using model selection by minimum description length or the Bayes factor. In an experiment with inferences from given information, only three of 104 participants were best described by the psychologically plausible, probabilistic version of TTB. As in previous studies, most participants were classified as users of weighted-additive, a strategy that integrates all available information and approximates rational decisions. Copyright © 2017 Elsevier Inc. All rights reserved.
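
    For concreteness, a sketch of the deterministic take-the-best heuristic that the paper's probabilistic version extends (that variant adds a cue-specific error probability at each step; the cue values below are made up):

        # Check cues in order of validity; the first discriminating cue decides.
        def take_the_best(cues_a, cues_b, validity_order):
            """Return 'A', 'B', or 'guess' for two options with binary cues."""
            for i in validity_order:          # most valid cue first
                if cues_a[i] != cues_b[i]:    # first cue that discriminates
                    return "A" if cues_a[i] > cues_b[i] else "B"
            return "guess"                    # no cue discriminates

        # Which city is larger? Cues: has airport, is capital, has university.
        print(take_the_best([1, 0, 1], [1, 1, 1], validity_order=[0, 1, 2]))  # B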

  1. Bayesian model selection validates a biokinetic model for zirconium processing in humans

    Science.gov (United States)

    2012-01-01

    Background In radiation protection, biokinetic models for zirconium processing are of crucial importance in dose estimation and further risk analysis for humans exposed to this radioactive substance. They provide limiting values of detrimental effects and build the basis for applications in internal dosimetry, the prediction for radioactive zirconium retention in various organs as well as retrospective dosimetry. Multi-compartmental models are the tool of choice for simulating the processing of zirconium. Although easily interpretable, determining the exact compartment structure and interaction mechanisms is generally daunting. In the context of observing the dynamics of multiple compartments, Bayesian methods provide efficient tools for model inference and selection. Results We are the first to apply a Markov chain Monte Carlo approach to compute Bayes factors for the evaluation of two competing models for zirconium processing in the human body after ingestion. Based on in vivo measurements of human plasma and urine levels we were able to show that a recently published model is superior to the standard model of the International Commission on Radiological Protection. The Bayes factors were estimated by means of the numerically stable thermodynamic integration in combination with a recently developed copula-based Metropolis-Hastings sampler. Conclusions In contrast to the standard model the novel model predicts lower accretion of zirconium in bones. This results in lower levels of noxious doses for exposed individuals. Moreover, the Bayesian approach allows for retrospective dose assessment, including credible intervals for the initially ingested zirconium, in a significantly more reliable fashion than previously possible. All methods presented here are readily applicable to many modeling tasks in systems biology. PMID:22863152
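
    A sketch of the thermodynamic integration estimator named here, on a conjugate toy model (a Gaussian mean with known variance) so the power posteriors can be sampled directly rather than with the paper's copula-based Metropolis-Hastings sampler: the log evidence equals the integral over t of the mean log-likelihood under the power posterior proportional to p(D|theta)^t p(theta).

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(0)
        data = rng.normal(1.5, 1.0, size=20)       # toy data, known unit variance
        n, xbar = len(data), data.mean()

        def loglik(theta):
            return stats.norm.logpdf(data, theta, 1.0).sum()

        temps = np.linspace(0, 1, 21) ** 3         # ladder, denser near t = 0
        means = []
        for t in temps:                            # power posterior is Gaussian
            var = 1.0 / (1.0 + t * n)              # N(0,1) prior, likelihood^t
            mu = t * n * xbar * var
            draws = rng.normal(mu, np.sqrt(var), size=2000)
            means.append(np.mean([loglik(th) for th in draws]))

        means = np.array(means)                    # trapezoid rule over the ladder
        log_evidence = float(np.sum(np.diff(temps) * (means[:-1] + means[1:]) / 2))
        print(f"log evidence (TI): {log_evidence:.2f}")
        # repeating this for a competing model gives a Bayes factor estimate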

  2. Model of the best-of-N nest-site selection process in honeybees

    Science.gov (United States)

    Reina, Andreagiovanni; Marshall, James A. R.; Trianni, Vito; Bose, Thomas

    2017-05-01

    The ability of a honeybee swarm to select the best nest site plays a fundamental role in determining the future colony's fitness. To date, the nest-site selection process has mostly been modeled and theoretically analyzed for the case of binary decisions. However, when the number of alternative nests is larger than two, the decision-process dynamics qualitatively change. In this work, we extend previous analyses of a value-sensitive decision-making mechanism to a decision process among N nests. First, we present the decision-making dynamics in the symmetric case of N equal-quality nests. Then, we generalize our findings to a best-of-N decision scenario with one superior nest and N-1 inferior nests, previously studied empirically in bees and ants. Whereas previous binary models highlighted the crucial role of inhibitory stop-signaling, the key parameter in our new analysis is the relative time invested by swarm members in individual discovery and in signaling behaviors. Our new analysis reveals conflicting pressures on this ratio in symmetric and best-of-N decisions, which could be solved through a time-dependent signaling strategy. Additionally, our analysis suggests how ecological factors determining the density of suitable nest sites may have led to selective pressures for an optimal stable signaling ratio.

  3. Consumer Decision Process in Restaurant Selection: An Application of the Stylized EKB Model

    Directory of Open Access Journals (Sweden)

    Eugenia Wickens

    2016-12-01

    Purpose – The aim of this paper is to propose a framework based on empirical work for understanding the consumer decision processes involved in the selection of a restaurant for leisure meals. Design/Methodology/Approach – An interpretive approach is taken in order to understand the intricacies of the process and the various stages in the process. Six focus group interviews with consumers of various ages and occupations in the South East of the United Kingdom were conducted. Findings and implications – The stylized EKB model of the consumer decision process (Tuan-Pham & Higgins, 2005) was used as a framework for developing different stages of the process. Two distinct parts of the process were identified. Occasion was found to be critical to the stage of problem recognition. In terms of evaluation of alternatives and, in particular, sensitivity to evaluative content, the research indicates that the regulatory focus theory of Tuan-Pham and Higgins (2005) applies to the decision of selecting a restaurant. Limitations – It is acknowledged that this exploratory study is based on a small sample in a single geographical area. Originality – The paper is the first application of the stylized EKB model, which takes into account the motivational dimensions of consumer decision making, missing in other models. It concludes that it may have broader applications to other research contexts.

  4. It Takes Three: Selection, Influence, and De-Selection Processes of Depression in Adolescent Friendship Networks

    Science.gov (United States)

    Van Zalk, Maarten Herman Walter; Kerr, Margaret; Branje, Susan J. T.; Stattin, Hakan; Meeus, Wim H. J.

    2010-01-01

    The authors of this study tested a selection-influence-de-selection model of depression. This model explains friendship influence processes (i.e., friends' depressive symptoms increase adolescents' depressive symptoms) while controlling for two processes: friendship selection (i.e., selection of friends with similar levels of depressive symptoms)…

  5. Bayesian Model Selection under Time Constraints

    Science.gov (United States)

    Hoege, M.; Nowak, W.; Illman, W. A.

    2017-12-01

    Bayesian model selection (BMS) provides a consistent framework for rating and comparing models in multi-model inference. In cases where models of vastly different complexity compete with each other, we also face vastly different computational runtimes of such models. For instance, time series of a quantity of interest can be simulated by an autoregressive process model that takes even less than a second for one run, or by a partial differential equations-based model with runtimes up to several hours or even days. The classical BMS is based on a quantity called Bayesian model evidence (BME). It determines the model weights in the selection process and resembles a trade-off between the bias of a model and its complexity. However, in practice, the runtime of models is another relevant weighting factor for model selection. Hence, we believe that it should be included, leading to an overall trade-off problem between bias, variance and computing effort. We approach this triple trade-off from the viewpoint of our ability to generate realizations of the models under a given computational budget. One way to obtain BME values is through sampling-based integration techniques. We argue from the fact that more expensive models can be sampled much less under time constraints than faster models (in direct proportion to their runtime). The computed evidence in favor of a more expensive model is statistically less significant than the evidence computed in favor of a faster model, since sampling-based strategies are always subject to statistical sampling error. We present a straightforward way to include this misbalance in the model weights that are the basis for model selection. Our approach follows directly from the idea of insufficient significance. It is based on a computationally cheap bootstrapping error estimate of model evidence and is easy to implement. The approach is illustrated in a small synthetic modeling study.

  6. Effects of binge drinking and hangover on response selection sub-processes-a study using EEG and drift diffusion modeling.

    Science.gov (United States)

    Stock, Ann-Kathrin; Hoffmann, Sven; Beste, Christian

    2017-09-01

    Effects of binge drinking on cognitive control and response selection are increasingly recognized in research on alcohol (ethanol) effects. Yet, little is known about how those processes are modulated by hangover effects. Given that acute intoxication and hangover seem to be characterized by partly divergent effects and mechanisms, further research on this topic is needed. In the current study, we hence investigated this with a special focus on potentially differential effects of alcohol intoxication and subsequent hangover on the sub-processes involved in the decision to select a response. We do so by combining drift diffusion modeling of behavioral data with neurophysiological (EEG) data. Contrary to common sense, the results do not show an impairment of all assessed measures. Instead, they show specific effects of high-dose alcohol intoxication and hangover on selective drift diffusion model and EEG parameters (as compared to a sober state). While the acute intoxication induced by binge drinking decreased the drift rate, it was increased by the subsequent hangover, indicating more efficient information accumulation during hangover. Further, the non-decisional processes of information encoding decreased with intoxication, but not during hangover. These effects were reflected in modulations of the N2, P1 and N1 event-related potentials, which reflect conflict monitoring, perceptual gating and attentional selection processes, respectively. As regards the functional neuroanatomical architecture, the anterior cingulate cortex (ACC) as well as occipital networks seem to be modulated. Even though alcohol is known to have broad neurobiological effects, its effects on cognitive processes are rather specific. © 2016 Society for the Study of Addiction.
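
    A sketch of the drift diffusion quantities the abstract refers to: evidence accumulates at drift rate v after a non-decision time until it hits one of two bounds, and moving v or the non-decision time changes response times and accuracy. All parameter values are illustrative, not the paper's estimates:

        import numpy as np

        def simulate_ddm(v, a, ter, n_trials, dt=1e-3, sigma=1.0, seed=0):
            """Mean RT (s) and accuracy for drift v and bound separation a."""
            rng = np.random.default_rng(seed)
            rts, correct = [], []
            for _ in range(n_trials):
                x, t = a / 2.0, 0.0              # start midway between 0 and a
                while 0.0 < x < a:               # accumulate noisy evidence
                    x += v * dt + sigma * np.sqrt(dt) * rng.standard_normal()
                    t += dt
                rts.append(ter + t)              # add non-decision time
                correct.append(x >= a)           # upper bound = correct response
            return np.mean(rts), np.mean(correct)

        for label, v, ter in [("low drift", 0.8, 0.25), ("high drift", 1.6, 0.35)]:
            rt, acc = simulate_ddm(v, a=1.0, ter=ter, n_trials=2000)
            print(f"{label:>10s}: mean RT = {rt:.3f} s, accuracy = {acc:.2f}")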

  7. Vibration and acoustic frequency spectra for industrial process modeling using selective fusion multi-condition samples and multi-source features

    Science.gov (United States)

    Tang, Jian; Qiao, Junfei; Wu, ZhiWei; Chai, Tianyou; Zhang, Jian; Yu, Wen

    2018-01-01

    Frequency spectral data of mechanical vibration and acoustic signals relate to difficult-to-measure production quality and quantity parameters of complex industrial processes. A selective ensemble (SEN) algorithm can be used to build a soft sensor model of these process parameters by fusing valued information selectively from different perspectives. However, a combination of several optimized ensemble sub-models with SEN cannot guarantee the best prediction model. In this study, we use several techniques to construct mechanical vibration and acoustic frequency spectra of a data-driven industrial process parameter model based on selective fusion multi-condition samples and multi-source features. Multi-layer SEN (MLSEN) strategy is used to simulate the domain expert cognitive process. Genetic algorithm and kernel partial least squares are used to construct the inside-layer SEN sub-model based on each mechanical vibration and acoustic frequency spectral feature subset. Branch-and-bound and adaptive weighted fusion algorithms are integrated to select and combine outputs of the inside-layer SEN sub-models. Then, the outside-layer SEN is constructed. Thus, "sub-sampling training examples"-based and "manipulating input features"-based ensemble construction methods are integrated, thereby realizing the selective information fusion process based on multi-condition history samples and multi-source input features. This novel approach is applied to a laboratory-scale ball mill grinding process. A comparison with other methods indicates that the proposed MLSEN approach effectively models mechanical vibration and acoustic signals.

  8. Modeling of biopharmaceutical processes. Part 2: Process chromatography unit operation

    DEFF Research Database (Denmark)

    Kaltenbrunner, Oliver; McCue, Justin; Engel, Philip

    2008-01-01

    Process modeling can be a useful tool to aid in process development, process optimization, and process scale-up. When modeling a chromatography process, one must first select the appropriate models that describe the mass transfer and adsorption that occurs within the porous adsorbent. The theoret......

  9. ERP Software Selection Model using Analytic Network Process

    OpenAIRE

    Lesmana , Andre Surya; Astanti, Ririn Diar; Ai, The Jin

    2014-01-01

    During the implementation of Enterprise Resource Planning (ERP) in any company, one of the most important issues is the selection of ERP software that can satisfy the needs and objectives of the company. This issue is crucial since it may affect the duration of ERP implementation and the costs incurred for it. This research tries to construct a model of the selection of ERP software that is beneficial to the company in order to carry out the selection of the right ERP sof...

  10. Modeling HIV-1 drug resistance as episodic directional selection.

    Science.gov (United States)

    Murrell, Ben; de Oliveira, Tulio; Seebregts, Chris; Kosakovsky Pond, Sergei L; Scheffler, Konrad

    2012-01-01

    The evolution of substitutions conferring drug resistance to HIV-1 is both episodic, occurring when patients are on antiretroviral therapy, and strongly directional, with site-specific resistant residues increasing in frequency over time. While methods exist to detect episodic diversifying selection and continuous directional selection, no evolutionary model combining these two properties has been proposed. We present two models of episodic directional selection (MEDS and EDEPS) which allow the a priori specification of lineages expected to have undergone directional selection. The models infer the sites and target residues that were likely subject to directional selection, using either codon or protein sequences. Compared to its null model of episodic diversifying selection, MEDS provides a superior fit to most sites known to be involved in drug resistance, and neither one test for episodic diversifying selection nor another for constant directional selection are able to detect as many true positives as MEDS and EDEPS while maintaining acceptable levels of false positives. This suggests that episodic directional selection is a better description of the process driving the evolution of drug resistance.

  12. Comparative studies on constitutive models for cohesive interface cracks of quasi-brittle materials

    International Nuclear Information System (INIS)

    Shen Xinpu; Shen Guoxiao; Zhou Lin

    2005-01-01

    In this paper, concerning the modelling of the quasi-brittle fracture process zone at interface cracks of quasi-brittle materials and structures, typical constitutive models of interface cracks are compared. Numerical calculations of the constitutive behaviours of the selected models were carried out at the local level. Aiming at the simulation of quasi-brittle fracture of concrete-like materials and structures, the qualitative comparisons of the selected cohesive models focus on: (1) the fundamental mode I and mode II behaviours of the selected models; (2) the dilatancy properties of the selected models under mixed-mode fracture loading conditions. (authors)

  13. Building v/s Exploring Models: Comparing Learning of Evolutionary Processes through Agent-based Modeling

    Science.gov (United States)

    Wagh, Aditi

    Two strands of work motivate the three studies in this dissertation. Evolutionary change can be viewed as a computational complex system in which a small set of rules operating at the individual level result in different population level outcomes under different conditions. Extensive research has documented students' difficulties with learning about evolutionary change (Rosengren et al., 2012), particularly in terms of levels slippage (Wilensky & Resnick, 1999). Second, though building and using computational models is becoming increasingly common in K-12 science education, we know little about how these two modalities compare. This dissertation adopts agent-based modeling as a representational system to compare these modalities in the conceptual context of micro-evolutionary processes. Drawing on interviews, Study 1 examines middle-school students' productive ways of reasoning about micro-evolutionary processes to find that the specific framing of traits plays a key role in whether slippage explanations are cued. Study 2, which was conducted in 2 schools with about 150 students, forms the crux of the dissertation. It compares learning processes and outcomes when students build their own models or explore a pre-built model. Analysis of Camtasia videos of student pairs reveals that builders' and explorers' ways of accessing rules, and sense-making of observed trends are of a different character. Builders notice rules through available blocks-based primitives, often bypassing their enactment while explorers attend to rules primarily through the enactment. Moreover, builders' sense-making of observed trends is more rule-driven while explorers' is more enactment-driven. Pre and posttests reveal that builders manifest a greater facility with accessing rules, providing explanations manifesting targeted assembly. Explorers use rules to construct explanations manifesting non-targeted assembly. Interviews reveal varying degrees of shifts away from slippage in both
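
    A minimal sketch of the agent-level rule set such models typically encode (rules and parameters invented for illustration): survival depends on an individual trait, survivors reproduce with mutation, and the population-level trait distribution shifts although no population-level rule says so.

        import numpy as np

        rng = np.random.default_rng(7)
        trait = rng.normal(0.0, 1.0, size=500)           # initial population

        for generation in range(30):
            p_survive = 1.0 / (1.0 + np.exp(-trait))     # fitter agents survive more
            survivors = trait[rng.random(trait.size) < p_survive]
            parents = rng.choice(survivors, size=500)    # repopulate from survivors
            trait = parents + rng.normal(0.0, 0.1, 500)  # offspring mutate slightly

        print(f"mean trait after selection: {trait.mean():.2f}")  # drifts upward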

  14. Physics-based simulation modeling and optimization of microstructural changes induced by machining and selective laser melting processes in titanium and nickel based alloys

    Science.gov (United States)

    Arisoy, Yigit Muzaffer

    Manufacturing processes may significantly affect the quality of resultant surfaces and structural integrity of the metal end products. Controlling manufacturing process induced changes to the product's surface integrity may improve the fatigue life and overall reliability of the end product. The goal of this study is to model the phenomena that result in microstructural alterations and improve the surface integrity of the manufactured parts by utilizing physics-based process simulations and other computational methods. Two different (both conventional and advanced) manufacturing processes; i.e. machining of Titanium and Nickel-based alloys and selective laser melting of Nickel-based powder alloys are studied. 3D Finite Element (FE) process simulations are developed and experimental data that validates these process simulation models are generated to compare against predictions. Computational process modeling and optimization have been performed for machining induced microstructure that includes; i) predicting recrystallization and grain size using FE simulations and the Johnson-Mehl-Avrami-Kolmogorov (JMAK) model, ii) predicting microhardness using non-linear regression models and the Random Forests method, and iii) multi-objective machining optimization for minimizing microstructural changes. Experimental analysis and computational process modeling of selective laser melting have been also conducted including; i) microstructural analysis of grain sizes and growth directions using SEM imaging and machine learning algorithms, ii) analysis of thermal imaging for spattering, heating/cooling rates and meltpool size, iii) predicting thermal field, meltpool size, and growth directions via thermal gradients using 3D FE simulations, iv) predicting localized solidification using the Phase Field method. These computational process models and predictive models, once utilized by industry to optimize process parameters, have the ultimate potential to improve performance of
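
    The JMAK relation named in the abstract is compact enough to state directly: the recrystallized volume fraction grows as X(t) = 1 - exp(-k t^n). The rate constant k and Avrami exponent n below are illustrative, not fitted values:

        import numpy as np

        def jmak_fraction(t, k, n):
            """Recrystallized volume fraction at time t under JMAK kinetics."""
            return 1.0 - np.exp(-k * t ** n)

        t = np.array([0.1, 0.5, 1.0, 2.0, 5.0])   # arbitrary time units
        print(jmak_fraction(t, k=0.8, n=2.0).round(3))
        print("t50 =", (np.log(2.0) / 0.8) ** (1 / 2.0))  # time to 50% fraction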

  15. A Heckman Selection-t Model

    KAUST Repository

    Marchenko, Yulia V.

    2012-03-01

    Sample selection arises often in practice as a result of the partial observability of the outcome of interest in a study. In the presence of sample selection, the observed data do not represent a random sample from the population, even after controlling for explanatory variables. That is, data are missing not at random. Thus, standard analysis using only complete cases will lead to biased results. Heckman introduced a sample selection model to analyze such data and proposed a full maximum likelihood estimation method under the assumption of normality. The method was criticized in the literature because of its sensitivity to the normality assumption. In practice, data, such as income or expenditure data, often violate the normality assumption because of heavier tails. We first establish a new link between sample selection models and recently studied families of extended skew-elliptical distributions. Then, this allows us to introduce a selection-t (SLt) model, which models the error distribution using a Student's t distribution. We study its properties and investigate the finite-sample performance of the maximum likelihood estimators for this model. We compare the performance of the SLt model to the conventional Heckman selection-normal (SLN) model and apply it to analyze ambulatory expenditures. Unlike the SLN model, our analysis using the SLt model provides statistical evidence for the existence of sample selection bias in these data. We also investigate the performance of the test for sample selection bias based on the SLt model and compare it with the performances of several tests used with the SLN model. Our findings indicate that the latter tests can be misleading in the presence of heavy-tailed data. © 2012 American Statistical Association.
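
    For orientation, a sketch of the classic Heckman two-step correction that the selection-t model generalizes: a selection equation yields the inverse Mills ratio, which enters the outcome regression as an extra regressor. The data are simulated, and the known true selection index replaces a fitted probit purely for brevity (that substitution is the sketch's main simplification):

        import numpy as np
        from scipy.stats import norm

        rng = np.random.default_rng(0)
        n = 5000
        z = rng.normal(size=n)                    # selection covariate
        x = rng.normal(size=n)                    # outcome covariate
        u = rng.multivariate_normal([0, 0], [[1, 0.6], [0.6, 1]], n)
        selected = (0.5 + 1.0 * z + u[:, 0]) > 0  # y observed only if selected
        y = 1.0 + 2.0 * x + u[:, 1]

        index = 0.5 + 1.0 * z                     # stand-in for a probit fit
        imr = norm.pdf(index) / norm.cdf(index)   # inverse Mills ratio

        # OLS of y on x and the inverse Mills ratio, selected cases only
        Xmat = np.column_stack([np.ones(n), x, imr])[selected]
        beta = np.linalg.lstsq(Xmat, y[selected], rcond=None)[0]
        print("intercept, slope, selection term:", beta.round(2))  # ~1, 2, 0.6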

  16. Traditional and robust vector selection methods for use with similarity based models

    International Nuclear Information System (INIS)

    Hines, J. W.; Garvey, D. R.

    2006-01-01

    Vector selection, or instance selection as it is often called in the data mining literature, performs a critical task in the development of nonparametric, similarity based models. Nonparametric, similarity based modeling (SBM) is a form of 'lazy learning' which constructs a local model 'on the fly' by comparing a query vector to historical, training vectors. For large training sets the creation of local models may become cumbersome, since each training vector must be compared to the query vector. To alleviate this computational burden, varying forms of training vector sampling may be employed with the goal of selecting a subset of the training data such that the samples are representative of the underlying process. This paper describes one such SBM, namely auto-associative kernel regression (AAKR), and presents five traditional vector selection methods and one robust vector selection method that may be used to select prototype vectors from a larger data set in model training. The five traditional vector selection methods considered are min-max, vector ordering, combination min-max and vector ordering, fuzzy c-means clustering, and Adeli-Hung clustering. Each method is described in detail and compared using artificially generated data and data collected from the steam system of an operating nuclear power plant. (authors)
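
    A sketch of AAKR with a min-max style vector selection (simplified; the paper's exact min-max procedure and kernel settings may differ): retain the training vectors holding each signal's extremes plus a random sample, then reconstruct a query as the kernel-weighted average of the retained memory vectors.

        import numpy as np

        rng = np.random.default_rng(0)
        train = rng.normal(size=(1000, 3))           # historical sensor vectors

        keep = set()
        for j in range(train.shape[1]):              # min-max selection per signal
            keep.add(int(train[:, j].argmin()))
            keep.add(int(train[:, j].argmax()))
        keep |= set(rng.choice(len(train), 50, replace=False))  # plus a sample
        memory = train[sorted(keep)]

        def aakr_predict(query, memory, h=0.5):
            """Kernel-weighted reconstruction of a (possibly faulty) query."""
            d2 = ((memory - query) ** 2).sum(axis=1)
            w = np.exp(-d2 / (2 * h ** 2))           # Gaussian kernel weights
            return (w[:, None] * memory).sum(axis=0) / w.sum()

        print(aakr_predict(np.array([0.1, -0.2, 0.4]), memory).round(3))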

  17. A Selection Approach for Optimized Problem-Solving Process by Grey Relational Utility Model and Multicriteria Decision Analysis

    Directory of Open Access Journals (Sweden)

    Chih-Kun Ke

    2012-01-01

    In business enterprises, especially the manufacturing industry, various problem situations may occur during the production process. A situation denotes an evaluation point to determine the status of a production process. A problem may occur if there is a discrepancy between the actual situation and the desired one. Thus, a problem-solving process is often initiated to achieve the desired situation. In the process, how to determine the action that needs to be taken to resolve the situation becomes an important issue. Therefore, this work uses a selection approach for an optimized problem-solving process to assist workers in taking a reasonable action. A grey relational utility model and a multicriteria decision analysis are used to determine the optimal selection order of candidate actions. The selection order is presented to the worker as an adaptive recommended solution. The worker chooses a reasonable problem-solving action based on the selection order. This work uses a high-tech company’s knowledge base log as the analysis data. Experimental results demonstrate that the proposed selection approach is effective.
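
    The grey relational grade at the core of the approach can be sketched directly; the candidate-action scores and the distinguishing coefficient of 0.5 below are illustrative:

        import numpy as np

        scores = np.array([[0.7, 0.9, 0.6],    # 3 candidate actions (rows) rated
                           [0.8, 0.5, 0.9],    # on 3 criteria (larger = better)
                           [0.6, 0.8, 0.8]])

        norm = (scores - scores.min(0)) / (scores.max(0) - scores.min(0))
        delta = np.abs(1.0 - norm)             # distance to the ideal series
        rho = 0.5                              # distinguishing coefficient
        coeff = (delta.min() + rho * delta.max()) / (delta + rho * delta.max())
        grade = coeff.mean(axis=1)             # grey relational grade per action
        print("grades:", grade.round(3), "-> pick action", int(grade.argmax()) + 1)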

  18. Comparing single- and dual-process models of memory development.

    Science.gov (United States)

    Hayes, Brett K; Dunn, John C; Joubert, Amy; Taylor, Robert

    2017-11-01

    This experiment examined single-process and dual-process accounts of the development of visual recognition memory. The participants, 6-7-year-olds, 9-10-year-olds and adults, were presented with a list of pictures which they encoded under shallow or deep conditions. They then made recognition and confidence judgments about a list containing old and new items. We replicated the main trends reported by Ghetti and Angelini in that recognition hit rates increased from 6 to 9 years of age, with larger age changes following deep than shallow encoding. Formal versions of the dual-process high threshold signal detection model and several single-process models (equal variance signal detection, unequal variance signal detection, mixture signal detection) were fit to the developmental data. The unequal variance and mixture signal detection models gave a better account of the data than either of the other models. A state-trace analysis found evidence for only one underlying memory process across the age range tested. These results suggest that single-process memory models based on memory strength are a viable alternative to dual-process models for explaining memory development. © 2016 John Wiley & Sons Ltd.

  19. Category-selective attention modulates unconscious processes in the middle occipital gyrus.

    Science.gov (United States)

    Tu, Shen; Qiu, Jiang; Martens, Ulla; Zhang, Qinglin

    2013-06-01

    Many studies have revealed top-down modulation (spatial attention, attentional load, etc.) of unconscious processing. However, there is little research about how category-selective attention modulates unconscious processing. In the present study, using functional magnetic resonance imaging (fMRI), we found that category-selective attention modulated unconscious face/tool processing in the middle occipital gyrus (MOG). Interestingly, the MOG effects were of opposed direction for face and tool processes. During unconscious face processing, activation in MOG decreased under face-selective attention compared with tool-selective attention. This result was in line with the predictive coding theory. During unconscious tool processing, however, activation in MOG increased under tool-selective attention compared with face-selective attention. The different effects might be ascribed to an interaction between top-down category-selective processes and bottom-up processes at the partial awareness level, as proposed by Kouider, De Gardelle, Sackur, and Dupoux (2010). Specifically, we propose an "excessive activation" hypothesis. Copyright © 2013 Elsevier Inc. All rights reserved.

  20. Modeling intermediate product selection under production and storage capacity limitations in food processing

    DEFF Research Database (Denmark)

    Kilic, Onur Alper; Akkerman, Renzo; Grunow, Martin

    2009-01-01

    In the food industry products are usually characterized by their recipes, which are specified by various quality attributes. For end products, this is given by customer requirements, but for intermediate products, the recipes can be chosen in such a way that raw material procurement costs and processing costs are minimized. However, this product selection process is bound by production and storage capacity limitations, such as the number and size of storage tanks or silos. In this paper, we present a mathematical programming approach that combines decision making on product selection with production and inventory planning, thereby considering the production and storage capacity limitations. The resulting model can be used to solve an important practical problem typical for many food processing industries.
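
    A sketch of the flavor of optimization described, reduced to a plain LP (the paper's model additionally covers planning over time and discrete storage assignments; all numbers below are invented): choose recipe quantities that minimize cost subject to a quality-attribute requirement and a tank capacity limit.

        from scipy.optimize import linprog

        cost = [2.0, 3.0, 2.5]                # cost per ton of three recipes
        fat = [0.20, 0.45, 0.35]              # fat content of each recipe
        # blended output must average >= 0.30 fat over a fixed 100 t total,
        # written as -(fat - 0.30) . x <= 0
        A_ub = [[-(f - 0.30) for f in fat],   # quality requirement
                [1.0, 0.0, 0.0]]              # recipe 1 limited by one 60 t tank
        b_ub = [0.0, 60.0]
        A_eq = [[1.0, 1.0, 1.0]]              # total demand of 100 tons
        b_eq = [100.0]

        res = linprog(cost, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                      bounds=[(0, None)] * 3)
        print(res.x.round(1), "tons; cost =", round(res.fun, 1))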

  1. Post-model selection inference and model averaging

    Directory of Open Access Journals (Sweden)

    Georges Nguefack-Tsague

    2011-07-01

    Although model selection is routinely used in practice nowadays, little is known about its precise effects on any subsequent inference that is carried out. The same goes for the effects induced by the closely related technique of model averaging. This paper is concerned with the use of the same data first to select a model and then to carry out inference, in particular point estimation and point prediction. The properties of the resulting estimator, called a post-model-selection estimator (PMSE), are hard to derive. Using selection criteria such as hypothesis testing, AIC, BIC, HQ and Cp, we illustrate that, in terms of risk function, no single PMSE dominates the others. The same conclusion holds more generally for any penalised likelihood information criterion. We also compare various model averaging schemes and show that no single one dominates the others in terms of risk function. Since PMSEs can be regarded as a special case of model averaging, with 0-1 random weights, we propose a connection between the two theories, in the frequentist approach, by taking account of the selection procedure when performing model averaging. We illustrate the point by simulating a simple linear regression model.
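
    A sketch of one standard model-averaging scheme of the kind compared here: convert the AIC values of candidate models into Akaike weights and average their predictions instead of committing to the single selected model. Data and candidate models are synthetic:

        import numpy as np

        rng = np.random.default_rng(0)
        x = rng.uniform(-1, 1, 60)
        y = 1.0 + 2.0 * x + 0.3 * rng.normal(size=60)

        aics, preds = [], []
        for degree in (1, 2, 3):                   # candidate polynomial models
            coef = np.polyfit(x, y, degree)
            resid = y - np.polyval(coef, x)
            aics.append(60 * np.log(resid.var()) + 2 * (degree + 1))
            preds.append(np.polyval(coef, 0.5))    # prediction at x = 0.5

        aics = np.array(aics)
        w = np.exp(-0.5 * (aics - aics.min()))
        w /= w.sum()                               # Akaike weights
        print("weights:", w.round(3), "averaged prediction:", (w @ preds).round(3))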

  2. A finite volume alternate direction implicit approach to modeling selective laser melting

    DEFF Research Database (Denmark)

    Hattel, Jesper Henri; Mohanty, Sankhya

    2013-01-01

    Over the last decade, several studies have attempted to develop thermal models for analyzing the selective laser melting process with a vision to predict thermal stresses, microstructures and resulting mechanical properties of manufactured products. While a holistic model addressing all involved...... to accurately simulate the process, are constrained by either the size or scale of the model domain. A second challenging aspect involves the inclusion of non-linear material behavior into the 3D implicit FE models. An alternating direction implicit (ADI) method based on a finite volume (FV) formulation...... is proposed for modeling single-layer and few-layer selective laser melting processes. The ADI technique is implemented and applied for two cases involving constant material properties and non-linear material behavior. The ADI FV method consumes less time while having comparable accuracy with respect to 3D...

  3. Fusion strategies for selecting multiple tuning parameters for multivariate calibration and other penalty based processes: A model updating application for pharmaceutical analysis

    Energy Technology Data Exchange (ETDEWEB)

    Tencate, Alister J. [Department of Chemistry, Idaho State University, Pocatello, ID 83209 (United States); Kalivas, John H., E-mail: kalijohn@isu.edu [Department of Chemistry, Idaho State University, Pocatello, ID 83209 (United States); White, Alexander J. [Department of Physics and Optical Engineering, Rose-Hulman Institute of Technology, Terre Haute, IN 47803 (United States)

    2016-05-19

    New multivariate calibration methods and other processes are being developed that require selection of multiple tuning parameter (penalty) values to form the final model. With one or more tuning parameters, using only one measure of model quality to select final tuning parameter values is not sufficient. Optimization of several model quality measures is challenging. Thus, three fusion ranking methods are investigated for simultaneous assessment of multiple measures of model quality for selecting tuning parameter values. One is a supervised learning fusion rule named sum of ranking differences (SRD). The other two are non-supervised learning processes based on the sum and median operations. The effect of the number of models evaluated on the three fusion rules is also assessed using three procedures. One procedure uses all models from all possible combinations of the tuning parameters. To reduce the number of models evaluated, an iterative process (only applicable to SRD) is applied, and thresholding a model quality measure before applying the fusion rules is also used. A near infrared pharmaceutical data set requiring model updating is used to evaluate the three fusion rules. In this case, calibration of the primary conditions is for the active pharmaceutical ingredient (API) of tablets produced in a laboratory. The secondary conditions for calibration updating are for tablets produced in the full batch setting. Two model updating processes requiring selection of two unique tuning parameter values are studied. One is based on Tikhonov regularization (TR) and the other is a variation of partial least squares (PLS). The three fusion methods are shown to provide equivalent and acceptable results allowing automatic selection of the tuning parameter values. Best tuning parameter values are selected when the model quality measures used with the fusion rules are for the small secondary sample set used to form the updated models. In this model updating situation, evaluation of
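    The two non-supervised fusion rules are easy to sketch. Below, a small models-by-measures quality matrix (values invented for illustration) is converted to per-measure ranks, which are then fused by the sum and median operations; the supervised SRD rule is omitted:

```python
import numpy as np
from scipy.stats import rankdata

# Rows: candidate tuning-parameter combinations; columns: model quality
# measures (e.g., calibration error, validation error, a complexity norm),
# all oriented so that lower is better. Values are made up.
Q = np.array([[0.12, 0.30, 5.1],
              [0.10, 0.28, 7.9],
              [0.15, 0.26, 3.2],
              [0.11, 0.33, 4.4]])

ranks = np.apply_along_axis(rankdata, 0, Q)  # rank models within each measure
sum_fused = ranks.sum(axis=1)                # sum fusion rule
median_fused = np.median(ranks, axis=1)      # median fusion rule

print("best by sum rule:    model", int(np.argmin(sum_fused)))
print("best by median rule: model", int(np.argmin(median_fused)))
```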

  4. Fusion strategies for selecting multiple tuning parameters for multivariate calibration and other penalty based processes: A model updating application for pharmaceutical analysis

    International Nuclear Information System (INIS)

    Tencate, Alister J.; Kalivas, John H.; White, Alexander J.

    2016-01-01

    New multivariate calibration methods and other processes are being developed that require selection of multiple tuning parameter (penalty) values to form the final model. With one or more tuning parameters, using only one measure of model quality to select final tuning parameter values is not sufficient. Optimization of several model quality measures is challenging. Thus, three fusion ranking methods are investigated for simultaneous assessment of multiple measures of model quality for selecting tuning parameter values. One is a supervised learning fusion rule named sum of ranking differences (SRD). The other two are non-supervised learning processes based on the sum and median operations. The effect of the number of models evaluated on the three fusion rules is also assessed using three procedures. One procedure uses all models from all possible combinations of the tuning parameters. To reduce the number of models evaluated, an iterative process (only applicable to SRD) is applied, and thresholding a model quality measure before applying the fusion rules is also used. A near infrared pharmaceutical data set requiring model updating is used to evaluate the three fusion rules. In this case, calibration of the primary conditions is for the active pharmaceutical ingredient (API) of tablets produced in a laboratory. The secondary conditions for calibration updating are for tablets produced in the full batch setting. Two model updating processes requiring selection of two unique tuning parameter values are studied. One is based on Tikhonov regularization (TR) and the other is a variation of partial least squares (PLS). The three fusion methods are shown to provide equivalent and acceptable results allowing automatic selection of the tuning parameter values. Best tuning parameter values are selected when the model quality measures used with the fusion rules are for the small secondary sample set used to form the updated models. In this model updating situation, evaluation of

  5. A comparative study of fuzzy target selection methods in direct marketing

    NARCIS (Netherlands)

    Costa Sousa, da J.M.; Kaymak, U.; Madeira, S.

    2002-01-01

    Target selection in direct marketing is an important data mining problem for which fuzzy modeling can be used. The paper compares several fuzzy modeling techniques applied to target selection based on recency, frequency and monetary value measures. The comparison uses cross validation applied to

  6. Verification Techniques for Parameter Selection and Bayesian Model Calibration Presented for an HIV Model

    Science.gov (United States)

    Wentworth, Mami Tonoe

    Uncertainty quantification plays an important role when making predictive estimates of model responses. In this context, uncertainty quantification is defined as quantifying and reducing uncertainties, and the objective is to quantify uncertainties in parameters, models and measurements, and propagate the uncertainties through the model, so that one can make a predictive estimate with quantified uncertainties. Two of the aspects of uncertainty quantification that must be performed prior to propagating uncertainties are model calibration and parameter selection. There are several efficient techniques for these processes; however, the accuracy of these methods is often not verified. This is the motivation for our work, and in this dissertation, we present and illustrate verification frameworks for model calibration and parameter selection in the context of biological and physical models. First, HIV models, developed and improved by [2, 3, 8], describe the viral infection dynamics of an HIV disease. These are also used to make predictive estimates of viral loads and T-cell counts and to construct an optimal control for drug therapy. Estimating input parameters is an essential step prior to uncertainty quantification. However, not all the parameters are identifiable, implying that they cannot be uniquely determined by the observations. These unidentifiable parameters can be partially removed by performing parameter selection, a process in which parameters that have minimal impacts on the model response are determined. We provide verification techniques for Bayesian model calibration and parameter selection for an HIV model. As an example of a physical model, we employ a heat model with experimental measurements presented in [10]. A steady-state heat model represents prototypical behavior for the heat conduction and diffusion processes involved in a thermal-hydraulic model, which is part of nuclear reactor models. We employ this simple heat model to illustrate verification
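    The parameter selection step described here, identifying parameters with minimal impact on the model response, can be illustrated with scaled local sensitivities. The response function below is an invented toy, not the HIV or heat models of the dissertation:

```python
import numpy as np

def response(p, t):
    # Toy model response; k3 is given a deliberately weak influence.
    k1, k2, k3 = p
    return k1 * np.exp(-k2 * t) + 1e-3 * k3 * t

t = np.linspace(0.0, 10.0, 50)
p0 = np.array([2.0, 0.5, 1.0])            # nominal parameter values (assumed)

# Scaled central finite-difference sensitivities, one column per parameter
S = np.empty((t.size, p0.size))
for j in range(p0.size):
    dp = np.zeros_like(p0)
    dp[j] = 1e-6 * max(abs(p0[j]), 1.0)
    S[:, j] = p0[j] * (response(p0 + dp, t) - response(p0 - dp, t)) / (2 * dp[j])

for name, v in zip(["k1", "k2", "k3"], np.linalg.norm(S, axis=0)):
    print(f"{name}: scaled sensitivity norm = {v:.3e}")
# Parameters whose norms are near zero are candidates for fixing or removal
# before calibration.
```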

  7. Saccharomyces cerevisiae as a model organism: a comparative study.

    Directory of Open Access Journals (Sweden)

    Hiren Karathia

    Full Text Available BACKGROUND: Model organisms are used for research because they provide a framework on which to develop and optimize methods that facilitate and standardize analysis. Such organisms should be representative of the living beings for which they are to serve as proxy. However, in practice, a model organism is often selected ad hoc, and without considering its representativeness, because a systematic and rational method to include this consideration in the selection process is still lacking. METHODOLOGY/PRINCIPAL FINDINGS: In this work we propose such a method and apply it in a pilot study of strengths and limitations of Saccharomyces cerevisiae as a model organism. The method relies on the functional classification of proteins into different biological pathways and processes and on full proteome comparisons between the putative model organism and other organisms for which we would like to extrapolate results. Here we compare S. cerevisiae to 704 other organisms from various phyla. For each organism, our results identify the pathways and processes for which S. cerevisiae is predicted to be a good model to extrapolate from. We find that animals in general and Homo sapiens in particular are some of the non-fungal organisms for which S. cerevisiae is likely to be a good model in which to study a significant fraction of common biological processes. We validate our approach by correctly predicting which organisms are phenotypically more distant from S. cerevisiae with respect to several different biological processes. CONCLUSIONS/SIGNIFICANCE: The method we propose could be used to choose appropriate substitute model organisms for the study of biological processes in other species that are harder to study. For example, one could identify appropriate models to study either pathologies in humans or specific biological processes in species with a long development time, such as plants.

  8. Experiments on the Model Testing of the 2nd Phase of Die Casting Process Compared with the Results of Numerical Simulation

    Directory of Open Access Journals (Sweden)

    Dańko R.

    2015-12-01

    Full Text Available Experiments on filling model mould cavities of various inner shapes, inserted in the rectangular cavity of a casting die (dimensions: 280 mm (height) × 190 mm (width) × 10 mm (depth)), with model liquids of various density and viscosity are presented in the paper. The influence of die venting as well as of inlet system area and inlet velocity on the volumetric filling rate of the model liquid was tested by filming the process in a cold-chamber casting die system. Experiments compared with the results of simulation performed by means of the calculation module Novacast (NovaFlow&Solid) for the various selected casting conditions are also presented in the paper.

  9. Optimal processing pathway selection for microalgae-based biorefinery under uncertainty

    DEFF Research Database (Denmark)

    Rizwan, Muhammad; Zaman, Muhammad; Lee, Jay H.

    2015-01-01

    We propose a systematic framework for the selection of optimal processing pathways for a microalgae-based biorefinery under techno-economic uncertainty. The proposed framework promotes robust decision making by taking into account the uncertainties that arise due to inconsistencies among...... and shortage in the available technical information. A stochastic mixed integer nonlinear programming (sMINLP) problem is formulated for determining the optimal biorefinery configurations based on a superstructure model where parameter uncertainties are modeled and included as sampled scenarios. The solution...... the accounting of uncertainty are compared with respect to different objectives. (C) 2015 Elsevier Ltd. All rights reserved.

  10. UML in business process modeling

    Directory of Open Access Journals (Sweden)

    Bartosz Marcinkowski

    2013-03-01

    Full Text Available Selection and proper application of business process modeling methods and techniques have a significant impact on organizational improvement capabilities as well as on proper understanding of the functionality of information systems that shall support the activity of the organization. A number of business process modeling notations were popularized in practice in recent decades. The most significant of these notations include the Business Process Modeling Notation (OMG BPMN) and several Unified Modeling Language (OMG UML) extensions. In this paper, we assess whether one of the most flexible and strictly standardized contemporary business process modeling notations, i.e. the Rational UML Profile for Business Modeling, enables business analysts to prepare business models that are all-embracing and understandable by all the stakeholders. After the introduction, the methodology of research is discussed. Section 2 presents selected case study results. The paper is concluded with a summary.

  11. An evolutionary algorithm for model selection

    Energy Technology Data Exchange (ETDEWEB)

    Bicker, Karl [CERN, Geneva (Switzerland); Chung, Suh-Urk; Friedrich, Jan; Grube, Boris; Haas, Florian; Ketzer, Bernhard; Neubert, Sebastian; Paul, Stephan; Ryabchikov, Dimitry [Technische Univ. Muenchen (Germany)

    2013-07-01

    When performing partial-wave analyses of multi-body final states, the choice of the fit model, i.e. the set of waves to be used in the fit, can significantly alter the results of the partial wave fit. Traditionally, the models were chosen based on physical arguments and by observing the changes in log-likelihood of the fits. To reduce possible bias in the model selection process, an evolutionary algorithm was developed based on a Bayesian goodness-of-fit criterion which takes into account the model complexity. Starting from systematically constructed pools of waves which contain significantly more waves than the typical fit model, the algorithm yields a model with an optimal log-likelihood and with a number of partial waves which is appropriate for the number of events in the data. Partial waves with small contributions to the total intensity are penalized and likely to be dropped during the selection process, as are models where excessive correlations between single waves occur. Due to the automated nature of the model selection, a much larger part of the model space can be explored than would be possible in a manual selection. In addition, the method allows one to assess the dependence of the fit result on the fit model, which is an important contribution to the systematic uncertainty.
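    A minimal sketch of the idea, substituting an ordinary least-squares likelihood for a partial-wave likelihood: binary masks over a pool of candidate components are evolved under a fitness that rewards log-likelihood and applies a BIC-style penalty on the number of selected components. All settings are assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
n, pool = 200, 12                        # pool of 12 candidate components
X = rng.normal(size=(n, pool))
y = X[:, 0] - 0.8 * X[:, 3] + 0.5 * rng.normal(size=n)  # only 0 and 3 matter

def fitness(mask):
    """Gaussian log-likelihood of an OLS fit minus a BIC-style penalty."""
    k = int(mask.sum())
    if k == 0:
        rss = y @ y
    else:
        beta, *_ = np.linalg.lstsq(X[:, mask], y, rcond=None)
        rss = np.sum((y - X[:, mask] @ beta) ** 2)
    return -0.5 * n * np.log(rss / n) - 0.5 * k * np.log(n)

popsize, gens = 40, 60
pop = rng.random((popsize, pool)) < 0.5
for _ in range(gens):
    scores = np.array([fitness(m) for m in pop])
    parents = pop[np.argsort(scores)[-popsize // 2:]]      # truncation selection
    cuts = rng.integers(1, pool, size=popsize // 2)
    kids = np.array([np.concatenate([a[:c], b[c:]])        # one-point crossover
                     for a, b, c in zip(parents, np.roll(parents, 1, 0), cuts)])
    kids ^= rng.random(kids.shape) < 0.05                  # bit-flip mutation
    pop = np.vstack([parents, kids])

best = pop[np.argmax([fitness(m) for m in pop])]
print("selected components:", np.flatnonzero(best))        # ideally [0 3]
```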

  12. Predictive modeling, simulation, and optimization of laser processing techniques: UV nanosecond-pulsed laser micromachining of polymers and selective laser melting of powder metals

    Science.gov (United States)

    Criales Escobar, Luis Ernesto

    One of the most rapidly evolving areas of research is the utilization of lasers for micro-manufacturing and additive manufacturing purposes. The use of the laser beam as a manufacturing tool arises from the need for flexible and rapid manufacturing at low-to-mid cost. Laser micro-machining provides an advantage over mechanical micro-machining due to the faster production times of large batch sizes and the high costs associated with specific tools. Laser-based additive manufacturing enables processing of powder metals for direct and rapid fabrication of products. Therefore, laser processing can be viewed as a fast, flexible, and cost-effective approach compared to traditional manufacturing processes. Two types of laser processing techniques are studied: laser ablation of polymers for micro-channel fabrication and selective laser melting of metal powders. Initially, a feasibility study for laser-based micro-channel fabrication of poly(dimethylsiloxane) (PDMS) via experimentation is presented. In particular, the effectiveness of utilizing a nanosecond-pulsed laser as the energy source for laser ablation is studied. The results are analyzed statistically and a relationship between process parameters and micro-channel dimensions is established. Additionally, a process model is introduced for predicting channel depth. Model outputs are compared with and analyzed against experimental results. The second part of this research focuses on a physics-based FEM approach for predicting the temperature profile and melt pool geometry in selective laser melting (SLM) of metal powders. Temperature profiles are calculated for a moving laser heat source to understand the temperature rise due to heating during SLM. Based on the predicted temperature distributions, the melt pool geometry, i.e. the locations at which melting of the powder material occurs, is determined. Simulation results are compared against data obtained from experimental Inconel 625 test coupons fabricated at the National
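    On the thermal side, a common analytical cross-check for such FEM predictions is the classical Rosenthal solution for a moving point heat source on a semi-infinite solid. The sketch below uses rough, assumed material and process values, not the dissertation's model or its Inconel 625 data:

```python
import numpy as np

# Rosenthal solution: T = T0 + Q/(2*pi*k*R) * exp(-v*(xi + R)/(2*alpha)),
# with xi = x - v*t (coordinate moving with the source), R = sqrt(xi^2+y^2+z^2).
k = 12.0          # thermal conductivity, W/(m K)        (assumed)
alpha = 3.2e-6    # thermal diffusivity, m^2/s           (assumed)
Q = 0.35 * 195.0  # absorbed power = absorptivity x laser power, W (assumed)
v = 0.8           # scan speed, m/s                      (assumed)
T0 = 298.0        # ambient temperature, K

def rosenthal_T(xi, y, z=0.0):
    R = np.sqrt(xi ** 2 + y ** 2 + z ** 2)
    return T0 + Q / (2 * np.pi * k * R) * np.exp(-v * (xi + R) / (2 * alpha))

for xi in np.linspace(-300e-6, 100e-6, 9):   # positions along the track, m
    print(f"xi = {xi * 1e6:7.1f} um -> T = {rosenthal_T(xi, 20e-6):7.0f} K")
# The melt-pool boundary can be estimated as the isotherm where T reaches
# the alloy melting temperature.
```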

  13. Process cost and facility considerations in the selection of primary cell culture clarification technology.

    Science.gov (United States)

    Felo, Michael; Christensen, Brandon; Higgins, John

    2013-01-01

    The bioreactor volume delineating the selection of primary clarification technology is not always easily defined. Development of a commercial scale process for the manufacture of therapeutic proteins requires scale-up from a few liters to thousands of liters. While the separation techniques used for protein purification are largely conserved across scales, the separation techniques for primary cell culture clarification vary with scale. Process models were developed to compare monoclonal antibody production costs using two cell culture clarification technologies. One process model was created for cell culture clarification by disc stack centrifugation with depth filtration. A second process model was created for clarification by multi-stage depth filtration. Analyses were performed to examine the influence of bioreactor volume, product titer, depth filter capacity, and facility utilization on overall operating costs. At bioreactor volumes above 5,000 L, clarification using centrifugation followed by depth filtration offers significant cost savings. For bioreactor volumes of ∼2,000 L, clarification costs are similar between depth filtration and centrifugation. At this scale, factors including facility utilization, available capital, ease of process development, implementation timelines, and process performance characterization play an important role in clarification technology selection. In the case study presented, a multi-product facility selected multi-stage depth filtration for cell culture clarification at the 500 and 2,000 L scales of operation. Facility implementation timelines, process development activities, equipment commissioning and validation, scale-up effects, and process robustness are examined. © 2013 American Institute of Chemical Engineers.

  14. Bubble point pressures of the selected model system for CatLiq® bio-oil process

    DEFF Research Database (Denmark)

    Toor, Saqib Sohail; Rosendahl, Lasse; Baig, Muhammad Noman

    2010-01-01

    The CatLiq® process is a second generation catalytic liquefaction process for the production of bio-oil from WDGS (Wet Distillers Grains with Solubles) at subcritical conditions (280-350 °C and 225-250 bar) in the presence of a homogeneous alkaline and a heterogeneous Zirconia catalyst. In this work, the bubble point pressures of a selected model mixture (CO2 + H2O + Ethanol + Acetic acid + Octanoic acid) were measured to investigate the phase boundaries of the CatLiq® process. The bubble points were measured in the JEFRI-DBR high pressure PVT phase behavior system. The experimental results...

  15. Predictive Active Set Selection Methods for Gaussian Processes

    DEFF Research Database (Denmark)

    Henao, Ricardo; Winther, Ole

    2012-01-01

    We propose an active set selection framework for Gaussian process classification for cases when the dataset is large enough to render its inference prohibitive. Our scheme consists of a two-step alternating procedure of active set update rules and hyperparameter optimization based upon marginal...... high impact on the classifier decision process while removing those that are less relevant. We introduce two active set rules based on different criteria: the first one prefers a model with interpretable active set parameters, whereas the second puts computational complexity first, thus a model...... with active set parameters that directly control its complexity. We also provide both theoretical and empirical support for our active set selection strategy being a good approximation of a full Gaussian process classifier. Our extensive experiments show that our approach can compete with state...

  16. Selection, calibration, and validation of models of tumor growth.

    Science.gov (United States)

    Lima, E A B F; Oden, J T; Hormuth, D A; Yankeelov, T E; Almeida, R C

    2016-11-01

    This paper presents general approaches for addressing some of the most important issues in predictive computational oncology concerned with developing classes of predictive models of tumor growth: first, the process of developing mathematical models of vascular tumors evolving in the complex, heterogeneous macroenvironment of living tissue; second, the selection of the most plausible models among these classes, given relevant observational data; third, the statistical calibration and validation of models in these classes; and finally, the prediction of key Quantities of Interest (QOIs) relevant to patient survival and the effect of various therapies. The most challenging aspect of this endeavor is that all of these issues often involve confounding uncertainties: in observational data, in model parameters, in model selection, and in the features targeted in the prediction. Our approach can be referred to as "model agnostic" in that no single model is advocated; rather, a general approach that explores powerful mixture-theory representations of tissue behavior while accounting for a range of relevant biological factors is presented, which leads to many potentially predictive models. Then representative classes are identified which provide a starting point for the implementation of OPAL, the Occam Plausibility Algorithm, which enables the modeler to select the most plausible models (for given data) and to determine if the model is a valid tool for predicting tumor growth and morphology (in vivo). All of these approaches account for uncertainties in the model, the observational data, the model parameters, and the target QOI. We demonstrate these processes by comparing a list of models for tumor growth, including reaction-diffusion models, phase-field models, and models with and without mechanical deformation effects, for glioma growth measured in murine experiments. Examples are provided that exhibit quite acceptable predictions of tumor growth in laboratory

  17. Impact of selected troposphere models on Precise Point Positioning convergence

    Science.gov (United States)

    Kalita, Jakub; Rzepecka, Zofia

    2016-04-01

    The Precise Point Positioning (PPP) absolute method is currently being intensively investigated in order to reach fast convergence times. Among the various sources that influence PPP convergence, the tropospheric delay is one of the most important. Numerous models of tropospheric delay have been developed and applied to PPP processing. However, with rare exceptions, the quality of those models does not allow fixing the zenith path delay tropospheric parameter, leaving the difference between the nominal and final values to the estimation process. Here we present a comparison of several PPP result sets, each based on a different troposphere model. The respective nominal values are adopted from the models VMF1, GPT2w, MOPS and ZERO-WET. The PPP solution admitted as reference is based on the final troposphere product from the International GNSS Service (IGS). The VMF1 mapping function was used for all processing variants in order to make the impact of the applied nominal values comparable. The worst case initiates the zenith wet delay with a zero value (ZERO-WET). The impact of any candidate model for the tropospheric nominal values should fall between the IGS and ZERO-WET border variants. The analysis is based on data from seven IGS stations located in the mid-latitude European region from year 2014. For the purpose of this study, several days with the most active troposphere were selected for each station. All the PPP solutions were determined using the gLAB open-source software, with the Kalman filter implemented independently by the authors of this work. The processing was performed on 1-hour slices of observation data. In addition to the analysis of the output processing files, the presented study contains a detailed analysis of the tropospheric conditions for the selected data. The overall results show that for the height component the VMF1 model outperforms GPT2w and MOPS by 35-40% and the ZERO-WET variant by 150%. In most of the cases all solutions converge to the same values during first

  18. Decision Support Model for Selection Technologies in Processing of Palm Oil Industrial Liquid Waste

    Science.gov (United States)

    Ishak, Aulia; Ali, Amir Yazid bin

    2017-12-01

    The palm oil industry continues to grow from year to year, processing palm fruit into crude palm oil (CPO) and palm kernel oil (PKO). Together, these two products account for about 30% of the raw material; the remaining 70% becomes palm oil waste, and the amount of waste will increase in line with the development of the industry. If not handled properly and effectively, this waste will contribute significantly to environmental damage, and industrial activities, from raw material intake to finished products, will disrupt the lives of people around the factory. Many alternative treatment technologies are available, but a frequent problem is the difficulty of implementing the most appropriate one. The purpose of this research is to develop a database of waste processing technologies, to identify qualitative and quantitative criteria for selecting a technology, and to develop a Decision Support System (DSS) that can help make the decision. To achieve these objectives, questionnaires were developed to identify waste processing technologies and to build an appropriate technology database. Data analysis is performed using the Analytic Hierarchy Process (AHP), and the model is built with MySQL software, yielding a tool for evaluating and selecting palm oil mill liquid waste processing technology.
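    The AHP step can be sketched compactly: a Saaty-style pairwise comparison matrix over the criteria is reduced to a priority vector via its principal eigenvector, followed by a consistency check. The three criteria and all judgments below are invented for illustration:

```python
import numpy as np

# Pairwise comparisons for three hypothetical criteria, e.g. cost,
# treatment efficiency, ease of operation (values assumed).
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])

eigvals, eigvecs = np.linalg.eig(A)
i = int(np.argmax(eigvals.real))
w = np.abs(eigvecs[:, i].real)
w /= w.sum()                                  # priority vector (weights)

n = A.shape[0]
ci = (eigvals.real[i] - n) / (n - 1)          # consistency index
ri = {3: 0.58, 4: 0.90, 5: 1.12}[n]           # Saaty's random index
print("weights:", np.round(w, 3), " CR =", round(ci / ri, 3))
# A consistency ratio CR below 0.1 is conventionally considered acceptable.
```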

  19. A SUPPLIER SELECTION MODEL FOR SOFTWARE DEVELOPMENT OUTSOURCING

    Directory of Open Access Journals (Sweden)

    Hancu Lucian-Viorel

    2010-12-01

    Full Text Available This paper presents a multi-criteria decision making model used for supplier selection for software development outsourcing on e-marketplaces. This model can be used in auctions. The supplier selection process has become complex and difficult over the last twenty years, as the Internet plays an important role in business management. Companies have to concentrate their efforts on their core activities, and the other activities should be realized by outsourcing. They can achieve significant cost reductions by using e-marketplaces in their purchase process and by using decision support systems for supplier selection. Many approaches to the supplier evaluation and selection process have been proposed in the literature. The performance of potential suppliers is evaluated using multi-criteria decision making methods rather than a single factor such as cost.

  20. A Heckman Selection- t Model

    KAUST Repository

    Marchenko, Yulia V.; Genton, Marc G.

    2012-01-01

    for sample selection bias based on the SLt model and compare it with the performances of several tests used with the SLN model. Our findings indicate that the latter tests can be misleading in the presence of heavy-tailed data. © 2012 American Statistical

  1. Application of Bayesian Model Selection for Metal Yield Models using ALEGRA and Dakota.

    Energy Technology Data Exchange (ETDEWEB)

    Portone, Teresa; Niederhaus, John Henry; Sanchez, Jason James; Swiler, Laura Painton

    2018-02-01

    This report introduces the concepts of Bayesian model selection, which provides a systematic means of calibrating and selecting an optimal model to represent a phenomenon. This has many potential applications, including the comparison of constitutive models. The ideas described herein are applied to a model selection problem between different yield models for hardened steel under extreme loading conditions.
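    The central quantity is the Bayes factor between candidate models. A minimal sketch using the large-sample Schwarz (BIC) approximation on invented data is shown below; the linear-versus-power-law toy models are placeholders, not the report's ALEGRA/Dakota yield models:

```python
import numpy as np

rng = np.random.default_rng(2)
x = np.linspace(0.01, 0.2, 40)                      # e.g., plastic strain (toy)
y = 300.0 * x ** 0.25 + rng.normal(scale=5.0, size=x.size)  # toy "yield" data

def bic(y, yhat, k):
    n = y.size
    return n * np.log(np.sum((y - yhat) ** 2) / n) + k * np.log(n)

c1 = np.polyfit(x, y, 1)                            # model 1: linear
b, loga = np.polyfit(np.log(x), np.log(y), 1)       # model 2: power law (log fit)
bic1 = bic(y, np.polyval(c1, x), k=2)
bic2 = bic(y, np.exp(loga) * x ** b, k=2)

# Schwarz approximation: B21 ~ exp((BIC1 - BIC2) / 2); values > 1 favor model 2
print(f"BIC1 = {bic1:.1f}, BIC2 = {bic2:.1f}, "
      f"approx Bayes factor B21 = {np.exp(0.5 * (bic1 - bic2)):.2e}")
```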

  2. Modeling of the thermal physical process and study on the reliability of linear energy density for selective laser melting

    Directory of Open Access Journals (Sweden)

    Zhaowei Xiang

    2018-06-01

    Full Text Available A finite element model of the powder layer in selective laser melting (SLM) that accounts for volume shrinkage and the powder-to-dense transition is established. A comparison between models that do and do not consider volume shrinkage or the powder-to-dense process is carried out. Further, a parametric analysis of laser power and scan speed is conducted, and the reliability of linear energy density as a design parameter is investigated. The results show that the established model is effective and more accurate in predicting the temperature distribution and the length and depth of the molten pool. The maximum temperature is more sensitive to laser power than to scan speed. The maximum heating rate and cooling rate increase with increasing scan speed at constant laser power, and likewise with increasing laser power at constant scan speed. The simulation results and experimental results reveal that linear energy density is not always reliable as a design parameter in SLM. Keywords: Selective laser melting, Volume shrinkage, Powder-to-dense process, Numerical modeling, Thermal analysis, Linear energy density
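    The unreliability of linear energy density is easy to see numerically: distinct power/speed combinations share the same ratio yet can melt very differently. A tiny illustration with arbitrary values:

```python
# Linear energy density (LED) = laser power / scan speed. Both pairs below
# (arbitrary numbers) give LED = 0.5 J/mm, yet they need not produce the
# same melt-pool behavior, which is why LED alone can mislead.
pairs = [(100.0, 200.0), (200.0, 400.0)]   # (power in W, speed in mm/s)
for P, v in pairs:
    print(f"P = {P:5.0f} W, v = {v:5.0f} mm/s -> LED = {P / v:.2f} J/mm")
```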

  3. A comparative analysis of selected wastewater pretreatment processes in food industry

    Science.gov (United States)

    Jaszczyszyn, Katarzyna; Góra, Wojciech; Dymaczewski, Zbysław; Borowiak, Robert

    2018-02-01

    The article presents a comparative analysis of classical coagulation with iron sulphate and adsorption on bentonite for the pretreatment of wastewater in the food industry. The studies found chemical oxygen demand (COD) and total nitrogen (TN) reduction to be comparable for both technologies, with a 29% higher total phosphorus removal efficiency observed for coagulation. After the coagulation and adsorption processes, a significant difference between the mineral and organic fractions in the sludge was found (49% and 51% for bentonite versus 28% and 72% for iron sulphate, respectively).

  4. Quality Quandaries- Time Series Model Selection and Parsimony

    DEFF Research Database (Denmark)

    Bisgaard, Søren; Kulahci, Murat

    2009-01-01

    Some of the issues involved in selecting adequate models for time series data are discussed using an example concerning the number of users of an Internet server. The process of selecting an appropriate model is subjective and requires experience and judgment. The authors believe an important consideration in model selection should be parameter parsimony. They favor the use of parsimonious mixed ARMA models, noting that research has shown that a model building strategy that considers only autoregressive representations will lead to non-parsimonious models and to loss of forecasting accuracy.
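    In practice this kind of comparison is often automated by scoring candidate orders with information criteria. A minimal sketch on simulated data (not the authors' Internet-server series), where the parsimonious mixed ARMA(1,1) should beat higher-order pure AR alternatives:

```python
import numpy as np
from statsmodels.tsa.arima_process import arma_generate_sample
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(3)
# Simulate ARMA(1,1); statsmodels expects lag polynomials [1, -phi], [1, theta].
y = arma_generate_sample(ar=[1, -0.7], ma=[1, 0.4], nsample=300,
                         distrvs=rng.standard_normal)

for order in [(1, 0, 0), (3, 0, 0), (5, 0, 0), (1, 0, 1), (2, 0, 2)]:
    res = ARIMA(y, order=order).fit()
    print(f"ARMA({order[0]},{order[2]}): AIC = {res.aic:8.1f}  BIC = {res.bic:8.1f}")
```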

  5. ASYMMETRIC PRICE TRANSMISSION MODELING: THE IMPORTANCE OF MODEL COMPLEXITY AND THE PERFORMANCE OF THE SELECTION CRITERIA

    Directory of Open Access Journals (Sweden)

    Henry de-Graft Acquah

    2013-01-01

    Full Text Available Information Criteria provides an attractive basis for selecting the best model from a set of competing asymmetric price transmission models or theories. However, little is understood about the sensitivity of the model selection methods to model complexity. This study therefore fits competing asymmetric price transmission models that differ in complexity to simulated data and evaluates the ability of the model selection methods to recover the true model. The results of Monte Carlo experimentation suggest that in general BIC, CAIC and DIC were superior to AIC when the true data generating process was the standard error correction model, whereas AIC was more successful when the true model was the complex error correction model. It is also shown that the model selection methods performed better in large samples for a complex asymmetric data generating process than with a standard asymmetric data generating process. Except for complex models, AIC's performance did not make substantial gains in recovery rates as sample size increased. The research findings demonstrate the influence of model complexity in asymmetric price transmission model comparison and selection.

  6. A computational approach to compare regression modelling strategies in prediction research.

    Science.gov (United States)

    Pajouheshnia, Romin; Pestman, Wiebe R; Teerenstra, Steven; Groenwold, Rolf H H

    2016-08-25

    It is often unclear which approach to fit, assess and adjust a model will yield the most accurate prediction model. We present an extension of an approach for comparing modelling strategies in linear regression to the setting of logistic regression and demonstrate its application in clinical prediction research. A framework for comparing logistic regression modelling strategies by their likelihoods was formulated using a wrapper approach. Five different strategies for modelling, including simple shrinkage methods, were compared in four empirical data sets to illustrate the concept of a priori strategy comparison. Simulations were performed in both randomly generated data and empirical data to investigate the influence of data characteristics on strategy performance. We applied the comparison framework in a case study setting. Optimal strategies were selected based on the results of a priori comparisons in a clinical data set and the performance of models built according to each strategy was assessed using the Brier score and calibration plots. The performance of modelling strategies was highly dependent on the characteristics of the development data in both linear and logistic regression settings. A priori comparisons in four empirical data sets found that no strategy consistently outperformed the others. The percentage of times that a model adjustment strategy outperformed a logistic model ranged from 3.9 to 94.9 %, depending on the strategy and data set. However, in our case study setting the a priori selection of optimal methods did not result in detectable improvement in model performance when assessed in an external data set. The performance of prediction modelling strategies is a data-dependent process and can be highly variable between data sets within the same clinical domain. A priori strategy comparison can be used to determine an optimal logistic regression modelling strategy for a given data set before selecting a final modelling approach.
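    The wrapper idea, scoring whole modelling strategies by held-out likelihood before committing to one, can be sketched with scikit-learn; the three strategies and the synthetic data below are stand-ins for the paper's five strategies and clinical data sets:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=300, n_features=20, n_informative=5,
                           random_state=0)

strategies = {
    "maximum likelihood":     LogisticRegression(penalty=None, max_iter=2000),
    "ridge shrinkage, C=1.0": LogisticRegression(C=1.0, max_iter=2000),
    "ridge shrinkage, C=0.1": LogisticRegression(C=0.1, max_iter=2000),
}
for name, model in strategies.items():
    # Mean held-out log-likelihood per observation (higher is better)
    ll = cross_val_score(model, X, y, cv=10, scoring="neg_log_loss").mean()
    print(f"{name:24s} mean held-out log-likelihood = {ll:.4f}")
```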

  7. A Primer for Model Selection: The Decisive Role of Model Complexity

    Science.gov (United States)

    Höge, Marvin; Wöhling, Thomas; Nowak, Wolfgang

    2018-03-01

    Selecting a "best" model among several competing candidate models poses an often encountered problem in water resources modeling (and other disciplines which employ models). For a modeler, the best model fulfills a certain purpose best (e.g., flood prediction), which is typically assessed by comparing model simulations to data (e.g., stream flow). Model selection methods find the "best" trade-off between good fit with data and model complexity. In this context, the interpretations of model complexity implied by different model selection methods are crucial, because they represent different underlying goals of modeling. Over the last decades, numerous model selection criteria have been proposed, but modelers who primarily want to apply a model selection criterion often face a lack of guidance for choosing the right criterion that matches their goal. We propose a classification scheme for model selection criteria that helps to find the right criterion for a specific goal, i.e., which employs the correct complexity interpretation. We identify four model selection classes which seek to achieve high predictive density, low predictive error, high model probability, or shortest compression of data. These goals can be achieved by following either nonconsistent or consistent model selection and by either incorporating a Bayesian parameter prior or not. We allocate commonly used criteria to these four classes, analyze how they represent model complexity and what this means for the model selection task. Finally, we provide guidance on choosing the right type of criteria for specific model selection tasks. (A quick guide through all key points is given at the end of the introduction.)

  8. 45 CFR 1305.6 - Selection process.

    Science.gov (United States)

    2010-10-01

    45 Public Welfare 4 2010-10-01 false Selection process. 1305.6 Section 1305.6 Public... PROGRAM ELIGIBILITY, RECRUITMENT, SELECTION, ENROLLMENT AND ATTENDANCE IN HEAD START § 1305.6 Selection process. (a) Each Head Start program must have a formal process for establishing selection criteria and...

  9. Interval-valued intuitionistic fuzzy multi-criteria model for design concept selection

    Directory of Open Access Journals (Sweden)

    Daniel Osezua Aikhuele

    2017-09-01

    Full Text Available This paper presents a new approach for design concept selection by using an integrated Fuzzy Analytical Hierarchy Process (FAHP) and an Interval-valued intuitionistic fuzzy modified TOPSIS (IVIF-modified TOPSIS) model. The integrated model, which uses the improved score function and a weighted normalized Euclidean distance method for the calculation of the separation measures of alternatives from the positive and negative intuitionistic ideal solutions, provides a new approach for the computation of intuitionistic fuzzy ideal solutions. The results of the two approaches are integrated using a reflection defuzzification integration formula. To ensure the feasibility and rationality of the integrated model, the method is successfully applied to evaluating and selecting some design-related problems, including a real-life case study for the selection of the best concept design for a new printed circuit board (PCB) and a hypothetical example. The model, which provides a novel alternative, has been compared with similar computational methods in the literature.
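    For orientation, the sketch below runs the classical crisp TOPSIS steps (normalize, weight, measure separations from the ideal and anti-ideal solutions, rank by closeness); the paper's IVIF variant replaces the crisp scores with interval-valued intuitionistic fuzzy numbers and its improved score function. All numbers are invented:

```python
import numpy as np

D = np.array([[7.0, 9.0, 5.0],       # three design concepts scored on
              [8.0, 6.0, 7.0],       # three benefit criteria (made-up scores)
              [6.0, 8.0, 8.0]])
w = np.array([0.5, 0.3, 0.2])        # criteria weights (assumed)

V = w * D / np.linalg.norm(D, axis=0)        # weighted normalized matrix
v_pos, v_neg = V.max(axis=0), V.min(axis=0)  # ideal / anti-ideal solutions
d_pos = np.linalg.norm(V - v_pos, axis=1)    # separation from ideal
d_neg = np.linalg.norm(V - v_neg, axis=1)    # separation from anti-ideal
closeness = d_neg / (d_pos + d_neg)

print("closeness coefficients:", np.round(closeness, 3))
print("best design concept:", int(np.argmax(closeness)))
```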

  10. Innovation During the Supplier Selection Process

    DEFF Research Database (Denmark)

    Pilkington, Alan; Pedraza, Isabel

    2014-01-01

    Established ideas on supplier selection have not moved much from the original premise of how to choose between bidders. Whilst we have added many different tools and refinements to choose between alternative suppliers, its nature has not evolved. We move the original selection process approach...... observed through an ethnographic embedded-researcher study has refined the selection process and has two selection stages: one for first supply, covering tool/process development, and another, later, for resupply of mature parts. We report the details of the process, those involved, the criteria employed, and identify benefits and weaknesses of this enhanced selection process....

  11. Bayesian site selection for fast Gaussian process regression

    KAUST Repository

    Pourhabib, Arash; Liang, Faming; Ding, Yu

    2014-01-01

    Gaussian Process (GP) regression is a popular method in the field of machine learning and computer experiment designs; however, its ability to handle large data sets is hindered by the computational difficulty in inverting a large covariance matrix. Likelihood approximation methods were developed as a fast GP approximation, thereby reducing the computation cost of GP regression by utilizing a much smaller set of unobserved latent variables called pseudo points. This article reports a further improvement to the likelihood approximation methods by simultaneously deciding both the number and locations of the pseudo points. The proposed approach is a Bayesian site selection method where both the number and locations of the pseudo inputs are parameters in the model, and the Bayesian model is solved using a reversible jump Markov chain Monte Carlo technique. Through a number of simulated and real data sets, it is demonstrated that with appropriate priors chosen, the Bayesian site selection method can produce a good balance between computation time and prediction accuracy: it is fast enough to handle large data sets that a full GP is unable to handle, and it improves, quite often remarkably, the prediction accuracy, compared with the existing likelihood approximations. © 2014 Taylor and Francis Group, LLC.

  12. Bayesian site selection for fast Gaussian process regression

    KAUST Repository

    Pourhabib, Arash

    2014-02-05

    Gaussian Process (GP) regression is a popular method in the field of machine learning and computer experiment designs; however, its ability to handle large data sets is hindered by the computational difficulty in inverting a large covariance matrix. Likelihood approximation methods were developed as a fast GP approximation, thereby reducing the computation cost of GP regression by utilizing a much smaller set of unobserved latent variables called pseudo points. This article reports a further improvement to the likelihood approximation methods by simultaneously deciding both the number and locations of the pseudo points. The proposed approach is a Bayesian site selection method where both the number and locations of the pseudo inputs are parameters in the model, and the Bayesian model is solved using a reversible jump Markov chain Monte Carlo technique. Through a number of simulated and real data sets, it is demonstrated that with appropriate priors chosen, the Bayesian site selection method can produce a good balance between computation time and prediction accuracy: it is fast enough to handle large data sets that a full GP is unable to handle, and it improves, quite often remarkably, the prediction accuracy, compared with the existing likelihood approximations. © 2014 Taylor and Francis Group, LLC.

  13. Fit Gap Analysis – The Role of Business Process Reference Models

    Directory of Open Access Journals (Sweden)

    Dejan Pajk

    2013-12-01

    Full Text Available Enterprise resource planning (ERP systems support solutions for standard business processes such as financial, sales, procurement and warehouse. In order to improve the understandability and efficiency of their implementation, ERP vendors have introduced reference models that describe the processes and underlying structure of an ERP system. To select and successfully implement an ERP system, the capabilities of that system have to be compared with a company’s business needs. Based on a comparison, all of the fits and gaps must be identified and further analysed. This step usually forms part of ERP implementation methodologies and is called fit gap analysis. The paper theoretically overviews methods for applying reference models and describes fit gap analysis processes in detail. The paper’s first contribution is its presentation of a fit gap analysis using standard business process modelling notation. The second contribution is the demonstration of a process-based comparison approach between a supply chain process and an ERP system process reference model. In addition to its theoretical contributions, the results can also be practically applied to projects involving the selection and implementation of ERP systems.

  14. Information-Processing Models and Curriculum Design

    Science.gov (United States)

    Calfee, Robert C.

    1970-01-01

    "This paper consists of three sections--(a) the relation of theoretical analyses of learning to curriculum design, (b) the role of information-processing models in analyses of learning processes, and (c) selected examples of the application of information-processing models to curriculum design problems." (Author)

  15. Fast Bayesian Inference in Dirichlet Process Mixture Models.

    Science.gov (United States)

    Wang, Lianming; Dunson, David B

    2011-01-01

    There has been increasing interest in applying Bayesian nonparametric methods in large samples and high dimensions. As Markov chain Monte Carlo (MCMC) algorithms are often infeasible, there is a pressing need for much faster algorithms. This article proposes a fast approach for inference in Dirichlet process mixture (DPM) models. Viewing the partitioning of subjects into clusters as a model selection problem, we propose a sequential greedy search algorithm for selecting the partition. Then, when conjugate priors are chosen, the resulting posterior conditionally on the selected partition is available in closed form. This approach allows testing of parametric models versus nonparametric alternatives based on Bayes factors. We evaluate the approach using simulation studies and compare it with four other fast nonparametric methods in the literature. We apply the proposed approach to three datasets including one from a large epidemiologic study. Matlab codes for the simulation and data analyses using the proposed approach are available online in the supplemental materials.
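    The sequential greedy flavor of such partition search can be conveyed with a DP-means-style rule, where a point opens a new cluster whenever it is far from every current centroid. This is a simplification related in spirit to, but not identical with, the paper's marginal-likelihood-based greedy search:

```python
import numpy as np

def dp_means(X, lam, iters=20):
    """Greedy hard clustering: a point starts a new cluster when its squared
    distance to every existing centroid exceeds the penalty lam."""
    centers = [X[0]]
    labels = np.zeros(len(X), dtype=int)
    for _ in range(iters):
        for i, x in enumerate(X):                  # assignment pass
            d2 = [np.sum((x - c) ** 2) for c in centers]
            j = int(np.argmin(d2))
            if d2[j] > lam:
                centers.append(x.copy())
                j = len(centers) - 1
            labels[i] = j
        # update pass; keep empty clusters' old centroids so indices stay valid
        centers = [X[labels == j].mean(axis=0) if np.any(labels == j) else centers[j]
                   for j in range(len(centers))]
    return centers, labels

rng = np.random.default_rng(4)
X = np.vstack([rng.normal(m, 0.3, size=(50, 2)) for m in (0.0, 3.0, 6.0)])
centers, labels = dp_means(X, lam=4.0)
print("number of clusters found:", len(np.unique(labels)))   # ideally 3
```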

  16. Preparatory selection of sterilization regime for canned Natural Atlantic Mackerel with oil based on developed mathematical models of the process

    Directory of Open Access Journals (Sweden)

    Maslov A. A.

    2016-12-01

    Full Text Available The aim of the current study is to define preparatory parameters for the sterilization regime of canned "Natural Atlantic Mackerel with Oil". The PRSC software developed at the department of automation and computer engineering is used for the preparatory selection. To determine the parameters of the process model, the pre-trial process of sterilization and cooling in water with backpressure of canned "Natural Atlantic Mackerel with Oil" in can N 3 was performed in the AVK-30M laboratory autoclave. Information about the temperature in the autoclave sterilization chamber and in the can with product was gathered using Ellab TrackSense PRO loggers. From the obtained information, three transfer functions for the product model were identified: for the least heated area of the autoclave, the average heated area, and the most heated area. Temporal temperature dependences in the sterilization chamber were built in the PRSC programme using this information. The model of the sterilization process of canned "Natural Atlantic Mackerel with Oil" was obtained after the pre-trial process. Then, in automatic mode, the sterilization regime of canned "Natural Atlantic Mackerel with Oil" was selected using a value of the actual effect close to the normative sterilizing effect (5.9 conditional minutes). Furthermore, in this study a step-mode sterilization regime of canned "Natural Atlantic Mackerel with Oil" was selected. Utilization of step-mode sterilization with a maximum temperature of 125 °C in the sterilization chamber allows the process duration to be reduced by 10%. However, the application of this regime in practice requires additional research. The described approach, based on the developed mathematical models of the process, makes it possible to obtain optimal step and variable canned food sterilization regimes with high energy efficiency and product quality.

  17. Comparing geological and statistical approaches for element selection in sediment tracing research

    Science.gov (United States)

    Laceby, J. Patrick; McMahon, Joe; Evrard, Olivier; Olley, Jon

    2015-04-01

    Elevated suspended sediment loads reduce reservoir capacity and significantly increase the cost of operating water treatment infrastructure, making the management of sediment supply to reservoirs increasingly important. Sediment fingerprinting techniques can be used to determine the relative contributions of different sources of sediment accumulating in reservoirs. The objective of this research is to compare geological and statistical approaches to element selection for sediment fingerprinting modelling. Time-integrated samplers (n=45) were used to obtain source samples from four major subcatchments flowing into the Baroon Pocket Dam in South East Queensland, Australia. The geochemistry of potential sources was compared to the geochemistry of sediment cores (n=12) sampled in the reservoir. The geological approach selected elements for modelling that provided expected, observed and statistical discrimination between sediment sources. Two statistical approaches selected elements for modelling with the Kruskal-Wallis H-test and Discriminant Function Analysis (DFA). In particular, two different significance levels (0.05 & 0.35) for the DFA were included to investigate the importance of element selection on modelling results. A distribution model determined the relative contributions of different sources to sediment sampled in the Baroon Pocket Dam. Elemental discrimination was expected between one subcatchment (Obi Obi Creek) and the remaining subcatchments (Lexys, Falls and Bridge Creek). Six major elements were expected to provide discrimination. Of these six, only Fe2O3 and SiO2 provided expected, observed and statistical discrimination. Modelling results with this geological approach indicated 36% (+/- 9%) of sediment sampled in the reservoir cores was from mafic-derived sources and 64% (+/- 9%) from felsic-derived sources. The geological and the first statistical approach (DFA0.05) differed by only 1% (σ 5%) for 5 out of 6 model groupings with only
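    The Kruskal-Wallis screening step is straightforward to sketch: a candidate element is retained only if its concentrations differ significantly across the source groups. The data below are simulated stand-ins, not the Baroon Pocket measurements:

```python
import numpy as np
from scipy.stats import kruskal

rng = np.random.default_rng(5)
# Hypothetical concentrations for source samples from three subcatchments;
# only "Fe2O3" is simulated to genuinely differ between groups.
elements = {
    "Fe2O3": [rng.normal(5.0, 0.5, 15), rng.normal(6.5, 0.5, 15),
              rng.normal(6.4, 0.5, 15)],
    "SiO2":  [rng.normal(60.0, 4.0, 15), rng.normal(61.0, 4.0, 15),
              rng.normal(60.5, 4.0, 15)],
    "K2O":   [rng.normal(2.0, 0.4, 15), rng.normal(2.1, 0.4, 15),
              rng.normal(2.0, 0.4, 15)],
}

for name, groups in elements.items():
    H, p = kruskal(*groups)
    verdict = "keep for fingerprinting" if p < 0.05 else "drop"
    print(f"{name}: H = {H:6.2f}, p = {p:.4f} -> {verdict}")
```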

  18. The genealogy of samples in models with selection.

    Science.gov (United States)

    Neuhauser, C; Krone, S M

    1997-02-01

    We introduce the genealogy of a random sample of genes taken from a large haploid population that evolves according to random reproduction with selection and mutation. Without selection, the genealogy is described by Kingman's well-known coalescent process. In the selective case, the genealogy of the sample is embedded in a graph with a coalescing and branching structure. We describe this graph, called the ancestral selection graph, and point out differences and similarities with Kingman's coalescent. We present simulations for a two-allele model with symmetric mutation in which one of the alleles has a selective advantage over the other. We find that when the allele frequencies in the population are already in equilibrium, then the genealogy does not differ much from the neutral case. This is supported by rigorous results. Furthermore, we describe the ancestral selection graph for other selective models with finitely many selection classes, such as the K-allele models, infinitely-many-alleles models, DNA sequence models, and infinitely-many-sites models, and briefly discuss the diploid case.

  19. Selected missense mutations impair frataxin processing in Friedreich ataxia.

    Science.gov (United States)

    Clark, Elisia; Butler, Jill S; Isaacs, Charles J; Napierala, Marek; Lynch, David R

    2017-08-01

    Frataxin (FXN) is a highly conserved mitochondrial protein. Reduced FXN levels cause Friedreich ataxia, a recessive neurodegenerative disease. Typical patients carry GAA repeat expansions on both alleles, while a subgroup of patients carry a missense mutation on one allele and a GAA repeat expansion on the other. Here, we report that selected disease-related FXN missense mutations impair FXN localization, interaction with mitochondrial processing peptidase, and processing. Immunocytochemical studies and subcellular fractionation were performed to study FXN import into the mitochondria and examine the mechanism by which mutations impair FXN processing. Coimmunoprecipitation was performed to study the interaction between FXN and mitochondrial processing peptidase. A proteasome inhibitor was used to model traditional therapeutic strategies. In addition, clinical profiles of subjects with and without point mutations were compared in a large natural history study. FXN I154F and FXN G130V missense mutations decrease FXN 81-210 levels compared with FXN WT, FXN R165C, and FXN W155R, but do not block its association with mitochondria. FXN I154F and FXN G130V also impair FXN maturation and enhance the binding between FXN 42-210 and mitochondrial processing peptidase. Furthermore, blocking proteasomal degradation does not increase FXN 81-210 levels. Additionally, impaired FXN processing also occurs in fibroblasts from patients with FXN G130V. Finally, clinical data from patients with FXN G130V and FXN I154F mutations demonstrate lower severity compared with other individuals with Friedreich ataxia. These data suggest that the effects on processing associated with the FXN G130V and FXN I154F mutations lead to higher levels of partially processed FXN, which may contribute to the milder clinical phenotypes in these patients.

  20. A Comparative Study Of Stock Price Forecasting Using Nonlinear Models

    Directory of Open Access Journals (Sweden)

    Diteboho Xaba

    2017-03-01

    Full Text Available This study compared the in-sample forecasting accuracy of three nonlinear forecasting models, namely the Smooth Transition Regression (STR) model, the Threshold Autoregressive (TAR) model and the Markov-switching Autoregressive (MS-AR) model. Nonlinearity tests were used to confirm the validity of the assumptions of the study. The study used the SBC model selection criterion to select the optimal lag order and the appropriate models. The Mean Square Error (MSE), Mean Absolute Error (MAE) and Root Mean Square Error (RMSE) served as the error measures for evaluating the forecasting ability of the models. The MS-AR models proved to perform well, with lower error measures compared to the LSTR and TAR models in most cases.
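    The three error measures are simple functions of the forecast errors e = y - ŷ: MSE is the mean of e², MAE the mean of |e|, and RMSE the square root of MSE. A minimal helper with toy numbers:

```python
import numpy as np

def forecast_errors(y, yhat):
    """MSE, MAE and RMSE of a forecast, as used to compare the models."""
    e = np.asarray(y) - np.asarray(yhat)
    mse = float(np.mean(e ** 2))
    return {"MSE": mse, "MAE": float(np.mean(np.abs(e))), "RMSE": mse ** 0.5}

# Purely illustrative values
print(forecast_errors([10.0, 10.5, 11.2], [9.8, 10.9, 11.0]))
```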

  1. Expert System Model for Educational Personnel Selection

    Directory of Open Access Journals (Sweden)

    Héctor A. Tabares-Ospina

    2013-06-01

    Full Text Available Staff selection is a difficult task due to the subjectivity that evaluation involves. This process can be complemented using a decision support system. This paper presents the implementation of an expert system to systematize the selection process for professors. The management of the software development is divided into four parts: requirements, design, implementation and commissioning. The proposed system models specific knowledge through relationships between evidence variables and objective variables.

  2. Comparative study of void fraction models

    International Nuclear Information System (INIS)

    Borges, R.C.; Freitas, R.L.

    1985-01-01

    Some models for the calculation of void fraction in water under sub-cooled boiling and saturated vertical upward flow with forced convection have been selected and compared with experimental results in the pressure range of 1 to 150 bar. In order to know the axial void fraction distribution, it is necessary to determine the net generation of vapour and the fluid temperature distribution in the slightly sub-cooled boiling region. It was verified that the net generation of vapour was well represented by the Saha-Zuber model. The selected models for the void fraction calculation give adequate results but with a tendency to overestimate the experimental results, in particular the homogeneous models. The drift flux model is recommended, followed by the Armand and Smith models. (F.E.) [pt
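    In its Zuber-Findlay form, the recommended drift-flux model reduces to a one-line relation for the cross-section-averaged void fraction; the parameter values below are typical assumed numbers, not the study's fitted constants:

```python
# Zuber-Findlay drift-flux relation: alpha = j_g / (C0 * j + V_gj),
# where j_g is the superficial gas velocity, j = j_g + j_l the total
# superficial velocity, C0 the distribution parameter and V_gj the drift
# velocity.
j_g, j_l = 0.4, 1.2     # superficial velocities, m/s (illustrative)
C0, V_gj = 1.13, 0.25   # distribution parameter, drift velocity (assumed)

alpha = j_g / (C0 * (j_g + j_l) + V_gj)
print(f"drift-flux void fraction: alpha = {alpha:.3f}")
```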

  3. Genetic search feature selection for affective modeling

    DEFF Research Database (Denmark)

    Martínez, Héctor P.; Yannakakis, Georgios N.

    2010-01-01

    Automatic feature selection is a critical step towards the generation of successful computational models of affect. This paper presents a genetic search-based feature selection method which is developed as a global-search algorithm for improving the accuracy of the affective models built....... The method is tested and compared against sequential forward feature selection and random search in a dataset derived from a game survey experiment which contains bimodal input features (physiological and gameplay) and expressed pairwise preferences of affect. Results suggest that the proposed method...
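    A generic wrapper-style genetic search over feature masks conveys the idea; the dataset, classifier and GA settings below are placeholders for illustration, not the authors' affect models:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 30))        # placeholder physiological/gameplay features
y = rng.integers(0, 2, size=200)      # placeholder pairwise preference labels

def fitness(mask):
    # Wrapper fitness: cross-validated accuracy of a model on the selected features.
    if not mask.any():
        return 0.0
    return cross_val_score(LogisticRegression(max_iter=1000), X[:, mask], y, cv=3).mean()

pop = rng.random((20, X.shape[1])) < 0.5              # initial population of feature masks
for generation in range(30):
    scores = np.array([fitness(m) for m in pop])
    parents = pop[np.argsort(scores)[-10:]]           # keep the fittest half
    a = parents[rng.integers(10, size=20)]
    b = parents[rng.integers(10, size=20)]
    pop = np.where(rng.random(a.shape) < 0.5, a, b)   # uniform crossover
    pop ^= rng.random(pop.shape) < 0.02               # bit-flip mutation

best_mask = pop[np.argmax([fitness(m) for m in pop])]
```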

  4. @TOME-2: a new pipeline for comparative modeling of protein–ligand complexes

    Science.gov (United States)

    Pons, Jean-Luc; Labesse, Gilles

    2009-01-01

    @TOME 2.0 is a new web pipeline dedicated to protein structure modeling and small ligand docking based on comparative analyses. @TOME 2.0 allows fold recognition, template selection, structural alignment editing, structure comparisons, 3D-model building and evaluation. These tasks are routinely used in sequence analyses for structure prediction. In our pipeline the necessary software is efficiently interconnected in an original manner to accelerate all the processes. Furthermore, we have also connected comparative docking of small ligands that is performed using protein–protein superposition. The input is a simple protein sequence in one-letter code with no comment. The resulting 3D model, protein–ligand complexes and structural alignments can be visualized through dedicated Web interfaces or can be downloaded for further studies. These original features will aid in the functional annotation of proteins and the selection of templates for molecular modeling and virtual screening. Several examples are described to highlight some of the new functionalities provided by this pipeline. The server and its documentation are freely available at http://abcis.cbs.cnrs.fr/AT2/ PMID:19443448

  5. @TOME-2: a new pipeline for comparative modeling of protein-ligand complexes.

    Science.gov (United States)

    Pons, Jean-Luc; Labesse, Gilles

    2009-07-01

    @TOME 2.0 is a new web pipeline dedicated to protein structure modeling and small ligand docking based on comparative analyses. @TOME 2.0 allows fold recognition, template selection, structural alignment editing, structure comparisons, 3D-model building and evaluation. These tasks are routinely used in sequence analyses for structure prediction. In our pipeline the necessary software is efficiently interconnected in an original manner to accelerate all the processes. Furthermore, we have also connected comparative docking of small ligands that is performed using protein-protein superposition. The input is a simple protein sequence in one-letter code with no comment. The resulting 3D model, protein-ligand complexes and structural alignments can be visualized through dedicated Web interfaces or can be downloaded for further studies. These original features will aid in the functional annotation of proteins and the selection of templates for molecular modeling and virtual screening. Several examples are described to highlight some of the new functionalities provided by this pipeline. The server and its documentation are freely available at http://abcis.cbs.cnrs.fr/AT2/

  6. Comparison of climate envelope models developed using expert-selected variables versus statistical selection

    Science.gov (United States)

    Brandt, Laura A.; Benscoter, Allison; Harvey, Rebecca G.; Speroterra, Carolina; Bucklin, David N.; Romañach, Stephanie; Watling, James I.; Mazzotti, Frank J.

    2017-01-01

    Climate envelope models are widely used to describe the potential future distribution of species under different climate change scenarios. It is broadly recognized that there are both strengths and limitations to using climate envelope models and that outcomes are sensitive to initial assumptions, inputs, and modeling methods. Selection of predictor variables, a central step in modeling, is one of the areas where different techniques can yield varying results. Selection of climate variables to use as predictors is often done using statistical approaches that develop correlations between occurrences and climate data. These approaches have received criticism in that they rely on the statistical properties of the data rather than directly incorporating biological information about species responses to temperature and precipitation. We evaluated and compared models and prediction maps for 15 threatened or endangered species in Florida based on two variable selection techniques: expert opinion and a statistical method. We compared model performance between these two approaches for contemporary predictions, and the spatial correlation, spatial overlap and area predicted for contemporary and future climate predictions. In general, experts identified more variables as being important than the statistical method and there was low overlap in the variable sets; models from both approaches nevertheless performed well (>0.9 for area under the curve (AUC) and >0.7 for true skill statistic (TSS)). Spatial overlap, which compares the spatial configuration between maps constructed using the different variable selection techniques, was only moderate overall (about 60%), with a great deal of variability across species. Difference in spatial overlap was even greater under future climate projections, indicating additional divergence of model outputs from different variable selection techniques. Our work is in agreement with other studies which have found that for broad-scale species distribution modeling, using statistical methods of variable

  7. Applying Four Different Risk Models in Local Ore Selection

    International Nuclear Information System (INIS)

    Richmond, Andrew

    2002-01-01

    Given the uncertainty in grade at a mine location, a financially risk-averse decision-maker may prefer to incorporate this uncertainty into the ore selection process. A FORTRAN program, risksel, is presented to calculate local risk-adjusted optimal ore selections using a negative exponential utility function and three dominance models: mean-variance, mean-downside risk, and stochastic dominance. All four methods are demonstrated in a grade control environment. In the case study, the optimal selections vary with the magnitude of financial risk that a decision-maker is prepared to accept. Except for the stochastic dominance method, the risk models reassign material from higher cost to lower cost processing options as the aversion to financial risk increases. The stochastic dominance model was usually unable to determine the optimal local selection
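    A minimal sketch of the mean-variance variant under simulated grades; the processing options and their economics below are invented for illustration and do not come from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)
grades = rng.lognormal(mean=0.0, sigma=0.5, size=500)   # simulated grades at one location

# Placeholder (revenue per grade unit, fixed cost) for each processing option.
OPTIONS = {"mill": (45.0, 12.0), "leach": (30.0, 5.0), "waste": (0.0, 1.0)}
RISK_AVERSION = 0.1   # larger values penalize grade uncertainty more heavily

def risk_adjusted_value(option):
    revenue, cost = OPTIONS[option]
    profit = revenue * grades - cost          # profit under each simulated grade
    return profit.mean() - RISK_AVERSION * profit.var()

best = max(OPTIONS, key=risk_adjusted_value)
```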

  8. Model Selection in Continuous Test Norming With GAMLSS.

    Science.gov (United States)

    Voncken, Lieke; Albers, Casper J; Timmerman, Marieke E

    2017-06-01

    To compute norms from reference group test scores, continuous norming is preferred over traditional norming. A suitable continuous norming approach for continuous data is the Box-Cox Power Exponential model, which belongs to the family of generalized additive models for location, scale, and shape (GAMLSS). Applying the Box-Cox Power Exponential model for test norming requires model selection, but it is unknown how well this can be done with an automatic selection procedure. In a simulation study, we compared the performance of two stepwise model selection procedures combined with four model-fit criteria (the Akaike information criterion, the Bayesian information criterion, the generalized Akaike information criterion with penalty 3, and cross-validation), varying data complexity, sampling design, and sample size in a fully crossed design. The new procedure combined with the generalized Akaike information criterion was the most efficient model selection procedure (i.e., it required the smallest sample size). The advocated model selection procedure is illustrated with norming data of an intelligence test.
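    The model-fit criteria themselves are simple functions of the fitted log-likelihood; a sketch, with the penalty-3 case corresponding to the generalized Akaike criterion above (the candidate values are hypothetical):

```python
import numpy as np

def gaic(loglik, k, penalty=2.0):
    # Generalized AIC: penalty = 2 recovers AIC; penalty = 3 gives GAIC(3).
    return -2.0 * loglik + penalty * k

def bic(loglik, k, n):
    return -2.0 * loglik + k * np.log(n)

# Hypothetical fitted candidates: (maximized log-likelihood, effective parameters).
candidates = {"simple": (-512.3, 6.0), "flexible": (-508.9, 9.5)}
n = 300
best = min(candidates, key=lambda m: gaic(*candidates[m], penalty=3.0))
```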

  9. A Gambler's Model of Natural Selection.

    Science.gov (United States)

    Nolan, Michael J.; Ostrovsky, David S.

    1996-01-01

    Presents an activity that highlights the mechanism and power of natural selection. Allows students to think in terms of modeling a biological process and instills an appreciation for a mathematical approach to biological problems. (JRH)

  10. Economic assessment model architecture for AGC/AVLIS selection

    International Nuclear Information System (INIS)

    Hoglund, R.L.

    1984-01-01

    The economic assessment model architecture described provides the flexibility and completeness in economic analysis that the selection between AGC and AVLIS demands. Process models which are technology-specific will provide the first-order responses of process performance and cost to variations in process parameters. The economics models can be used to test the impacts of alternative deployment scenarios for a technology. Enterprise models provide global figures of merit for evaluating the DOE perspective on the uranium enrichment enterprise, and business analysis models compute the financial parameters from the private investor's viewpoint

  11. Executive Selection in Government Agencies: An Analysis of the Department of the Navy and Immigration and Naturalization Services Senior Executive Service Selection Processes

    National Research Council Canada - National Science Library

    Jordan, Mark

    2001-01-01

    .... The Senior Executive Service (SES) selection process for the Department of the Navy (DON) is analyzed and compared to the SES selection process used by the Immigration and Naturalization Service...

  12. Analysis of the resolution processes of three modeling tasks

    Directory of Open Access Journals (Sweden)

    Cèsar Gallart Palau

    2017-08-01

    Full Text Available In this paper we present a comparative analysis of the resolution processes of three modeling tasks performed by secondary education students (13-14 years), designed from three different points of view: Model-Eliciting Activities, the LEMA project, and Realistic Mathematical Problems. The purpose of this analysis is to obtain a methodological characterization of the tasks in order to provide secondary education teachers with a proper selection and sequencing of tasks for their implementation in the classroom.

  13. A proposed selection process in Over-The-Top project portfolio management

    Directory of Open Access Journals (Sweden)

    Jemy Vestius Confido

    2018-05-01

    Full Text Available Purpose: The purpose of this paper is to propose an Over-The-Top (OTT) initiative selection process for communication service providers (CSPs) entering an OTT business. Design/methodology/approach: To achieve this objective, a literature review was conducted to comprehend the past and current practices of the project (or initiative) selection process as mainly suggested in project portfolio management (PPM). This literature was compared with the specific situations and needs of CSPs when constructing an OTT portfolio. Based on the contrast between the conventional project selection process and specific OTT characteristics, a different selection process is developed and tested using group model-building (GMB), which involved an in-depth interview, a questionnaire and a focus group discussion (FGD). Findings: The paper recommends five distinct steps for CSPs to construct an OTT initiative portfolio: a candidate list of OTT initiatives, an interdependency diagram, evaluation of all interdependent OTT initiatives, evaluation of all non-interdependent OTT initiatives and an optimal portfolio of OTT initiatives. Research limitations/implications: The research is empirical, and various OTT services are implemented; the conclusion is derived from only one CSP, which operates as a group. Generalization of this approach will require further empirical tests on different CSPs, OTT players or any firms performing portfolio selection with a degree of interdependency among the projects. Practical implications: Having considered interdependency, the proposed OTT initiative selection steps can be further implemented by portfolio managers for more effective OTT initiative portfolio construction. Originality/value: While the previous literature and common practices suggest ensuring the benefits (mainly financial) of individual projects, this research accords higher priority to the success of the overall OTT initiative portfolio and recommends that an evaluation of the overall

  14. Efficient spiking neural network model of pattern motion selectivity in visual cortex.

    Science.gov (United States)

    Beyeler, Michael; Richert, Micah; Dutt, Nikil D; Krichmar, Jeffrey L

    2014-07-01

    Simulating large-scale models of biological motion perception is challenging, due to the required memory to store the network structure and the computational power needed to quickly solve the neuronal dynamics. A low-cost yet high-performance approach to simulating large-scale neural network models in real-time is to leverage the parallel processing capability of graphics processing units (GPUs). Based on this approach, we present a two-stage model of visual area MT that we believe to be the first large-scale spiking network to demonstrate pattern direction selectivity. In this model, component-direction-selective (CDS) cells in MT linearly combine inputs from V1 cells that have spatiotemporal receptive fields according to the motion energy model of Simoncelli and Heeger. Pattern-direction-selective (PDS) cells in MT are constructed by pooling over MT CDS cells with a wide range of preferred directions. Responses of our model neurons are comparable to electrophysiological results for grating and plaid stimuli as well as speed tuning. The behavioral response of the network in a motion discrimination task is in agreement with psychophysical data. Moreover, our implementation outperforms a previous implementation of the motion energy model by orders of magnitude in terms of computational speed and memory usage. The full network, which comprises 153,216 neurons and approximately 40 million synapses, processes 20 frames per second of a 40 × 40 input video in real-time using a single off-the-shelf GPU. To promote the use of this algorithm among neuroscientists and computer vision researchers, the source code for the simulator, the network, and analysis scripts are publicly available.

  15. SELECTION OF NON-CONVENTIONAL MACHINING PROCESSES USING THE OCRA METHOD

    Directory of Open Access Journals (Sweden)

    Miloš Madić

    2015-04-01

    Full Text Available Selection of the most suitable non-conventional machining process (NCMP) for a given machining application can be viewed as a multi-criteria decision making (MCDM) problem with many conflicting and diverse criteria. To aid these selection processes, different MCDM methods have been proposed. This paper introduces the use of an almost unexplored MCDM method, i.e. operational competitiveness ratings analysis (OCRA), for solving NCMP selection problems. The applicability, suitability and computational procedure of the OCRA method are demonstrated by solving three case studies dealing with selection of the most suitable NCMP. In each case study the obtained rankings were compared with those derived by past researchers using different MCDM methods. The results obtained using the OCRA method correlate well with those derived by past researchers, which validates the usefulness of this method for solving complex NCMP selection problems.
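    A sketch of the OCRA computation as it is commonly presented in the MCDM literature; this reflects my reading of the method, not code from the paper, and assumes at least one cost and one benefit criterion:

```python
import numpy as np

def ocra(X, w, is_benefit):
    """Overall preference ratings; rows = NCMP alternatives, columns = criteria."""
    X = np.asarray(X, dtype=float)
    w = np.asarray(w, dtype=float)
    benefit = np.asarray(is_benefit, dtype=bool)
    cost = ~benefit
    # Aggregate performance on cost (input) criteria, scaled to a zero minimum.
    I = (w[cost] * (X[:, cost].max(axis=0) - X[:, cost]) / X[:, cost].min(axis=0)).sum(axis=1)
    I -= I.min()
    # Aggregate performance on benefit (output) criteria, scaled likewise.
    O = (w[benefit] * (X[:, benefit] - X[:, benefit].min(axis=0)) / X[:, benefit].min(axis=0)).sum(axis=1)
    O -= O.min()
    P = I + O
    return P - P.min()   # highest rating ranks first
```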

  16. Country Selection Model for Sustainable Construction Businesses Using Hybrid of Objective and Subjective Information

    Directory of Open Access Journals (Sweden)

    Kang-Wook Lee

    2017-05-01

    Full Text Available An important issue for international businesses and academia is selecting countries in which to expand in order to achieve entrepreneurial sustainability. This study develops a country selection model for sustainable construction businesses using both objective and subjective information. The objective information consists of 14 variables related to country risk and project performance in 32 countries over 25 years. The hybrid model applies subjective weighting from industry experts to the objective information using a fuzzy LinPreRa-based Analytic Hierarchy Process. The hybrid model yields a more accurate country selection than a purely objective information-based model in countries where firms have experience. Interestingly, in countries where firms lack experience, the hybrid model's predictions differ from those based on subjective opinion alone, which implies that expert opinion is not always reliable. In addition, feedback from five experts in top international companies is used to validate the model's completeness, effectiveness, generality, and applicability. The model is expected to aid decision makers in selecting better candidate countries that lead to sustainable business success.

  17. X-33 Telemetry Best Source Selection, Processing, Display, and Simulation Model Comparison

    Science.gov (United States)

    Burkes, Darryl A.

    1998-01-01

    The X-33 program requires the use of multiple telemetry ground stations to cover the launch, ascent, transition, descent, and approach phases for the flights from Edwards AFB to landings at Dugway Proving Grounds, UT and Malmstrom AFB, MT. This paper will discuss the X-33 telemetry requirements and design, including information on fixed and mobile telemetry systems, best source selection, and support for Range Safety Officers. A best source selection system will be utilized to automatically determine the best source based on the frame synchronization status of the incoming telemetry streams. These systems will be used to select the best source at the landing sites and at NASA Dryden Flight Research Center to determine the overall best source between the launch site, intermediate sites, and landing site sources. The best source at the landing sites will be decommutated to display critical flight safety parameters for the Range Safety Officers. The overall best source will be sent to Lockheed Martin's Operational Control Center at Edwards AFB for performance monitoring by X-33 program personnel and for monitoring of critical flight safety parameters by the primary Range Safety Officer. The real-time telemetry data (received signal strength, etc.) from each of the primary ground stations will also be compared during each mission with simulation data generated using the Dynamic Ground Station Analysis software program. An overall assessment of the accuracy of the model will occur after each mission. Acknowledgment: The work described in this paper was NASA supported through cooperative agreement NCC8-115 with Lockheed Martin Skunk Works.

  18. Comparative evaluation of life cycle assessment models for solid waste management

    International Nuclear Information System (INIS)

    Winkler, Joerg; Bilitewski, Bernd

    2007-01-01

    This publication compares a selection of six different models developed in Europe and America by research organisations, industry associations and governmental institutions. The comparison of the models reveals the variations in the results and the differences in the conclusions of an LCA study done with these models. The models are compared by modelling a specific case - the waste management system of Dresden, Germany - with each model and an in-detail comparison of the life cycle inventory results. Moreover, a life cycle impact assessment shows whether the LCA results of each model allow for comparable and consistent conclusions, which do not contradict the conclusions derived from the other models' results. Furthermore, the influence of different levels of detail in the life cycle inventory on the life cycle assessment is demonstrated. The model comparison revealed that the variations in the LCA results calculated by the models for the case are high and not negligible. In some cases the high variations in results lead to contradictory conclusions concerning the environmental performance of the waste management processes. The static, linear modelling approach chosen by all models analysed is inappropriate for reflecting actual conditions. Moreover, it was found that although the models' approach to LCA is comparable on a general level, the level of detail implemented in the software tools is very different

  19. Cloud decision model for selecting sustainable energy crop based on linguistic intuitionistic information

    Science.gov (United States)

    Peng, Hong-Gang; Wang, Jian-Qiang

    2017-11-01

    In recent years, sustainable energy crops have become an important energy development strategy topic in many countries. Selecting the most sustainable energy crop is a significant problem that must be addressed during any biofuel production process. The focus of this study is the development of an innovative multi-criteria decision-making (MCDM) method to handle sustainable energy crop selection problems. Given that various uncertain data are encountered in the evaluation of sustainable energy crops, linguistic intuitionistic fuzzy numbers (LIFNs) are introduced to present the information necessary to the evaluation process. Processing qualitative concepts requires the effective support of reliable tools; a cloud model can then be used to deal with linguistic intuitionistic information. First, LIFNs are converted and a novel concept of linguistic intuitionistic cloud (LIC) is proposed. The operations, score function and similarity measurement of the LICs are defined. Subsequently, the linguistic intuitionistic cloud density-prioritised weighted Heronian mean operator is developed, which serves as the basis for the construction of an applicable MCDM model for sustainable energy crop selection. Finally, an illustrative example is provided to demonstrate the proposed method, and its feasibility and validity are further verified by comparing it with other existing methods.

  20. Evidence accumulation as a model for lexical selection.

    Science.gov (United States)

    Anders, R; Riès, S; van Maanen, L; Alario, F X

    2015-11-01

    We propose and demonstrate evidence accumulation as a plausible theoretical and/or empirical model for the lexical selection process of lexical retrieval. A number of current psycholinguistic theories consider lexical selection as a process of selecting a lexical target from a number of alternatives, each of which has a varying activation (or signal support) that largely results from initial stimulus recognition. We thoroughly present a case for how such a process may be theoretically explained by the evidence accumulation paradigm, and we demonstrate how this paradigm can be directly related to or combined with conventional psycholinguistic theory and its simulatory instantiations (generally, neural network models). Then, with a demonstrative application on a large new real data set, we establish how the empirical evidence accumulation approach is able to provide parameter results that are informative to leading psycholinguistic theory, and that motivate future theoretical development.
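    A generic independent-race accumulator illustrates the paradigm: each lexical candidate accrues noisy evidence at a rate set by its activation, and the first to reach threshold is selected. The parameters below are illustrative, and the authors' empirical model fitted to response-time data is more elaborate:

```python
import numpy as np

def race_trial(supports, threshold=1.0, noise=0.3, dt=0.001, rng=None):
    """Independent accumulators race to a common threshold."""
    if rng is None:
        rng = np.random.default_rng()
    supports = np.asarray(supports, dtype=float)
    x = np.zeros_like(supports)
    t = 0.0
    while x.max() < threshold:
        x += supports * dt + noise * np.sqrt(dt) * rng.normal(size=x.size)
        np.maximum(x, 0.0, out=x)      # keep activations non-negative
        t += dt
    return int(np.argmax(x)), t        # (selected lexical candidate, decision time)

# e.g., race_trial([1.0, 0.8, 0.2]) -> usually candidate 0, with variable latency
```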

  1. Comparative Analysis of AHP-TOPSIS and Fuzzy AHP Models in Selecting Appropriate Nanocomposites for Environmental Noise Barrier Applications

    Science.gov (United States)

    Naderzadeh, Mahdiyeh; Arabalibeik, Hossein; Monazzam, Mohammad Reza; Ghasemi, Ismaeil

    Choosing the right material in the design of environmental noise barriers has always been a challenging issue in acoustics. In less-developed countries, the material selection is affected by many factors from various aspects, which makes the decision-making very complicated. This study attempts to compare and assign weights to the most important indices affecting the choice of appropriate noise barrier material. These criteria include absorption coefficient, transparency, tensile modulus, strength at yield, elongation at break, impact strength, flexural modulus, hardness, and cost. For this purpose, experts' opinions were gathered through a total of 13 questionnaires and used for assigning weights by the Analytic Hierarchy Process (AHP) and Fuzzy Analytic Hierarchy Process (FAHP) techniques. According to the AHP results, impact strength was recognized as the most important criterion, with only a minor weight difference of 0.093 compared to the FAHP. Finally, the optimal composite material was selected using two different methods: first by the Technique for Order-Preference by Similarity to Ideal Solution (TOPSIS) based on the weights obtained from AHP, and next by directly applying the weights obtained from FAHP to the true measured values of the parameters. As the results show, in both methods, Polycarbonate-SiO2 0.3% with roughened surface (PCSI3-R) received the highest score and was selected as the preferred composite material. Given the close similarity of the results, to determine the superiority of one method over the other, some noise was added to the original data set from the mechanical and acoustic tests and the variance of the changes in the final orders of preference was calculated, indicating the robustness of each method against measurement errors and noise. The results show that under the same circumstances, the overall order shift variance in the classic TOPSIS is six times higher than that of the fuzzy AHP method.
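    A compact sketch of the classic TOPSIS step used here, taking criterion weights (e.g., from AHP) as input; benefit criteria such as impact strength would be flagged True and cost flagged False (array conventions are mine):

```python
import numpy as np

def topsis(X, w, is_benefit):
    """Rank alternatives (rows) on criteria (columns) by closeness to the ideal."""
    X = np.asarray(X, dtype=float)
    w = np.asarray(w, dtype=float)
    R = X / np.sqrt((X ** 2).sum(axis=0))            # vector normalization
    V = R * w                                        # weighted normalized matrix
    ideal = np.where(is_benefit, V.max(axis=0), V.min(axis=0))
    worst = np.where(is_benefit, V.min(axis=0), V.max(axis=0))
    d_plus = np.sqrt(((V - ideal) ** 2).sum(axis=1))
    d_minus = np.sqrt(((V - worst) ** 2).sum(axis=1))
    return d_minus / (d_plus + d_minus)              # higher closeness = better rank
```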

  2. A Permutation Approach for Selecting the Penalty Parameter in Penalized Model Selection

    Science.gov (United States)

    Sabourin, Jeremy A; Valdar, William; Nobel, Andrew B

    2015-01-01

    We describe a simple, computationally efficient, permutation-based procedure for selecting the penalty parameter in LASSO penalized regression. The procedure, permutation selection, is intended for applications where variable selection is the primary focus, and can be applied in a variety of structural settings, including that of generalized linear models. We briefly discuss connections between permutation selection and existing theory for the LASSO. In addition, we present a simulation study and an analysis of real biomedical data sets in which permutation selection is compared with selection based on the following: cross-validation (CV), the Bayesian information criterion (BIC), Scaled Sparse Linear Regression, and a selection method based on recently developed testing procedures for the LASSO. PMID:26243050
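    The core idea can be sketched compactly: under a permuted (null) response, the smallest penalty that keeps the LASSO active set empty is max_j |x_j'y_perm|/n, and permutation selection takes a quantile of that quantity over permutations. A sketch under scikit-learn's parameterization; details of the published procedure may differ:

```python
import numpy as np

def permutation_lambda(X, y, n_perm=100, quantile=0.5, rng=None):
    """Quantile, over response permutations, of the smallest penalty that keeps
    the LASSO model empty when y carries no signal."""
    if rng is None:
        rng = np.random.default_rng(0)
    Xc = X - X.mean(axis=0)
    n = len(y)
    lams = [np.abs(Xc.T @ (rng.permutation(y) - y.mean())).max() / n
            for _ in range(n_perm)]
    return float(np.quantile(lams, quantile))

# Usage with scikit-learn (objective: (1/2n)||y - Xb||^2 + alpha * ||b||_1):
#   from sklearn.linear_model import Lasso
#   fit = Lasso(alpha=permutation_lambda(X, y)).fit(X, y)
```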

  3. Selective Interference on the Holistic Processing of Faces in Working Memory

    Science.gov (United States)

    Cheung, Olivia S.; Gauthier, Isabel

    2010-01-01

    Faces and objects of expertise compete for early perceptual processes and holistic processing resources (Gauthier, Curran, Curby, & Collins, 2003). Here, we examined the nature of interference on holistic face processing in working memory by comparing how various types of loads affect selective attention to parts of face composites. In dual…

  4. Multi-enzyme Process Modeling

    DEFF Research Database (Denmark)

    Andrade Santacoloma, Paloma de Gracia

    are affected (in a positive or negative way) by the presence of the other enzymes and compounds in the media. In this thesis the concept of multi-enzyme in-pot term is adopted for processes that are carried out by the combination of enzymes in a single reactor and implemented at pilot or industrial scale...... features of the process and provides the information required to structure the process model by using a step-by-step procedure with the required tools and methods. In this way, this framework increases efficiency of the model development process with respect to time and resources needed (fast and effective....... In this way the model parameters that drives the main dynamic behavior can be identified and thus a better understanding of this type of processes. In order to develop, test and verify the methodology, three case studies were selected, specifically the bi-enzyme process for the production of lactobionic acid...

  5. Comparing Patterns of Natural Selection across Species Using Selective Signatures

    Energy Technology Data Exchange (ETDEWEB)

    Shapiro, Jesse; Alm, Eric J.

    2007-12-01

    Comparing gene expression profiles over many different conditions has led to insights that were not obvious from single experiments. In the same way, comparing patterns of natural selection across a set of ecologically distinct species may extend what can be learned from individual genome-wide surveys. Toward this end, we show how variation in protein evolutionary rates, after correcting for genome-wide effects such as mutation rate and demographic factors, can be used to estimate the level and types of natural selection acting on genes across different species. We identify unusually rapidly and slowly evolving genes, relative to empirically derived genome-wide and gene family-specific background rates for 744 core protein families in 30 gamma-proteobacterial species. We describe the pattern of fast or slow evolution across species as the "selective signature" of a gene. Selective signatures represent a profile of selection across species that is predictive of gene function: pairs of genes with correlated selective signatures are more likely to share the same cellular function, and genes in the same pathway can evolve in concert. For example, glycolysis and phenylalanine metabolism genes evolve rapidly in Idiomarina loihiensis, mirroring an ecological shift in carbon source from sugars to amino acids. In a broader context, our results suggest that the genomic landscape is organized into functional modules even at the level of natural selection, and thus it may be easier than expected to understand the complex evolutionary pressures on a cell.

  6. Comparing Patterns of Natural Selection Across Species Using Selective Signatures

    Energy Technology Data Exchange (ETDEWEB)

    Alm, Eric J.; Shapiro, B. Jesse; Alm, Eric J.

    2007-12-18

    Comparing gene expression profiles over many different conditions has led to insights that were not obvious from single experiments. In the same way, comparing patterns of natural selection across a set of ecologically distinct species may extend what can be learned from individual genome-wide surveys. Toward this end, we show how variation in protein evolutionary rates, after correcting for genome-wide effects such as mutation rate and demographic factors, can be used to estimate the level and types of natural selection acting on genes across different species. We identify unusually rapidly and slowly evolving genes, relative to empirically derived genome-wide and gene family-specific background rates for 744 core protein families in 30 gamma-proteobacterial species. We describe the pattern of fast or slow evolution across species as the 'selective signature' of a gene. Selective signatures represent a profile of selection across species that is predictive of gene function: pairs of genes with correlated selective signatures are more likely to share the same cellular function, and genes in the same pathway can evolve in concert. For example, glycolysis and phenylalanine metabolism genes evolve rapidly in Idiomarina loihiensis, mirroring an ecological shift in carbon source from sugars to amino acids. In a broader context, our results suggest that the genomic landscape is organized into functional modules even at the level of natural selection, and thus it may be easier than expected to understand the complex evolutionary pressures on a cell.

  7. Application of PROMETHEE-GAIA method for non-traditional machining processes selection

    Directory of Open Access Journals (Sweden)

    Prasad Karande

    2012-10-01

    Full Text Available With ever increasing demand for manufactured products of hard alloys and metals with high surface finish and complex shape geometry, more interest is now being paid to non-traditional machining (NTM) processes, where energy in its direct form is used to remove material from the workpiece surface. Compared to conventional machining processes, NTM processes possess almost unlimited capabilities and there is a strong belief that the use of NTM processes will continue to grow in a diverse range of applications. The presence of a large number of NTM processes with complex characteristics and capabilities, and the lack of experts in the NTM process selection domain, call for the development of a structured approach to NTM process selection for a given machining application. Past researchers have attempted to solve NTM process selection problems using various complex mathematical approaches which often require profound knowledge of mathematics/artificial intelligence on the part of process engineers. In this paper, four NTM process selection problems are solved using an integrated PROMETHEE (preference ranking organization method for enrichment evaluation) and GAIA (geometrical analysis for interactive aid) method which acts as a visual decision aid for process engineers. The observed results are quite satisfactory and exactly match the expected solutions.
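    A minimal PROMETHEE II sketch with the "usual" preference function (the paper may use other preference functions; GAIA is a principal-component visualization built on the per-criterion flows computed here):

```python
import numpy as np

def promethee_ii(X, w, is_benefit):
    """Net outranking flows for alternatives (rows) over criteria (columns)."""
    X = np.asarray(X, dtype=float)
    X = np.where(is_benefit, X, -X)               # convert cost criteria to benefits
    m = X.shape[0]
    phi = np.zeros(m)
    for j, wj in enumerate(w):
        d = X[:, j][:, None] - X[:, j][None, :]   # pairwise differences, criterion j
        P = (d > 0).astype(float)                 # 'usual' preference function
        phi += wj * (P.sum(axis=1) - P.sum(axis=0)) / (m - 1)
    return phi                                    # rank by decreasing net flow
```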

  8. Effects of the Ordering of Natural Selection and Population Regulation Mechanisms on Wright-Fisher Models.

    Science.gov (United States)

    He, Zhangyi; Beaumont, Mark; Yu, Feng

    2017-07-05

    We explore the effect of different mechanisms of natural selection on the evolution of populations for one- and two-locus systems. We compare the effect of viability and fecundity selection in the context of the Wright-Fisher model with selection under the assumption of multiplicative fitness. We show that these two modes of natural selection correspond to different orderings of the processes of population regulation and natural selection in the Wright-Fisher model. We find that under the Wright-Fisher model these two different orderings can affect the distribution of trajectories of haplotype frequencies evolving with genetic recombination. However, the difference in the distribution of trajectories is only appreciable when the population is in significant linkage disequilibrium. We find that as linkage disequilibrium decays the trajectories for the two different models rapidly become indistinguishable. We discuss the significance of these findings in terms of biological examples of viability and fecundity selection, and speculate that the effect may be significant when factors such as gene migration maintain a degree of linkage disequilibrium.
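    A one-locus sketch shows the two-step structure the authors analyze: a selection update followed by binomial population regulation. Per the abstract, reversing the order of these steps matters mainly for multi-locus systems in linkage disequilibrium; the parameters here are illustrative:

```python
import numpy as np

def wf_trajectory(p0, N, s, generations, rng=None):
    """One-locus Wright-Fisher sketch: deterministic (genic) selection, then
    binomial resampling of 2N gene copies as the population-regulation step."""
    if rng is None:
        rng = np.random.default_rng()
    p, traj = p0, [p0]
    for _ in range(generations):
        p = p * (1 + s) / (p * (1 + s) + (1 - p))   # natural selection
        p = rng.binomial(2 * N, p) / (2 * N)        # regulation / genetic drift
        traj.append(p)
    return np.array(traj)
```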

  9. Development of modelling method selection tool for health services management: from problem structuring methods to modelling and simulation methods.

    Science.gov (United States)

    Jun, Gyuchan T; Morris, Zoe; Eldabi, Tillal; Harper, Paul; Naseer, Aisha; Patel, Brijesh; Clarkson, John P

    2011-05-19

    There is an increasing recognition that modelling and simulation can assist in the process of designing health care policies, strategies and operations. However, the current use is limited and answers to questions such as what methods to use and when remain somewhat underdeveloped. The aim of this study is to provide a mechanism for decision makers in health services planning and management to compare a broad range of modelling and simulation methods so that they can better select and use them or better commission relevant modelling and simulation work. This paper proposes a modelling and simulation method comparison and selection tool developed from a comprehensive literature review, the research team's extensive expertise and inputs from potential users. Twenty-eight different methods were identified, characterised by their relevance to different application areas, project life cycle stages, types of output and levels of insight, and four input resources required (time, money, knowledge and data). The characterisation is presented in matrix form to allow quick comparison and selection. This paper also highlights significant knowledge gaps in the existing literature when assessing the applicability of particular approaches to health services management, where modelling and simulation skills, let alone money and time, are scarce. A modelling and simulation method comparison and selection tool is developed to assist with the selection of methods appropriate to supporting specific decision making processes. In particular it addresses the issue of which method is most appropriate to which specific health services management problem, what the user might expect to be obtained from the method, and what is required to use the method. In summary, we believe the tool adds value to the scarce existing literature on method comparison and selection.

  10. Selection of power market structure using the analytic hierarchy process

    International Nuclear Information System (INIS)

    Subhes Bhattacharyya; Prasanta Kumar Dey

    2003-01-01

    Selection of a power market structure from the available alternatives is an important activity within an overall power sector reform program. The evaluation criteria for selection are both subjective as well as objective in nature and the selection of alternatives is characterised by their conflicting nature. This study demonstrates a methodology for power market structure selection using the analytic hierarchy process, a multiple attribute decision-making technique, to model the selection methodology with the active participation of relevant stakeholders in a workshop environment. The methodology is applied to a hypothetical case of a State Electricity Board reform in India. (author)
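    At its core, the analytic hierarchy process derives criterion weights from the principal eigenvector of a pairwise comparison matrix and checks judgment consistency; a minimal sketch with a hypothetical 3x3 matrix (the values are invented for illustration):

```python
import numpy as np

# Hypothetical pairwise comparison matrix for three criteria (Saaty's 1-9 scale).
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
w = np.abs(eigvecs[:, k].real)
w /= w.sum()                          # priority weights for the criteria

n = A.shape[0]
CI = (eigvals[k].real - n) / (n - 1)
RI = {3: 0.58, 4: 0.90, 5: 1.12}[n]   # Saaty's random consistency index
CR = CI / RI                          # conventionally acceptable if CR < 0.1
```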

  11. Fundamental Aspects of Selective Melting Additive Manufacturing Processes

    Energy Technology Data Exchange (ETDEWEB)

    van Swol, Frank B. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Miller, James E. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2014-12-01

    Certain details of the additive manufacturing process known as selective laser melting (SLM) affect the performance of the final metal part. To unleash the full potential of SLM it is crucial that the process engineer in the field receives guidance about how to select values for a multitude of process variables employed in the building process. These include, for example, the type of powder (e.g., size distribution, shape, type of alloy), orientation of the build axis, the beam scan rate, the beam power density, the scan pattern and scan rate. The science-based selection of these settings constitutes an intrinsically challenging multi-physics problem involving heating and melting a metal alloy, reactive, dynamic wetting followed by re-solidification. In addition, inherent to the process is its considerable variability that stems from the powder packing. Each time a limited number of powder particles are placed, the stacking is intrinsically different from the previous, possessing a different geometry, and having a different set of contact areas with the surrounding particles. As a result, even if all other process parameters (scan rate, etc) are exactly the same, the shape and contact geometry and area of the final melt pool will be unique to that particular configuration. This report identifies the most important issues facing SLM, discusses the fundamental physics associated with it and points out how modeling can support the additive manufacturing efforts.

  12. Multiphysics modelling of manufacturing processes: A review

    DEFF Research Database (Denmark)

    Jabbari, Masoud; Baran, Ismet; Mohanty, Sankhya

    2018-01-01

    Numerical modelling is increasingly supporting the analysis and optimization of manufacturing processes in the production industry. Even if being mostly applied to multistep processes, single process steps may be so complex by nature that the needed models to describe them must include multiphysics...... the diversity in the field of modelling of manufacturing processes as regards process, materials, generic disciplines as well as length scales: (1) modelling of tape casting for thin ceramic layers, (2) modelling the flow of polymers in extrusion, (3) modelling the deformation process of flexible stamps...... for nanoimprint lithography, (4) modelling manufacturing of composite parts and (5) modelling the selective laser melting process. For all five examples, the emphasis is on modelling results as well as describing the models in brief mathematical details. Alongside with relevant references to the original work...

  13. Analysis of Using Resources in Business Process Modeling and Simulation

    Directory of Open Access Journals (Sweden)

    Vasilecas Olegas

    2014-12-01

    Full Text Available One of the key purposes of Business Process Model and Notation (BPMN) is to support the graphical representation of process models. However, such models lack support for the graphical representation of the resources that processes use during simulation or execution of process instances. The paper analyzes different methods and their extensions for resource modeling. Further, this article presents a selected set of resource properties that are relevant for resource modeling. The paper proposes an approach that explains how to use the selected set of resource properties to extend process modeling using BPMN and simulation tools. The approach is based on BPMN, where business process instances use resources in a concurrent manner.

  14. Variable Selection for Regression Models of Percentile Flows

    Science.gov (United States)

    Fouad, G.

    2017-12-01

    Percentile flows describe the flow magnitude equaled or exceeded for a given percent of time, and are widely used in water resource management. However, these statistics are normally unavailable since most basins are ungauged. Percentile flows of ungauged basins are often predicted using regression models based on readily observable basin characteristics, such as mean elevation. The number of these independent variables is too large to evaluate all possible models. A subset of models is typically evaluated using automatic procedures, like stepwise regression. This ignores a large variety of methods from the field of feature (variable) selection and physical understanding of percentile flows. A study of 918 basins in the United States was conducted to compare an automatic regression procedure to the following variable selection methods: (1) principal component analysis, (2) correlation analysis, (3) random forests, (4) genetic programming, (5) Bayesian networks, and (6) physical understanding. The automatic regression procedure only performed better than principal component analysis. Poor performance of the regression procedure was due to a commonly used filter for multicollinearity, which rejected the strongest models because they had cross-correlated independent variables. Multicollinearity did not decrease model performance in validation because of a representative set of calibration basins. Variable selection methods based strictly on predictive power (numbers 2-5 from above) performed similarly, likely indicating a limit to the predictive power of the variables. Similar performance was also reached using variables selected based on physical understanding, a finding that substantiates recent calls to emphasize physical understanding in modeling for predictions in ungauged basins. The strongest variables highlighted the importance of geology and land cover, whereas widely used topographic variables were the weakest predictors. Variables suffered from a high
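    A sketch of one of the compared selectors, random-forest importance ranking, on placeholder data shaped like the study (918 basins, basin characteristics as predictors, a percentile flow such as Q95 as target); the data and settings are illustrative, not the authors':

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
X = rng.random((918, 20))          # placeholder basin characteristics (918 basins)
q95 = rng.random(918)              # placeholder percentile-flow target, e.g. Q95

rf = RandomForestRegressor(n_estimators=500, random_state=0).fit(X, q95)
ranking = np.argsort(rf.feature_importances_)[::-1]   # variables, strongest first
```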

  15. Process chain modeling and selection in an additive manufacturing context

    DEFF Research Database (Denmark)

    Thompson, Mary Kathryn; Stolfi, Alessandro; Mischkot, Michael

    2016-01-01

    This paper introduces a new two-dimensional approach to modeling manufacturing process chains. This approach is used to consider the role of additive manufacturing technologies in process chains for a part with micro scale features and no internal geometry. It is shown that additive manufacturing...... evolving fields like additive manufacturing....

  16. Comparative study of resist stabilization techniques for metal etch processing

    Science.gov (United States)

    Becker, Gerry; Ross, Matthew F.; Wong, Selmer S.; Minter, Jason P.; Marlowe, Trey; Livesay, William R.

    1999-06-01

    This study investigates resist stabilization techniques as they are applied to a metal etch application. The techniques that are compared are conventional deep-UV/thermal stabilization, or UV bake, and electron beam stabilization. The electron beam tool used in this study, an ElectronCure system from AlliedSignal Inc., Electron Vision Group, utilizes a flood electron source and a non-thermal process. These stabilization techniques are compared with respect to a metal etch process. In this study, two types of resist are considered for stabilization and etch: a g/i-line resist, Shipley SPR-3012, and an advanced i-line, Shipley SPR 955-Cm. For each of these resists the effects of stabilization on resist features are evaluated by post-stabilization SEM analysis. Etch selectivity in all cases is evaluated by using a timed metal etch and measuring the resist remaining relative to the total metal thickness etched. Etch selectivity is presented as a function of stabilization condition. Analyses of the effects of the type of stabilization on this method of selectivity measurement are also presented. SEM analysis was also performed on the features after a complete etch process, and is detailed as a function of stabilization condition. Post-etch cleaning is also an important factor impacted by pre-etch resist stabilization. Results of post-etch cleaning are presented for both stabilization methods. SEM inspection is also detailed for the metal features after resist removal processing.

  17. Adverse selection model regarding tobacco consumption

    Directory of Open Access Journals (Sweden)

    Dumitru MARIN

    2006-01-01

    Full Text Available The impact of introducing a tax on tobacco consumption can be studied through an adverse selection model. The objective of the model presented in the following is to characterize the optimal contractual relationship between the governmental authorities and the two types of employees, smokers and non-smokers, taking into account that the consumers' decision to smoke or not represents an element of risk and uncertainty. Two scenarios are run using the General Algebraic Modeling System (GAMS) software: one without taxes on tobacco consumption and another with taxes on tobacco consumption, based on the adverse selection model described previously. The results of the two scenarios are compared at the end of the paper: the wage earnings levels and the social welfare in the case of a smoking agent and in the case of a non-smoking agent.

  18. 7 CFR 3570.68 - Selection process.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 15 2010-01-01 2010-01-01 false Selection process. 3570.68 Section 3570.68 Agriculture Regulations of the Department of Agriculture (Continued) RURAL HOUSING SERVICE, DEPARTMENT OF AGRICULTURE COMMUNITY PROGRAMS Community Facilities Grant Program § 3570.68 Selection process. Each request...

  19. 44 CFR 150.7 - Selection process.

    Science.gov (United States)

    2010-10-01

    ... 44 Emergency Management and Assistance 1 2010-10-01 2010-10-01 false Selection process. 150.7 Section 150.7 Emergency Management and Assistance FEDERAL EMERGENCY MANAGEMENT AGENCY, DEPARTMENT OF... Selection process. (a) President's Award. Nominations for the President's Award shall be reviewed, and...

  20. Neural Underpinnings of Decision Strategy Selection: A Review and a Theoretical Model.

    Science.gov (United States)

    Wichary, Szymon; Smolen, Tomasz

    2016-01-01

    In multi-attribute choice, decision makers use decision strategies to arrive at the final choice. What are the neural mechanisms underlying decision strategy selection? The first goal of this paper is to provide a literature review on the neural underpinnings and cognitive models of decision strategy selection and thus set the stage for a neurocognitive model of this process. The second goal is to outline such a unifying, mechanistic model that can explain the impact of noncognitive factors (e.g., affect, stress) on strategy selection. To this end, we review the evidence for the factors influencing strategy selection, the neural basis of strategy use and the cognitive models of this process. We also present the Bottom-Up Model of Strategy Selection (BUMSS). The model assumes that the use of the rational Weighted Additive strategy and the boundedly rational heuristic Take The Best can be explained by one unifying, neurophysiologically plausible mechanism, based on the interaction of the frontoparietal network, orbitofrontal cortex, anterior cingulate cortex and the brainstem nucleus locus coeruleus. According to BUMSS, there are three processes that form the bottom-up mechanism of decision strategy selection and lead to the final choice: (1) cue weight computation, (2) gain modulation, and (3) weighted additive evaluation of alternatives. We discuss how these processes might be implemented in the brain, and how this knowledge allows us to formulate novel predictions linking strategy use and neural signals.
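    The two strategies that BUMSS unifies are easy to state computationally; a minimal sketch for a two-alternative choice over binary cues (the array conventions and the guessing fallback are mine, not the authors'):

```python
import numpy as np

def wadd(cues, weights):
    """Weighted Additive: choose the alternative with the highest weighted cue sum."""
    return int(np.argmax(cues @ weights))

def take_the_best(cues, validities, rng=None):
    """Take The Best: inspect cues from most to least valid; choose on the first
    cue that discriminates between the two alternatives, else guess."""
    if rng is None:
        rng = np.random.default_rng()
    for j in np.argsort(validities)[::-1]:
        if cues[0, j] != cues[1, j]:
            return int(cues[1, j] > cues[0, j])
    return int(rng.integers(2))

# cues: 2 x n_cues binary matrix (rows = alternatives), e.g.
# cues = np.array([[1, 0, 1], [1, 1, 0]]); weights = validities = np.array([.9, .7, .6])
```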

  1. Neural Underpinnings of Decision Strategy Selection: A Review and a Theoretical Model

    Science.gov (United States)

    Wichary, Szymon; Smolen, Tomasz

    2016-01-01

    In multi-attribute choice, decision makers use decision strategies to arrive at the final choice. What are the neural mechanisms underlying decision strategy selection? The first goal of this paper is to provide a literature review on the neural underpinnings and cognitive models of decision strategy selection and thus set the stage for a neurocognitive model of this process. The second goal is to outline such a unifying, mechanistic model that can explain the impact of noncognitive factors (e.g., affect, stress) on strategy selection. To this end, we review the evidence for the factors influencing strategy selection, the neural basis of strategy use and the cognitive models of this process. We also present the Bottom-Up Model of Strategy Selection (BUMSS). The model assumes that the use of the rational Weighted Additive strategy and the boundedly rational heuristic Take The Best can be explained by one unifying, neurophysiologically plausible mechanism, based on the interaction of the frontoparietal network, orbitofrontal cortex, anterior cingulate cortex and the brainstem nucleus locus coeruleus. According to BUMSS, there are three processes that form the bottom-up mechanism of decision strategy selection and lead to the final choice: (1) cue weight computation, (2) gain modulation, and (3) weighted additive evaluation of alternatives. We discuss how these processes might be implemented in the brain, and how this knowledge allows us to formulate novel predictions linking strategy use and neural signals. PMID:27877103

  2. Neural underpinnings of decision strategy selection: a review and a theoretical model

    Directory of Open Access Journals (Sweden)

    Szymon Wichary

    2016-11-01

    Full Text Available In multi-attribute choice, decision makers use various decision strategies to arrive at the final choice. What are the neural mechanisms underlying decision strategy selection? The first goal of this paper is to provide a literature review on the neural underpinnings and cognitive models of decision strategy selection and thus set the stage for a unifying neurocognitive model of this process. The second goal is to outline such a unifying, mechanistic model that can explain the impact of noncognitive factors (e.g., affect, stress) on strategy selection. To this end, we review the evidence for the factors influencing strategy selection, the neural basis of strategy use and the cognitive models explaining this process. We also present the neurocognitive Bottom-Up Model of Strategy Selection (BUMSS). The model assumes that the use of the rational, normative Weighted Additive strategy and the boundedly rational heuristic Take The Best can be explained by one unifying, neurophysiologically plausible mechanism, based on the interaction of the frontoparietal network, orbitofrontal cortex, anterior cingulate cortex and the brainstem nucleus locus coeruleus. According to BUMSS, there are three processes that form the bottom-up mechanism of decision strategy selection and lead to the final choice: (1) cue weight computation, (2) gain modulation, and (3) weighted additive evaluation of alternatives. We discuss how these processes might be implemented in the brain, and how this knowledge allows us to formulate novel predictions linking strategy use and neurophysiological indices.

  3. Comparing statistical and process-based flow duration curve models in ungauged basins and changing rain regimes

    Science.gov (United States)

    Müller, M. F.; Thompson, S. E.

    2016-02-01

    The prediction of flow duration curves (FDCs) in ungauged basins remains an important task for hydrologists given the practical relevance of FDCs for water management and infrastructure design. Predicting FDCs in ungauged basins typically requires spatial interpolation of statistical or model parameters. This task is complicated if climate becomes non-stationary, as the prediction challenge now also requires extrapolation through time. In this context, process-based models for FDCs that mechanistically link the streamflow distribution to climate and landscape factors may have an advantage over purely statistical methods to predict FDCs. This study compares a stochastic (process-based) and statistical method for FDC prediction in both stationary and non-stationary contexts, using Nepal as a case study. Under contemporary conditions, both models perform well in predicting FDCs, with Nash-Sutcliffe coefficients above 0.80 in 75 % of the tested catchments. The main drivers of uncertainty differ between the models: parameter interpolation was the main source of error for the statistical model, while violations of the assumptions of the process-based model represented the main source of its error. The process-based approach performed better than the statistical approach in numerical simulations with non-stationary climate drivers. The predictions of the statistical method under non-stationary rainfall conditions were poor if (i) local runoff coefficients were not accurately determined from the gauge network, or (ii) streamflow variability was strongly affected by changes in rainfall. A Monte Carlo analysis shows that the streamflow regimes in catchments characterized by frequent wet-season runoff and a rapid, strongly non-linear hydrologic response are particularly sensitive to changes in rainfall statistics. In these cases, process-based prediction approaches are favored over statistical models.
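    The empirical FDC underlying both the statistical and the process-based approach is straightforward to construct; a sketch using the Weibull plotting position (function name and conventions are mine):

```python
import numpy as np

def flow_duration_curve(flows):
    """Empirical FDC: flows sorted in decreasing order vs. exceedance probability."""
    q = np.sort(np.asarray(flows))[::-1]
    p = np.arange(1, q.size + 1) / (q.size + 1)   # Weibull plotting position
    return p, q   # e.g., the flow exceeded 95% of the time is q[p >= 0.95][0]
```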

  4. Long-Term Prognostic Validity of Talent Selections: Comparing National and Regional Coaches, Laypersons and Novices

    Science.gov (United States)

    Schorer, Jörg; Rienhoff, Rebecca; Fischer, Lennart; Baker, Joseph

    2017-01-01

    In most sports, the development of elite athletes is a long-term process of talent identification and support. Typically, talent selection systems administer a multi-faceted strategy including national coach observations and varying physical and psychological tests when deciding who is chosen for talent development. The aim of this exploratory study was to evaluate the prognostic validity of talent selections by varying groups 10 years after they had been conducted. This study used a unique, multi-phased approach. Phase 1 involved players (n = 68) in 2001 completing a battery of general and sport-specific tests of handball ‘talent’ and performance. In Phase 2, national and regional coaches (n = 7) in 2001 who attended training camps identified the most talented players. In Phase 3, current novice and advanced handball players (n = 12 in each group) selected the most talented from short videos of matches played during the talent camp. Analyses compared predictions among all groups with a best model-fit derived from the motor tests. Results revealed little difference between regional and national coaches in the prediction of future performance and little difference in forecasting performance between novices and players. The best model-fit regression by the motor-tests outperformed all predictions. While several limitations are discussed, this study is a useful starting point for future investigations considering athlete selection decisions in talent identification in sport. PMID:28744238

  5. Long-Term Prognostic Validity of Talent Selections: Comparing National and Regional Coaches, Laypersons and Novices

    Directory of Open Access Journals (Sweden)

    Jörg Schorer

    2017-07-01

    Full Text Available In most sports, the development of elite athletes is a long-term process of talent identification and support. Typically, talent selection systems administer a multi-faceted strategy including national coach observations and varying physical and psychological tests when deciding who is chosen for talent development. The aim of this exploratory study was to evaluate the prognostic validity of talent selections by varying groups 10 years after they had been conducted. This study used a unique, multi-phased approach. Phase 1 involved players (n = 68) in 2001 completing a battery of general and sport-specific tests of handball ‘talent’ and performance. In Phase 2, national and regional coaches (n = 7) in 2001 who attended training camps identified the most talented players. In Phase 3, current novice and advanced handball players (n = 12 in each group) selected the most talented from short videos of matches played during the talent camp. Analyses compared predictions among all groups with a best model-fit derived from the motor tests. Results revealed little difference between regional and national coaches in the prediction of future performance and little difference in forecasting performance between novices and players. The best model-fit regression by the motor-tests outperformed all predictions. While several limitations are discussed, this study is a useful starting point for future investigations considering athlete selection decisions in talent identification in sport.

  7. The selective processing of emotional visual stimuli while detecting auditory targets: an ERP analysis.

    Science.gov (United States)

    Schupp, Harald T; Stockburger, Jessica; Bublatzky, Florian; Junghöfer, Markus; Weike, Almut I; Hamm, Alfons O

    2008-09-16

    Event-related potential studies revealed an early posterior negativity (EPN) for emotional compared to neutral pictures. Exploring the emotion-attention relationship, a previous study observed that a primary visual discrimination task interfered with the emotional modulation of the EPN component. To specify the locus of interference, the present study assessed the fate of selective visual emotion processing while attention is directed towards the auditory modality. While simply viewing a rapid and continuous stream of pleasant, neutral, and unpleasant pictures in one experimental condition, processing demands of a concurrent auditory target discrimination task were systematically varied in three further experimental conditions. Participants successfully performed the auditory task as revealed by behavioral performance and selected event-related potential components. Replicating previous results, emotional pictures were associated with a larger posterior negativity compared to neutral pictures. Of main interest, increasing demands of the auditory task did not modulate the selective processing of emotional visual stimuli. With regard to the locus of interference, selective emotion processing as indexed by the EPN does not seem to reflect shared processing resources of visual and auditory modality.

  8. Divided versus selective attention: evidence for common processing mechanisms.

    Science.gov (United States)

    Hahn, Britta; Wolkenberg, Frank A; Ross, Thomas J; Myers, Carol S; Heishman, Stephen J; Stein, Dan J; Kurup, Pradeep K; Stein, Elliot A

    2008-06-18

    The current study revisited the question of whether there are brain mechanisms specific to divided attention that differ from those used in selective attention. Increased neuronal activity required to simultaneously process two stimulus dimensions as compared with each separate dimension has often been observed, but rarely has activity induced by a divided attention condition exceeded the sum of activity induced by the component tasks. Healthy participants performed a selective-divided attention paradigm while undergoing functional Magnetic Resonance Imaging (fMRI). The task required participants to make a same-different judgment about either one of two simultaneously presented stimulus dimensions, or about both dimensions. Performance accuracy was equated between tasks by dynamically adjusting the stimulus display time. Blood Oxygenation Level Dependent (BOLD) signal differences between tasks were identified by whole-brain voxel-wise comparisons and by region-specific analyses of all areas modulated by the divided attention task (DIV). No region displayed greater activation or deactivation by DIV than the sum of signal change by the two selective attention tasks. Instead, regional activity followed the tasks' processing demands as reflected by reaction time. Only a left cerebellar region displayed a correlation between participants' BOLD signal intensity and reaction time that was selective for DIV. The correlation was positive, reflecting slower responding with greater activation. Overall, the findings do not support the existence of functional brain activity specific to DIV. Increased activity appears to reflect additional processing demands by introducing a secondary task, but those demands do not appear to qualitatively differ from processes of selective attention.

  9. Manufacturing plant location selection in logistics network using Analytic Hierarchy Process

    Directory of Open Access Journals (Sweden)

    Ping-Yu Chang

    2015-11-01

    Full Text Available Purpose: In recent years, numerous companies have moved their manufacturing plants to China to capitalize on lower cost and tax. Plant location has such an impact on cost, stocks, and the logistics network, but location selection in a company is usually based on the subjective preference of high-ranking managers. Such a decision-making process might result in selecting a location with a lower fixed cost but a higher operational cost. Therefore, this research adapts real data from an electronics company to develop a framework that incorporates both quantitative and qualitative factors for selecting new plant locations. Design/methodology/approach: In-depth interviews were conducted with 12 high-ranking managers (7 department managers, 2 vice-presidents, 1 senior engineer, and 2 plant managers) in the departments of construction, finance, planning, production, and warehouse to determine the important factors. A questionnaire survey was then conducted for comparing factors, which are analyzed using the Analytic Hierarchy Process (AHP). Findings: Results show that the best location chosen by the developed framework coincides well with the company’s primal production base. The results have been presented to the company’s high-ranking managers for realizing the accuracy of the framework. Positive responses of the managers indicate the usefulness of implementing the proposed model in reality, which adds to the value of this research. Practical implications: The proposed framework can save numerous time-consuming meetings called to reconcile opinions and conflicts between different departments in location selection. Originality/value: This paper adapts the Analytic Hierarchy Process (AHP) to incorporate quantitative and qualitative factors, obtained through in-depth interviews with high-ranking managers in a company, into the location decision.
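
    For readers unfamiliar with the mechanics of AHP, the following minimal sketch derives criterion weights from a single pairwise comparison matrix and checks Saaty's consistency ratio. The matrix entries are hypothetical illustrations, not the study's survey data.

        import numpy as np

        # Hypothetical pairwise comparisons of three location criteria
        # (e.g. land cost, logistics access, labour supply) on Saaty's 1-9 scale.
        A = np.array([[1.0, 3.0, 5.0],
                      [1/3, 1.0, 2.0],
                      [1/5, 1/2, 1.0]])

        eigvals, eigvecs = np.linalg.eig(A)
        k = np.argmax(eigvals.real)              # principal eigenvalue
        weights = np.abs(eigvecs[:, k].real)
        weights /= weights.sum()                 # criterion priorities

        n = A.shape[0]
        ci = (eigvals[k].real - n) / (n - 1)     # consistency index
        ri = {3: 0.58, 4: 0.90, 5: 1.12}[n]      # Saaty's random index
        cr = ci / ri                             # judgments acceptable if CR < 0.1
        print(weights, cr)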

  10. Explicit attention interferes with selective emotion processing in human extrastriate cortex

    Directory of Open Access Journals (Sweden)

    Junghöfer Markus

    2007-02-01

    Full Text Available Abstract Background Brain imaging and event-related potential studies provide strong evidence that emotional stimuli guide selective attention in visual processing. A reflection of the emotional attention capture is the increased Early Posterior Negativity (EPN) for pleasant and unpleasant compared to neutral images (~150–300 ms poststimulus). The present study explored whether this early emotion discrimination reflects an automatic phenomenon or is subject to interference by competing processing demands. Thus, emotional processing was assessed while participants performed a concurrent feature-based attention task varying in processing demands. Results Participants successfully performed the primary visual attention task as revealed by behavioral performance and selected event-related potential components (Selection Negativity and P3b). Replicating previous results, emotional modulation of the EPN was observed in a task condition with low processing demands. In contrast, pleasant and unpleasant pictures failed to elicit increased EPN amplitudes compared to neutral images in more difficult explicit attention task conditions. Further analyses determined that even the processing of pleasant and unpleasant pictures high in emotional arousal is subject to interference in experimental conditions with high task demand. Taken together, performing demanding feature-based counting tasks interfered with differential emotion processing indexed by the EPN. Conclusion The present findings demonstrate that taxing processing resources by a competing primary visual attention task markedly attenuated the early discrimination of emotional from neutral picture contents. Thus, these results provide further empirical support for an interference account of the emotion-attention interaction under conditions of competition. Previous studies revealed the interference of selective emotion processing when attentional resources were directed to locations of explicitly task

  11. Explicit attention interferes with selective emotion processing in human extrastriate cortex.

    Science.gov (United States)

    Schupp, Harald T; Stockburger, Jessica; Bublatzky, Florian; Junghöfer, Markus; Weike, Almut I; Hamm, Alfons O

    2007-02-22

    Brain imaging and event-related potential studies provide strong evidence that emotional stimuli guide selective attention in visual processing. A reflection of the emotional attention capture is the increased Early Posterior Negativity (EPN) for pleasant and unpleasant compared to neutral images (approximately 150-300 ms poststimulus). The present study explored whether this early emotion discrimination reflects an automatic phenomenon or is subject to interference by competing processing demands. Thus, emotional processing was assessed while participants performed a concurrent feature-based attention task varying in processing demands. Participants successfully performed the primary visual attention task as revealed by behavioral performance and selected event-related potential components (Selection Negativity and P3b). Replicating previous results, emotional modulation of the EPN was observed in a task condition with low processing demands. In contrast, pleasant and unpleasant pictures failed to elicit increased EPN amplitudes compared to neutral images in more difficult explicit attention task conditions. Further analyses determined that even the processing of pleasant and unpleasant pictures high in emotional arousal is subject to interference in experimental conditions with high task demand. Taken together, performing demanding feature-based counting tasks interfered with differential emotion processing indexed by the EPN. The present findings demonstrate that taxing processing resources by a competing primary visual attention task markedly attenuated the early discrimination of emotional from neutral picture contents. Thus, these results provide further empirical support for an interference account of the emotion-attention interaction under conditions of competition. Previous studies revealed the interference of selective emotion processing when attentional resources were directed to locations of explicitly task-relevant stimuli. The present data suggest

  12. Selection Criteria in Regime Switching Conditional Volatility Models

    Directory of Open Access Journals (Sweden)

    Thomas Chuffart

    2015-05-01

    Full Text Available A large number of nonlinear conditional heteroskedastic models have been proposed in the literature. Model selection is crucial to any statistical data analysis. In this article, we investigate whether the most commonly used selection criteria lead to choice of the right specification in a regime switching framework. We focus on two types of models: the Logistic Smooth Transition GARCH and the Markov-Switching GARCH models. Simulation experiments reveal that information criteria and loss functions can lead to misspecification; BIC sometimes indicates the wrong regime switching framework. Depending on the Data Generating Process used in the experiments, great care is needed when choosing a criterion.
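
    As an illustration of the information criteria discussed above, the sketch below compares two fitted specifications by AIC and BIC given their maximized log-likelihoods. The log-likelihoods, parameter counts and sample size are hypothetical stand-ins, not results from the paper.

        import numpy as np

        def aic(loglik, n_params):
            return 2 * n_params - 2 * loglik

        def bic(loglik, n_params, n_obs):
            return n_params * np.log(n_obs) - 2 * loglik

        # Hypothetical maximized log-likelihoods of two regime-switching
        # GARCH specifications fitted to the same series of n_obs returns.
        candidates = {"LST-GARCH": (-1402.3, 7), "MS-GARCH": (-1398.9, 9)}
        n_obs = 1000
        for name, (ll, k) in candidates.items():
            print(name, aic(ll, k), bic(ll, k, n_obs))   # smaller is preferred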

  13. Selection of refractory materials for pyrochemical processing

    International Nuclear Information System (INIS)

    Axler, K.M.; DePoorter, G.L.; Bagaasen, L.M.

    1991-01-01

    Several pyrochemical processing operations require containment materials that exhibit minimal chemical interactions with the system, good thermal shock resistance, and reusability. One example is Direct Oxide Reduction (DOR). DOR involves the conversion of PuO2 to metal by an oxidation/reduction reaction with Ca metal. The reaction proceeds within a molten salt flux at temperatures above 800 °C. A combination of thermodynamics, system thermodynamic modeling, and experimental investigations is in use to select and evaluate potential containment materials.

  14. Parameter identification in multinomial processing tree models

    NARCIS (Netherlands)

    Schmittmann, V.D.; Dolan, C.V.; Raijmakers, M.E.J.; Batchelder, W.H.

    2010-01-01

    Multinomial processing tree models form a popular class of statistical models for categorical data that have applications in various areas of psychological research. As in all statistical models, establishing which parameters are identified is necessary for model inference and selection on the basis

  15. Laser Process for Selective Emitter Silicon Solar Cells

    Directory of Open Access Journals (Sweden)

    G. Poulain

    2012-01-01

    Full Text Available Selective emitter solar cells can provide a significant increase in conversion efficiency. However current approaches need many technological steps and alignment procedures. This paper reports on a preliminary attempt to reduce the number of processing steps and therefore the cost of selective emitter cells. In the developed procedure, a phosphorous glass covered with silicon nitride acts as the doping source. A laser is used to open locally the antireflection coating and at the same time achieve local phosphorus diffusion. In this process the standard chemical etching of the phosphorous glass is avoided. Sheet resistance variation from 100 Ω/sq to 40 Ω/sq is demonstrated with a nanosecond UV laser. Numerical simulation of the laser-matter interaction is discussed to understand the dopant diffusion efficiency. Preliminary solar cells results show a 0.5% improvement compared with a homogeneous emitter structure.

  16. Selection Process for New Windows | Efficient Windows Collaborative

    Science.gov (United States)

  17. Selection Process for Replacement Windows | Efficient Windows Collaborative

    Science.gov (United States)

  18. Sexual selection: Another Darwinian process.

    Science.gov (United States)

    Gayon, Jean

    2010-02-01

    the Darwin-Wallace controversy was that most Darwinian biologists avoided the subject of sexual selection until at least the 1950s, Ronald Fisher being a major exception. This controversy still deserves attention from modern evolutionary biologists, because the modern approach inherits from both Darwin and Wallace. The modern approach tends to present sexual selection as a special aspect of the theory of natural selection, although it also recognizes the big difficulties resulting from the inevitable interaction between these two natural processes of selection. And contra Wallace, it considers mate choice as a major process that deserves a proper evolutionary treatment. The paper's conclusion explains why sexual selection can be taken as a test case for a proper assessment of "Darwinism" as a scientific tradition. Darwin's and Wallace's attitudes towards sexual selection reveal two different interpretations of the principle of natural selection: Wallace's had an environmentalist conception of natural selection, whereas Darwin was primarily sensitive to the element of competition involved in the intimate mechanism of any natural process of selection. Sexual selection, which can lack adaptive significance, reveals this exemplarily.

  19. Robust inference in sample selection models

    KAUST Repository

    Zhelonkin, Mikhail; Genton, Marc G.; Ronchetti, Elvezio

    2015-01-01

    The problem of non-random sample selectivity often occurs in practice in many fields. The classical estimators introduced by Heckman are the backbone of the standard statistical analysis of these models. However, these estimators are very sensitive to small deviations from the distributional assumptions which are often not satisfied in practice. We develop a general framework to study the robustness properties of estimators and tests in sample selection models. We derive the influence function and the change-of-variance function of Heckman's two-stage estimator, and we demonstrate the non-robustness of this estimator and its estimated variance to small deviations from the model assumed. We propose a procedure for robustifying the estimator, prove its asymptotic normality and give its asymptotic variance. Both cases with and without an exclusion restriction are covered. This allows us to construct a simple robust alternative to the sample selection bias test. We illustrate the use of our new methodology in an analysis of ambulatory expenditures and we compare the performance of the classical and robust methods in a Monte Carlo simulation study.
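
    A minimal sketch of the classical Heckman two-stage estimator referenced above (a probit selection equation, then OLS augmented with the inverse Mills ratio), run on simulated data. It illustrates the non-robust baseline estimator that the paper robustifies, not the authors' procedure; all data and coefficients are synthetic.

        import numpy as np
        import statsmodels.api as sm
        from scipy.stats import norm

        # Simulated data: z drives selection, x drives the outcome (exclusion restriction);
        # correlated errors (u, e) create the selection bias to be corrected.
        rng = np.random.default_rng(0)
        n = 2000
        x = rng.normal(size=n)
        z = rng.normal(size=n)
        u, e = rng.multivariate_normal([0, 0], [[1, 0.5], [0.5, 1]], size=n).T
        selected = (0.5 + z + u) > 0               # selection equation
        y = 1.0 + 2.0 * x + e                      # outcome, observed only if selected

        # Step 1: probit of selection on the instrument, then inverse Mills ratio.
        W = sm.add_constant(np.column_stack([z]))
        probit = sm.Probit(selected.astype(float), W).fit(disp=0)
        xb = W @ probit.params
        imr = norm.pdf(xb) / norm.cdf(xb)

        # Step 2: OLS on the selected sample, adding the inverse Mills ratio.
        X = sm.add_constant(np.column_stack([x[selected], imr[selected]]))
        ols = sm.OLS(y[selected], X).fit()
        print(ols.params)   # intercept, slope, selection-correction coefficient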

  1. Selection of Activities in Dynamic Business Process Simulation

    Directory of Open Access Journals (Sweden)

    Toma Rusinaitė

    2016-06-01

    Full Text Available Maintaining the dynamicity of business processes is one of the core issues of today's business, as it enables businesses to adapt to a constantly changing environment. Upon changing the processes, it is vital to assess the possible impact, which is achieved by using simulation of dynamic processes. In order to implement dynamicity in business processes, it is necessary to have the ability to change the components of the process (a set of activities, the content of an activity, a set of activity sequences, a set of rules, performers, and resources) or to select them dynamically during execution. This problem has attracted the attention of researchers over the past few years; however, no solution has been proposed which ensures business process (BP) dynamicity. This paper proposes and specifies a dynamic business process (DBP) simulation model which satisfies all of the formulated DBP requirements.

  2. Implementation of the Business Process Modelling Notation (BPMN) in the modelling of anatomic pathology processes.

    Science.gov (United States)

    Rojo, Marcial García; Rolón, Elvira; Calahorra, Luis; García, Felix Oscar; Sánchez, Rosario Paloma; Ruiz, Francisco; Ballester, Nieves; Armenteros, María; Rodríguez, Teresa; Espartero, Rafael Martín

    2008-07-15

    Process orientation is one of the essential elements of quality management systems, including those in use in healthcare. Business processes in hospitals are very complex and variable. BPMN (Business Process Modelling Notation) is a user-oriented language specifically designed for the modelling of business (organizational) processes. Previous experiences of the use of this notation for process modelling within Pathology, in Spain or in other countries, are not known. We present our experience in the elaboration of conceptual models of Pathology processes, as part of a global programmed surgical patient process, using BPMN. With the objective of analyzing the use of BPMN notation in real cases, a multidisciplinary work group was created, including software engineers from the Dep. of Technologies and Information Systems of the University of Castilla-La Mancha and health professionals and administrative staff from the Hospital General de Ciudad Real. The collaborative work was carried out in six phases: informative meetings, intensive training, process selection, definition of the work method, process description by hospital experts, and process modelling. The modelling of the processes of Anatomic Pathology using BPMN is presented. The subprocesses presented are those corresponding to the surgical pathology examination of samples coming from the operating theatre, including the planning and performance of frozen studies. The modelling of Anatomic Pathology subprocesses has allowed the creation of an understandable graphical model, where management and improvements are more easily implemented by health professionals.

  3. Attentional selection of relative SF mediates global versus local processing: evidence from EEG.

    Science.gov (United States)

    Flevaris, Anastasia V; Bentin, Shlomo; Robertson, Lynn C

    2011-06-13

    Previous research on functional hemispheric differences in visual processing has associated global perception with low spatial frequency (LSF) processing biases of the right hemisphere (RH) and local perception with high spatial frequency (HSF) processing biases of the left hemisphere (LH). The Double Filtering by Frequency (DFF) theory expanded this hypothesis by proposing that visual attention selects and is directed to relatively LSFs by the RH and relatively HSFs by the LH, suggesting a direct causal relationship between SF selection and global versus local perception. We tested this idea in the current experiment by comparing activity in the EEG recorded at posterior right and posterior left hemisphere sites while participants' attention was directed to global or local levels of processing after selection of relatively LSFs versus HSFs in a previous stimulus. Hemispheric asymmetry in the alpha band (8-12 Hz) during preparation for global versus local processing was modulated by the selected SF. In contrast, preparatory activity associated with selection of SF was not modulated by the previously attended level (global/local). These results support the DFF theory that top-down attentional selection of SF mediates global and local processing.

  4. Automating an integrated spatial data-mining model for landfill site selection

    Science.gov (United States)

    Abujayyab, Sohaib K. M.; Ahamad, Mohd Sanusi S.; Yahya, Ahmad Shukri; Ahmad, Siti Zubaidah; Aziz, Hamidi Abdul

    2017-10-01

    An integrated programming environment represents a robust approach to building a valid model for landfill site selection. One of the main challenges in the integrated model is the complicated processing and modelling due to the programming stages and several limitations. An automation process helps avoid these limitations and improves the interoperability between integrated programming environments. This work targets the automation of a spatial data-mining model for landfill site selection by integrating a spatial programming environment (Python-ArcGIS) and a non-spatial environment (MATLAB). The model was constructed using neural networks and is divided into nine stages distributed between MATLAB and Python-ArcGIS. A case study was taken from the northern part of Peninsular Malaysia. Twenty-two criteria were selected as input data and used to build the training and testing datasets. The outcomes show a high accuracy of 98.2% on the testing dataset using 10-fold cross validation. The automated spatial data-mining model provides a solid platform for decision makers performing landfill site selection and planning operations on a regional scale.
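
    The sketch below illustrates the kind of 10-fold cross-validation accuracy estimate reported above, using a small feed-forward network on stand-in data. The 22-feature matrix, labels and architecture are synthetic assumptions, not the study's data or model.

        import numpy as np
        from sklearn.model_selection import cross_val_score, KFold
        from sklearn.neural_network import MLPClassifier
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler

        # Synthetic stand-ins for the 22 siting criteria and suitability labels.
        rng = np.random.default_rng(1)
        X = rng.normal(size=(500, 22))
        y = (X[:, :3].sum(axis=1) > 0).astype(int)

        model = make_pipeline(StandardScaler(),
                              MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000, random_state=0))
        scores = cross_val_score(model, X, y, cv=KFold(n_splits=10, shuffle=True, random_state=0))
        print(scores.mean())   # mean 10-fold accuracy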

  5. Waste water processing technology for Space Station Freedom - Comparative test data analysis

    Science.gov (United States)

    Miernik, Janie H.; Shah, Burt H.; Mcgriff, Cindy F.

    1991-01-01

    Comparative tests were conducted to choose the optimum technology for waste water processing on SSF. A thermoelectric integrated membrane evaporation (TIMES) subsystem and a vapor compression distillation subsystem (VCD) were built and tested to compare urine processing capability. Water quality, performance, and specific energy were compared for conceptual designs intended to function as part of the water recovery and management system of SSF. The VCD is considered the most mature and efficient technology and was selected to replace the TIMES as the baseline urine processor for SSF.

  6. Fermentation process tracking through enhanced spectral calibration modeling.

    Science.gov (United States)

    Triadaphillou, Sophia; Martin, Elaine; Montague, Gary; Norden, Alison; Jeffkins, Paul; Stimpson, Sarah

    2007-06-15

    The FDA process analytical technology (PAT) initiative will materialize in a significant increase in the number of installations of spectroscopic instrumentation. However, to attain the greatest benefit from the data generated, there is a need for calibration procedures that extract the maximum information content. For example, in fermentation processes, the interpretation of the resulting spectra is challenging as a consequence of the large number of wavelengths recorded, the underlying correlation structure that is evident between the wavelengths, and the impact of the measurement environment. Approaches to the development of calibration models have been based on the application of partial least squares (PLS) either to the full spectral signature or to a subset of wavelengths. This paper presents a new approach to calibration modeling that combines a wavelength selection procedure, spectral window selection (SWS), in which windows of wavelengths are automatically selected and subsequently used as the basis of the calibration model. However, due to the non-uniqueness of the windows selected when the algorithm is executed repeatedly, multiple models are constructed and these are then combined using stacking, thereby increasing the robustness of the final calibration model. The methodology is applied to data generated during the monitoring of broth concentrations in an industrial fermentation process from on-line near-infrared (NIR) and mid-infrared (MIR) spectrometers. It is shown that the proposed calibration modeling procedure outperforms traditional calibration procedures, as well as enabling the identification of the critical regions of the spectra with regard to the fermentation process.
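
    One plausible reading of the window-selection-plus-stacking idea is sketched below: PLS models are fitted on candidate wavelength windows and their predictions combined with weights derived from cross-validated error. The windows, synthetic spectra and inverse-MSE weighting rule are illustrative assumptions, not the authors' exact algorithm.

        import numpy as np
        from sklearn.cross_decomposition import PLSRegression
        from sklearn.model_selection import cross_val_predict

        # X: spectra (samples x wavelengths); y: measured broth concentration (synthetic).
        rng = np.random.default_rng(2)
        X = rng.normal(size=(60, 200))
        y = X[:, 40:60].mean(axis=1) + 0.1 * rng.normal(size=60)

        windows = [(i, i + 20) for i in range(0, 200, 20)]   # candidate wavelength windows
        preds, errors = [], []
        for lo, hi in windows:
            pls = PLSRegression(n_components=3)
            p = cross_val_predict(pls, X[:, lo:hi], y, cv=5).ravel()
            preds.append(p)
            errors.append(np.mean((y - p) ** 2))

        # Stack the window models: weight each by its inverse cross-validated MSE.
        w = 1.0 / np.array(errors)
        w /= w.sum()
        y_stacked = np.array(preds).T @ w
        print(np.mean((y - y_stacked) ** 2))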

  7. Methods for model selection in applied science and engineering.

    Energy Technology Data Exchange (ETDEWEB)

    Field, Richard V., Jr.

    2004-10-01

    Mathematical models are developed and used to study the properties of complex systems and/or modify these systems to satisfy some performance requirements in just about every area of applied science and engineering. A particular reason for developing a model, e.g., performance assessment or design, is referred to as the model use. Our objective is the development of a methodology for selecting a model that is sufficiently accurate for an intended use. Information on the system being modeled is, in general, incomplete, so that there may be two or more models consistent with the available information. The collection of these models is called the class of candidate models. Methods are developed for selecting the optimal member from a class of candidate models for the system. The optimal model depends on the available information, the selected class of candidate models, and the model use. Classical methods for model selection, including the method of maximum likelihood and Bayesian methods, as well as a method employing a decision-theoretic approach, are formulated to select the optimal model for numerous applications. There is no requirement that the candidate models be random. Classical methods for model selection ignore model use and require data to be available. Examples are used to show that these methods can be unreliable when data is limited. The decision-theoretic approach to model selection does not have these limitations, and model use is included through an appropriate utility function. This is especially important when modeling high risk systems, where the consequences of using an inappropriate model for the system can be disastrous. The decision-theoretic method for model selection is developed and applied for a series of complex and diverse applications. These include the selection of the: (1) optimal order of the polynomial chaos approximation for non-Gaussian random variables and stationary stochastic processes, (2) optimal pressure load model to be

  8. Continuous-Time Mean-Variance Portfolio Selection under the CEV Process

    OpenAIRE

    Ma, Hui-qiang

    2014-01-01

    We consider a continuous-time mean-variance portfolio selection model when stock price follows the constant elasticity of variance (CEV) process. The aim of this paper is to derive an optimal portfolio strategy and the efficient frontier. The mean-variance portfolio selection problem is formulated as a linearly constrained convex program problem. By employing the Lagrange multiplier method and stochastic optimal control theory, we obtain the optimal portfolio strategy and mean-variance effici...
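
    For concreteness, the CEV price dynamics assumed in such models, dS = mu*S*dt + sigma*S**beta*dW, can be simulated with a simple Euler-Maruyama scheme, as in the sketch below. All parameter values are hypothetical.

        import numpy as np

        # Euler-Maruyama simulation of the CEV process dS = mu*S*dt + sigma*S**beta*dW.
        rng = np.random.default_rng(3)
        mu, sigma, beta = 0.05, 0.2, 0.8            # hypothetical parameters
        s0, T, n_steps, n_paths = 100.0, 1.0, 252, 10000
        dt = T / n_steps

        S = np.full(n_paths, s0)
        for _ in range(n_steps):
            dW = rng.normal(scale=np.sqrt(dt), size=n_paths)
            S = np.maximum(S + mu * S * dt + sigma * S ** beta * dW, 1e-8)

        print(S.mean(), S.std())   # summary of the terminal price distribution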

  9. Effect of Model Selection on Computed Water Balance Components

    NARCIS (Netherlands)

    Jhorar, R.K.; Smit, A.A.M.F.R.; Roest, C.W.J.

    2009-01-01

    Soil water flow modelling approaches as used in four selected on-farm water management models, namely CROPWAT, FAIDS, CERES and SWAP, are compared through numerical experiments. The soil water simulation approaches used in the first three models are reformulated to incorporate all evapotranspiration

  10. Effect of processing on iodine content of some selected plants food ...

    African Journals Online (AJOL)

    Effect of processing on the iodine content of some selected plant foods was investigated. Results show a significant reduction (p < 0.05) in the iodine content of the processed foods compared with the raw forms. The iodine value of 658.60 ± 17.2 µg/100g observed in the raw edible portion of Dioscorea rotundata was significantly higher ...

  11. Mindfulness training alters emotional memory recall compared to active controls: support for an emotional information processing model of mindfulness.

    Science.gov (United States)

    Roberts-Wolfe, Douglas; Sacchet, Matthew D; Hastings, Elizabeth; Roth, Harold; Britton, Willoughby

    2012-01-01

    While mindfulness-based interventions have received widespread application in both clinical and non-clinical populations, the mechanism by which mindfulness meditation improves well-being remains elusive. One possibility is that mindfulness training alters the processing of emotional information, similar to prevailing cognitive models of depression and anxiety. The aim of this study was to investigate the effects of mindfulness training on emotional information processing (i.e., memory) biases in relation to both clinical symptomatology and well-being in comparison to active control conditions. Fifty-eight university students (28 female, age = 20.1 ± 2.7 years) participated in either a 12-week course containing a "meditation laboratory" or an active control course with similar content or experiential practice laboratory format (music). Participants completed an emotional word recall task and self-report questionnaires of well-being and clinical symptoms before and after the 12-week course. Meditators showed greater increases in positive word recall compared to controls [F(1, 56) = 6.6, p = 0.02]. The meditation group increased significantly more on measures of well-being [F(1, 56) = 6.6, p = 0.01], with a marginal decrease in depression and anxiety [F(1, 56) = 3.0, p = 0.09] compared to controls. Increased positive word recall was associated with increased psychological well-being (r = 0.31, p = 0.02) and decreased clinical symptoms (r = -0.29, p = 0.03). Mindfulness training was associated with greater improvements in processing efficiency for positively valenced stimuli than active control conditions. This change in emotional information processing was associated with improvements in psychological well-being and less depression and anxiety. These data suggest that mindfulness training may improve well-being via changes in emotional information processing. Future research with a fully randomized design will be

  12. On selecting a prior for the precision parameter of Dirichlet process mixture models

    Science.gov (United States)

    Dorazio, R.M.

    2009-01-01

    In hierarchical mixture models the Dirichlet process is used to specify latent patterns of heterogeneity, particularly when the distribution of latent parameters is thought to be clustered (multimodal). The parameters of a Dirichlet process include a precision parameter α and a base probability measure G0. In problems where α is unknown and must be estimated, inferences about the level of clustering can be sensitive to the choice of prior assumed for α. In this paper an approach is developed for computing a prior for the precision parameter α that can be used in the presence or absence of prior information about the level of clustering. This approach is illustrated in an analysis of counts of stream fishes. The results of this fully Bayesian analysis are compared with an empirical Bayes analysis of the same data and with a Bayesian analysis based on an alternative commonly used prior.
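
    The sensitivity to the precision parameter α can be illustrated by simulating the Chinese restaurant process that the Dirichlet process induces over partitions. The sketch below, with arbitrary sample sizes, shows how the prior number of clusters grows with α; it is a generic illustration, not the paper's analysis.

        import numpy as np

        def crp_num_clusters(n, alpha, rng):
            """Simulate a Chinese restaurant process; return the number of clusters."""
            counts = []
            for _ in range(n):
                probs = np.array(counts + [alpha], dtype=float)
                probs /= probs.sum()
                k = rng.choice(len(probs), p=probs)
                if k == len(counts):
                    counts.append(1)        # open a new cluster
                else:
                    counts[k] += 1
            return len(counts)

        rng = np.random.default_rng(4)
        for alpha in (0.1, 1.0, 10.0):
            ks = [crp_num_clusters(200, alpha, rng) for _ in range(200)]
            print(alpha, np.mean(ks))       # prior mean cluster count grows with alpha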

  13. 45 CFR 2400.31 - Selection process.

    Science.gov (United States)

    2010-10-01

    ... FELLOWSHIP PROGRAM REQUIREMENTS Selection of Fellows § 2400.31 Selection process. (a) An independent Fellow... outstanding applicants from each state for James Madison Fellowships. (b) From among candidates recommended...

  14. Point, surface and volumetric heat sources in the thermal modelling of selective laser melting

    Science.gov (United States)

    Yang, Yabin; Ayas, Can

    2017-10-01

    Selective laser melting (SLM) is a powder based additive manufacturing technique suitable for producing high precision metal parts. However, distortions and residual stresses arise within products during SLM because of the high temperature gradients created by the laser heating. Residual stresses limit the load resistance of the product and may even lead to fracture during the build process. It is therefore of paramount importance to predict the level of part distortion and residual stress as a function of SLM process parameters, which requires reliable thermal modelling of the SLM process. Consequently, a key question arises: how should the laser source be described appropriately? Reasonable simplification of the laser representation is crucial for the computational efficiency of the thermal model of the SLM process. In this paper, first a semi-analytical thermal modelling approach is described. Subsequently, the laser heating is modelled using point, surface and volumetric sources, in order to compare the influence of different laser source geometries on the thermal history prediction of the thermal model. The present work provides guidelines on appropriate representation of the laser source in the thermal modelling of the SLM process.
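
    As a baseline for comparing source representations, the classical Rosenthal solution for a moving point heat source on a semi-infinite body is often used as a sanity check for SLM thermal models. A sketch follows, with assumed 316L-like material properties and an assumed absorptivity; it is not the paper's semi-analytical model.

        import numpy as np

        def rosenthal(xi, y, z, P, v, k, alpha, eta=0.35, T0=293.0):
            """Steady temperature in the frame moving with the point source.
            xi: distance ahead (+) / behind (-) the source along the scan path [m]."""
            R = np.sqrt(xi**2 + y**2 + z**2)
            return T0 + eta * P / (2 * np.pi * k * R) * np.exp(-v * (xi + R) / (2 * alpha))

        # Hypothetical 316L-like properties and typical SLM parameters.
        k, rho, cp = 20.0, 7800.0, 500.0        # W/(m K), kg/m^3, J/(kg K)
        alpha = k / (rho * cp)                  # thermal diffusivity
        print(rosenthal(-50e-6, 0.0, 0.0, P=200.0, v=1.0, k=k, alpha=alpha))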

  15. Goal selection versus process control while learning to use a brain-computer interface

    Science.gov (United States)

    Royer, Audrey S.; Rose, Minn L.; He, Bin

    2011-06-01

    A brain-computer interface (BCI) can be used to accomplish a task without requiring motor output. Two major control strategies used by BCIs during task completion are process control and goal selection. In process control, the user exerts continuous control and independently executes the given task. In goal selection, the user communicates their goal to the BCI and then receives assistance executing the task. A previous study has shown that goal selection is more accurate and faster in use. An unanswered question is, which control strategy is easier to learn? This study directly compares goal selection and process control while learning to use a sensorimotor rhythm-based BCI. Twenty young healthy human subjects were randomly assigned either to a goal selection or a process control-based paradigm for eight sessions. At the end of the study, the best user from each paradigm completed two additional sessions using all paradigms randomly mixed. The results of this study were that goal selection required a shorter training period for increased speed, accuracy, and information transfer over process control. These results held for the best subjects as well as in the general subject population. The demonstrated characteristics of goal selection make it a promising option to increase the utility of BCIs intended for both disabled and able-bodied users.

  16. The Formalization of the Business Process Modeling Goals

    Directory of Open Access Journals (Sweden)

    Ligita Bušinska

    2016-10-01

    Full Text Available In business process modeling the de facto standard BPMN has emerged. However, applications of this notation use many subsets of its elements and various extensions. BPMN also coexists with many other modeling languages, forming a large set of available options for business process modeling languages and dialects. While, in general, the goal of modelers is a central notion in the choice of modeling languages and notations, most research that proposes guidelines, techniques, and methods for business process modeling language evaluation and/or selection does not formalize the business process modeling goal or take it into account transparently. To overcome this gap, and to explicate and help handle business process modeling complexity, an approach to formalizing the business process modeling goal, and a supporting three-dimensional business process modeling framework, are proposed.

  17. THM-coupled modeling of selected processes in argillaceous rock relevant to rock mechanics

    International Nuclear Information System (INIS)

    Czaikowski, Oliver

    2012-01-01

    Scientific investigations in European countries other than Germany concentrate not only on granite formations (Switzerland, Sweden) but also on argillaceous rock formations (France, Switzerland, Belgium) to assess their suitability as host and barrier rock for the final storage of radioactive waste. In Germany, rock salt has been under thorough study as a host rock over the past few decades. According to a study by the German Federal Institute for Geosciences and Natural Resources, however, not only salt deposits but also argillaceous rock deposits are available at relevant depths and of extensions in space which make final storage of high-level radioactive waste basically possible in Germany. Equally qualified findings about the suitability/unsuitability of non-saline rock formations require fundamental studies to be conducted nationally because of the comparatively low level of knowledge. The article presents basic analyses of coupled mechanical and hydraulic properties of argillaceous rock formations as host rock for a repository. The interaction of various processes is explained on the basis of knowledge derived from laboratory studies, and open problems are deduced. For modeling coupled processes, a simplified analytical computation method is proposed and compared with the results of numerical simulations, and the limits to its application are outlined. (orig.)

  18. Development of Solar Drying Model for Selected Cambodian Fish Species

    Science.gov (United States)

    Hubackova, Anna; Kucerova, Iva; Chrun, Rithy; Chaloupkova, Petra; Banout, Jan

    2014-01-01

    Solar drying was investigated as one of the prospective techniques for fish processing in Cambodia. The solar drying was compared to conventional drying in an electric oven. Five typical Cambodian fish species were selected for this study. Mean solar drying temperature and drying air relative humidity were 55.6°C and 19.9%, respectively. The overall solar dryer efficiency was 12.37%, which is typical for natural convection solar dryers. The average evaporative capacity of the solar dryer was 0.049 kg·h⁻¹. Based on the coefficient of determination (R²), chi-square (χ²) test, and root-mean-square error (RMSE), the most suitable models describing natural convection solar drying kinetics were the Logarithmic model, the Diffusion approximate model, and the Two-term model for climbing perch and Nile tilapia, swamp eel and walking catfish, and Channa fish, respectively. In the case of electric oven drying, the Modified Page 1 model shows the best results for all investigated fish species except Channa fish, where the Two-term model is the best one. Sensory evaluation shows that the most preferable fish is climbing perch, followed by Nile tilapia and walking catfish. This study brings new knowledge about the drying kinetics of freshwater fish species in Cambodia and confirms solar drying as an acceptable technology for fish processing. PMID:25250381
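
    The model-screening step described above can be reproduced in outline by fitting candidate thin-layer drying models to moisture-ratio data and comparing R², RMSE and reduced chi-square. The observations below are hypothetical; only two of the candidate models are shown.

        import numpy as np
        from scipy.optimize import curve_fit

        def logarithmic(t, a, k, c):        # MR = a * exp(-k t) + c
            return a * np.exp(-k * t) + c

        def page(t, k, n):                  # MR = exp(-k t^n)
            return np.exp(-k * t ** n)

        # Hypothetical moisture-ratio observations over drying time (hours).
        t = np.array([0, 1, 2, 3, 4, 6, 8, 10], dtype=float)
        mr = np.array([1.0, 0.74, 0.55, 0.42, 0.33, 0.20, 0.13, 0.09])

        for model, p0 in ((logarithmic, (1.0, 0.3, 0.0)), (page, (0.3, 1.0))):
            popt, _ = curve_fit(model, t, mr, p0=p0, maxfev=10000)
            pred = model(t, *popt)
            ss_res = np.sum((mr - pred) ** 2)
            r2 = 1 - ss_res / np.sum((mr - mr.mean()) ** 2)
            rmse = np.sqrt(ss_res / t.size)
            chi2 = ss_res / (t.size - len(popt))   # reduced chi-square
            print(model.__name__, r2, rmse, chi2)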

  19. The Impact of Varied Discrimination Parameters on Mixed-Format Item Response Theory Model Selection

    Science.gov (United States)

    Whittaker, Tiffany A.; Chang, Wanchen; Dodd, Barbara G.

    2013-01-01

    Whittaker, Chang, and Dodd compared the performance of model selection criteria when selecting among mixed-format IRT models and found that the criteria did not perform adequately when selecting the more parameterized models. It was suggested by M. S. Johnson that the problems when selecting the more parameterized models may be because of the low…

  20. Lithium-ion battery models: a comparative study and a model-based powerline communication

    Directory of Open Access Journals (Sweden)

    F. Saidani

    2017-09-01

    Full Text Available In this work, various Lithium-ion (Li-ion) battery models are evaluated according to their accuracy, complexity and physical interpretability. An initial classification into physical, empirical and abstract models is introduced. Also known as white, black and grey boxes, respectively, the nature and characteristics of these model types are compared. Since the Li-ion battery cell is a thermo-electro-chemical system, the models are either in the thermal or in the electrochemical state-space. Physical models attempt to capture key features of the physical process inside the cell. Empirical models describe the system with empirical parameters, offering poor analytical insight, whereas abstract models provide an alternative representation. In addition, a model selection guideline is proposed based on applications and design requirements. A complex model with detailed analytical insight is of use for battery designers but impractical for real-time applications and in situ diagnosis. In automotive applications, an abstract model reproducing the battery behavior in an equivalent but more practical form, mainly as an equivalent circuit diagram, is recommended for the purpose of battery management. As a general rule, a trade-off should be reached between high fidelity and computational feasibility. Especially if the model is embedded in a real-time monitoring unit such as a microprocessor or an FPGA, the calculation time and memory requirements rise dramatically with a higher number of parameters. Moreover, examples of equivalent circuit models of Lithium-ion batteries are covered. Equivalent circuit topologies are introduced and compared according to the previously introduced criteria. An experimental sequence to model a 20 Ah cell is presented and the results are used for the purposes of powerline communication.
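
    As an example of the equivalent-circuit class recommended above, the sketch below simulates a first-order Thevenin model (series resistance plus one RC pair) for a 20 Ah cell under constant-current discharge. All parameters and the open-circuit-voltage curve are illustrative assumptions, not the paper's fitted values.

        # First-order Thevenin equivalent circuit: v = ocv(soc) - i*R0 - v1,
        # with the RC-pair state v1 obeying dv1/dt = -v1/(R1*C1) + i/C1.
        R0, R1, C1 = 0.005, 0.010, 3000.0     # hypothetical cell parameters (ohm, ohm, farad)
        Q = 20.0 * 3600                        # 20 Ah capacity in coulombs

        def ocv(soc):                          # placeholder open-circuit-voltage curve
            return 3.0 + 1.2 * soc

        dt, t_end = 1.0, 1800.0
        soc, v1 = 0.9, 0.0
        i_load = 20.0                          # 1C discharge current (A)
        for _ in range(int(t_end / dt)):
            v1 += dt * (-v1 / (R1 * C1) + i_load / C1)   # explicit Euler update
            soc -= dt * i_load / Q
            v_term = ocv(soc) - i_load * R0 - v1
        print(soc, v_term)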

  1. Comparative assessment of condensation models for horizontal tubes

    International Nuclear Information System (INIS)

    Schaffrath, A.; Kruessenberg, A.K.; Lischke, W.; Gocht, U.; Fjodorow, A.

    1999-01-01

    The condensation in horizontal tubes plays an important role e.g. for the determination of the operation mode of horizontal steam generators of VVER reactors or passive safety systems for the next generation of nuclear power plants. Two different approaches (HOTKON and KONWAR) for modeling this process have been undertaken by Forschungszentrum Juelich (FZJ) and the University of Applied Sciences Zittau/Goerlitz (HTWS) and implemented into the 1D-thermohydraulic code ATHLET, which is developed by the Gesellschaft fuer Anlagen- und Reaktorsicherheit (GRS) mbH for the analysis of anticipated and abnormal transients in light water reactors. Although the improvements of the condensation models were developed for different applications (VVER steam generators - emergency condenser of the SWR1000) with strongly different operating conditions (e.g. the temperature difference over the tube wall in HORUS is up to 30 K and in NOKO up to 250 K; the heat flux density in HORUS is up to 40 kW/m² and in NOKO up to 1 GW/m²), both models are now compared and assessed by Forschungszentrum Rossendorf FZR e.V. Therefore, post-test calculations of selected HORUS experiments were performed with ATHLET/KONWAR and compared to existing ATHLET and ATHLET/HOTKON calculations of HTWS. It can be seen that the calculations with the extension KONWAR as well as HOTKON significantly improve the agreement between computational and experimental data. (orig.)

  2. The site selection process

    International Nuclear Information System (INIS)

    Kittel, J.H.

    1989-01-01

    One of the most arduous tasks associated with the management of radioactive wastes is the siting of new disposal facilities. Experience has shown that the performance of the disposal facility during and after disposal operations is critically dependent on the characteristics of the site itself. The site selection process consists of defining needs and objectives, identifying geographic regions of interest, screening and selecting candidate sites, collecting data on the candidate sites, and finally selecting the preferred site. Before the site selection procedures can be implemented, however, a formal legal system must be in place that defines broad objectives and, most importantly, clearly establishes responsibilities and accompanying authorities for the decision-making steps in the procedure. Site selection authorities should make every effort to develop trust and credibility with the public, local officials, and the news media. The responsibilities of supporting agencies must also be spelled out. Finally, a stable funding arrangement must be established so that activities such as data collection can proceed without interruption. Several examples, both international and within the US, are given

  3. Using Deep Learning for Targeted Data Selection, Improving Satellite Observation Utilization for Model Initialization

    Science.gov (United States)

    Lee, Y. J.; Bonfanti, C. E.; Trailovic, L.; Etherton, B.; Govett, M.; Stewart, J.

    2017-12-01

    At present, a fraction of all satellite observations are ultimately used for model assimilation. The satellite data assimilation process is computationally expensive and data are often reduced in resolution to allow timely incorporation into the forecast. This problem is only exacerbated by the recent launch of the Geostationary Operational Environmental Satellite (GOES)-16 and by future satellites providing several orders of magnitude increase in data volume. At the NOAA Earth System Research Laboratory (ESRL) we are researching the use of machine learning to improve the initial selection of satellite data to be used in the model assimilation process. In particular, we are investigating the use of deep learning, which is being applied to many image processing and computer vision problems with great success. Through our research, we are using convolutional neural networks to find and mark regions of interest (ROIs), leading to intelligent extraction of observations from satellite observation systems. These targeted observations will be used to improve the quality of data selected for model assimilation and ultimately improve the impact of satellite data on weather forecasts. Our preliminary efforts to identify the ROIs are focused in two areas: applying and comparing state-of-the-art convolutional neural network models using the analysis data from the National Centers for Environmental Prediction (NCEP) Global Forecast System (GFS) weather model, and using these results as a starting point to optimize a convolutional neural network model for pattern recognition on the higher resolution water vapor data from GOES-West and other satellites. This presentation will provide an introduction to our convolutional neural network model for identifying and processing these ROIs, along with the challenges of data preparation, training the model, and parameter optimization.

  4. Additive Manufacturing Processes: Selective Laser Melting, Electron Beam Melting and Binder Jetting-Selection Guidelines.

    Science.gov (United States)

    Gokuldoss, Prashanth Konda; Kolla, Sri; Eckert, Jürgen

    2017-06-19

    Additive manufacturing (AM), also known as 3D printing or rapid prototyping, is gaining increasing attention due to its ability to produce parts with added functionality and increased complexities in geometrical design, on top of the fact that it is theoretically possible to produce any shape without limitations. However, most of the research on additive manufacturing techniques is focused on the development of materials/process parameters/products design with individual additive manufacturing processes such as selective laser melting, electron beam melting, or binder jetting, and we do not have any guidelines that discuss the selection of the most suitable additive manufacturing process depending on the material to be processed, the complexity of the parts to be produced, or the design considerations. Considering the very fact that no reports deal with this process selection, the present manuscript aims to discuss the different selection criteria that are to be considered in order to select the best AM process (binder jetting/selective laser melting/electron beam melting) for fabricating a specific component with a defined set of material properties.

  6. Traditional versus commercial food processing techniques - A comparative study based on chemical analysis of selected foods consumed in rural Zimbabwe.

    Directory of Open Access Journals (Sweden)

    Abraham I. C. Mwadiwa

    2012-01-01

    Full Text Available With the advent of industrialisation, food processors are constantly looking for ways to cut costs, increase production and maximise profits at the expense of quality. Commercial food processors have since shifted their focus from endogenous ways of processing food to more profitable commercial food processing techniques. The aim of this study was to investigate the holistic impact of commercial food processing techniques on nutrition by comparing commercially (industrially) processed food products and endogenously processed food products through chemical analysis of selected foods. Eight food samples, which included commercially processed peanut butter, mealie-meal, dried vegetables (mufushwa) and rice and endogenously processed peanut butter, mealie-meal, dried vegetables (mufushwa) and rice, were randomly sampled from rural communities in the south-eastern and central provinces of Zimbabwe. They were analysed for ash, zinc, iron, copper, magnesium, protein, fat, carbohydrate, energy, crude fibre, vitamin C and moisture contents. The results of the chemical analysis indicate that endogenously processed mealie-meal, dried vegetables and rice contained higher ash values of 2.00g/100g, 17.83g/100g and 3.28g/100g respectively than commercially processed mealie-meal, dried vegetables and rice, which had ash values of 1.56g/100g, 15.25g/100g and 1.46g/100g respectively. The results also show that endogenously processed foods have correspondingly higher iron, zinc and magnesium contents and, on the whole, a higher protein content. The results also indicate that commercially processed foods have higher fat and energy contents, and are therefore likely to pose a higher risk of causing adverse health conditions, such as obesity and cardiovascular disease, in susceptible individuals. Based on these findings, it can be concluded that endogenously processed foods have better nutrient value and health implications.

  7. The MCDM Model for Personnel Selection Based on SWARA and ARAS Methods

    Directory of Open Access Journals (Sweden)

    Darjan Karabasevic

    2015-05-01

    Competent employees are the key resource an organization needs to achieve success and, therefore, competitiveness on the market. The aim of the recruitment and selection process is to acquire personnel with the competencies required for a particular position within the company. Bearing in mind the fact that decision-makers often underuse formal decision-making methods, this paper aims to establish an MCDM model for the evaluation and selection of candidates in the process of the recruitment and selection of personnel, based on the SWARA and ARAS methods. Apart from providing an MCDM model, the paper additionally provides a set of evaluation criteria for the position of a sales manager (middle management) in the telecommunication industry, which is also used in the numerical example. On the basis of the numerical example, the proposed MCDM model can be successfully used for selecting candidates in the process of employment.
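
    As an illustration of the two methods named in the abstract, the sketch below computes SWARA weights from comparative importances and ranks candidates with ARAS. It is a minimal sketch in Python, assuming benefit-type criteria throughout; the comparative importances and candidate scores are hypothetical, not taken from the paper.

```python
import numpy as np

def swara_weights(s):
    """SWARA: s[j] is the comparative importance of criterion j+1
    relative to criterion j (criteria pre-sorted by importance)."""
    k = np.concatenate(([1.0], 1.0 + np.asarray(s)))
    q = 1.0 / np.cumprod(k)          # q_1 = 1, q_j = q_{j-1} / k_j
    return q / q.sum()

def aras_rank(X, w):
    """ARAS for benefit criteria: prepend the optimal alternative,
    normalize columns to sum to 1, weight, and score against the optimum."""
    X0 = np.vstack([X.max(axis=0), X])   # row 0 = optimal alternative
    R = X0 / X0.sum(axis=0)              # column-sum normalization
    S = (R * w).sum(axis=1)              # optimality function
    return S[1:] / S[0]                  # utility degree K_i

# Hypothetical scores of three candidates on four criteria,
# ordered from most to least important.
s = [0.20, 0.35, 0.30]                   # comparative importances
w = swara_weights(s)
X = np.array([[7., 8., 6., 9.],
              [9., 6., 8., 7.],
              [8., 7., 7., 8.]])
print(w, aras_rank(X, w))
```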

  8. Modeling of the thermal physical process and study on the reliability of linear energy density for selective laser melting

    Science.gov (United States)

    Xiang, Zhaowei; Yin, Ming; Dong, Guanhua; Mei, Xiaoqin; Yin, Guofu

    2018-06-01

    A finite element model considering volume shrinkage with the powder-to-dense process of the powder layer in selective laser melting (SLM) is established. A comparison between models that do and do not consider volume shrinkage or the powder-to-dense process is carried out. Further, a parametric analysis of laser power and scan speed is conducted and the reliability of linear energy density as a design parameter is investigated. The results show that the established model is an effective method with better accuracy in predicting the temperature distribution and the length and depth of the molten pool. The maximum temperature is more sensitive to laser power than to scan speed. The maximum heating and cooling rates increase with increasing scan speed at constant laser power, and with increasing laser power at constant scan speed. The simulation and experimental results reveal that linear energy density is not always reliable when used as a design parameter in SLM.
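
    Linear energy density is simply laser power divided by scan speed (J/mm). The toy calculation below, with hypothetical parameter pairs, illustrates the point raised at the end of the abstract: very different power/speed combinations can share the same linear energy density yet produce different melt pools.

```python
def linear_energy_density(power_w, scan_speed_mm_s):
    """Linear energy density in J/mm: laser power divided by scan speed."""
    return power_w / scan_speed_mm_s

# Hypothetical parameter pairs: all three share LED = 0.2 J/mm,
# which is why LED alone can be an unreliable design parameter.
for p, v in [(100, 500), (200, 1000), (400, 2000)]:
    print(p, "W @", v, "mm/s ->", linear_energy_density(p, v), "J/mm")
```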

  9. Selective hydrogenation processes in steam cracking

    Energy Technology Data Exchange (ETDEWEB)

    Bender, M.; Schroeter, M.K.; Hinrichs, M.; Makarczyk, P. [BASF SE, Ludwigshafen (Germany)]

    2010-12-30

    Hydrogen is the key elixir used to trim the quality of olefinic and aromatic product slates from steam crackers. Being co-produced in excess amounts in the thermal cracking process, a small part of the hydrogen is consumed in the "cold part" of a steam cracker to selectively hydrogenate unwanted, unsaturated hydrocarbons. The compositions of the various steam cracker product streams are adjusted by these processes to the outlet specifications. This presentation gives an overview of the state-of-the-art selective hydrogenation technologies available from BASF for these processes. (Published in summary form only) (orig.)

  10. Engineering development of selective agglomeration: Task 5, Bench- scale process testing

    Energy Technology Data Exchange (ETDEWEB)

    1991-09-01

    Under the overall objectives of DOE Contract "Engineering Development of Selective Agglomeration," there were a number of specific objectives in the Task 5 program. The prime objectives of Task 5 are highlighted below: (1) Maximize process performance in pyritic sulfur rejection and BTU recovery, (2) Produce a low ash product, (3) Compare the performance of the heavy agglomerant process based on diesel and the light agglomerant process using heptane, (4) Define optimum processing conditions for engineering design, (5) Provide first-level evaluation of product handleability, and (6) Explore and investigate process options/ideas which may enhance process performance and/or product handleability.

  11. Review and selection of unsaturated flow models

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1993-09-10

    Under the US Department of Energy (DOE), the Civilian Radioactive Waste Management System Management and Operating Contractor (CRWMS M&O) has the responsibility to review, evaluate, and document existing computer ground-water flow models; to conduct performance assessments; and to develop performance assessment models, where necessary. In the area of scientific modeling, the CRWMS M&O has the following responsibilities: to provide overall management and integration of modeling activities; to provide a framework for focusing modeling and model development; to identify areas that require increased or decreased emphasis; and to ensure that the tools necessary to conduct performance assessment are available. These responsibilities are being initiated through a three-step process consisting of a thorough review of existing models, testing of the models which best fit the established requirements, and recommendations for the future development that should be conducted. Future model enhancement will then focus on the models selected during this activity. Furthermore, in order to manage future model development, particularly in those areas requiring substantial enhancement, the three-step process will be updated and reported periodically.

  12. Analytical network process based optimum cluster head selection in wireless sensor network.

    Science.gov (United States)

    Farman, Haleem; Javed, Huma; Jan, Bilal; Ahmad, Jamil; Ali, Shaukat; Khalil, Falak Naz; Khan, Murad

    2017-01-01

    Wireless Sensor Networks (WSNs) are becoming ubiquitous in everyday life due to their applications in weather forecasting, surveillance, implantable sensors for health monitoring, and a plethora of other areas. A WSN is equipped with hundreds or thousands of small sensor nodes. As the size of a sensor node decreases, critical issues such as limited energy, computation time and limited memory become even more pronounced. In such a case, network lifetime mainly depends on efficient use of the available resources. Organizing nearby nodes into clusters makes it convenient to efficiently manage each cluster as well as the overall network. In this paper, we extend our previous work on a grid-based hybrid network deployment approach, in which a merge-and-split technique was proposed to construct the network topology. Having constructed the topology with this technique, we use an analytical network process (ANP) model for cluster head (CH) selection in the WSN. Five distinct parameters are considered for CH selection: distance from nodes (DistNode), residual energy level (REL), distance from centroid (DistCent), number of times the node has been selected as cluster head (TCH) and merged node (MN). The problem of CH selection based on these parameters is tackled as a multi-criteria decision system, for which the ANP method is used for optimum cluster head selection. The main contribution of this work is to check the applicability of the ANP model for cluster head selection in WSNs. In addition, a sensitivity analysis is carried out to check the stability of the alternatives (available candidate nodes) and their ranking for different scenarios. The simulation results show that the proposed method outperforms existing energy-efficient clustering protocols in terms of optimum CH selection and in minimizing the CH reselection process, which extends overall network lifetime. This paper analyzes the ANP method used for CH selection with better understanding of the dependencies of
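
    A full ANP formulation builds a supermatrix over interdependent criteria; the sketch below is a much-simplified weighted-sum stand-in that scores hypothetical candidate nodes on the five criteria named in the abstract. The weights and the benefit/cost orientation of each criterion are assumptions for illustration only.

```python
import numpy as np

# Hypothetical candidate nodes scored on the five criteria from the paper:
# DistNode, REL, DistCent, TCH, MN. REL and MN are treated as benefit
# criteria; the distances and TCH as cost criteria (lower is better).
crit = np.array([   # rows = candidate nodes
    [12.0, 0.80, 5.0, 1, 1],
    [ 8.0, 0.60, 9.0, 3, 0],
    [15.0, 0.95, 4.0, 0, 1],
])
benefit = np.array([False, True, False, False, True])
w = np.array([0.25, 0.30, 0.20, 0.15, 0.10])   # assumed priority weights

# Min-max normalize, flipping cost criteria so higher is always better.
lo, hi = crit.min(axis=0), crit.max(axis=0)
norm = (crit - lo) / (hi - lo)
norm[:, ~benefit] = 1.0 - norm[:, ~benefit]

scores = norm @ w
print("cluster head -> node", scores.argmax(), scores)
```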

  13. Development of Physics-Based Numerical Models for Uncertainty Quantification of Selective Laser Melting Processes

    Data.gov (United States)

    National Aeronautics and Space Administration — The goal of the proposed research is to characterize the influence of process parameter variability inherent to Selective Laser Melting (SLM) and performance effect...

  14. Modeling Dynamic Systems with Efficient Ensembles of Process-Based Models.

    Directory of Open Access Journals (Sweden)

    Nikola Simidjievski

    Ensembles are a well-established machine learning paradigm, leading to accurate and robust models, predominantly applied to predictive modeling tasks. Ensemble models comprise a finite set of diverse predictive models whose combined output is expected to yield an improved predictive performance as compared to an individual model. In this paper, we propose a new method for learning ensembles of process-based models of dynamic systems. The process-based modeling paradigm employs domain-specific knowledge to automatically learn models of dynamic systems from time-series observational data. Previous work has shown that ensembles based on sampling observational data (i.e., bagging and boosting) significantly improve predictive performance of process-based models. However, this improvement comes at the cost of a substantial increase in the computational time needed for learning. To address this problem, the paper proposes a method that aims at efficiently learning ensembles of process-based models, while maintaining their accurate long-term predictive performance. This is achieved by constructing ensembles by sampling domain-specific knowledge instead of sampling data. We apply the proposed method to, and evaluate its performance on, a set of problems of automated predictive modeling in three lake ecosystems using a library of process-based knowledge for modeling population dynamics. The experimental results identify the optimal design decisions regarding the learning algorithm. The results also show that the proposed ensembles yield significantly more accurate predictions of population dynamics as compared to individual process-based models. Finally, while their predictive performance is comparable to that of ensembles obtained with the state-of-the-art methods of bagging and boosting, they are substantially more efficient.

  15. Modeling selective attention using a neuromorphic analog VLSI device.

    Science.gov (United States)

    Indiveri, G

    2000-12-01

    Attentional mechanisms are required to overcome the problem of flooding a limited processing capacity system with information. They are present in biological sensory systems and can be a useful engineering tool for artificial visual systems. In this article we present a hardware model of a selective attention mechanism implemented on a very large-scale integration (VLSI) chip, using analog neuromorphic circuits. The chip exploits a spike-based representation to receive, process, and transmit signals. It can be used as a transceiver module for building multichip neuromorphic vision systems. We describe the circuits that carry out the main processing stages of the selective attention mechanism and provide experimental data for each circuit. We demonstrate the expected behavior of the model at the system level by stimulating the chip with both artificially generated control signals and signals obtained from a saliency map, computed from an image containing several salient features.

  16. Behavioral optimization models for multicriteria portfolio selection

    Directory of Open Access Journals (Sweden)

    Mehlawat Mukesh Kumar

    2013-01-01

    In this paper, the behavioral construct of suitability is used to develop a multicriteria decision-making framework for portfolio selection. To achieve this purpose, we rely on multiple methodologies. The analytic hierarchy process technique is used to model the suitability considerations with a view to obtaining a suitability performance score for each asset. A fuzzy multiple criteria decision-making method is used to obtain the financial quality score of each asset based upon the investor's rating of the financial criteria. Two optimization models are developed for optimal asset allocation, considering financial and suitability criteria simultaneously. An empirical study is conducted on randomly selected assets from the National Stock Exchange, Mumbai, India, to demonstrate the effectiveness of the proposed methodology.

  17. HOW DO STUDENTS SELECT SOCIAL NETWORKING SITES? AN ANALYTIC HIERARCHY PROCESS (AHP) MODEL

    Directory of Open Access Journals (Sweden)

    Chun Meng Tang

    2015-12-01

    Social networking sites are popular among university students, and students today are indeed spoiled for choice. New social networking sites sprout up amid popular sites, while some existing ones die out. Given the choice of so many social networking sites, how do students decide which one they will sign up for and stay on as an active user? The answer to this question is of interest to social networking site designers and marketers. The market for social networking sites is highly competitive. To maintain the current user base and continue to attract new users, how should social networking sites design their sites? Marketers spend a fairly large percentage of their marketing budget on social media marketing. To formulate an effective social media strategy, how well do marketers understand the users of social networking sites? Learning from website evaluation studies, this study intends to provide some answers to these questions by examining how university students decide between two popular social networking sites, Facebook and Twitter. We first developed an analytic hierarchy process (AHP) model of four main selection criteria and 12 sub-criteria, and then administered a questionnaire to a group of university students attending a course at a Malaysian university. AHP analyses of the responses from 12 respondents provided insight into the decision-making process involved in students' selection of social networking sites. Of the four main criteria, privacy was the top concern, followed by functionality, usability, and content. The sub-criteria of key concern to the students were apps, revenue-generating opportunities, ease of use, and information security. Between Facebook and Twitter, the students thought that Facebook was the better choice. This information is useful for social networking site designers to design sites that are more relevant to their users' needs, and for marketers to craft more effective
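
    For readers unfamiliar with AHP, the sketch below derives priority weights from a pairwise comparison matrix via the principal eigenvector and checks Saaty's consistency ratio. The judgments are hypothetical, merely ordered to echo the study's finding that privacy ranked above functionality, usability, and content.

```python
import numpy as np

def ahp_weights(A):
    """Priority vector from a pairwise comparison matrix via the
    principal eigenvector, plus Saaty's consistency ratio."""
    vals, vecs = np.linalg.eig(A)
    k = vals.real.argmax()
    w = np.abs(vecs[:, k].real)
    w /= w.sum()
    n = A.shape[0]
    ci = (vals.real[k] - n) / (n - 1)
    ri = {3: 0.58, 4: 0.90, 5: 1.12}[n]   # Saaty's random indices
    return w, ci / ri

# Hypothetical pairwise judgments over the four main criteria from the
# study: privacy, functionality, usability, content.
A = np.array([[1.0, 3.0, 4.0, 5.0],
              [1/3, 1.0, 2.0, 3.0],
              [1/4, 1/2, 1.0, 2.0],
              [1/5, 1/3, 1/2, 1.0]])
w, cr = ahp_weights(A)
print(w, "CR =", cr)   # CR < 0.1 indicates acceptable consistency
```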

  18. Waste package materials selection process

    International Nuclear Information System (INIS)

    Roy, A.K.; Fish, R.L.; McCright, R.D.

    1994-01-01

    The Office of Civilian Radioactive Waste Management (OCRWM) of the United States Department of Energy (USDOE) is evaluating a site at Yucca Mountain in southern Nevada to determine its suitability as a mined geologic disposal system (MGDS) for the disposal of high-level nuclear waste (HLW). The B&W Fuel Company (BWFC), as part of the Management and Operating (M&O) team in support of the Yucca Mountain Site Characterization Project (YMP), is responsible for designing and developing the waste package for this potential repository. As part of this effort, Lawrence Livermore National Laboratory (LLNL) is responsible for testing materials and developing models for the materials to be used in the waste package. This paper presents the selection process for the materials needed to fabricate the different components of the waste package.

  19. Applying the Business Process and Practice Alignment Meta-model: Daily Practices and Process Modelling

    Directory of Open Access Journals (Sweden)

    Ventura Martins Paula

    2017-03-01

    Background: Business Process Modelling (BPM) is one of the most important phases of information system design. Business Process (BP) meta-models allow capturing informational and behavioural aspects of business processes. Unfortunately, standard BP meta-modelling approaches focus only on process description, providing different BP models. It is not possible to compare them and identify related daily practices in order to improve BP models. This lack of information implies that further research on BP meta-models is needed to reflect the evolution/change in BPs. Considering this limitation, this paper introduces a new BP meta-model, the Business Process and Practice Alignment Meta-model (BPPAMeta-model). Our intention is to present a meta-model that addresses features related to the alignment between daily work practices and BP descriptions. Objectives: This paper intends to present a meta-model which integrates daily work information into coherent and sound process definitions. Methods/Approach: The methodology employed in the research follows a design-science approach. Results: The results of the case study relate to the application of the proposed meta-model to align the specification of a BP model with work-practice models. Conclusions: This meta-model can be used within the BPPAM methodology to specify or improve business process models based on work practice descriptions.

  20. Comparative Analysis of Site-Selection Process for Power Plants in Korea: Cases of Thermal, Nuclear, and Renewable Energies

    International Nuclear Information System (INIS)

    Kang, M.; Lee, M.; Yoon, J. W.; Choi, H. C.; Chu, C.; Lee, H.; Park, J.

    2017-01-01

    There are various conflicts related to power generation facilities; however, the conflicts that arise during the process of attracting facilities or selecting sites, as in the previous cases, can eventually have a great influence on the implementation of national energy policy or strategy. This study analyzed the conflicts that occurred in the site-selection policy for power generation facilities through case studies. We selected the most recent conflict cases for each energy source, identified the qualitative contextual characteristics of the cases and tried to suggest policy leverages. This study concludes that the decision-making system for site selection of power plants is still insufficient to address the causes of conflict, owing to variable circumstances such as environmental events and the range of stakeholders involved. However, the conclusions obtained from the case studies are difficult to generalize without specific prescriptions, so further studies in these areas are required.

  1. Selecting an optimal mixed products using grey relationship model

    Directory of Open Access Journals (Sweden)

    Farshad Faezy Razi

    2013-06-01

    This paper presents an integrated supplier selection and inventory management approach using a grey relationship model (GRM) as well as a multi-objective decision-making process. The proposed model first ranks different suppliers based on the GRM technique and then determines the optimum level of inventory by considering different objectives. To show the implementation of the proposed model, we use benchmark data presented by Talluri and Baker [Talluri, S., & Baker, R. C. (2002). A multi-phase mathematical programming approach for effective supply chain design. European Journal of Operational Research, 141(3), 544-558]. The preliminary results indicate that the proposed model is capable of handling different criteria for supplier selection.
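
    Below is a minimal sketch of grey relational analysis of the kind used for the supplier-ranking step, assuming benefit-type criteria, equal criterion weights, and the customary distinguishing coefficient of 0.5; the supplier data are hypothetical.

```python
import numpy as np

def grey_relational_grades(X, rho=0.5):
    """Grey relational analysis: compare each alternative (row) against
    the column-wise ideal of the min-max-normalized data."""
    Xn = (X - X.min(axis=0)) / (X.max(axis=0) - X.min(axis=0))
    delta = np.abs(Xn - Xn.max(axis=0))        # deviation from the ideal
    coef = (delta.min() + rho * delta.max()) / (delta + rho * delta.max())
    return coef.mean(axis=1)                   # equal criterion weights

# Hypothetical supplier scores on three benefit criteria.
X = np.array([[80., 7., 0.95],
              [95., 5., 0.90],
              [70., 9., 0.99]])
print(grey_relational_grades(X))  # higher grade = closer to ideal
```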

  2. On the selection of ordinary differential equation models with application to predator-prey dynamical models.

    Science.gov (United States)

    Zhang, Xinyu; Cao, Jiguo; Carroll, Raymond J

    2015-03-01

    We consider model selection and estimation in a context where there are competing ordinary differential equation (ODE) models, and all the models are special cases of a "full" model. We propose a computationally inexpensive approach that employs statistical estimation of the full model, followed by a combination of a least squares approximation (LSA) and the adaptive Lasso. We show the resulting method, here called the LSA method, to be an (asymptotically) oracle model selection method. The finite-sample performance of the proposed LSA method is investigated with Monte Carlo simulations, in which we examine the percentage of true ODE models selected, the efficiency of the parameter estimation compared to simply using the full and true models, and the coverage probabilities of the estimated confidence intervals for ODE parameters, all of which show satisfactory performance. Our method is also demonstrated by selecting the best predator-prey ODE to model a lynx and hare population dynamical system among several well-known and biologically interpretable ODE models. © 2014, The International Biometric Society.

  3. Development of Solar Drying Model for Selected Cambodian Fish Species

    Directory of Open Access Journals (Sweden)

    Anna Hubackova

    2014-01-01

    Solar drying was investigated as one of the promising techniques for fish processing in Cambodia and was compared to conventional drying in an electric oven. Five typical Cambodian fish species were selected for this study. Mean solar drying temperature and drying air relative humidity were 55.6°C and 19.9%, respectively. The overall solar dryer efficiency was 12.37%, which is typical for natural convection solar dryers. The average evaporative capacity of the solar dryer was 0.049 kg·h−1. Based on the coefficient of determination (R2), the chi-square (χ2) test, and the root-mean-square error (RMSE), the most suitable models describing natural convection solar drying kinetics were the Logarithmic model, the Diffusion approximate model, and the Two-term model for climbing perch and Nile tilapia, for swamp eel and walking catfish, and for Channa fish, respectively. In the case of electric oven drying, the Modified Page 1 model shows the best results for all investigated fish species except Channa fish, where the Two-term model is the best one. Sensory evaluation shows that the most preferred fish is climbing perch, followed by Nile tilapia and walking catfish. This study brings new knowledge about the drying kinetics of fresh-water fish species in Cambodia and confirms solar drying as an acceptable technology for fish processing.
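
    The Logarithmic model named in the abstract has the form MR = a·exp(−kt) + c. The sketch below fits it to hypothetical moisture-ratio data with SciPy and reports the same goodness-of-fit statistics the study uses (R2, reduced chi-square, RMSE).

```python
import numpy as np
from scipy.optimize import curve_fit

def logarithmic_model(t, a, k, c):
    """Thin-layer Logarithmic drying model: MR = a*exp(-k*t) + c."""
    return a * np.exp(-k * t) + c

# Hypothetical moisture-ratio measurements over drying time (hours).
t = np.array([0., 1., 2., 3., 4., 5., 6.])
mr = np.array([1.00, 0.74, 0.55, 0.42, 0.33, 0.27, 0.24])

popt, _ = curve_fit(logarithmic_model, t, mr, p0=(1.0, 0.3, 0.0))
pred = logarithmic_model(t, *popt)

rmse = np.sqrt(np.mean((mr - pred) ** 2))
r2 = 1 - np.sum((mr - pred) ** 2) / np.sum((mr - mr.mean()) ** 2)
chi2 = np.sum((mr - pred) ** 2) / (len(t) - len(popt))  # reduced chi-square
print(popt, r2, rmse, chi2)
```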

  4. Multi-Criteria Decision Making For Determining A Simple Model of Supplier Selection

    Science.gov (United States)

    Harwati

    2017-06-01

    Supplier selection is a decision involving many criteria. Supplier selection models usually involve more than five main criteria and more than ten sub-criteria; in fact, many models include more than 20 criteria. Involving too many criteria sometimes makes supplier selection models difficult to apply in many companies. This research focuses on designing a supplier selection model that is easy and simple to apply in a company. The Analytical Hierarchy Process (AHP) is used to weight the criteria. The analysis shows that four criteria are easy and simple to use for selecting suppliers: price (weight 0.4), shipment (weight 0.3), quality (weight 0.2) and services (weight 0.1). A real-case simulation shows that the simple model provides the same decision as a more complex model.
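
    With only four criteria and fixed AHP weights, the model reduces to a weighted sum. The sketch below applies the weights reported in the abstract to hypothetical supplier ratings on a 1-10 scale where higher is better for every criterion.

```python
# Weights from the abstract; supplier ratings are hypothetical.
weights = {"price": 0.4, "shipment": 0.3, "quality": 0.2, "service": 0.1}
suppliers = {
    "A": {"price": 7, "shipment": 9, "quality": 6, "service": 8},
    "B": {"price": 9, "shipment": 6, "quality": 8, "service": 7},
}
scores = {s: sum(weights[c] * r[c] for c in weights) for s, r in suppliers.items()}
print(max(scores, key=scores.get), scores)
```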

  5. Modeling selective pressures on phytoplankton in the global ocean.

    Directory of Open Access Journals (Sweden)

    Jason G Bragg

    Our view of marine microbes is transforming, as culture-independent methods facilitate rapid characterization of microbial diversity. It is difficult to assimilate this information into our understanding of marine microbe ecology and evolution, because their distributions, traits, and genomes are shaped by forces that are complex and dynamic. Here we incorporate diverse forces--physical, biogeochemical, ecological, and mutational--into a global ocean model to study selective pressures on a simple trait in a widely distributed lineage of picophytoplankton: the nitrogen use abilities of Synechococcus and Prochlorococcus cyanobacteria. Some Prochlorococcus ecotypes have lost the ability to use nitrate, whereas their close relatives, marine Synechococcus, typically retain it. We impose mutations for the loss of nitrogen use abilities in modeled picophytoplankton, and ask: in which parts of the ocean are mutants most disadvantaged by losing the ability to use nitrate, and in which parts are they least disadvantaged? Our model predicts that this selective disadvantage is smallest for picophytoplankton that live in tropical regions where Prochlorococcus are abundant in the real ocean. Conversely, the selective disadvantage of losing the ability to use nitrate is larger for modeled picophytoplankton that live at higher latitudes, where Synechococcus are abundant. In regions where we expect Prochlorococcus and Synechococcus populations to cycle seasonally in the real ocean, we find that model ecotypes with seasonal population dynamics similar to Prochlorococcus are less disadvantaged by losing the ability to use nitrate than model ecotypes with seasonal population dynamics similar to Synechococcus. The model predictions for the selective advantage associated with nitrate use are broadly consistent with the distribution of this ability among marine picocyanobacteria, and at finer scales, can provide insights into interactions between temporally varying

  6. Modeling selective pressures on phytoplankton in the global ocean.

    Science.gov (United States)

    Bragg, Jason G; Dutkiewicz, Stephanie; Jahn, Oliver; Follows, Michael J; Chisholm, Sallie W

    2010-03-10

    Our view of marine microbes is transforming, as culture-independent methods facilitate rapid characterization of microbial diversity. It is difficult to assimilate this information into our understanding of marine microbe ecology and evolution, because their distributions, traits, and genomes are shaped by forces that are complex and dynamic. Here we incorporate diverse forces--physical, biogeochemical, ecological, and mutational--into a global ocean model to study selective pressures on a simple trait in a widely distributed lineage of picophytoplankton: the nitrogen use abilities of Synechococcus and Prochlorococcus cyanobacteria. Some Prochlorococcus ecotypes have lost the ability to use nitrate, whereas their close relatives, marine Synechococcus, typically retain it. We impose mutations for the loss of nitrogen use abilities in modeled picophytoplankton, and ask: in which parts of the ocean are mutants most disadvantaged by losing the ability to use nitrate, and in which parts are they least disadvantaged? Our model predicts that this selective disadvantage is smallest for picophytoplankton that live in tropical regions where Prochlorococcus are abundant in the real ocean. Conversely, the selective disadvantage of losing the ability to use nitrate is larger for modeled picophytoplankton that live at higher latitudes, where Synechococcus are abundant. In regions where we expect Prochlorococcus and Synechococcus populations to cycle seasonally in the real ocean, we find that model ecotypes with seasonal population dynamics similar to Prochlorococcus are less disadvantaged by losing the ability to use nitrate than model ecotypes with seasonal population dynamics similar to Synechococcus. The model predictions for the selective advantage associated with nitrate use are broadly consistent with the distribution of this ability among marine picocyanobacteria, and at finer scales, can provide insights into interactions between temporally varying ocean processes and

  7. Characteristics of products generated by selective sintering and stereolithography rapid prototyping processes

    Science.gov (United States)

    Cariapa, Vikram

    1993-01-01

    The trend in the modern global economy towards free-market policies has motivated companies to use rapid prototyping technologies not only to reduce product development cycle time but also to maintain their competitive edge. A rapid prototyping technology is one which combines computer-aided design with computer-controlled tracking of a focused high-energy source (e.g., lasers, heat) on modern ceramic powders, metallic powders, plastics or photosensitive liquid resins in order to produce prototypes or models. At present, except for the process of shape melting, most rapid prototyping processes generate products that are only dimensionally similar to those of the desired end product. There is an urgent need, therefore, to enhance the understanding of the characteristics of these processes in order to realize their potential for production. Currently, the commercial market is dominated by four rapid prototyping processes, namely selective laser sintering, stereolithography, fused deposition modelling and laminated object manufacturing. This phase of the research has focused on the selective laser sintering and stereolithography rapid prototyping processes. A theoretical model for these processes is under development. Different rapid prototyping sites supplied test specimens (based on ASTM 638-84, Type I) that have been measured and tested to provide a database on surface finish, dimensional variation and ultimate tensile strength. Further plans call for developing and verifying the theoretical models by carefully designed experiments. This will be a joint effort between NASA and other prototyping centers to generate a larger database, thus encouraging more widespread usage by product designers.

  8. Within-host selection of drug resistance in a mouse model reveals dose-dependent selection of atovaquone resistance mutations

    NARCIS (Netherlands)

    Nuralitha, Suci; Murdiyarso, Lydia S.; Siregar, Josephine E.; Syafruddin, Din; Roelands, Jessica; Verhoef, Jan; Hoepelman, Andy I.M.; Marzuki, Sangkot

    2017-01-01

    The evolutionary selection of malaria parasites within an individual host plays a critical role in the emergence of drug resistance. We have compared the selection of atovaquone resistance mutants in mouse models reflecting two different causes of failure of malaria treatment, an inadequate

  10. The Added Value of the Project Selection Process

    Directory of Open Access Journals (Sweden)

    Adel Oueslati

    2016-06-01

    Full Text Available The project selection process comes in the first stage of the overall project management life cycle. It does have a very important impact on organization success. The present paper provides defi nitions of the basic concepts and tools related to the project selection process. It aims to stress the added value of this process for the entire organization success. The mastery of the project selection process is the right way for any organization to ensure that it will do the right project with the right resources at the right time and within the right priorities

  11. Fermentation process diagnosis using a mathematical model

    Energy Technology Data Exchange (ETDEWEB)

    Yerushalmi, L; Volesky, B; Votruba, J

    1988-09-01

    The intriguing physiology of a solvent-producing strain of Clostridium acetobutylicum led to the synthesis of a mathematical model of the acetone-butanol fermentation process. The model presented is capable of describing the process dynamics and the culture behavior during a standard and a substandard acetone-butanol fermentation. In addition to the process kinetic parameters, the model includes the culture's physiological parameters, such as the cellular membrane permeability and the number of membrane sites for active transport of sugar. Computer simulation studies of the process under different culture conditions used the model and quantitatively pointed out the importance of selected culture parameters that characterize the cell membrane behavior and play an important role in the control of solvent synthesis by the cell. The theoretical predictions of the new model were confirmed by experimental determination of the cellular membrane permeability.

  12. Goal selection versus process control in a brain-computer interface based on sensorimotor rhythms.

    Science.gov (United States)

    Royer, Audrey S; He, Bin

    2009-02-01

    In a brain-computer interface (BCI) utilizing a process control strategy, the signal from the cortex is used to control the fine motor details normally handled by other parts of the brain. In a BCI utilizing a goal selection strategy, the signal from the cortex is used to determine the overall end goal of the user, and the BCI controls the fine motor details. A BCI based on goal selection may be an easier and more natural system than one based on process control. Although goal selection in theory may surpass process control, the two had never been directly compared, as we report here. Eight young healthy human subjects participated in the present study, three trained and five naïve in BCI usage. Scalp-recorded electroencephalograms (EEG) were used to control a computer cursor during five different paradigms. The paradigms were similar in their underlying signal processing and used the same control signal. However, three were based on goal selection, and two on process control. For both the trained and naïve populations, goal selection had more hits per run, was faster, was more accurate (for seven out of eight subjects), and had a higher information transfer rate than process control. Goal selection outperformed process control in every measure studied in the present investigation.

  13. The cost of ethanol production from lignocellulosic biomass -- A comparison of selected alternative processes. Final report

    Energy Technology Data Exchange (ETDEWEB)

    Grethlein, H.E.; Dill, T.

    1993-04-30

    The purpose of this report is to compare the cost of selected alternative processes for the conversion of lignocellulosic biomass to ethanol. In turn, this information will be used by the ARS/USDA to guide the management of research and development programs in biomass conversion. The report identifies where the cost leverages are for the selected alternatives and what performance parameters need to be achieved to improve the economics. The process alternatives considered here are not exhaustive, but are selected on the basis of having a reasonable potential for improving the economics of producing ethanol from biomass. When other alternatives come under consideration, they should be evaluated by the same methodology used in this report to give fair comparisons of opportunities. A generic plant design is developed for an annual production of 25 million gallons of anhydrous ethanol using corn stover as the model substrate at $30/dry ton. Standard chemical engineering techniques are used to give first-order estimates of the capital and operating costs. Following the format of the corn-to-ethanol plant, there are nine sections to the plant: feed preparation, pretreatment, hydrolysis, fermentation, distillation and dehydration, stillage evaporation, storage and denaturation, utilities, and enzyme production. Three pretreatment alternatives are considered: the AFEX process, the modified AFEX process (abbreviated MAFEX), and the STAKETECH process. These all use enzymatic hydrolysis, so an enzyme production section is included in the plant. STAKETECH is the only commercially available process among the alternatives.

  14. Flexible selection process based on skills applied to the communication manager position

    Directory of Open Access Journals (Sweden)

    Rita Jácome López

    2016-03-01

    Full Text Available The Communication Manager position is relevant to the reputation of an organization, because it influences through its own personal image and reputation; therefore, the selection of this manager has to be painstaking. In this paper we propose a flexible selection process based on fuzzy logic, to fit the skills of candidates for the job and to support decision making. We present two techniques:one that selects the best applicant and another one that compares candidates with an ideal constructed with information provided by Spanish managers.

  15. A model evaluation checklist for process-based environmental models

    Science.gov (United States)

    Jackson-Blake, Leah

    2015-04-01

    Mechanistic catchment-scale phosphorus models appear to perform poorly where diffuse sources dominate. The reasons for this were investigated for one commonly-applied model, the INtegrated model of CAtchment Phosphorus (INCA-P). Model output was compared to 18 months of daily water quality monitoring data in a small agricultural catchment in Scotland, and model structure, key model processes and internal model responses were examined. Although the model broadly reproduced dissolved phosphorus dynamics, it struggled with particulates. The reasons for poor performance were explored, together with ways in which improvements could be made. The process of critiquing and assessing model performance was then generalised to provide a broadly-applicable model evaluation checklist, incorporating: (1) Calibration challenges, relating to difficulties in thoroughly searching a high-dimensional parameter space and in selecting appropriate means of evaluating model performance. In this study, for example, model simplification was identified as a necessary improvement to reduce the number of parameters requiring calibration, whilst the traditionally-used Nash-Sutcliffe model performance statistic was not able to discriminate between realistic and unrealistic model simulations, and alternative statistics were needed. (2) Data limitations, relating to a lack of (or uncertainty in) input data, data to constrain model parameters, data for model calibration and testing, and data to test internal model processes. In this study, model reliability could be improved by addressing all four kinds of data limitation. For example, there was insufficient surface water monitoring data for model testing against a dataset independent of that used in calibration, whilst additional monitoring of groundwater and effluent phosphorus inputs would help distinguish between alternative plausible model parameterisations. (3) Model structural inadequacies, whereby model structure may inadequately represent

  16. A Hybrid Multiple Criteria Decision Making Model for Supplier Selection

    Directory of Open Access Journals (Sweden)

    Chung-Min Wu

    2013-01-01

    Sustainable supplier selection is a vital part of managing a sustainable supply chain. In this study, a hybrid multiple criteria decision making (MCDM) model is applied to select the optimal supplier. The fuzzy Delphi method, which can lead to better criteria selection, is used to modify the criteria. Considering the interdependence among the selection criteria, the analytic network process (ANP) is then used to obtain their weights. To avoid the calculations and additional pairwise comparisons of ANP, a technique for order preference by similarity to ideal solution (TOPSIS) is used to rank the alternatives. The combined use of the fuzzy Delphi method, ANP, and TOPSIS, the proposal of an MCDM model for supplier selection, and its application to a real case are the unique features of this study.
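
    The TOPSIS step of such a hybrid model is straightforward to implement. Below is a minimal sketch: vector-normalize the decision matrix, weight it, and rank alternatives by closeness to the ideal solution. The supplier data and weights are hypothetical; in the paper the weights would come from ANP.

```python
import numpy as np

def topsis(X, w, benefit):
    """Rank alternatives by relative closeness to the ideal solution."""
    R = X / np.linalg.norm(X, axis=0)        # vector normalization
    V = R * w                                # weighted normalized matrix
    ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
    anti = np.where(benefit, V.min(axis=0), V.max(axis=0))
    d_pos = np.linalg.norm(V - ideal, axis=1)
    d_neg = np.linalg.norm(V - anti, axis=1)
    return d_neg / (d_pos + d_neg)           # closeness coefficient

# Hypothetical suppliers scored on cost, delivery rating, and quality;
# cost is the only non-benefit criterion.
X = np.array([[250., 8., 0.92],
              [220., 6., 0.95],
              [300., 9., 0.90]])
w = np.array([0.5, 0.3, 0.2])
print(topsis(X, w, benefit=np.array([False, True, True])))
```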

  17. Large deviations for the Fleming-Viot process with neutral mutation and selection

    OpenAIRE

    Dawson, Donald; Feng, Shui

    1998-01-01

    Large deviation principles are established for the Fleming-Viot processes with neutral mutation and selection, and the corresponding equilibrium measures as the sampling rate goes to 0. All results are first proved for the finite allele model, and then generalized, through the projective limit technique, to the infinite allele model. Explicit expressions are obtained for the rate functions.

  18. Simultaneous data pre-processing and SVM classification model selection based on a parallel genetic algorithm applied to spectroscopic data of olive oils.

    Science.gov (United States)

    Devos, Olivier; Downey, Gerard; Duponchel, Ludovic

    2014-04-01

    Classification is an important task in chemometrics. For several years now, support vector machines (SVMs) have proven to be powerful for infrared spectral data classification. However, such methods require the optimisation of parameters in order to control the risk of overfitting and the complexity of the decision boundary. Furthermore, it is established that the prediction ability of classification models can be improved by using pre-processing to remove unwanted variance in the spectra. In this paper we propose a new methodology based on a genetic algorithm (GA) for the simultaneous optimisation of SVM parameters and pre-processing (GENOPT-SVM). The method has been tested for the discrimination of the geographical origin of Italian olive oil (Ligurian and non-Ligurian) on the basis of near-infrared (NIR) or mid-infrared (FTIR) spectra. Different classification models (PLS-DA, SVM with mean-centred data, GENOPT-SVM) have been tested and statistically compared using McNemar's test. For both datasets, SVM with optimised pre-processing gives models with higher accuracy than those obtained with PLS-DA on pre-processed data. In the case of the NIR dataset, most of this accuracy improvement (86.3% compared with 82.8% for PLS-DA) occurred using only a single pre-processing step. For the FTIR dataset, three optimised pre-processing steps were required to obtain an SVM model with a significant accuracy improvement (82.2%) compared with the one obtained with PLS-DA (78.6%). Furthermore, this study demonstrates that even SVM models have to be developed on the basis of well-corrected spectral data in order to obtain higher classification rates. Copyright © 2013 Elsevier Ltd. All rights reserved.
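
    As a rough illustration of the GENOPT-SVM idea, the sketch below runs a bare-bones genetic algorithm over a pre-processing choice and the SVM parameters C and gamma, scoring each chromosome by cross-validated accuracy. It uses synthetic data and simple stand-in pre-processing steps (identity, standardization, Savitzky-Golay smoothing) rather than the OSC/EMSC-style spectral corrections of the paper.

```python
import numpy as np
from scipy.signal import savgol_filter
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import FunctionTransformer, StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=200, n_features=50, random_state=0)

# Candidate pre-processing steps (stand-ins for spectral corrections).
preps = [
    FunctionTransformer(),                                   # none
    StandardScaler(),                                        # centre/scale
    FunctionTransformer(lambda Z: savgol_filter(Z, 11, 2)),  # smoothing
]

def fitness(gene):
    prep, logC, logG = int(gene[0]) % len(preps), gene[1], gene[2]
    model = make_pipeline(preps[prep], SVC(C=2.0**logC, gamma=2.0**logG))
    return cross_val_score(model, X, y, cv=3).mean()

# Bare-bones GA: tournament selection, blend crossover, Gaussian mutation.
pop = rng.uniform([0, -5, -15], [3, 15, 3], size=(20, 3))
for _ in range(10):
    fit = np.array([fitness(g) for g in pop])
    idx = [max(rng.integers(0, 20, 2), key=lambda i: fit[i])
           for _ in range(20)]
    parents = pop[idx]
    children = (parents + parents[rng.permutation(20)]) / 2.0
    children += rng.normal(0, 0.5, children.shape)           # mutation
    pop = np.clip(children, [0, -5, -15], [3, 15, 3])
best = max(pop, key=fitness)
print("best cross-validated accuracy:", fitness(best))
```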

  19. Laser dimpling process parameters selection and optimization using surrogate-driven process capability space

    Science.gov (United States)

    Ozkat, Erkan Caner; Franciosa, Pasquale; Ceglarek, Dariusz

    2017-08-01

    Remote laser welding technology offers opportunities for high production throughput at a competitive cost. However, the remote laser welding of zinc-coated sheet metal parts in lap joint configuration poses a challenge due to the difference between the melting temperature of the steel (∼1500 °C) and the vapourizing temperature of the zinc (∼907 °C). In fact, the zinc layer at the faying surface is vapourized and the vapour might be trapped within the melt pool, leading to weld defects. Various solutions have been proposed to overcome this problem over the years. Among them, laser dimpling has been adopted by manufacturers because of its flexibility and effectiveness, along with its cost advantages. In essence, the dimple works as a spacer between the two sheets in the lap joint and allows the zinc vapour to escape during the welding process, thereby preventing weld defects. However, there is a lack of comprehensive characterization of the dimpling process for effective implementation in a real manufacturing system, taking into consideration inherent changes in the variability of process parameters. This paper introduces a methodology to develop (i) a surrogate model for dimpling process characterization considering a multiple-input (i.e. key control characteristics) and multiple-output (i.e. key performance indicators) system, by conducting physical experimentation and using multivariate adaptive regression splines; (ii) a process capability space (Cp-Space) based on the developed surrogate model that allows the estimation of a desired process fallout rate in the case of violation of process requirements in the presence of stochastic variation; and (iii) selection and optimization of the process parameters based on the process capability space. The proposed methodology provides a unique capability to: (i) simulate the effect of process variation as generated by the manufacturing process; (ii) model quality requirements with multiple and coupled quality requirements; and (iii

  20. Evaluation and selection of in-situ leaching mining method using analytic hierarchy process

    International Nuclear Information System (INIS)

    Zhao Heyong; Tan Kaixuan; Liu Huizhen

    2007-01-01

    In view of the complicated conditions and main influencing factors of in-situ leaching mining, a model and procedure for evaluating and selecting in-situ leaching mining methods are established based on the analytic hierarchy process. Taking a uranium mine in Xinjiang, China, as an example, the application of this model is presented. The results of the analyses and calculations indicate that acid leaching is the optimum option. (authors)

  1. Consistency in Estimation and Model Selection of Dynamic Panel Data Models with Fixed Effects

    Directory of Open Access Journals (Sweden)

    Guangjie Li

    2015-07-01

    We examine the relationship between consistent parameter estimation and model selection for autoregressive panel data models with fixed effects. We find that the transformation of fixed effects proposed by Lancaster (2002) does not necessarily lead to consistent estimation of common parameters when some true exogenous regressors are excluded. We propose a data-dependent way to specify the prior of the autoregressive coefficient and argue for comparing different model specifications before parameter estimation. Model selection properties of Bayes factors and the Bayesian information criterion (BIC) are investigated. When model uncertainty is substantial, we recommend the use of Bayesian Model Averaging to obtain point estimators with lower root mean squared errors (RMSE). We also study the implications of different levels of inclusion probabilities by simulations.

  2. Variable selection in Logistic regression model with genetic algorithm.

    Science.gov (United States)

    Zhang, Zhongheng; Trevino, Victor; Hoseini, Sayed Shahabuddin; Belciug, Smaranda; Boopathi, Arumugam Manivanna; Zhang, Ping; Gorunescu, Florin; Subha, Velappan; Dai, Songshi

    2018-02-01

    Variable or feature selection is one of the most important steps in model specification. Especially in the case of medical decision making, the direct use of a medical database, without a previous analysis and preprocessing step, is often counterproductive. In this way, variable selection represents the method of choosing the most relevant attributes from the database in order to build robust learning models and, thus, to improve the performance of the models used in the decision process. In biomedical research, the purpose of variable selection is to select clinically important and statistically significant variables, while excluding unrelated or noise variables. A variety of methods exist for variable selection, but none of them is without limitations. For example, the stepwise approach, which is widely used, adds the best variable in each cycle, generally producing an acceptable set of variables. Nevertheless, it is limited by the fact that it is commonly trapped in local optima. The best-subset approach can systematically search the entire covariate pattern space, but the solution pool can be extremely large with tens to hundreds of variables, which is the case in today's clinical data. Genetic algorithms (GA) are heuristic optimization approaches and can be used for variable selection in multivariable regression models. This tutorial paper aims to provide a step-by-step approach to the use of GA in variable selection. The R code provided in the text can be extended and adapted to other data analysis needs.
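
    The tutorial's code is in R; the sketch below shows the same idea in Python under assumed settings: a genetic algorithm over binary inclusion masks, with cross-validated accuracy (lightly penalized by subset size) as the fitness.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
X, y = make_classification(n_samples=300, n_features=30, n_informative=5,
                           random_state=1)

def fitness(mask):
    if not mask.any():
        return 0.0
    model = LogisticRegression(max_iter=1000)
    # Penalize subset size slightly so smaller models win ties.
    return cross_val_score(model, X[:, mask], y, cv=3).mean() - 0.002 * mask.sum()

# GA over binary inclusion masks: tournament selection, uniform
# crossover, bit-flip mutation.
pop = rng.random((30, X.shape[1])) < 0.5
for _ in range(15):
    fit = np.array([fitness(m) for m in pop])
    idx = [max(rng.integers(0, len(pop), 2), key=lambda i: fit[i])
           for _ in range(len(pop))]
    parents = pop[idx]
    cross = rng.random(parents.shape) < 0.5
    children = np.where(cross, parents, parents[rng.permutation(len(pop))])
    children ^= (rng.random(children.shape) < 0.01)   # mutation
    pop = children
best = max(pop, key=fitness)
print("selected variables:", np.flatnonzero(best))
```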

  3. Evaluation and comparison of alternative fleet-level selective maintenance models

    International Nuclear Information System (INIS)

    Schneider, Kellie; Richard Cassady, C.

    2015-01-01

    Fleet-level selective maintenance refers to the process of identifying the subset of maintenance actions to perform on a fleet of repairable systems when the maintenance resources allocated to the fleet are insufficient for performing all desirable maintenance actions. The original fleet-level selective maintenance model is designed to maximize the probability that all missions in a future set are completed successfully. We extend this model in several ways. First, we consider a cost-based optimization model and show that a special case of this model maximizes the expected value of the number of successful missions in the future set. We also consider the situation in which one or more of the future missions may be canceled. These models and the original fleet-level selective maintenance optimization models are nonlinear. Therefore, we also consider an alternative model in which the objective function can be linearized. We show that the alternative model is a good approximation to the other models. - Highlights: • Investigate nonlinear fleet-level selective maintenance optimization models. • A cost based model is used to maximize the expected number of successful missions. • Another model is allowed to cancel missions if reliability is sufficiently low. • An alternative model has an objective function that can be linearized. • We show that the alternative model is a good approximation to the other models

  4. An Empirical Study of Wrappers for Feature Subset Selection based on a Parallel Genetic Algorithm: The Multi-Wrapper Model

    KAUST Repository

    Soufan, Othman

    2012-09-01

    Feature selection is the first task of any learning approach applied in major fields such as biomedicine, bioinformatics, robotics, natural language processing and social networking. In the feature subset selection problem, a search methodology with a proper criterion seeks to find the best subset of features describing the data (relevance) and achieving better performance (optimality). Wrapper approaches are feature selection methods which are wrapped around a classification algorithm and use a performance measure to select the best subset of features. We analyze the proper design of the objective function for the wrapper approach and highlight an objective based on several classification algorithms. We compare the wrapper approaches to different feature selection methods based on distance and information-based criteria. Significant improvement in performance, computational time, and selection of minimally sized feature subsets is achieved by combining different objectives for the wrapper model. In addition, considering various classification methods in the feature selection process could lead to a global solution of desirable characteristics.

  5. Process-based models of feeding and prey selection in larval fish

    DEFF Research Database (Denmark)

    Fiksen, O.; MacKenzie, Brian

    2002-01-01

    believed to be important to prey selectivity and environmental regulation of feeding in fish. We include the sensitivity of prey to the hydrodynamic signal generated by approaching larval fish and a simple model of the potential loss of prey due to turbulence, whereby prey is lost if it leaves... µg dry wt l(-1). The spatio-temporal fluctuation of turbulence (tidal cycle) and light (sun height) over the bank generates complex structure in the patterns of food intake of larval fish, with different patterns emerging for small and large larvae.

  6. Algorithms of control parameters selection for automation of FDM 3D printing process

    Directory of Open Access Journals (Sweden)

    Kogut Paweł

    2017-01-01

    The paper presents algorithms for selecting the control parameters of the Fused Deposition Modelling (FDM) technology in the setting of an open printing-solutions environment and the 3DGence ONE printer. The following parameters are distinguished: model mesh density, material flow speed, cooling performance, retraction and printing speeds. These parameters are in principle independent of the printing system, but in fact depend to a certain degree on the features of the selected printing equipment. This is the first step toward automation of the 3D printing process in FDM technology.

  7. Multiphysics modeling of selective laser sintering/melting

    Science.gov (United States)

    Ganeriwala, Rishi Kumar

    A significant percentage of total global employment is due to the manufacturing industry. However, manufacturing also accounts for nearly 20% of total energy usage in the United States according to the EIA. In fact, manufacturing accounted for 90% of industrial energy consumption and 84% of industry carbon dioxide emissions in 2002. Clearly, advances in manufacturing technology and efficiency are necessary to curb emissions and help society as a whole. Additive manufacturing (AM) refers to a relatively recent group of manufacturing technologies whereby one can 3D print parts, which has the potential to significantly reduce waste, reconfigure the supply chain, and generally disrupt the whole manufacturing industry. Selective laser sintering/melting (SLS/SLM) is one type of AM technology with the distinct advantage of being able to 3D print metals and rapidly produce net-shape parts with complicated geometries. In SLS/SLM, parts are built up layer by layer out of powder particles, which are selectively sintered/melted via a laser. However, in order to produce defect-free parts of sufficient strength, the process parameters (laser power, scan speed, layer thickness, powder size, etc.) must be carefully optimized. Obviously, these process parameters will vary depending on material, part geometry, and desired final part characteristics. Running experiments to optimize these parameters is costly, energy intensive, and extremely material specific. Thus a computational model of this process would be highly valuable. In this work a three-dimensional, reduced-order, coupled discrete element-finite difference model is presented for simulating the deposition and subsequent laser heating of a layer of powder particles sitting on top of a substrate. Validation is provided and parameter studies are conducted showing the ability of this model to help determine appropriate process parameters and an optimal powder size distribution for a given material. Next, thermal stresses upon

  8. Sato Processes in Default Modeling

    DEFF Research Database (Denmark)

    Kokholm, Thomas; Nicolato, Elisa

    In reduced form default models, the instantaneous default intensity is classically the modeling object. Survival probabilities are then given by the Laplace transform of the cumulative hazard defined as the integrated intensity process. Instead, recent literature has shown a tendency towards specifying the cumulative hazard process directly. Within this framework we present a new model class where cumulative hazards are described by self-similar additive processes, also known as Sato processes. Furthermore we also analyze specifications obtained via a simple deterministic time-change of a homogeneous Levy process. While the processes in these two classes share the same average behavior over time, the associated intensities exhibit very different properties. Concrete specifications are calibrated to data on the single names included in the iTraxx Europe index. The performances are compared...

  9. Covariate selection for the semiparametric additive risk model

    DEFF Research Database (Denmark)

    Martinussen, Torben; Scheike, Thomas

    2009-01-01

    This paper considers covariate selection for the additive hazards model. This model is particularly simple to study theoretically and its practical implementation has several major advantages over the similar methodology for the proportional hazards model. One complication compared...... and study their large sample properties for the situation where the number of covariates p is smaller than the number of observations. We also show that the adaptive Lasso has the oracle property. In many practical situations, it is more relevant to tackle the situation with large p compared with the number...... of observations. We do this by studying the properties of the so-called Dantzig selector in the setting of the additive risk model. Specifically, we establish a bound on how close the solution is to a true sparse signal in the case where the number of covariates is large. In a simulation study, we also compare

  10. Temporally selective attention supports speech processing in 3- to 5-year-old children.

    Science.gov (United States)

    Astheimer, Lori B; Sanders, Lisa D

    2012-01-01

    Recent event-related potential (ERP) evidence demonstrates that adults employ temporally selective attention to preferentially process the initial portions of words in continuous speech. Doing so is an effective listening strategy since word-initial segments are highly informative. Although the development of this process remains unexplored, directing attention to word onsets may be important for speech processing in young children who would otherwise be overwhelmed by the rapidly changing acoustic signals that constitute speech. We examined the use of temporally selective attention in 3- to 5-year-old children listening to stories by comparing ERPs elicited by attention probes presented at four acoustically matched times relative to word onsets: concurrently with a word onset, 100 ms before, 100 ms after, and at random control times. By 80 ms, probes presented at and after word onsets elicited a larger negativity than probes presented before word onsets or at control times. The latency and distribution of this effect is similar to temporally and spatially selective attention effects measured in adults and, despite differences in polarity, spatially selective attention effects measured in children. These results indicate that, like adults, preschool aged children modulate temporally selective attention to preferentially process the initial portions of words in continuous speech.

  11. Discrimination against international medical graduates in the United States residency program selection process.

    Science.gov (United States)

    Desbiens, Norman A; Vidaillet, Humberto J

    2010-01-25

    Available evidence suggests that international medical graduates have improved the availability of U.S. health care while maintaining academic standards. We wondered whether studies had been conducted to address how international graduates were treated in the post-graduate selection process compared to U.S. graduates. We conducted a Medline search for research on the selection process. Two studies provide strong evidence that psychiatry and family practice programs respond to identical requests for applications at least 80% more often for U.S. medical graduates than for international graduates. In a third study, a survey of surgical program directors, over 70% perceived that there was discrimination against international graduates in the selection process. There is sufficient evidence to support action against discrimination in the selection process. Medical organizations should publish explicit proscriptions of discrimination against international medical graduates (as the American Psychiatric Association has done) and promote them in diversity statements. They should develop uniform and transparent policies for program directors to use to select applicants that minimize the possibility of non-academic discrimination, and the accreditation organization should monitor whether it is occurring. Whether there should be protectionism for U.S. graduates or whether post-graduate medical education should be an unfettered meritocracy needs to be openly discussed by medicine and society.

  12. Model Identification of Integrated ARMA Processes

    Science.gov (United States)

    Stadnytska, Tetiana; Braun, Simone; Werner, Joachim

    2008-01-01

    This article evaluates the Smallest Canonical Correlation Method (SCAN) and the Extended Sample Autocorrelation Function (ESACF), automated methods for the Autoregressive Integrated Moving-Average (ARIMA) model selection commonly available in current versions of SAS for Windows, as identification tools for integrated processes. SCAN and ESACF can…

  13. Computer modeling of lung cancer diagnosis-to-treatment process.

    Science.gov (United States)

    Ju, Feng; Lee, Hyo Kyung; Osarogiagbon, Raymond U; Yu, Xinhua; Faris, Nick; Li, Jingshan

    2015-08-01

    We introduce an example of a rigorous, quantitative method for quality improvement in lung cancer care delivery. Computer process modeling methods are introduced for the lung cancer diagnosis, staging, and treatment selection process. Two types of process modeling techniques, discrete event simulation (DES) and analytical models, are briefly reviewed. Recent developments in DES are outlined, and the necessary data and procedures to develop a DES model of the lung cancer diagnosis process, leading up to surgical treatment, are summarized. The analytical models include both Markov chain models and closed formulas. Markov chain models and their applications in healthcare are introduced, and the approach to deriving a lung cancer diagnosis process model is presented. Similarly, the procedure to derive closed formulas evaluating the diagnosis process performance is outlined. Finally, the pros and cons of these methods are discussed.
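
    As a toy illustration of the closed-formula approach mentioned above, the following sketch (in Python, with invented states and transition probabilities rather than the paper's data) models a diagnosis pathway as an absorbing Markov chain and computes the expected number of steps to treatment from the fundamental matrix N = (I - Q)^(-1).

      # Hedged sketch of an absorbing Markov chain model of a diagnosis pathway.
      # States and transition probabilities are invented for illustration; the
      # expected steps to treatment follow from the fundamental matrix.
      import numpy as np

      # Transient states: referral, imaging, biopsy, staging; absorbing: treatment.
      Q = np.array([
          [0.0, 0.8, 0.1, 0.0],   # referral -> mostly imaging, some biopsy
          [0.1, 0.0, 0.7, 0.1],   # imaging  -> biopsy/staging, 0.1 re-referral
          [0.0, 0.1, 0.0, 0.8],   # biopsy   -> mostly staging
          [0.0, 0.0, 0.1, 0.0],   # staging  -> 0.9 to treatment (absorbing)
      ])
      N = np.linalg.inv(np.eye(4) - Q)    # fundamental matrix
      expected_steps = N.sum(axis=1)      # expected visits before absorption
      print("expected steps to treatment from each state:", expected_steps.round(2))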

  14. Risk calculations in the manufacturing technology selection process

    DEFF Research Database (Denmark)

    Farooq, S.; O'Brien, C.

    2010-01-01

    Purpose - The purpose of this paper is to present results obtained from a developed technology selection framework and to provide a detailed insight into risk calculations and their implications in the manufacturing technology selection process. Design/methodology/approach - The results illustrated in the paper are the outcome of an action research study that was conducted in an aerospace company. Findings - The paper highlights the role of risk calculations in the manufacturing technology selection process by elaborating the contribution of risk associated with manufacturing technology alternatives in the shape of opportunities and threats in different decision-making environments. Practical implications - The research quantifies the risk associated with different available manufacturing technology alternatives. This quantification of risk crystallises the process of technology selection decision making…

  15. The Comparative Effect of Top-down Processing and Bottom-up Processing through TBLT on Extrovert and Introvert EFL

    Directory of Open Access Journals (Sweden)

    Pezhman Nourzad Haradasht

    2013-09-01

    This research seeks to examine the effect of two models of reading comprehension, namely top-down and bottom-up processing, on the reading comprehension of extrovert and introvert EFL learners. To do this, 120 learners out of a total number of 170 intermediate learners being educated at Iran Mehr English Language School were selected, all taking a PET (Preliminary English Test) first for homogenization prior to the study. They also answered the Eysenck Personality Inventory (EPI), which in turn categorized them into two subgroups, introverts and extroverts, within each reading model. All in all, there were four subgroups: 30 introverts and 30 extroverts undergoing the top-down processing treatment, and 30 introverts and 30 extroverts experiencing the bottom-up processing treatment. The aforementioned PET was administered as the post-test of the study after each group was exposed to the treatment for 18 sessions over six weeks. After the instruction finished, the mean scores of all four groups on this post-test were computed and a two-way ANOVA was run to test all four hypotheses raised in this study. The results showed that while learners generally benefitted more from the bottom-up processing setting compared to the top-down processing one, the extrovert group was better off receiving top-down instruction. Furthermore, introverts outperformed extroverts in the bottom-up group, yet between the two personality subgroups in the top-down setting no difference was seen. A predictable pattern of benefitting from teaching procedures could not be drawn for introverts, as in both top-down and bottom-up settings they benefitted more than extroverts.

  16. A concurrent optimization model for supplier selection with fuzzy quality loss

    International Nuclear Information System (INIS)

    Rosyidi, C.; Murtisari, R.; Jauhari, W.

    2017-01-01

    The purpose of this research is to develop a concurrent supplier selection model to minimize purchasing cost and fuzzy quality loss, considering process capability and assembled product specification. Design/methodology/approach: This research integrates fuzzy quality loss into the model to concurrently solve the decision making in the detailed design stage and the manufacturing stage. Findings: The resulting model can be used to concurrently select the optimal supplier and determine the tolerance of the components. The model balances purchasing cost and fuzzy quality loss. Originality/value: An assembled product consists of many components which must be purchased from suppliers. Fuzzy quality loss is integrated in the supplier selection model to allow for vagueness in the final assembly by grouping assemblies into several grades according to the resulting assembly tolerance.

  18. Assessing the accuracy and stability of variable selection methods for random forest modeling in ecology.

    Science.gov (United States)

    Fox, Eric W; Hill, Ryan A; Leibowitz, Scott G; Olsen, Anthony R; Thornbrugh, Darren J; Weber, Marc H

    2017-07-01

    Random forest (RF) modeling has emerged as an important statistical learning method in ecology due to its exceptional predictive performance. However, for large and complex ecological data sets, there is limited guidance on variable selection methods for RF modeling. Typically, either a preselected set of predictor variables is used or stepwise procedures are employed which iteratively remove variables according to their importance measures. This paper investigates the application of variable selection methods to RF models for predicting probable biological stream condition. Our motivating data set consists of the good/poor condition of n = 1365 stream survey sites from the 2008/2009 National Rivers and Streams Assessment, and a large set (p = 212) of landscape features from the StreamCat data set as potential predictors. We compare two types of RF models: a full variable set model with all 212 predictors and a reduced variable set model selected using a backward elimination approach. We assess model accuracy using RF's internal out-of-bag estimate, and a cross-validation procedure with validation folds external to the variable selection process. We also assess the stability of the spatial predictions generated by the RF models to changes in the number of predictors, and argue that model selection needs to consider both accuracy and stability. The results suggest that RF modeling is robust to the inclusion of many variables of moderate to low importance. We found no substantial improvement in cross-validated accuracy as a result of variable reduction. Moreover, the backward elimination procedure tended to select too few variables and exhibited numerous issues such as upwardly biased out-of-bag accuracy estimates and instabilities in the spatial predictions. We use simulations to further support and generalize results from the analysis of real data. A main purpose of this work is to elucidate issues of model selection bias and instability to ecologists interested in…
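
    A minimal sketch of the backward elimination strategy described above, using scikit-learn's random forest and its out-of-bag (OOB) accuracy; the synthetic data and the drop-10%-per-step rule are illustrative assumptions, not the authors' exact protocol. As the abstract cautions, OOB accuracy computed inside the elimination loop is optimistically biased, so an external validation fold is still needed for an honest assessment.

      # Hedged sketch of backward elimination for a random forest, tracking the
      # out-of-bag accuracy as the least important predictors are removed.
      import numpy as np
      from sklearn.datasets import make_classification
      from sklearn.ensemble import RandomForestClassifier

      X, y = make_classification(n_samples=1365, n_features=212,
                                 n_informative=15, random_state=0)
      features = list(range(X.shape[1]))
      history = []

      while len(features) > 5:
          rf = RandomForestClassifier(n_estimators=200, oob_score=True,
                                      random_state=0, n_jobs=-1)
          rf.fit(X[:, features], y)
          history.append((len(features), rf.oob_score_))
          # Drop the least important 10% of the remaining predictors.
          order = np.argsort(rf.feature_importances_)
          n_drop = max(1, len(features) // 10)
          features = [features[i] for i in order[n_drop:]]

      for n_vars, oob in history:
          print(f"{n_vars:4d} predictors -> OOB accuracy {oob:.3f}")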

  19. Model selection in periodic autoregressions

    NARCIS (Netherlands)

    Ph.H.B.F. Franses (Philip Hans); R. Paap (Richard)

    1994-01-01

    This paper focuses on the issue of periodic autoregressive (PAR) time series model selection in practice. One aspect of model selection is the choice of the appropriate PAR order. This can be of interest for the evaluation of economic models. Further, the appropriate PAR order is important…

  20. Exploring selection and recruitment processes for newly qualified nurses: a sequential-explanatory mixed-method study.

    Science.gov (United States)

    Newton, Paul; Chandler, Val; Morris-Thomson, Trish; Sayer, Jane; Burke, Linda

    2015-01-01

    To map current selection and recruitment processes for newly qualified nurses and to explore the advantages and limitations of current selection and recruitment processes. The need to improve current selection and recruitment practices for newly qualified nurses is highlighted in health policy internationally. A cross-sectional, sequential-explanatory mixed-method design with four components: (1) a literature review of selection and recruitment of newly qualified nurses; (2) a literature review of a public sector profession's selection and recruitment processes; (3) a survey mapping existing selection and recruitment processes for newly qualified nurses; and (4) a qualitative study about recruiters' selection and recruitment processes. Literature searches on the selection and recruitment of newly qualified candidates in teaching and nursing (2005-2013) were conducted. Cross-sectional, mixed-method data were collected, using a survey instrument, from thirty-one (n = 31) individuals in health providers in London who had responsibility for the selection and recruitment of newly qualified nurses. Of the providers who took part, six (n = 6) were purposively selected to be interviewed qualitatively. Issues of supply and demand in the workforce, rather than selection and recruitment tools, predominated in the literature reviews. Examples of tools to measure values, attitudes and skills were found in the nursing literature. The mapping exercise found that providers used many selection and recruitment tools; some providers combined tools to streamline the process and assure the quality of candidates. Most providers had processes which addressed the issue of quality in the selection and recruitment of newly qualified nurses. The 'assessment centre model', which providers were adopting, allowed for multiple levels of assessment and streamlined recruitment. There is a need to validate the efficacy of the selection tools.

  1. 7 CFR 1469.6 - Enrollment criteria and selection process.

    Science.gov (United States)

    2010-01-01

    7 CFR Part 1469 (General Provisions), § 1469.6 Enrollment criteria and selection process: (a) Selection and funding of… existing natural resource, environmental quality, and agricultural activity data along with other…

  2. Stock Selection for Portfolios Using Expected Utility-Entropy Decision Model

    Directory of Open Access Journals (Sweden)

    Jiping Yang

    2017-09-01

    Yang and Qiu proposed, and then recently improved, an expected utility-entropy (EU-E) measure of risk and decision model. When segregation holds, Luce et al. derived an expected utility term plus a constant multiplied by the Shannon entropy as the representation of risky choices, further demonstrating the reasonability of the EU-E decision model. In this paper, we apply the EU-E decision model to selecting the set of stocks to be included in portfolios. We first select 7 and 10 stocks from the 30 component stocks of the Dow Jones Industrial Average index, and then derive and compare the efficient portfolios in the mean-variance framework. The conclusions imply that efficient portfolios composed of 7 (10) stocks selected using the EU-E model with intermediate intervals of the tradeoff coefficients are more efficient than those composed of the sets of stocks selected using the expected utility model. Furthermore, the efficient portfolios of 7 (10) stocks selected by the EU-E decision model have almost the same efficient frontier as that of the sample of all stocks. This suggests the necessity of incorporating both expected utility and Shannon entropy when making risky decisions, further demonstrating the importance of Shannon entropy as a measure of uncertainty, as well as the applicability of the EU-E model as a decision-making model.
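
    The record does not give Yang and Qiu's exact functional form, so the sketch below uses an assumed tradeoff score, lam * E[u(R)] - (1 - lam) * H(R), with log utility and a histogram-based Shannon entropy, purely to illustrate how stocks could be ranked by such a measure.

      # Hedged sketch of ranking stocks by an expected utility-entropy style
      # score. The score's functional form is an illustrative assumption.
      import numpy as np

      rng = np.random.default_rng(1)
      returns = rng.normal(0.0005, 0.01, size=(1000, 30))  # daily returns, 30 stocks

      def shannon_entropy(x, bins=20):
          p, _ = np.histogram(x, bins=bins)
          p = p[p > 0] / p.sum()
          return -np.sum(p * np.log(p))

      def eu_e_score(x, lam=0.5):
          expected_utility = np.mean(np.log1p(x))   # log utility, an assumption
          return lam * expected_utility - (1 - lam) * shannon_entropy(x)

      scores = np.array([eu_e_score(returns[:, j]) for j in range(returns.shape[1])])
      top7 = np.argsort(scores)[::-1][:7]           # 7 highest-scoring stocks
      print("Selected stock indices:", top7)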

  3. Computationally efficient thermal-mechanical modelling of selective laser melting

    NARCIS (Netherlands)

    Yang, Y.; Ayas, C.; Brabazon, Dermot; Naher, Sumsun; Ul Ahad, Inam

    2017-01-01

    Selective laser melting (SLM) is a powder-based additive manufacturing (AM) method to produce high-density metal parts with complex topology. However, part distortions and the accompanying residual stresses deteriorate the mechanical reliability of SLM products. Modelling of the SLM process is…

  4. Ensembling Variable Selectors by Stability Selection for the Cox Model

    Directory of Open Access Journals (Sweden)

    Qing-Yan Yin

    2017-01-01

    As a pivotal tool to build interpretive models, variable selection plays an increasingly important role in high-dimensional data analysis. In recent years, variable selection ensembles (VSEs) have gained much interest due to their many advantages. Stability selection (Meinshausen and Bühlmann, 2010), a VSE technique based on subsampling in combination with a base algorithm like the lasso, is an effective method to control the false discovery rate (FDR) and to improve selection accuracy in linear regression models. By adopting the lasso as a base learner, we attempt to extend stability selection to handle variable selection problems in a Cox model. According to our experience, it is crucial to set the regularization region Λ in the lasso and the parameter λmin properly so that stability selection can work well. To the best of our knowledge, however, there is no literature addressing this problem in an explicit way. Therefore, we first provide a detailed procedure to specify Λ and λmin. Then, some simulated and real-world data with various censoring rates are used to examine how well stability selection performs. It is also compared with several other variable selection approaches. Experimental results demonstrate that it achieves better or competitive performance in comparison with several other popular techniques.
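
    A minimal sketch of stability selection with the lasso as base learner; for brevity it is shown on a linear model (the paper's Cox setting would require a survival library), and the subsample count, the lambda grid standing in for Λ, and the 0.6 stability threshold are illustrative assumptions.

      # Hedged sketch of stability selection: fit the lasso on many half-size
      # subsamples over a lambda grid and keep variables selected frequently.
      import numpy as np
      from sklearn.datasets import make_regression
      from sklearn.linear_model import Lasso

      X, y, coef = make_regression(n_samples=200, n_features=50, n_informative=5,
                                   coef=True, noise=5.0, random_state=0)
      lambdas = np.logspace(-1, 1, 20)      # stands in for the region Lambda
      n_subsamples, n, p = 100, X.shape[0], X.shape[1]
      rng = np.random.default_rng(0)
      freq = np.zeros(p)

      for _ in range(n_subsamples):
          idx = rng.choice(n, size=n // 2, replace=False)   # subsample half
          selected = np.zeros(p, dtype=bool)
          for lam in lambdas:
              fit = Lasso(alpha=lam, max_iter=10000).fit(X[idx], y[idx])
              selected |= fit.coef_ != 0    # selected anywhere along the path
          freq += selected

      stable = np.where(freq / n_subsamples >= 0.6)[0]      # stability threshold
      print("stable variables:", stable)
      print("truly informative:", np.where(coef != 0)[0])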

  5. The Naïve Overfitting Index Selection (NOIS): A new method to optimize model complexity for hyperspectral data

    Science.gov (United States)

    Rocha, Alby D.; Groen, Thomas A.; Skidmore, Andrew K.; Darvishzadeh, Roshanak; Willemen, Louise

    2017-11-01

    The growing number of narrow spectral bands in hyperspectral remote sensing improves the capacity to describe and predict biological processes in ecosystems. But it also poses a challenge to fit empirical models based on such high-dimensional data, which often contain correlated and noisy predictors. As sample sizes for training and validating empirical models do not seem to be increasing at the same rate, overfitting has become a serious concern. Overly complex models lead to overfitting by capturing more than the underlying relationship, and also by fitting random noise in the data. Many regression techniques claim to overcome these problems by using different strategies to constrain complexity, such as limiting the number of terms in the model, creating latent variables or shrinking parameter coefficients. This paper proposes a new method, named Naïve Overfitting Index Selection (NOIS), which makes use of artificially generated spectra to quantify the relative model overfitting and to select an optimal model complexity supported by the data. The robustness of this new method is assessed by comparing it to a traditional model selection based on cross-validation. The optimal model complexity is determined for seven different regression techniques, such as partial least squares regression, support vector machine, artificial neural network and tree-based regressions, using five hyperspectral datasets. The NOIS method selects less complex models, which present accuracies similar to the cross-validation method. The NOIS method reduces the chance of overfitting, thereby avoiding models that present accurate predictions that are only valid for the data used and too complex to make inferences about the underlying process.
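
    A hedged sketch of the NOIS idea: refit the same model on artificial spectra that carry no relation to the response and compare the apparent fit. The index computed below is a simplifying assumption, not the paper's exact formula.

      # Hedged sketch: quantify overfitting by comparing a PLS model's apparent
      # fit on real spectra with its fit on pure-noise "spectra" of the same size.
      import numpy as np
      from sklearn.cross_decomposition import PLSRegression
      from sklearn.metrics import r2_score

      rng = np.random.default_rng(0)
      n, p = 60, 200                               # few samples, many bands
      X = rng.normal(size=(n, p)).cumsum(axis=1)   # smooth, correlated "spectra"
      y = X[:, 50] - 0.5 * X[:, 120] + rng.normal(scale=0.5, size=n)

      for n_comp in (1, 2, 5, 10, 15):
          real = PLSRegression(n_components=n_comp).fit(X, y)
          r2_real = r2_score(y, real.predict(X).ravel())
          # Artificial spectra: same marginal scale, no relation to y.
          X_noise = rng.normal(size=(n, p)) * X.std(axis=0)
          fake = PLSRegression(n_components=n_comp).fit(X_noise, y)
          r2_noise = r2_score(y, fake.predict(X_noise).ravel())
          print(f"{n_comp:2d} components: R2(real)={r2_real:.2f}, "
                f"R2(noise)={r2_noise:.2f}, "
                f"overfit index={r2_noise / max(r2_real, 1e-9):.2f}")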

  6. A Comparative Analysis of Trade Facilitation in Selected Regional and Bilateral Trade Agreement

    OpenAIRE

    Institute for International Trade

    2006-01-01

    This study compared the treatment of trade facilitation in four selected regional trade agreements, AFTA, APEC, SAFRA and PACER, and in one bilateral free trade agreement being the Australia-Singapore Free Trade Agreement (ASFTA), with a view to determining model trade facilitation principles and measures which may be instructive for developing country negotiations and policy makers.

  7. Data-driven process decomposition and robust online distributed modelling for large-scale processes

    Science.gov (United States)

    Shu, Zhang; Lijuan, Li; Lijuan, Yao; Shipin, Yang; Tao, Zou

    2018-02-01

    With the increasing attention to networked control, system decomposition and distributed models are of significant importance in the implementation of model-based control strategies. In this paper, a data-driven system decomposition and online distributed subsystem modelling algorithm is proposed for large-scale chemical processes. The key controlled variables are first partitioned into several clusters by the affinity propagation clustering algorithm; each cluster can be regarded as a subsystem. Then the inputs of each subsystem are selected by offline canonical correlation analysis between all process variables and its controlled variables. Process decomposition is thus realised after the screening of input and output variables. Once the system decomposition is finished, online subsystem modelling can be carried out by recursively renewing the samples block-wise. The proposed algorithm was applied to the Tennessee Eastman process and its validity was verified.
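
    A minimal sketch of the decomposition step, assuming absolute pairwise correlation as the similarity fed to affinity propagation; the simulated plant data and the similarity choice are illustrative, not the paper's Tennessee Eastman setup.

      # Hedged sketch: cluster controlled variables with affinity propagation,
      # using |correlation| as similarity, and treat each cluster as a subsystem.
      import numpy as np
      from sklearn.cluster import AffinityPropagation

      rng = np.random.default_rng(0)
      # 12 controlled variables driven by 3 latent dynamics -> 3 natural subsystems
      latent = rng.normal(size=(500, 3))
      mixing = np.zeros((3, 12))
      for k in range(3):
          mixing[k, 4 * k:4 * (k + 1)] = rng.uniform(0.5, 1.5, size=4)
      Y = latent @ mixing + 0.1 * rng.normal(size=(500, 12))

      similarity = np.abs(np.corrcoef(Y, rowvar=False))   # |corr| as similarity
      ap = AffinityPropagation(affinity="precomputed", random_state=0)
      labels = ap.fit_predict(similarity)
      for c in np.unique(labels):
          print(f"subsystem {c}: variables {np.where(labels == c)[0]}")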

  8. Modeling Dynamic Food Choice Processes to Understand Dietary Intervention Effects.

    Science.gov (United States)

    Marcum, Christopher Steven; Goldring, Megan R; McBride, Colleen M; Persky, Susan

    2018-02-17

    Meal construction is largely governed by nonconscious and habit-based processes that can be represented as a collection of individual, micro-level food choices that eventually give rise to a final plate. Despite this, dietary behavior intervention research rarely captures these micro-level food choice processes, instead measuring outcomes at aggregated levels. This is due in part to a dearth of analytic techniques to model these dynamic time-series events. The current article addresses this limitation by applying a generalization of the relational event framework to model micro-level food choice behavior following an educational intervention. Relational event modeling was used to model the food choices that 221 mothers made for their child following receipt of an information-based intervention. Participants were randomized to receive either (a) control information; (b) childhood obesity risk information; or (c) childhood obesity risk information plus a personalized family history-based risk estimate for their child. Participants then made food choices for their child in a virtual reality-based food buffet simulation. Micro-level aspects of the built environment, such as the ordering of each food in the buffet, were influential. Other dynamic processes such as choice inertia also influenced food selection. Among participants receiving the strongest intervention condition, choice inertia decreased and the overall rate of food selection increased. Modeling food selection processes can elucidate the points at which interventions exert their influence. Researchers can leverage these findings to gain insight into nonconscious and uncontrollable aspects of food selection that influence dietary outcomes, which can ultimately improve the design of dietary interventions.

  9. Informative gene selection using Adaptive Analytic Hierarchy Process (A2HP

    Directory of Open Access Journals (Sweden)

    Abhishek Bhola

    2017-12-01

    Gene expression datasets derived from microarray experiments are marked by a large number of genes, containing the gene expression values at different sample conditions/time-points. Selection of informative genes from these large datasets is an issue of major concern for various researchers and biologists. In this study, we propose a gene selection and dimensionality reduction method called Adaptive Analytic Hierarchy Process (A2HP). The traditional analytic hierarchy process is a multiple-criteria decision analysis method whose result depends upon expert knowledge or decision makers; it is mainly used to solve decision problems in different fields. A2HP, on the other hand, is a fused method that combines the outcomes of five individual gene selection ranking methods: t-test, chi-square variance test, z-test, Wilcoxon test and signal-to-noise ratio (SNR). First, the gene expression dataset is preprocessed, and the reduced set of genes obtained is fed as input to A2HP. A2HP utilizes both quantitative and qualitative factors to select the informative genes. Results demonstrate that A2HP selects an efficient number of genes compared with the individual gene selection methods. The percentage reduction in the number of genes and the time complexity are taken as the performance measures for the proposed method, and A2HP is shown to outperform the individual gene selection methods.
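
    A hedged sketch of fusing several univariate gene-ranking statistics; the AHP weighting step is simplified here to a plain average of ranks, so this conveys only the flavor of A2HP, not the full procedure.

      # Hedged sketch: rank genes by several univariate statistics and fuse
      # the ranks by simple averaging (a simplification of the AHP weighting).
      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(0)
      n_genes, n_per_class = 500, 20
      expr = rng.normal(size=(n_genes, 2 * n_per_class))
      expr[:25, n_per_class:] += 1.5          # 25 truly differential genes
      labels = np.array([0] * n_per_class + [1] * n_per_class)

      a, b = expr[:, labels == 0], expr[:, labels == 1]
      t_stat, _ = stats.ttest_ind(a, b, axis=1)
      w_stat = np.array([stats.ranksums(a[g], b[g]).statistic
                         for g in range(n_genes)])
      snr = (a.mean(1) - b.mean(1)) / (a.std(1) + b.std(1) + 1e-12)

      def to_rank(score):
          return stats.rankdata(-np.abs(score))   # rank 1 = most discriminative

      fused = (to_rank(t_stat) + to_rank(w_stat) + to_rank(snr)) / 3.0
      top = np.argsort(fused)[:25]
      print("fraction of true genes recovered:", np.mean(top < 25))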

  10. The Automation of Nowcast Model Assessment Processes

    Science.gov (United States)

    2016-09-01

    …secondly, provide modelers with the information needed to understand the model errors and how their algorithm changes might mitigate these errors. In… by ARL modelers. 2. Development Environment: The automation of Point-Stat processes (i.e., PSA) was developed using Python 3.5. Python was selected because it is easy to use, widely used for scripting, and satisfies all the requirements to automate the implementation of the Point-Stat tool. In…

  11. Research and Application of a Novel Hybrid Model Based on Data Selection and Artificial Intelligence Algorithm for Short Term Load Forecasting

    Directory of Open Access Journals (Sweden)

    Wendong Yang

    2017-01-01

    Machine learning plays a vital role in several modern economic and industrial fields, and selecting an optimized machine learning method to improve time series forecasting accuracy is challenging. Advanced machine learning methods, e.g., the support vector regression (SVR) model, are widely employed in forecasting, but an individual SVR pays no attention to the significance of data selection, signal processing and optimization, and so cannot always satisfy the requirements of time series forecasting. By preprocessing and analyzing the original time series, this paper develops a hybrid SVR model that accounts for periodicity, trend and randomness, combined with data selection, signal processing and an optimization algorithm, for short-term load forecasting. Case studies of electric power data from New South Wales and Singapore are used to evaluate the performance of the developed model. The experimental results demonstrate that the proposed hybrid method is not only robust but also achieves significant improvement compared with traditional single models and can be an effective and efficient tool for power load forecasting.
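
    A minimal sketch of the decompose-then-forecast idea behind such hybrids: strip a periodic component, fit a support vector regression on lagged residuals, and recombine. The synthetic load series and the features are illustrative assumptions, far simpler than the paper's full pipeline.

      # Hedged sketch of a hybrid load forecaster: de-seasonalise, fit SVR on
      # lagged residuals, then add the seasonal component back for prediction.
      import numpy as np
      from sklearn.svm import SVR

      rng = np.random.default_rng(0)
      t = np.arange(600)
      load = 100 + 10 * np.sin(2 * np.pi * t / 24) + 0.02 * t + rng.normal(0, 1, 600)

      period = 24
      seasonal = np.array([load[i::period].mean() for i in range(period)])
      resid = load - seasonal[t % period]        # de-seasonalised series

      lags = 3
      X = np.column_stack([resid[i:len(resid) - lags + i] for i in range(lags)])
      y = resid[lags:]
      split = 500
      model = SVR(C=10.0, epsilon=0.1).fit(X[:split], y[:split])

      pred_resid = model.predict(X[split:])
      pred_load = pred_resid + seasonal[t[lags + split:] % period]
      mae = np.mean(np.abs(pred_load - load[lags + split:]))
      print(f"test MAE: {mae:.2f}")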

  12. A Heckman selection model for the safety analysis of signalized intersections.

    Directory of Open Access Journals (Sweden)

    Xuecai Xu

    The objective of this paper is to provide a new method for estimating crash rate and severity simultaneously. This study explores a Heckman selection model of the crash rate and severity at different levels, and a two-step procedure is used to investigate the crash rate and severity levels. The first step uses a probit regression model to determine the sample selection process, and the second step develops a multiple regression model to simultaneously evaluate the crash rate and severity for slight injury and killed or seriously injured (KSI) crashes, respectively. The model uses 555 observations from 262 signalized intersections in the Hong Kong metropolitan area, integrated with information on traffic flow, geometric road design, road environment, traffic control and any crashes that occurred during two years. The results of the proposed two-step Heckman selection model illustrate the necessity of different crash rates for different crash severity levels. A comparison with existing approaches suggests that the Heckman selection model offers an efficient and convenient alternative method for evaluating the safety performance at signalized intersections.
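
    A minimal sketch of the classic Heckman two-step estimator on simulated data: a probit selection equation followed by an outcome regression augmented with the inverse Mills ratio. Variable names and data are illustrative, not the paper's crash dataset.

      # Hedged sketch of the Heckman two-step procedure: probit selection model,
      # then OLS on the selected sample with the inverse Mills ratio as regressor.
      import numpy as np
      import statsmodels.api as sm
      from scipy.stats import norm

      rng = np.random.default_rng(0)
      n = 2000
      z = rng.normal(size=(n, 2))                    # selection covariates
      x = z[:, 0]                                    # outcome covariate
      u, e = rng.multivariate_normal([0, 0], [[1, 0.6], [0.6, 1]], size=n).T

      select = (z @ np.array([1.0, -0.5]) + u) > 0   # observed only if True
      y = 2.0 + 1.5 * x + e                          # latent outcome

      # Step 1: probit for the selection mechanism.
      probit = sm.Probit(select.astype(int), sm.add_constant(z)).fit(disp=0)
      xb = sm.add_constant(z) @ probit.params        # linear predictor
      mills = norm.pdf(xb) / norm.cdf(xb)            # inverse Mills ratio

      # Step 2: OLS on the selected sample, correcting for selection.
      X2 = sm.add_constant(np.column_stack([x[select], mills[select]]))
      ols = sm.OLS(y[select], X2).fit()
      print(ols.params)   # const, slope, Mills coefficient (selection term)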

  13. Selective detachment process in column flotation froth

    Energy Technology Data Exchange (ETDEWEB)

    Honaker, R.Q.; Ozsever, A.V.; Parekh, B.K. [University of Kentucky, Lexington, KY (United States). Dept. of Mining Engineering

    2006-05-15

    The selectivity in flotation columns involving the separation of particles of varying degrees of floatability is based on differential flotation rates in the collection zone, reflux action between the froth and collection zones, and differential detachment rates in the froth zone. Using well-known theoretical models describing the separation process and experimental data, froth zone and overall flotation recovery values were quantified for particles in an anthracite coal that have a wide range of floatability potential. For highly floatable particles, froth recovery had a very minimal impact on overall recovery while the recovery of weakly floatable material was decreased substantially by reductions in froth recovery values. In addition, under carrying-capacity limiting conditions, selectivity was enhanced by the preferential detachment of the weakly floatable material. Based on this concept, highly floatable material was added directly into the froth zone when treating the anthracite coal. The enriched froth phase reduced the product ash content of the anthracite product by five absolute percentage points while maintaining a constant recovery value.

  14. Gender Differences in Resistance to Schooling: The Role of Dynamic Peer-Influence and Selection Processes.

    Science.gov (United States)

    Geven, Sara; O Jonsson, Jan; van Tubergen, Frank

    2017-12-01

    Boys engage in notably higher levels of resistance to schooling than girls. While scholars argue that peer processes contribute to this gender gap, this claim has not been tested with longitudinal quantitative data. This study fills this lacuna by examining the role of dynamic peer-selection and influence processes in the gender gap in resistance to schooling (i.e., arguing with teachers, skipping class, not putting effort into school, receiving punishments at school, and coming late to class) with two-wave panel data. We expect that, compared to girls, boys are more exposed and more responsive to peers who exhibit resistant behavior. We estimate hybrid models on 5448 students from 251 school classes in Sweden (14-15 years, 49% boys), and stochastic actor-based models (SIENA) on a subsample of these data (2480 students in 98 classes; 49% boys). We find that boys are more exposed to resistant friends than girls, and that adolescents are influenced by the resistant behavior of friends. These peer processes do not contribute to a widening of the gender gap in resistance to schooling, yet they contribute somewhat to the persistence of the initial gender gap. Boys are not more responsive to the resistant behavior of friends than girls. Instead, girls are influenced more by the resistant behavior of lower status friends than boys. This explains to some extent why boys increase their resistance to schooling more over time. All in all, peer-influence and selection processes seem to play a minor role in gender differences in resistance to schooling. These findings nuance largely untested claims that have been made in the literature.

  15. A Comparative Analysis of Extract, Transformation and Loading (ETL) Process

    Science.gov (United States)

    Runtuwene, J. P. A.; Tangkawarow, I. R. H. T.; Manoppo, C. T. M.; Salaki, R. J.

    2018-02-01

    The current growth of data and information occurs rapidly, in varying amounts and media. This type of development will eventually produce large volumes of data, better known as Big Data. Business Intelligence (BI) utilizes large volumes of data and information for analysis so that one can obtain important information. This type of information can be used to support the decision-making process. In practice, a process integrating existing data and information into a data warehouse is needed. This data integration process is known as Extract, Transformation and Loading (ETL). Many applications have been developed to carry out the ETL process, but selecting which applications are most time-, cost- and power-effective and efficient may become a challenge. Therefore, the objective of the study was to provide a comparative analysis of the ETL process as implemented using Microsoft SQL Server Integration Services (SSIS) and using Pentaho Data Integration (PDI).

  16. Modeling Suspension and Continuation of a Process

    Directory of Open Access Journals (Sweden)

    Oleg Svatos

    2012-04-01

    This work focuses on the difficulties an analyst encounters when modeling suspension and continuation of a process in contemporary process modeling languages. As a basis, a general lifecycle of an activity is introduced and then compared to the activity lifecycles supported by individual process modeling languages. The comparison shows that the contemporary process modeling languages cover the defined general lifecycle of an activity only partially. Two popular process modeling languages are then picked and a real example is modeled, which reviews how well these languages cope with their lack of native support for suspension and continuation of an activity. Given the unsatisfying results of the contemporary process modeling languages on the modeled example, a new process modeling language is presented which, as demonstrated, is capable of capturing suspension and continuation of an activity in a much simpler and more precise way.

  17. Bootstrap-after-bootstrap model averaging for reducing model uncertainty in model selection for air pollution mortality studies.

    Science.gov (United States)

    Roberts, Steven; Martin, Michael A

    2010-01-01

    Concerns have been raised about findings of associations between particulate matter (PM) air pollution and mortality that have been based on a single "best" model arising from a model selection procedure, because such a strategy may ignore the model uncertainty inherently involved in searching through a set of candidate models to find the best model. Model averaging has been proposed as a method of allowing for model uncertainty in this context. We propose an extension (double BOOT) to a previously described bootstrap model-averaging procedure (BOOT) for use in time series studies of the association between PM and mortality. We compared double BOOT and BOOT with Bayesian model averaging (BMA) and a standard method of model selection [standard Akaike's information criterion (AIC)]. Actual time series data from the United States are used to conduct a simulation study to compare and contrast the performance of double BOOT, BOOT, BMA, and standard AIC. Double BOOT produced estimates of the effect of PM on mortality that had smaller root mean squared error than those produced by BOOT, BMA, and standard AIC. This performance boost resulted from estimates produced by double BOOT having smaller variance than those produced by BOOT and BMA. Double BOOT is a viable alternative to BOOT and BMA for producing estimates of the mortality effect of PM.
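
    A generic sketch of bootstrap model averaging in the spirit of BOOT: repeat an AIC-based model selection step on bootstrap resamples and average the selected models' exposure-effect estimates. This OLS illustration on simulated data is an assumption-laden stand-in for the paper's time-series procedure.

      # Hedged sketch of bootstrap model averaging: on each bootstrap resample,
      # select the best confounder subset by AIC and record the PM coefficient.
      import numpy as np
      import statsmodels.api as sm

      rng = np.random.default_rng(0)
      n = 300
      pm = rng.normal(size=n)                       # exposure of interest
      confounders = rng.normal(size=(n, 4))         # candidate adjustment terms
      y = 0.3 * pm + 0.5 * confounders[:, 0] + rng.normal(size=n)

      def fit_best(yb, pmb, cb):
          """Pick the confounder subset minimizing AIC; return the PM slope."""
          best_aic, best_beta = np.inf, None
          for mask in range(16):                    # all subsets of 4 confounders
              cols = [j for j in range(4) if mask >> j & 1]
              X = sm.add_constant(
                  np.column_stack([pmb] + [cb[:, j] for j in cols]))
              res = sm.OLS(yb, X).fit()
              if res.aic < best_aic:
                  best_aic, best_beta = res.aic, res.params[1]
          return best_beta

      boot_betas = []
      for _ in range(200):
          idx = rng.integers(0, n, size=n)          # bootstrap resample
          boot_betas.append(fit_best(y[idx], pm[idx], confounders[idx]))

      print(f"model-averaged PM effect: {np.mean(boot_betas):.3f} "
            f"(bootstrap SD {np.std(boot_betas):.3f})")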

  18. Experimental demonstration of selective quantum process tomography on an NMR quantum information processor

    Science.gov (United States)

    Gaikwad, Akshay; Rehal, Diksha; Singh, Amandeep; Arvind, Dorai, Kavita

    2018-02-01

    We present the NMR implementation of a scheme for selective and efficient quantum process tomography without ancilla. We generalize this scheme such that it can be implemented efficiently using only a set of measurements involving product operators. The method allows us to estimate any element of the quantum process matrix to a desired precision, provided a set of quantum states can be prepared efficiently. Our modified technique requires fewer experimental resources as compared to the standard implementation of selective and efficient quantum process tomography, as it exploits the special nature of NMR measurements to allow us to compute specific elements of the process matrix by a restrictive set of subsystem measurements. To demonstrate the efficacy of our scheme, we experimentally tomograph the processes corresponding to "no operation," a controlled-NOT (CNOT), and a controlled-Hadamard gate on a two-qubit NMR quantum information processor, with high fidelities.

  19. The partner selection process : Steps, effectiveness, governance

    NARCIS (Netherlands)

    Duisters, D.; Duijsters, G.M.; de Man, A.P.

    2011-01-01

    Selecting the right partner is important for creating value in alliances. Even though prior research suggests that a structured partner selection process increases alliance success, empirical research remains scarce. This paper presents an explorative empirical study that shows that some steps in…

  1. PopGen Fishbowl: A Free Online Simulation Model of Microevolutionary Processes

    Science.gov (United States)

    Jones, Thomas C.; Laughlin, Thomas F.

    2010-01-01

    Natural selection and other components of evolutionary theory are known to be particularly challenging concepts for students to understand. To help illustrate these concepts, we developed a simulation model of microevolutionary processes. The model features all the components of Hardy-Weinberg theory, with population size, selection, gene flow,…
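
    For readers who want to experiment with the same ideas, the sketch below is a minimal Wright-Fisher style simulation (illustrative Python, not the PopGen Fishbowl implementation) combining selection, migration (gene flow) and drift for one biallelic locus.

      # Hedged sketch of a one-locus, two-allele microevolution simulation with
      # selection, migration and binomial drift. Parameter values are illustrative.
      import numpy as np

      rng = np.random.default_rng(0)
      N = 500          # diploid population size
      p = 0.5          # initial frequency of allele A
      s = 0.05         # selection coefficient favouring A
      m = 0.01         # migration rate from a mainland population
      p_m = 0.1        # allele A frequency on the mainland

      for gen in range(1, 101):
          # Selection: weight allele A by (1 + s) before sampling.
          p_sel = p * (1 + s) / (p * (1 + s) + (1 - p))
          # Migration: mix with the mainland allele frequency.
          p_mig = (1 - m) * p_sel + m * p_m
          # Drift: binomial sampling of 2N gametes.
          p = rng.binomial(2 * N, p_mig) / (2 * N)
          if gen % 25 == 0:
              print(f"generation {gen}: freq(A) = {p:.3f}")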

  2. Advantages and disadvantages of an objective selection process for early intervention in employees at risk for sickness absence.

    Science.gov (United States)

    Duijts, Saskia F A; Kant, Ijmert; Swaen, Gerard M H

    2007-05-02

    It is unclear if objective selection of employees for an intervention to prevent sickness absence is more effective than subjective 'personal enlistment'. We hypothesize that objectively selected employees are 'at risk' for sickness absence and eligible to participate in the intervention program. The dispatch of 8603 screening instruments forms the starting point of the objective selection process. Different stages of this process, throughout which employees either dropped out or were excluded, were described and compared with the subjective selection process. Characteristics of ineligible and ultimately selected employees for a randomized trial were described and quantified using sickness absence data. The overall response rate on the screening instrument was 42.0%. Response bias was found for the parameters sex and age, but not for sickness absence. Sickness absence was higher in the 'at risk' (N = 212) group (42%) compared to the 'not at risk' (N = 2503) group (25%) (OR 2.17, CI 1.63-2.89; p = 0.000). The selection process ended with the successful inclusion of 151 eligible employees, i.e. 2% of the approached employees, in the trial. The study shows that objective selection of employees for early intervention is effective. Despite methodological and practical problems, the selected employees are actually those at risk for sickness absence, who will probably benefit more from the intervention program than others.

  3. In situ process monitoring in selective laser sintering using optical coherence tomography

    Science.gov (United States)

    Gardner, Michael R.; Lewis, Adam; Park, Jongwan; McElroy, Austin B.; Estrada, Arnold D.; Fish, Scott; Beaman, Joseph J.; Milner, Thomas E.

    2018-04-01

    Selective laser sintering (SLS) is an efficient process in additive manufacturing that enables rapid part production from computer-based designs. However, SLS is limited by its notable lack of in situ process monitoring when compared with other manufacturing processes. We report the incorporation of optical coherence tomography (OCT) into an SLS system in detail and demonstrate access to surface and subsurface features. Video frame rate cross-sectional imaging reveals areas of sintering uniformity and areas of excessive heat error with high temporal resolution. We propose a set of image processing techniques for SLS process monitoring with OCT and report the limitations and obstacles for further OCT integration with SLS systems.

  4. Exploring the role of motivational and coping resources in a Special Forces selection process

    Directory of Open Access Journals (Sweden)

    Marié de Beer

    2014-07-01

    Research purpose: The purpose was to compare selected and non-selected candidates in terms of their sense of coherence, hardiness, locus of control and self-efficacy, and to explore what they considered important for success in the selection process. Motivation for the study: Because of high attrition rates in Special Forces selection, the evaluation of the role of motivation and coping resources in terms of possible predictive utility could benefit the organisation from a logistical, financial and efficiency point of view. Research design, approach and method: A mixed-method cross-sectional survey design was used to assess an all-male candidate group (N = 73). The selected and non-selected groups were compared with regard to their sense of coherence, hardiness, locus of control and self-efficacy mean scores. Main findings: No statistically significant differences were found between the mean scores of the two groups on the quantitative measures used. Practical/managerial implications: The quantitative measures generally showed acceptable coefficient alpha reliabilities. Although no statistically significant mean differences were found between the groups, candidates showed high levels of sense of coherence, high levels of self-efficacy and average levels of hardiness and internal locus of control. The qualitative data confirmed the relevance of the quantitative constructs and pointed to additional aspects already considered in preparation for and during the selection process. Contribution/value-add: The results provide information regarding the constructs and measures used in a military context.

  5. Pedagogic process modeling: Humanistic-integrative approach

    Directory of Open Access Journals (Sweden)

    Boritko Nikolaj M.

    2007-01-01

    The paper deals with some current problems of modeling the dynamics of the development of the subject-features of the individual. The term "process" is considered in the context of the humanistic-integrative approach, in which the principles of self-education are regarded as criteria for efficient pedagogic activity. Four basic characteristics of the pedagogic process are pointed out: intentionality reflects the logicality and regularity of the development of the process; discreteness (stageability) indicates qualitative stages through which the pedagogic phenomenon passes; nonlinearity explains the crisis character of pedagogic processes and reveals inner factors of self-development; situationality requires a selection of pedagogic conditions in accordance with the inner factors, which would enable steering the pedagogic process. Two steps for singling out a particular stage and an algorithm for developing an integrative model for it are offered. The suggested conclusions might be of use for further theoretical research, analyses of educational practices and for realistic prediction of pedagogical phenomena.

  6. Optimizing selective cutting strategies for maximum carbon stocks and yield of Moso bamboo forest using BIOME-BGC model.

    Science.gov (United States)

    Mao, Fangjie; Zhou, Guomo; Li, Pingheng; Du, Huaqiang; Xu, Xiaojun; Shi, Yongjun; Mo, Lufeng; Zhou, Yufeng; Tu, Guoqing

    2017-04-15

    The selective cutting method currently used in Moso bamboo forests has resulted in a reduction of stand productivity and carbon sequestration capacity. Given the time and labor expense involved in addressing this problem manually, simulation using an ecosystem model is the most suitable approach. The BIOME-BGC model was improved to suit managed Moso bamboo forests; it was adapted to include the age structure, specific ecological processes and management measures of Moso bamboo forest. A field selective cutting experiment was done in nine plots with three cutting intensities (high, moderate and low) during 2010-2013, and the biomass of these plots was measured for model validation. Four selective cutting scenarios were then simulated by the improved BIOME-BGC model to optimize the selective cutting timings, intervals, retained ages and intensities. The improved model matched the observed aboveground carbon density and yield of the different plots, with relative errors ranging from 9.83% to 15.74%. The results of the different selective cutting scenarios suggested that the optimal selective cutting measure is to cut 30% of culms of age 6, 80% of culms of age 7, and all culms of age 8 and above, in winter every other year. The vegetation carbon density and harvested carbon density of this selective cutting method can increase by 74.63% and 21.5%, respectively, compared with the current selective cutting measure. The optimized selective cutting measure developed in this study can significantly promote carbon density, yield, and carbon sink capacity in Moso bamboo forests.

  7. Models of microbiome evolution incorporating host and microbial selection.

    Science.gov (United States)

    Zeng, Qinglong; Wu, Steven; Sukumaran, Jeet; Rodrigo, Allen

    2017-09-25

    Numerous empirical studies suggest that hosts and microbes exert reciprocal selective effects on their ecological partners. Nonetheless, we still lack an explicit framework to model the dynamics of both hosts and microbes under selection. In a previous study, we developed an agent-based forward-time computational framework to simulate the neutral evolution of host-associated microbial communities in a constant-sized, unstructured population of hosts. These neutral models allowed offspring to sample microbes randomly from parents and/or from the environment. Additionally, the environmental pool of available microbes was constituted by fixed and persistent microbial OTUs and by contributions from host individuals in the preceding generation. In this paper, we extend our neutral models to allow selection to operate on both hosts and microbes. We do this by constructing a phenome for each microbial OTU consisting of a sample of traits that influence host and microbial fitnesses independently. Microbial traits can influence the fitness of hosts ("host selection") and the fitness of microbes ("trait-mediated microbial selection"). Additionally, the fitness effects of traits on microbes can be modified by their hosts ("host-mediated microbial selection"). We simulate the effects of these three types of selection, individually or in combination, on microbiome diversities and the fitnesses of hosts and microbes over several thousand generations of hosts. We show that microbiome diversity is strongly influenced by selection acting on microbes. Selection acting on hosts only influences microbiome diversity when there is near-complete direct or indirect parental contribution to the microbiomes of offspring. Unsurprisingly, microbial fitness increases under microbial selection. Interestingly, when host selection operates, host fitness only increases under two conditions: (1) when there is a strong parental contribution to microbial communities or (2) in the absence of a strong…

  8. Numerical simulation of complex part manufactured by selective laser melting process

    Science.gov (United States)

    Van Belle, Laurent

    2017-10-01

    Selective Laser Melting (SLM), a process belonging to the family of Additive Manufacturing (AM) technologies, enables parts to be built layer by layer from metallic powder and a CAD model. The physical phenomena that occur in the process raise the same issues as conventional welding: thermal gradients generate significant residual stresses and distortions in the parts. Moreover, the large and complex parts to be manufactured accentuate these undesirable effects. Therefore, it is essential for manufacturers to gain a better understanding of the process and to ensure production reliability of parts with high added value. This paper focuses on the simulation of manufacturing a turbine by the SLM process in order to calculate residual stresses and distortions. Numerical results will be presented.

  9. Bayesian model selection of template forward models for EEG source reconstruction.

    Science.gov (United States)

    Strobbe, Gregor; van Mierlo, Pieter; De Vos, Maarten; Mijović, Bogdan; Hallez, Hans; Van Huffel, Sabine; López, José David; Vandenberghe, Stefaan

    2014-06-01

    Several EEG source reconstruction techniques have been proposed to identify the generating neuronal sources of electrical activity measured on the scalp. The solution of these techniques depends directly on the accuracy of the forward model that is inverted. Recently, a parametric empirical Bayesian (PEB) framework for distributed source reconstruction in EEG/MEG was introduced and implemented in the Statistical Parametric Mapping (SPM) software. The framework allows us to compare different forward modeling approaches, using real data, instead of using more traditional simulated data from an assumed true forward model. In the absence of a subject specific MR image, a 3-layered boundary element method (BEM) template head model is currently used including a scalp, skull and brain compartment. In this study, we introduced volumetric template head models based on the finite difference method (FDM). We constructed a FDM head model equivalent to the BEM model and an extended FDM model including CSF. These models were compared within the context of three different types of source priors related to the type of inversion used in the PEB framework: independent and identically distributed (IID) sources, equivalent to classical minimum norm approaches, coherence (COH) priors similar to methods such as LORETA, and multiple sparse priors (MSP). The resulting models were compared based on ERP data of 20 subjects using Bayesian model selection for group studies. The reconstructed activity was also compared with the findings of previous studies using functional magnetic resonance imaging. We found very strong evidence in favor of the extended FDM head model with CSF and assuming MSP. These results suggest that the use of realistic volumetric forward models can improve PEB EEG source reconstruction.

  10. Pareto genealogies arising from a Poisson branching evolution model with selection.

    Science.gov (United States)

    Huillet, Thierry E

    2014-02-01

    We study a class of coalescents derived from a sampling procedure out of N i.i.d. Pareto(α) random variables, normalized by their sum, including β-size-biasing on total length effects. Depending on α, this leads either to a Poisson-Dirichlet(α, -β) Ξ-coalescent (α ∈ [0, 1)), or to a family of continuous-time Beta(2 - α, α - β) Λ-coalescents (α ∈ [1, 2)), or to the Kingman coalescent (α ≥ 2). We indicate that this class of coalescent processes (and their scaling limits) may be viewed as the genealogical processes of some forward-in-time evolving branching population models including selection effects. In such constant-size population models, the reproduction step, which is based on a fitness-dependent Poisson point process with scaling power-law(α) intensity, is coupled to a selection step consisting of sorting out the N fittest individuals issued from the reproduction step.
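
    The sampling procedure the class is built on is easy to simulate: draw N i.i.d. Pareto(α) variables, normalize by their sum, and use the normalized weights as random offspring probabilities for one generation. The sketch below does exactly that; parameters are illustrative.

      # Hedged sketch of the Pareto sampling step behind these coalescents:
      # heavy-tailed random weights produce highly skewed family sizes.
      import numpy as np

      rng = np.random.default_rng(0)
      N, alpha = 1000, 1.5

      pareto = rng.pareto(alpha, size=N) + 1.0   # Pareto(alpha) on [1, inf)
      weights = pareto / pareto.sum()            # normalized by their sum
      offspring = rng.multinomial(N, weights)    # offspring counts per parent

      print("max family size:", offspring.max())
      print("families with no offspring:", np.sum(offspring == 0))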

  11. Measures and limits of models of fixation selection.

    Directory of Open Access Journals (Sweden)

    Niklas Wilming

    Models of fixation selection are a central tool in the quest to understand how the human mind selects relevant information. Using this tool in the evaluation of competing claims often requires comparing different models' relative performance in predicting eye movements. However, studies use a wide variety of performance measures with markedly different properties, which makes a comparison difficult. We make three main contributions to this line of research. First, we argue for a set of desirable properties, review commonly used measures, and conclude that no single measure unites all desirable properties. However, the area under the ROC curve (a classification measure) and the KL-divergence (a distance measure of probability distributions) combine many desirable properties and allow a meaningful comparison of critical model performance. We give an analytical proof of the linearity of the ROC measure with respect to averaging over subjects and demonstrate an appropriate correction of entropy-based measures like KL-divergence for small sample sizes in the context of eye-tracking data. Second, we provide a lower bound and an upper bound for these measures, based on image-independent properties of fixation data and between-subject consistency, respectively. Based on these bounds it is possible to give a reference frame to judge the predictive power of a model of fixation selection. We provide open-source Python code to compute the reference frame. Third, we show that the upper, between-subject consistency bound holds only for models that predict averages of subject populations. Departing from this, we show that incorporating subject-specific viewing behavior can generate predictions which surpass that upper bound. Taken together, these findings lay out the required information that allows a well-founded judgment of the quality of any model of fixation selection and should therefore be reported when a new model is introduced.
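
    A minimal sketch of the two recommended measures on simulated data: AUC, treating model values at fixated locations as positives against random control locations, and the KL-divergence between the empirical fixation density and the model density, with smoothing for small samples. The setup is illustrative, not the authors' open-source implementation.

      # Hedged sketch of AUC and KL-divergence for a saliency map evaluated
      # against (simulated) fixation locations.
      import numpy as np
      from sklearn.metrics import roc_auc_score

      rng = np.random.default_rng(0)
      H, W = 32, 32
      saliency = rng.random((H, W))                  # model's saliency map

      # Simulated fixations, biased toward high-saliency cells.
      prob = saliency.ravel() / saliency.sum()
      fix_idx = rng.choice(H * W, size=200, p=prob)

      # AUC: model value at fixated pixels vs at random control pixels.
      pos = saliency.ravel()[fix_idx]
      neg = saliency.ravel()[rng.integers(0, H * W, size=200)]
      auc = roc_auc_score(np.r_[np.ones(200), np.zeros(200)], np.r_[pos, neg])

      # KL-divergence between the (smoothed) empirical fixation density and
      # the model density; add-half smoothing guards against empty cells.
      emp = np.bincount(fix_idx, minlength=H * W).astype(float) + 0.5
      emp /= emp.sum()
      kl = np.sum(emp * np.log(emp / prob))
      print(f"AUC = {auc:.3f}, KL = {kl:.3f}")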

  12. Improving permafrost distribution modelling using feature selection algorithms

    Science.gov (United States)

    Deluigi, Nicola; Lambiel, Christophe; Kanevski, Mikhail

    2016-04-01

    …overall operation. It operates by constructing a large collection of decorrelated classification trees, and then predicts permafrost occurrence through a majority vote. With the so-called out-of-bag (OOB) error estimate, the classification of permafrost data can be validated and the contribution of each predictor assessed. The performances of the compared permafrost distribution models (computed on independent testing sets) increased with the application of FS algorithms to the original dataset, as irrelevant or redundant variables were removed. As a consequence, the process provided faster and more cost-effective predictors and a better understanding of the underlying structures residing in the permafrost data. Our work demonstrates the usefulness of a feature selection step prior to applying a machine learning algorithm. In fact, permafrost predictors could be ranked not only based on their heuristic and subjective importance (expert knowledge), but also based on their statistical relevance in relation to the permafrost distribution.

  13. The effect of addition of selected carrageenans on viscoelastic properties of model processed cheese spreads

    Directory of Open Access Journals (Sweden)

    Michaela Černíková

    2007-01-01

    Full Text Available The effect of 0.25% w/w κ-carrageenan and ι-carrageenan on the viscoelastic properties of processed cheese was studied using model samples containing 40% w/w dry matter and 45 and 50% w/w fat in dry matter. Experimental samples of processed cheese were evaluated after 14 days of storage at the temperature of 6 ± 2 °C. Basic parameters of the processed cheese samples under study (i.e. their dry matter content and pH) were not different (P ≥ 0.05). There were no statistically significant differences in values of storage modulus G´ [Pa], loss modulus G'' [Pa] and tangent of phase shift angle tan δ [-] at the reference frequency of 1 Hz between processed cheese with κ-carrageenan applied in the form of powder and in the form of aqueous dispersion (P ≥ 0.05). The addition of 0.25% w/w κ-carrageenan and ι-carrageenan (in the powder form) resulted in an increase in storage (G´) and loss (G'') moduli and a decrease in values of tan δ (P < 0.05). As compared with the control (i.e. without added carrageenans), samples of processed cheese became firmer. Iota-carrageenan added in the powder form at a concentration of 0.25% w/w showed a more intensive effect on the increase in firmness of the processed cheese under study than κ-carrageenan (P < 0.05).

  14. National HIV prevalence estimates for sub-Saharan Africa: controlling selection bias with Heckman-type selection models

    Science.gov (United States)

    Hogan, Daniel R; Salomon, Joshua A; Canning, David; Hammitt, James K; Zaslavsky, Alan M; Bärnighausen, Till

    2012-01-01

    Objectives Population-based HIV testing surveys have become central to deriving estimates of national HIV prevalence in sub-Saharan Africa. However, limited participation in these surveys can lead to selection bias. We control for selection bias in national HIV prevalence estimates using a novel approach, which unlike conventional imputation can account for selection on unobserved factors. Methods For 12 Demographic and Health Surveys conducted from 2001 to 2009 (N=138 300), we predict HIV status among those missing a valid HIV test with Heckman-type selection models, which allow for correlation between infection status and participation in survey HIV testing. We compare these estimates with conventional ones and introduce a simulation procedure that incorporates regression model parameter uncertainty into confidence intervals. Results Selection model point estimates of national HIV prevalence were greater than unadjusted estimates for 10 of 12 surveys for men and 11 of 12 surveys for women, and were also greater than the majority of estimates obtained from conventional imputation, with significantly higher HIV prevalence estimates for men in Cote d'Ivoire 2005, Mali 2006 and Zambia 2007. Accounting for selective non-participation yielded 95% confidence intervals around HIV prevalence estimates that are wider than those obtained with conventional imputation by an average factor of 4.5. Conclusions Our analysis indicates that national HIV prevalence estimates for many countries in sub-Saharan Africa are more uncertain than previously thought, and may be underestimated in several cases, underscoring the need for increasing participation in HIV surveys. Heckman-type selection models should be included in the set of tools used for routine estimation of HIV prevalence. PMID:23172342
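
    For readers unfamiliar with Heckman-type selection models, the sketch below shows the classic two-step variant on simulated data: a probit participation model supplies an inverse Mills ratio that corrects the outcome regression. It uses a continuous outcome for simplicity, whereas the paper's HIV application pairs the selection equation with a binary (probit) outcome; all variable names and parameter values are illustrative.

```python
import numpy as np
import statsmodels.api as sm
from scipy.stats import norm

rng = np.random.default_rng(2)
n = 5000

# Simulated data: z drives survey participation, x drives the outcome,
# and the two error terms are correlated (the source of selection bias).
z = rng.normal(size=n)
x = rng.normal(size=n)
u, e = rng.multivariate_normal([0, 0], [[1, 0.6], [0.6, 1]], size=n).T
participate = (0.5 + 1.0 * z + u > 0)
y = 1.0 + 2.0 * x + e        # outcome, observed only for participants

# Step 1: probit model of participation, then the inverse Mills ratio.
Z = sm.add_constant(z)
gamma = sm.Probit(participate.astype(int), Z).fit(disp=0).params
zg = Z @ gamma
imr = norm.pdf(zg) / norm.cdf(zg)

# Step 2: outcome regression on participants, with the IMR as a regressor.
X = sm.add_constant(np.column_stack([x[participate], imr[participate]]))
res = sm.OLS(y[participate], X).fit()
print(res.params)  # intercept, slope, IMR coefficient (~ rho * sigma)
```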

  15. Decision support model for selecting and evaluating suppliers in the construction industry

    Directory of Open Access Journals (Sweden)

    Fernando Schramm

    2012-12-01

    Full Text Available A structured evaluation of the construction industry's suppliers, considering aspects which make their quality and credibility evident, can be a strategic tool to manage this specific supply chain. This study proposes a multi-criteria decision model for supplier selection in the construction industry, as well as an efficient evaluation procedure for the selected suppliers. The model is based on the SMARTER (Simple Multi-Attribute Rating Technique Exploiting Ranking) method and its main contribution is a new approach to structuring the process of supplier selection, establishing explicit strategic policies on which the company management system relies to select suppliers. The model was applied to a civil construction company in Brazil and the main results demonstrate its efficiency. The study allowed the development of an approach for the construction industry that is able to provide a better relationship among its managers, suppliers and partners.
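
    SMARTER replaces elicited weights with rank-order centroid weights computed from the criteria ranking alone; a minimal sketch follows, with a hypothetical four-criteria ranking rather than the paper's actual criteria.

```python
def roc_weights(n_criteria: int) -> list[float]:
    """Rank-order centroid weights used by SMARTER: the k-th ranked
    criterion gets w_k = (1/K) * sum_{i=k}^{K} 1/i."""
    K = n_criteria
    return [sum(1.0 / i for i in range(k, K + 1)) / K for k in range(1, K + 1)]

# Example: four criteria ranked by importance, e.g. quality > cost >
# delivery > credibility (hypothetical ordering).
weights = roc_weights(4)
print([round(w, 3) for w in weights])  # [0.521, 0.271, 0.146, 0.062]
```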

  16. Can Process Understanding Help Elucidate The Structure Of The Critical Zone? Comparing Process-Based Soil Formation Models With Digital Soil Mapping.

    Science.gov (United States)

    Vanwalleghem, T.; Román, A.; Peña, A.; Laguna, A.; Giráldez, J. V.

    2017-12-01

    There is a need for better understanding of the processes influencing soil formation and the resulting distribution of soil properties in the critical zone. Soil properties can exhibit strong spatial variation, even at the small catchment scale. Soil carbon pools in semi-arid, mountainous areas especially are highly uncertain because bulk density and stoniness are very heterogeneous and rarely measured explicitly. In this study, we explore the spatial variability in key soil properties (soil carbon stocks, stoniness, bulk density and soil depth) as a function of the processes shaping the critical zone (weathering, erosion, soil water fluxes and vegetation patterns). We also compare the potential of traditional digital soil mapping versus a mechanistic soil formation model (MILESD) for predicting these key soil properties. Soil core samples were collected from 67 locations at 6 depths. Total soil organic carbon stocks were 4.38 kg m-2. Solar radiation proved to be the key variable controlling soil carbon distribution. Stone content was mostly controlled by slope, indicating the importance of erosion. The spatial distribution of bulk density was found to be highly random. Finally, total carbon stocks were predicted using a random forest model whose main covariates were solar radiation and NDVI. The model predicts carbon stocks that are twice as high on north-facing as on south-facing slopes. However, validation showed that these covariates explained only 25% of the variation in the dataset. Apparently, present-day landscape and vegetation properties are not sufficient to fully explain the variability in soil carbon stocks in this complex terrain under natural vegetation. This is attributed to high spatial variability in bulk density and stoniness, key variables controlling carbon stocks. Similar results were obtained with the mechanistic soil formation model MILESD, suggesting that more complex models might be needed to further explore this high spatial variability.

  17. Comparative Study Of Two Non-Selective Cyclooxygenase ...

    African Journals Online (AJOL)

    The comparative study of the effects of two non-selective cyclooxygenase inhibitors, ibuprofen and paracetamol, on maternal and neonatal growth was conducted using 15 Sprague Dawley rats, with mean body weights ranging between 165 and 179 g. The rats were separated at random into three groups (A, B and C).

  18. Nonword Reading: Comparing Dual-Route Cascaded and Connectionist Dual-Process Models with Human Data

    Science.gov (United States)

    Pritchard, Stephen C.; Coltheart, Max; Palethorpe, Sallyanne; Castles, Anne

    2012-01-01

    Two prominent dual-route computational models of reading aloud are the dual-route cascaded (DRC) model, and the connectionist dual-process plus (CDP+) model. While sharing similarly designed lexical routes, the two models differ greatly in their respective nonlexical route architecture, such that they often differ on nonword pronunciation. Neither…

  19. Three Tier Unified Process Model for Requirement Negotiations and Stakeholder Collaborations

    Science.gov (United States)

    Niazi, Muhammad Ashraf Khan; Abbas, Muhammad; Shahzad, Muhammad

    2012-11-01

    This research paper is focused on carrying out a pragmatic qualitative analysis of various models and approaches to requirements negotiation (a sub-process of the requirements management plan, which is an output of scope management's collect requirements process) and studies stakeholder collaboration methodologies (i.e. from within the communication management knowledge area). The experiential analysis encompasses two tiers: the first tier refers to the weighted scoring model, while the second tier focuses on developing SWOT matrices on the basis of the findings of the weighted scoring model for selecting an appropriate requirements negotiation model. Finally, the results are simulated with the help of statistical pie charts. On the basis of the simulated results of prevalent models and approaches to negotiation, a unified approach for requirements negotiation and stakeholder collaboration is proposed, where the collaboration methodologies are embedded into the selected requirements negotiation model as internal parameters of the proposed process, alongside some external required parameters such as MBTI and opportunity analysis.
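
    A minimal sketch of the first-tier weighted scoring step is given below; the criteria, weights, and candidate negotiation models are hypothetical placeholders, not those evaluated in the paper.

```python
# Hypothetical criteria weights (summing to 1) and per-model scores (1-5).
weights = {"flexibility": 0.40, "tool_support": 0.35, "effort": 0.25}
scores = {
    "Model A": {"flexibility": 4, "tool_support": 5, "effort": 3},
    "Model B": {"flexibility": 3, "tool_support": 2, "effort": 4},
    "Model C": {"flexibility": 5, "tool_support": 3, "effort": 2},
}

# Weighted score per model: sum of weight * score over all criteria.
totals = {m: sum(weights[c] * s[c] for c in weights) for m, s in scores.items()}
best = max(totals, key=totals.get)
print(totals, "->", best)
```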

  20. Category-selective attention interacts with partial awareness processes in a continuous manner: An fMRI study

    Directory of Open Access Journals (Sweden)

    Shen Tu

    2015-12-01

    Full Text Available Recently, our team found that category-selective attention could modulate tool processing at the partial awareness level and unconscious face processing in the middle occipital gyrus (MOG). However, the modulation effects in MOG were in opposite directions across the masked tool and masked face conditions in that study: MOG activation decreased in the masked faces condition but increased in the masked tools condition under the consistent compared with the inconsistent cue-selective-attentional modulation. In the present study, in order to confirm that the opposite effects were due to the changed contours of the tools, using the same tool pictures and fMRI technique, we devised two further conditions: a variant mirror tool picture condition and an invariant tool picture condition. The results showed that, during the variant mirror tool picture condition, activation in the MOG decreased under tool-selective attention compared with face-selective attention. Interestingly, however, during the invariant tool picture condition, activation in the MOG revealed neither positive nor negative changes. Combined with the result of increased MOG activity in the changed different tool condition, the three different effects demonstrated not only that the unconscious component of partial awareness processing (no knowledge of the identity of the tool) could be modulated by category-selective attention in the earlier visual cortex but also that the modulation effect could further interact with the conscious component of partial awareness processing (consciousness of the changing contour of the tool) in a continuous manner.

  1. Spatial Fleming-Viot models with selection and mutation

    CERN Document Server

    Dawson, Donald A

    2014-01-01

    This book constructs a rigorous framework for analysing selected phenomena in the evolutionary theory of populations arising due to the combined effects of migration, selection and mutation in a spatial stochastic population model, namely the evolution towards fitter and fitter types through punctuated equilibria. The discussion is based on a number of new methods, in particular multiple scale analysis, nonlinear Markov processes and their entrance laws, atomic measure-valued evolutions, and new forms of duality (for state-dependent mutation and multitype selection), which are used to prove ergodic theorems in this context and are applicable to many other questions. These methods also support a renormalization analysis of a variety of phenomena (stasis, punctuated equilibrium, failure of naive branching approximations, biodiversity) which occur due to the combination of rare mutation, resampling, migration and selection, and which make it necessary to mathematically bridge the gap (in the limit) between time and space scales.

  2. Comparative study on demographic-economic model-building for three selected countries of the ESCAP region.

    Science.gov (United States)

    1980-01-01

    The research project involves building models for 3 selected ESCAP countries (Indonesia, Japan, and the Republic of Korea), which are at different stages of demographic transition. This project involves country-level research work designed, implemented, and monitored with the assistance of ESCAP. Accordingly, the 1st Study Directors' Meeting was held in Bangkok during November 16-30, 1979, as a series of informal interactive working sessions for Study Directors, modelling experts, and resource persons. The participants were Study Directors from the above-mentioned countries and a few experts from Malaysia, Thailand, ILO, UNRISD, and IBRD. The main objective of the meeting was to help finalize the basic model framework so that National Study Directors would be able to commence their modelling work after the Meeting. As evidenced by the Report of the 1st Study Directors' Meeting, this objective was achieved. Following this meeting, the 3 case studies are being simultaneously undertaken in the countries by national study teams, with technical support provided by ESCAP.

  3. Variable selection and model choice in geoadditive regression models.

    Science.gov (United States)

    Kneib, Thomas; Hothorn, Torsten; Tutz, Gerhard

    2009-06-01

    Model choice and variable selection are issues of major concern in practical regression analyses, arising in many biometric applications such as habitat suitability analyses, where the aim is to identify the influence of potentially many environmental conditions on certain species. We describe regression models for breeding bird communities that facilitate both model choice and variable selection, by a boosting algorithm that works within a class of geoadditive regression models comprising spatial effects, nonparametric effects of continuous covariates, interaction surfaces, and varying coefficients. The major modeling components are penalized splines and their bivariate tensor product extensions. All smooth model terms are represented as the sum of a parametric component and a smooth component with one degree of freedom to obtain a fair comparison between the model terms. A generic representation of the geoadditive model allows us to devise a general boosting algorithm that automatically performs model choice and variable selection.

  4. Selecting university undergraduate student activities via compromised-analytical hierarchy process and 0-1 integer programming to maximize SETARA points

    Science.gov (United States)

    Nazri, Engku Muhammad; Yusof, Nur Ai'Syah; Ahmad, Norazura; Shariffuddin, Mohd Dino Khairri; Khan, Shazida Jan Mohd

    2017-11-01

    Prioritizing and deciding which student activities to select and conduct to fulfill the aspirations of a university, as translated in its strategic plan, must be done with transparency and accountability. This is becoming even more crucial for universities in Malaysia, given the recent budget cut imposed by the Malaysian government. In this paper, we illustrate how a 0-1 integer programming (0-1 IP) model was implemented to select which of the forty activities proposed by the student body of Universiti Utara Malaysia (UUM) should be implemented for the 2017/2018 academic year. Two different models were constructed. The first model was developed to determine the minimum total budget that the UUM management should give the student body to conduct all the activities needed to fulfill the minimum targeted number of activities stated in its strategic plan. The second model, on the other hand, was developed to determine which activities to select given the total budget already allocated by the UUM management, towards fulfilling the requirements set in its strategic plan. The selection of activities for the second model was also based on the preferences of the members of the student body, whereby the preference value for each activity was determined using the Compromised-Analytical Hierarchy Process. The outputs from both models were compared and discussed. The technique used in this study will be useful and suitable for organizations with key performance indicator-oriented programs and limited budget allocations.
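
    The second model described above is a 0-1 knapsack-style program: maximize total preference value subject to a budget cap. A minimal sketch with the PuLP solver follows; the preference values, costs, and budget are invented for illustration, not the paper's data.

```python
from pulp import LpProblem, LpVariable, LpMaximize, lpSum, PULP_CBC_CMD

# Hypothetical AHP-derived preference values and costs for five
# candidate activities, with a fixed total budget.
pref = [0.30, 0.25, 0.20, 0.15, 0.10]
cost = [12000, 8000, 15000, 5000, 7000]
budget = 25000

prob = LpProblem("activity_selection", LpMaximize)
x = [LpVariable(f"x{i}", cat="Binary") for i in range(len(pref))]
prob += lpSum(p * xi for p, xi in zip(pref, x))            # total preference
prob += lpSum(c * xi for c, xi in zip(cost, x)) <= budget  # budget cap
prob.solve(PULP_CBC_CMD(msg=0))
print([int(xi.value()) for xi in x])  # 1 = activity selected
```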

  5. A comparative study of two communication models in HIV/AIDS coverage in selected Nigerian newspapers.

    Science.gov (United States)

    Okidu, Onjefu

    2013-01-30

    The current overriding thought in HIV/AIDS communication in developing countries is the need for a shift from the cognitive model, which emphasises the decision-making of the individual, to the activity model, which emphasises the context of the individual. In spite of the acknowledged media shift from the cognitive to the activity model in some developing countries, some HIV/AIDS communication scholars have felt otherwise. It was against this background that this study examined the content of some selected Nigerian newspapers to ascertain the attention paid to HIV/AIDS cognitive and activity information. Generally, the study found that Nigerian newspapers had shifted from the cognitive to the activity model of communication in their coverage of HIV/AIDS issues. The findings of the study seem inconsistent with the theoretical argument of some scholars that insufficient attention has been paid by mass media in developing countries to the activity model of HIV/AIDS communication. It is suggested that future research replicate the study for Nigerian and other developing countries' mass media.

  6. Comparative assessment of TRU waste forms and processes. Volume II. Waste form data, process descriptions, and costs

    International Nuclear Information System (INIS)

    Ross, W.A.; Lokken, R.O.; May, R.P.; Roberts, F.P.; Thornhill, R.E.; Timmerman, C.L.; Treat, R.L.; Westsik, J.H. Jr.

    1982-09-01

    This volume contains supporting information for the comparative assessment of the transuranic waste forms and processes summarized in Volume I. Detailed data on the characterization of the waste forms selected for the assessment, process descriptions, and cost information are provided. The purpose of this volume is to provide additional information that may be useful when using the data in Volume I and to provide greater detail on particular waste forms and processes. Volume II is divided into two sections and two appendixes. The first section provides information on the preparation of the waste form specimens used in this study and additional characterization data in support of that in Volume I. The second section includes detailed process descriptions for the eight processes evaluated. Appendix A lists the results of the MCC-1 leach test and Appendix B lists additional cost data. 56 figures, 12 tables

  7. A decision model for the risk management of hazardous processes

    International Nuclear Information System (INIS)

    Holmberg, J.E.

    1997-03-01

    In this study, a decision model for the risk management of hazardous processes is formulated as an optimisation problem for a point process. In the approach, the decisions made by the management are divided into three categories: (1) planned process lifetime, (2) selection of the design, and (3) operational decisions. These three controlling methods play quite different roles in practical risk management, which is also reflected in our approach. The optimisation of the process lifetime is related to the licensing problem of the process. It provides a boundary condition for a feasible utility function that is used as the actual objective function, i.e., maximizing the process lifetime utility. By design modifications, the management can affect the inherent accident hazard rate of the process. This is usually a discrete optimisation task. The study particularly concentrates upon the optimisation of operational strategies given a certain design and licensing time. This is done by a dynamic risk model (a marked point process model) representing the stochastic process of events observable or unobservable to the decision maker. An optimal long-term control variable guiding the selection of operational alternatives in short-term problems is studied. The optimisation problem is solved by a stochastic quasi-gradient procedure. The approach is illustrated by a case study. (23 refs.)

  8. Predictive and Descriptive CoMFA Models: The Effect of Variable Selection.

    Science.gov (United States)

    Sepehri, Bakhtyar; Omidikia, Nematollah; Kompany-Zareh, Mohsen; Ghavami, Raouf

    2018-01-01

    Aims & Scope: In this research, 8 variable selection approaches were used to investigate the effect of variable selection on the predictive power and stability of CoMFA models. Three data sets, including 36 EPAC antagonists, 79 CD38 inhibitors and 57 ATAD2 bromodomain inhibitors, were modelled by CoMFA. First, for all three data sets, CoMFA models with all CoMFA descriptors were created; then, by applying each variable selection method, a new CoMFA model was developed, so 9 CoMFA models were built for each data set. The obtained results show that noisy and uninformative variables affect CoMFA results. Based on the created models, applying 5 variable selection approaches, including FFD, SRD-FFD, IVE-PLS, SRD-UVE-PLS and SPA-jackknife, increases the predictive power and stability of CoMFA models significantly. Among them, SPA-jackknife removes most of the variables, while FFD retains most of them. FFD and IVE-PLS are time-consuming processes, while SRD-FFD and SRD-UVE-PLS run in a few seconds. Applying FFD, SRD-FFD, IVE-PLS, and SRD-UVE-PLS also preserves CoMFA contour map information for both fields.

  9. Watershed Simulation of Nutrient Processes

    Science.gov (United States)

    In this presentation, nitrogen processes simulated in watershed models were reviewed and compared. Furthermore, current research on nitrogen losses from agricultural fields was also reviewed. Finally, applications with those models were reviewed and selected successful and u...

  10. Geometry characteristics modeling and process optimization in coaxial laser inside wire cladding

    Science.gov (United States)

    Shi, Jianjun; Zhu, Ping; Fu, Geyan; Shi, Shihong

    2018-05-01

    The coaxial laser inside wire cladding method is very promising, as it has a very high efficiency and a consistent interaction between the laser and the wire. In this paper, the energy and mass conservation laws and a regression algorithm are used together to establish mathematical models of the relationship between the layer geometry characteristics (width, height and cross-section area) and the process parameters (laser power, scanning velocity and wire feeding speed). Within the selected parameter ranges, the values predicted by the models are compared with the experimentally measured results; minor errors exist, but the two reflect the same regularity. From the models, it is seen that the width of the cladding layer is proportional to both the laser power and the wire feeding speed, while it first increases and then decreases with increasing scanning velocity. The height of the cladding layer is proportional to the scanning velocity and feeding speed and inversely proportional to the laser power. The cross-section area increases with increasing feeding speed and decreasing scanning velocity. Using the mathematical models, the geometry characteristics of the cladding layer can be predicted from known process parameters. Conversely, the process parameters can be calculated from targeted geometry characteristics. The models are also suitable for the multi-layer forming process. Using the optimized process parameters calculated from the models, a 45 mm-high thin-wall part was formed with smooth side surfaces.
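
    One common way to realize such a regression (assumed here, since the record does not spell out the exact functional form) is a power-law model fitted by least squares in log space; all measurements and parameter values below are invented for illustration.

```python
import numpy as np

# Hypothetical measurements: laser power P (W), scanning velocity v (mm/s),
# wire feeding speed f (mm/s), and measured layer width w (mm).
P = np.array([800, 900, 1000, 1100, 1200, 1300])
v = np.array([4.0, 5.0, 6.0, 4.0, 5.0, 6.0])
f = np.array([10, 12, 14, 10, 12, 14])
w = np.array([2.1, 2.2, 2.4, 2.6, 2.7, 2.9])

# Power-law regression w = a * P^b1 * v^b2 * f^b3, linearized by taking logs.
A = np.column_stack([np.ones_like(P, dtype=float),
                     np.log(P), np.log(v), np.log(f)])
coef, *_ = np.linalg.lstsq(A, np.log(w), rcond=None)
ln_a, b1, b2, b3 = coef

# Prediction for a new (hypothetical) parameter set.
w_pred = np.exp(ln_a) * 1050**b1 * 5.5**b2 * 13**b3
print(f"predicted width: {w_pred:.2f} mm")
```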

  11. Comparing Productivity Simulated with Inventory Data Using Different Modelling Technologies

    Science.gov (United States)

    Klopf, M.; Pietsch, S. A.; Hasenauer, H.

    2009-04-01

    The Lime Stone National Park in Austria was established in 1997 to protect sensitive limestone soils from degradation due to heavy forest management. Since 1997 the management activities were successively reduced; standing volume and coarse woody debris (CWD) increased and degraded soils began to recover. One option for studying the rehabilitation process towards a natural virgin forest state is the use of modelling technology. In this study we test two different modelling approaches for their applicability to the Lime Stone National Park. We compare standing tree volume as simulated by (i) the individual tree growth model MOSES, and (ii) the species- and management-sensitive adaptation of the biogeochemical-mechanistic model Biome-BGC. The results from the two models are compared with field observations from repeated permanent forest inventory plots of the Lime Stone National Park in Austria. The simulated CWD predictions of the BGC model were compared with dead wood measurements (standing and lying dead wood) recorded at the permanent inventory plots. The inventory was established between 1994 and 1996 and remeasured from 2004 to 2005. For this analysis 40 plots of this inventory were selected which comprise the required dead wood components and are dominated by a single tree species. First we used the distance-dependent individual tree growth model MOSES to derive the standing timber and the amount of mortality per hectare. MOSES is initialized with the inventory data at plot establishment and each sampling plot is treated as a forest stand. Biome-BGC is a process-based biogeochemical model with extensions for Austrian tree species, a self-initialization and a forest management tool. The initialization for the actual simulations with the BGC model was done as follows: we first used spin-up runs to derive a balanced forest vegetation, similar to an undisturbed forest. Next we considered the management history of the past centuries (heavy clear cuts

  12. A comparative study of mixed exponential and Weibull distributions in a stochastic model replicating a tropical rainfall process

    Science.gov (United States)

    Abas, Norzaida; Daud, Zalina M.; Yusof, Fadhilah

    2014-11-01

    A stochastic rainfall model is presented for the generation of hourly rainfall data in an urban area in Malaysia. In view of the high temporal and spatial variability of rainfall within the tropical rain belt, the Spatial-Temporal Neyman-Scott Rectangular Pulse model was used. The model, which is governed by the Neyman-Scott process, employs a reasonable number of parameters to represent the physical attributes of rainfall. A common approach is to attach each attribute to a mathematical distribution. With respect to rain cell intensity, this study proposes the use of a mixed exponential distribution. The performance of the proposed model was compared to a model that employs the Weibull distribution. Hourly and daily rainfall data from four stations in the Damansara River basin in Malaysia were used as input to the models, and simulations of hourly series were performed for an independent site within the basin. The performance of the models was assessed based on how closely the statistical characteristics of the simulated series resembled the statistics of the observed series. The findings obtained based on graphical representation revealed that the statistical characteristics of the simulated series for both models compared reasonably well with the observed series. However, a further assessment using the AIC, BIC and RMSE showed that the proposed model yields better results. The results of this study indicate that for tropical climates, the proposed model, using a mixed exponential distribution, is the best choice for generation of synthetic data for ungauged sites or for sites with insufficient data within the limit of the fitted region.
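
    The distributional choice at the heart of the comparison, rain-cell intensity drawn from a two-component mixed exponential, can be sketched as follows; the mixing weight and component means are illustrative choices, not fitted values from the study.

```python
import numpy as np

def mixed_exponential(n, p=0.7, mu1=1.0, mu2=8.0, rng=None):
    """Draw n rain-cell intensities from a two-component mixed exponential:
    with probability p use mean mu1, otherwise mean mu2 (illustrative
    parameter values only)."""
    rng = rng or np.random.default_rng()
    component = rng.random(n) < p
    return np.where(component,
                    rng.exponential(mu1, n),
                    rng.exponential(mu2, n))

x = mixed_exponential(10_000, rng=np.random.default_rng(3))
print(x.mean())  # ~ p*mu1 + (1-p)*mu2 = 0.7*1 + 0.3*8 = 3.1
```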

  13. THE EFFECT OF INQUIRY TRAINING MODEL USE THE MEDIA PHET AGAINST SCIENCE PROCESS SKILLS AND LOGICAL THINKING SKILLS STUDENTS

    Directory of Open Access Journals (Sweden)

    Fajrul Wahdi Ginting

    2015-12-01

    Full Text Available The purposes of the study were: to examine the science process skills and logical thinking ability of students taught with the Inquiry Training learning model using PhET media; to examine the science process skills and logical thinking ability of students taught with a conventional learning model; and to examine the difference in science process skills and logical thinking ability between the two groups. This research is quasi-experimental. Sample selection was done by cluster random sampling of two classes, VIII-E and VIII-B, where class VIII-E was taught with the Inquiry Training model using PhET media and class VIII-B with the conventional learning model. The instruments consisted of essay tests of science process skills and multiple-choice tests of logical thinking ability. The data were analyzed using the t test. The results showed that the physics science process skills of students taught with the Inquiry Training model using PhET media differed from, and were better than, those of students taught with the conventional learning model, and likewise for logical thinking skills; there was thus a difference in both outcomes between students taught with the Inquiry Training model using PhET media and those taught with conventional learning models.

  14. Expatriates Selection: An Essay of Model Analysis

    Directory of Open Access Journals (Sweden)

    Rui Bártolo-Ribeiro

    2015-03-01

    Full Text Available Business expansion into geographical areas with cultures different from those in which organizations were created and developed leads to the expatriation of employees to these destinations. Recruitment and selection procedures for expatriates do not always have the intended success, leading to an early return of these professionals with consequent organizational disorder. In this study, several articles published in the last five years were analyzed in order to identify the dimensions most frequently mentioned in the selection of expatriates in terms of success and failure. The characteristics in the selection process that may improve prediction of expatriates' adaptation to the new cultural contexts of the same organization were studied according to the KSAOs model. Few references to the Knowledge, Skills and Abilities dimensions were found in the analyzed papers. There was a strong predominance of the evaluation of Other Characteristics, and more importance was given to dispositional factors than to situational factors in promoting the integration of expatriates.

  15. A Bayesian random effects discrete-choice model for resource selection: Population-level selection inference

    Science.gov (United States)

    Thomas, D.L.; Johnson, D.; Griffith, B.

    2006-01-01

    Modeling the probability of use of land units characterized by discrete and continuous measures, we present a Bayesian random-effects model to assess resource selection. This model provides simultaneous estimation of both individual- and population-level selection. Deviance information criterion (DIC), a Bayesian alternative to AIC that is sample-size specific, is used for model selection. Aerial radiolocation data from 76 adult female caribou (Rangifer tarandus) and calf pairs during 1 year on an Arctic coastal plain calving ground were used to illustrate models and assess population-level selection of landscape attributes, as well as individual heterogeneity of selection. Landscape attributes included elevation, NDVI (a measure of forage greenness), and land cover-type classification. Results from the first of a 2-stage model-selection procedure indicated that there is substantial heterogeneity among cow-calf pairs with respect to selection of the landscape attributes. In the second stage, selection of models with heterogeneity included indicated that, at the population level, NDVI and land cover class were significant attributes for selection of different landscapes by pairs on the calving ground. Population-level selection coefficients indicate that the pairs generally select landscapes with higher levels of NDVI, but the relationship is quadratic. The highest rate of selection occurs at values of NDVI less than the maximum observed. Results for land cover-class selection coefficients indicate that wet sedge, moist sedge, herbaceous tussock tundra, and shrub tussock tundra are selected at approximately the same rate, while alpine and sparsely vegetated landscapes are selected at a lower rate. Furthermore, the variability in selection by individual caribou for moist sedge and sparsely vegetated landscapes is large relative to the variability in selection of other land cover types. The example analysis illustrates that, while sometimes computationally intense, a
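
    For reference, the DIC used here for model selection is the posterior mean deviance penalized by the effective number of parameters (Spiegelhalter et al.); a compact statement of the definition:

```latex
% D(\theta) = -2\log L(\theta) is the deviance, \bar{D} its posterior mean,
% and \bar{\theta} the posterior mean of the parameters.
\mathrm{DIC} = \bar{D} + p_D, \qquad p_D = \bar{D} - D(\bar{\theta})
```

    Models with lower DIC are preferred; unlike AIC, the penalty p_D is estimated from the posterior and is therefore sample-size specific.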

  16. Comparing the Selected Transfer Functions and Local Optimization Methods for Neural Network Flood Runoff Forecast

    Directory of Open Access Journals (Sweden)

    Petr Maca

    2014-01-01

    Full Text Available The presented paper aims to analyze the influence of the selection of transfer function and training algorithm on neural network flood runoff forecasts. Nine of the most significant flood events, caused by extreme rainfall, were selected from 10 years of measurements on a small headwater catchment in the Czech Republic, and flood runoff forecasting was investigated using an extensive set of multilayer perceptrons with one hidden layer of neurons. The analyzed artificial neural network models with 11 different activation functions in the hidden layer were trained using 7 local optimization algorithms. The results show that the Levenberg-Marquardt algorithm was superior to the remaining tested local optimization methods. When comparing the 11 nonlinear transfer functions used in the hidden-layer neurons, the RootSig function was superior to the rest of the analyzed activation functions.

  17. Generation unit selection via capital asset pricing model for generation planning

    Energy Technology Data Exchange (ETDEWEB)

    Romy Cahyadi; K. Jo Min; Chung-Hsiao Wang; Nick Abi-Samra [College of Engineering, Ames, IA (USA)]

    2003-11-01

    The USA's electric power industry is undergoing substantial regulatory and organizational changes. Such changes introduce substantial financial risk into generation planning. In order to incorporate this financial risk into the capital investment decision process of generation planning, this paper develops and analyses a generation unit selection process via the capital asset pricing model (CAPM). In particular, utilizing realistic data on gas-fired, coal-fired, and wind power generation units, the authors show which concrete steps can be taken for generation planning purposes, and how. It is hoped that the generation unit selection process will help utilities with effective and efficient generation planning when financial risks are considered. 20 refs., 14 tabs.
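
    The core CAPM relation behind such a selection process is E[R_i] = R_f + beta_i * (E[R_m] - R_f); a minimal sketch follows, with illustrative betas and rates rather than the paper's data.

```python
def capm_expected_return(r_f: float, beta: float, r_m: float) -> float:
    """CAPM: expected return = risk-free rate + beta * market risk premium."""
    return r_f + beta * (r_m - r_f)

# Illustrative numbers only: risk-free rate 4%, expected market return 10%,
# and hypothetical betas for three generation technologies.
for name, beta in [("gas-fired", 1.2), ("coal-fired", 0.9), ("wind", 0.6)]:
    print(name, f"{capm_expected_return(0.04, beta, 0.10):.1%}")
```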

  18. Forest Fragmentation and Selective Logging Have Inconsistent Effects on Multiple Animal-Mediated Ecosystem Processes in a Tropical Forest

    Science.gov (United States)

    Schleuning, Matthias; Farwig, Nina; Peters, Marcell K.; Bergsdorf, Thomas; Bleher, Bärbel; Brandl, Roland; Dalitz, Helmut; Fischer, Georg; Freund, Wolfram; Gikungu, Mary W.; Hagen, Melanie; Garcia, Francisco Hita; Kagezi, Godfrey H.; Kaib, Manfred; Kraemer, Manfred; Lung, Tobias; Schaab, Gertrud; Templin, Mathias; Uster, Dana; Wägele, J. Wolfgang; Böhning-Gaese, Katrin

    2011-01-01

    Forest fragmentation and selective logging are two main drivers of global environmental change and modify biodiversity and environmental conditions in many tropical forests. The consequences of these changes for the functioning of tropical forest ecosystems have rarely been explored in a comprehensive approach. In a Kenyan rainforest, we studied six animal-mediated ecosystem processes and recorded species richness and community composition of all animal taxa involved in these processes. We used linear models and a formal meta-analysis to test whether forest fragmentation and selective logging affected ecosystem processes and biodiversity and used structural equation models to disentangle direct from biodiversity-related indirect effects of human disturbance on multiple ecosystem processes. Fragmentation increased decomposition and reduced antbird predation, while selective logging consistently increased pollination, seed dispersal and army-ant raiding. Fragmentation modified species richness or community composition of five taxa, whereas selective logging did not affect any component of biodiversity. Changes in the abundance of functionally important species were related to lower predation by antbirds and higher decomposition rates in small forest fragments. The positive effects of selective logging on bee pollination, bird seed dispersal and army-ant raiding were direct, i.e. not related to changes in biodiversity, and were probably due to behavioural changes of these highly mobile animal taxa. We conclude that animal-mediated ecosystem processes respond in distinct ways to different types of human disturbance in Kakamega Forest. Our findings suggest that forest fragmentation affects ecosystem processes indirectly by changes in biodiversity, whereas selective logging influences processes directly by modifying local environmental conditions and resource distributions. The positive to neutral effects of selective logging on ecosystem processes show that the

  19. Decision making model design for antivirus software selection using Factor Analysis and Analytical Hierarchy Process

    Directory of Open Access Journals (Sweden)

    Nurhayati Ai

    2018-01-01

    Full Text Available Virus spread increased significantly through the internet in 2017. One protection method is using antivirus software. The wide variety of antivirus software on the market tends to create confusion among consumers, and selecting the right antivirus according to their needs has become difficult. This is the reason we conducted our research. We formulate a decision-making model for antivirus software consumers. The model is constructed using factor analysis and the AHP method. First we distributed questionnaires to consumers; from those questionnaires we identified 16 variables that need to be considered when selecting antivirus software. These 16 variables were then grouped into 5 factors using the factor analysis method in SPSS software. These five factors are security, performance, internal, time and capacity. To rank those factors we distributed questionnaires to 6 IT experts, and the data were analyzed using the AHP method. The result is that the performance factor gained the highest rank of all the factors. Thus, consumers can select antivirus software by judging the variables in the performance factor: software loading speed, user friendliness, no excessive memory use, thorough scanning, and fast and accurate virus scanning.
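
    The AHP step can be sketched as follows: a pairwise comparison matrix on Saaty's 1-9 scale is reduced to a priority vector via its principal eigenvector, with a consistency check; the matrix below is hypothetical, not the experts' actual judgments.

```python
import numpy as np

# Hypothetical pairwise comparison matrix for three factors, using
# Saaty's 1-9 scale (a_ij = importance of i over j, a_ji = 1/a_ij).
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
w = np.abs(eigvecs[:, k].real)
w /= w.sum()                        # priority vector (factor weights)

lam = eigvals[k].real
n = A.shape[0]
ci = (lam - n) / (n - 1)            # consistency index
ri = 0.58                           # Saaty's random index for n = 3
print("weights:", np.round(w, 3), "CR:", round(ci / ri, 3))  # CR < 0.1 is OK
```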

  20. Examining speed versus selection in connectivity models using elk migration as an example

    Science.gov (United States)

    Brennan, Angela; Hanks, Ephraim M.; Merkle, Jerod A.; Cole, Eric K.; Dewey, Sarah R.; Courtemanch, Alyson B.; Cross, Paul C.

    2018-01-01

    Context: Landscape resistance is vital to connectivity modeling and frequently derived from resource selection functions (RSFs). RSFs estimate relative probability of use and tend to focus on understanding habitat preferences during slow, routine animal movements (e.g., foraging). Dispersal and migration, however, can produce rarer, faster movements, in which case models of movement speed rather than resource selection may be more realistic for identifying habitats that facilitate connectivity. Objective: To compare two connectivity modeling approaches applied to resistance estimated from models of movement rate and resource selection. Methods: Using movement data from migrating elk, we evaluated continuous-time Markov chain (CTMC) and movement-based RSF models (i.e., step selection functions [SSFs]). We applied circuit theory and shortest random path (SRP) algorithms to CTMC, SSF and null (i.e., flat) resistance surfaces to predict corridors between elk seasonal ranges. We evaluated prediction accuracy by comparing model predictions to empirical elk movements. Results: All connectivity models predicted elk movements well, but models applied to CTMC resistance were more accurate than models applied to SSF and null resistance. Circuit theory models were more accurate on average than SRP models. Conclusions: CTMC can be more realistic than SSFs for estimating resistance for fast movements, though SSFs may demonstrate some predictive ability when animals also move slowly through corridors (e.g., stopover use during migration). High null model accuracy suggests seasonal range data may also be critical for predicting direct migration routes. For animals that migrate or disperse across large landscapes, we recommend incorporating CTMC into the connectivity modeling toolkit.

  1. Process modelling on a canonical basis

    Energy Technology Data Exchange (ETDEWEB)

    Siepmann, Volker

    2006-12-20

    Based on an equation-oriented solving strategy, this thesis investigates a new approach to process modelling. Homogeneous thermodynamic state functions represent consistent mathematical models of thermodynamic properties. Such state functions of solely extensive canonical state variables are the basis of this work, as they are natural objective functions in optimisation nodes for calculating thermodynamic equilibrium with respect to phase interaction and chemical reactions. Analytical state function derivatives are utilised within the solution process as well as interpreted as physical properties. By this approach, only a limited range of imaginable process constraints is considered directly, namely linear balance equations of state variables. A second-order update of source contributions to these balance equations is obtained by an additional constitutive equation system. These equations are generally dependent on state variables and first-order sensitivities, and therefore cover practically all potential process constraints. Symbolic computation technology efficiently provides sparsity and derivative information for the active equations to avoid performance problems regarding robustness and computational effort. A benefit of detaching the constitutive equation system is that the structure of the main equation system remains unaffected by these constraints, and a priori information allows an efficient solving strategy and a concise error diagnosis to be implemented. A tailor-made linear algebra library handles the sparse recursive block structures efficiently. The optimisation principle for single modules of thermodynamic equilibrium is extended to host entire process models. State variables of different modules interact through balance equations, representing material flows from one module to the other. To account for reusability and encapsulation of process module details, modular process modelling is supported by a recursive module structure. The second-order solving algorithm makes it

  2. The Ideal Criteria of Supplier Selection for SMEs Food Processing Industry

    Directory of Open Access Journals (Sweden)

    Ramlan Rohaizan

    2016-01-01

    Full Text Available Selection of a good supplier is important in determining the performance and profitability of the SME food processing industry. A lack of managerial capability in supplier selection in the SME food processing industry affects its competitiveness. This research aims to determine the ideal criteria of suppliers for the food processing industry using the Analytical Hierarchy Process (AHP). The research was carried out quantitatively by distributing questionnaires to 50 SME food processing industries. The collected data were analysed using Expert Choice software to rank the supplier selection criteria. The results show that the criteria for supplier selection are ranked as cost, quality, service, delivery, and management and organisation, while purchase cost, audit result, defect analysis, transportation cost and fast responsiveness are the first five sub-criteria. The results of this research are intended to improve the managerial capabilities of the SME food processing industry in supplier selection.

  3. Ranking of Business Process Simulation Software Tools with DEX/QQ Hierarchical Decision Model.

    Science.gov (United States)

    Damij, Nadja; Boškoski, Pavle; Bohanec, Marko; Mileva Boshkoska, Biljana

    2016-01-01

    The omnipresent need for optimisation requires constant improvements of companies' business processes (BPs). Minimising the risk of an inappropriate BP being implemented is usually done by simulating the newly developed BP under various initial conditions and "what-if" scenarios. An effectual business process simulation software (BPSS) tool is a prerequisite for accurate analysis of a BP. Characterisation of a BPSS tool is a challenging task due to complex selection criteria that include quality of visual aspects, simulation capabilities, statistical facilities, quality of reporting, etc. Under such circumstances, making an optimal decision is challenging. Therefore, various decision support models are employed to aid BPSS tool selection. The currently established decision support models are either proprietary or comprise only a limited subset of criteria, which affects their accuracy. Addressing this issue, this paper proposes a new hierarchical decision support model for ranking BPSS tools based on their technical characteristics, employing DEX and qualitative-to-quantitative (QQ) methodology. Consequently, the decision expert feeds in the required information in a systematic and user-friendly manner. There are three significant contributions of the proposed approach. Firstly, the proposed hierarchical model is easily extendible for adding new criteria to the hierarchical structure. Secondly, a fully operational decision support system (DSS) tool that implements the proposed hierarchical model is presented. Finally, the effectiveness of the proposed hierarchical model is assessed by comparing the resulting rankings of BPSS tools with currently available results.

  4. A Multi-Process Test Case to Perform Comparative Analysis of Coastal Oceanic Models

    Science.gov (United States)

    Lemarié, F.; Burchard, H.; Knut, K.; Debreu, L.

    2016-12-01

    Due to the wide variety of choices that need to be made during the development of dynamical kernels of oceanic models, there is a strong need for an effective and objective assessment of the various methods and approaches that predominate in the community. We present here an idealized multi-scale scenario for coastal ocean models combining estuarine, coastal and shelf-sea scales at midlatitude. The bathymetry, initial conditions and external forcings are defined analytically so that any model developer or user can reproduce the test case with their own numerical code. Thermally stratified conditions are prescribed and a tidal forcing is imposed as a propagating coastal Kelvin wave. The following physical processes can be assessed from the model results: estuarine processes driven by tides and buoyancy gradients, river plume dynamics, tidal fronts, and the interaction between tides and inertial oscillations. We show results obtained using the GETM (General Estuarine Transport Model) and CROCO (Coastal and Regional Ocean Community model) models. These two models are representative of the diversity of numerical methods in use in coastal models: GETM is based on a quasi-Lagrangian vertical coordinate, a coupled space-time approach for advective terms, and a TVD (Total Variation Diminishing) tracer advection scheme, while CROCO is discretized with a quasi-Eulerian vertical coordinate, a method of lines for advective terms, and tracer advection satisfying the TVB (Total Variation Bounded) property. The multiple scales are properly resolved thanks to nesting strategies, 1-way nesting for GETM and 2-way nesting for CROCO. Such a test case can be an interesting experiment for continuing research in numerical approaches, as well as an efficient tool for intercomparison between structured-grid and unstructured-grid approaches. Reference: Burchard, H., Debreu, L., Klingbeil, K., Lemarié, F.: The numerics of hydrostatic structured-grid coastal ocean models: state of

  5. Age-related decline in bottom-up processing and selective attention in the very old.

    Science.gov (United States)

    Zhuravleva, Tatyana Y; Alperin, Brittany R; Haring, Anna E; Rentz, Dorene M; Holcomb, Philip J; Daffner, Kirk R

    2014-06-01

    Previous research demonstrating age-related deficits in selective attention has not included old-old adults, an increasingly important group to study. The current investigation compared event-related potentials in 15 young-old (65-79 years old) and 23 old-old (80-99 years old) subjects during a color-selective attention task. Subjects responded to target letters in a specified color (Attend) while ignoring letters in a different color (Ignore) under both low and high loads. There were no group differences in visual acuity, accuracy, reaction time, or latency of early event-related potential components. The old-old group showed a disruption in bottom-up processing, indexed by a substantially diminished posterior N1 (smaller amplitude). They also demonstrated markedly decreased modulation of bottom-up processing based on selected visual features, indexed by the posterior selection negativity (SN), with similar attenuation under both loads. In contrast, there were no group differences in frontally mediated attentional selection, measured by the anterior selection positivity (SP). There was a robust inverse relationship between the size of the SN and SP (the smaller the SN, the larger the SP), which may represent an anteriorly supported compensatory mechanism. In the absence of a decline in top-down modulation indexed by the SP, the diminished SN may reflect age-related degradation of early bottom-up visual processing in old-old adults.

  6. Comparative process mining in education : an approach based on process cubes

    NARCIS (Netherlands)

    van der Aalst, W.M.P.; Guo, S.; Gorissen, P.J.B.; Ceravolo, P.; Accorsi, R.; Cudre-Mauroux, P.

    2015-01-01

    Process mining techniques enable the analysis of a wide variety of processes using event data. For example, event logs can be used to automatically learn a process model (e.g., a Petri net or BPMN model). Next to the automated discovery of the real underlying process, there are process mining

  7. A multi-component evaporation model for beam melting processes

    Science.gov (United States)

    Klassen, Alexander; Forster, Vera E.; Körner, Carolin

    2017-02-01

    In additive manufacturing using laser or electron beam melting technologies, evaporation losses and changes in chemical composition are known issues when processing alloys with volatile elements. In this paper, a recently described numerical model based on a two-dimensional free surface lattice Boltzmann method is further developed to incorporate the effects of multi-component evaporation. The model takes into account the local melt pool composition during heating and fusion of metal powder. For validation, the titanium alloy Ti-6Al-4V is melted by selective electron beam melting and analysed using mass loss measurements and high-resolution microprobe imaging. Numerically determined evaporation losses and spatial distributions of aluminium compare well with experimental data. Predictions of the melt pool formation in bulk samples provide insight into the competition between the loss of volatile alloying elements from the irradiated surface and their advective redistribution within the molten region.

  8. Radial Domany-Kinzel models with mutation and selection

    Science.gov (United States)

    Lavrentovich, Maxim O.; Korolev, Kirill S.; Nelson, David R.

    2013-01-01

    We study the effect of spatial structure, genetic drift, mutation, and selective pressure on the evolutionary dynamics in a simplified model of asexual organisms colonizing a new territory. Under an appropriate coarse-graining, the evolutionary dynamics is related to the directed percolation processes that arise in voter models, the Domany-Kinzel (DK) model, contact process, and so on. We explore the differences between linear (flat front) expansions and the much less familiar radial (curved front) range expansions. For the radial expansion, we develop a generalized, off-lattice DK model that minimizes otherwise persistent lattice artifacts. With both simulations and analytical techniques, we study the survival probability of advantageous mutants, the spatial correlations between domains of neutral strains, and the dynamics of populations with deleterious mutations. “Inflation” at the frontier leads to striking differences between radial and linear expansions. For a colony with initial radius R0 expanding at velocity v, significant genetic demixing, caused by local genetic drift, occurs only up to a finite time t*=R0/v, after which portions of the colony become causally disconnected due to the inflating perimeter of the expanding front. As a result, the effect of a selective advantage is amplified relative to genetic drift, increasing the survival probability of advantageous mutants. Inflation also modifies the underlying directed percolation transition, introducing novel scaling functions and modifications similar to a finite-size effect. Finally, we consider radial range expansions with deflating perimeters, as might arise from colonization initiated along the shores of an island.
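
    A minimal sketch of the underlying linear (flat-front) Domany-Kinzel automaton, which the paper generalizes to radial, off-lattice expansions, is given below; the parameters p1, p2 and the lattice size are arbitrary illustrative choices.

```python
import numpy as np

def domany_kinzel(L=200, T=200, p1=0.8, p2=0.7, seed=4):
    """Linear (flat-front) Domany-Kinzel automaton: a site becomes active
    with probability p1 if exactly one of its two parents is active, and
    with probability p2 if both are (periodic boundary conditions)."""
    rng = np.random.default_rng(seed)
    s = np.ones(L, dtype=bool)           # fully active initial front
    history = [s]
    for _ in range(T):
        left, right = s, np.roll(s, -1)  # the two "parents" of each site
        n_active = left.astype(int) + right.astype(int)
        p = np.where(n_active == 2, p2, np.where(n_active == 1, p1, 0.0))
        s = rng.random(L) < p
        history.append(s)
    return np.array(history)

h = domany_kinzel()
print("active fraction at final time:", h[-1].mean())
```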

  9. Selection of models to calculate the LLW source term

    International Nuclear Information System (INIS)

    Sullivan, T.M.

    1991-10-01

    Performance assessment of a LLW disposal facility begins with an estimation of the rate at which radionuclides migrate out of the facility (i.e., the source term). The focus of this work is to develop a methodology for calculating the source term. In general, the source term is influenced by the radionuclide inventory, the wasteforms and containers used to dispose of the inventory, and the physical processes that lead to release from the facility (fluid flow, container degradation, wasteform leaching, and radionuclide transport). In turn, many of these physical processes are influenced by the design of the disposal facility (e.g., infiltration of water). The complexity of the problem and the absence of appropriate data prevent development of an entirely mechanistic representation of radionuclide release from a disposal facility. Typically, a number of assumptions, based on knowledge of the disposal system, are used to simplify the problem. This document provides a brief overview of disposal practices and reviews existing source term models as background for selecting appropriate models for estimating the source term. The selection rationale and the mathematical details of the models are presented. Finally, guidance is presented for combining the inventory data with appropriate mechanisms describing release from the disposal facility. 44 refs., 6 figs., 1 tab

  10. Comparative analysis of numerical models of pipe handling equipment used in offshore drilling applications

    Energy Technology Data Exchange (ETDEWEB)

    Pawlus, Witold, E-mail: witold.p.pawlus@ieee.org; Ebbesen, Morten K.; Hansen, Michael R.; Choux, Martin; Hovland, Geir [Department of Engineering Sciences, University of Agder, PO Box 509, N-4898 Grimstad (Norway)]

    2016-06-08

    The design of offshore drilling equipment is a task that involves not only analysis of strict machine specifications and safety requirements but also consideration of changeable weather conditions and a harsh environment. These challenges call for a multidisciplinary approach and make the design process complex. Various modeling software products are currently available to aid design engineers in their effort to test and redesign equipment before it is manufactured. However, given the number of available modeling tools and methods, the choice of the proper modeling methodology is not obvious and, in some cases, troublesome. Therefore, we present a comparative analysis of two popular approaches used in the modeling and simulation of mechanical systems: multibody and analytical modeling. A gripper arm of an offshore vertical pipe handling machine is selected as a case study for which both models are created. In contrast to some other works, the current paper shows verification of both systems by benchmarking their simulation results against each other. Criteria such as modeling effort and accuracy of results are evaluated to assess which modeling strategy is the most suitable given its eventual application.

  11. Hydraulic head interpolation using ANFIS—model selection and sensitivity analysis

    Science.gov (United States)

    Kurtulus, Bedri; Flipo, Nicolas

    2012-01-01

    The aim of this study is to investigate the efficiency of ANFIS (adaptive neuro fuzzy inference system) for interpolating hydraulic head in a 40-km² agricultural watershed of the Seine basin (France). Inputs of ANFIS are Cartesian coordinates and the elevation of the ground. Hydraulic head was measured at 73 locations during a snapshot campaign in September 2009, which characterizes the low-water-flow regime in the aquifer unit. The dataset was then split into three subsets using a square-based selection method: a calibration one (55%), a training one (27%), and a test one (18%). First, a method is proposed to select the best ANFIS model, which corresponds to a sensitivity analysis of ANFIS to the type and number of membership functions (MF). Triangular, Gaussian, general bell, and spline-based MF are used with 2, 3, 4, and 5 MF per input node. Performance criteria on the test subset are used to select the 5 best ANFIS models among 16. Then each is used to interpolate the hydraulic head distribution on a (50×50)-m grid, which is compared to the soil elevation. The cells where the hydraulic head is higher than the soil elevation are counted as "error cells." The ANFIS model that exhibits the fewest "error cells" is selected as the best ANFIS model. The model selection reveals that ANFIS models are very sensitive to the type and number of MF. Finally, a sensitivity analysis of the best ANFIS model with four triangular MF is performed on the interpolation grid, which shows that ANFIS remains stable to error propagation, with a higher sensitivity to soil elevation.
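    The MF-type/MF-count scan described above amounts to a small grid search over 16 candidate models. A schematic sketch follows; `build_anfis` and `rmse` are hypothetical stand-ins, since no particular ANFIS implementation is assumed.

    ```python
    from itertools import product

    MF_TYPES = ["triangular", "gaussian", "gbell", "spline"]
    MF_COUNTS = [2, 3, 4, 5]

    def select_anfis(train, test, build_anfis, rmse):
        """Rank the 16 candidate ANFIS models on the test subset.
        `build_anfis(train, mf_type, n_mf)` and `rmse(model, test)` are
        injected callables standing in for any available ANFIS library."""
        scores = []
        for mf_type, n_mf in product(MF_TYPES, MF_COUNTS):
            model = build_anfis(train, mf_type=mf_type, n_mf=n_mf)
            scores.append((rmse(model, test), mf_type, n_mf))
        scores.sort()           # best (lowest RMSE) first
        return scores[:5]       # the 5 candidates kept for the grid check
    ```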

  12. Pupil Dilation and EEG Alpha Frequency Band Power Reveal Load on Executive Functions for Link-Selection Processes during Text Reading.

    Directory of Open Access Journals (Sweden)

    Christian Scharinger

    Executive working memory functions play a central role in reading comprehension. In the present research we were interested in additional load imposed on executive functions by link-selection processes during computer-based reading. For obtaining process measures, we used a methodology of concurrent electroencephalographic (EEG) and eye-tracking data recording that allowed us to compare epochs of pure text reading with epochs of hyperlink-like selection processes in an online reading situation. Furthermore, this methodology allowed us to directly compare the two physiological load-measures EEG alpha frequency band power and pupil dilation. We observed increased load on executive functions during hyperlink-like selection processes on both measures in terms of decreased alpha frequency band power and increased pupil dilation. Surprisingly however, the two measures did not correlate. Two additional experiments were conducted that excluded potential perceptual, motor, or structural confounds. In sum, EEG alpha frequency band power and pupil dilation both turned out to be sensitive measures for increased load during hyperlink-like selection processes in online text reading.

  13. Hencky's model for elastomer forming process

    Science.gov (United States)

    Oleinikov, A. A.; Oleinikov, A. I.

    2016-08-01

    In the numerical simulation of elastomer forming processes, Hencky's isotropic hyperelastic material model can guarantee relatively accurate prediction of the strain range under large deformations. It is shown that this material model extends Hooke's law from the region of infinitesimal strains to that of moderate ones. A new representation of the fourth-order elasticity tensor for Hencky's hyperelastic isotropic material is obtained; it possesses both minor symmetries and the major symmetry. The constitutive relations of the considered model are implemented into the MSC.Marc code. By calculating and fitting curves, the polyurethane elastomer material constants are selected. Simulations of equipment for elastomer sheet forming are considered.
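    For reference, the standard Hencky strain-energy density underlying such a model (the sense in which Hooke's law is extended to finite strains) can be written as below, with V the left stretch tensor and μ, κ the shear and bulk moduli. This is the textbook form, not necessarily the exact notation of the paper.

    ```latex
    % Hencky (logarithmic) strain energy and the corresponding Kirchhoff
    % stress: Hooke's law written in the logarithmic strain ln(V).
    W(\mathbf{V}) = \mu \,\lVert \operatorname{dev} \ln \mathbf{V} \rVert^{2}
                  + \tfrac{\kappa}{2}\,\bigl[\operatorname{tr} \ln \mathbf{V}\bigr]^{2},
    \qquad
    \boldsymbol{\tau} = 2\mu \operatorname{dev} \ln \mathbf{V}
                      + \kappa \,(\operatorname{tr} \ln \mathbf{V})\,\mathbf{I}
    ```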

  14. A Collective Case Study of Secondary Students' Model-Based Inquiry on Natural Selection through Programming in an Agent-Based Modeling Environment

    Science.gov (United States)

    Xiang, Lin

    This is a collective case study seeking to develop detailed descriptions of how programming an agent-based simulation influences a group of 8th-grade students' model-based inquiry (MBI), by examining students' agent-based programmable modeling (ABPM) processes and the learning outcomes. The context of the present study was a biology unit on natural selection implemented in a charter school of a major California city during the spring semester of 2009. Eight 8th-grade students, two boys and six girls, participated in this study. All of them were of low socioeconomic status (SES). English was a second language for all of them, but they had been identified as fluent English speakers at least a year before the study. None of them had learned either natural selection or programming before the study. The study spanned 7 weeks and comprised two phases. In phase one the subject students learned natural selection in the science classroom and how to program in NetLogo, an ABPM tool, in a computer lab; in phase two, the subject students were asked to program a simulation of adaptation based on the natural selection model in NetLogo. Both qualitative and quantitative data were collected in this study. The data sources included (1) pre- and post-test questionnaires, (2) student in-class worksheets, (3) programming planning sheets, (4) code-conception matching sheets, (5) student NetLogo projects, (6) videotaped programming processes, (7) final interviews, and (8) the investigator's field notes. Both qualitative and quantitative approaches were applied to analyze the gathered data. The findings suggested that students made progress on understanding adaptation phenomena and natural selection at the end of ABPM-supported MBI learning, but the progress was limited. These students still held some misconceptions in their conceptual models, such as the idea that animals need to "learn" to adapt to the environment. Besides, their models of natural selection appeared to be

  15. Thermal versus high pressure processing of carrots: A comparative pilot-scale study on equivalent basis

    NARCIS (Netherlands)

    Vervoort, L.; Plancken, Van der L.; Grauwet, T.; Verlinde, P.; Matser, A.M.; Hendrickx, M.; Loey, van A.

    2012-01-01

    This report describes the first study comparing different high pressure (HP) and thermal treatments at intensities ranging from mild pasteurization to sterilization conditions. To allow a fair comparison, the processing conditions were selected based on the principles of equivalence. Moreover,

  16. Processes in arithmetic strategy selection: A fMRI study.

    Directory of Open Access Journals (Sweden)

    Julien eTaillan

    2015-02-01

    This neuroimaging (fMRI) study investigated neural correlates of strategy selection. Young adults performed an arithmetic task in two different conditions. In both conditions, participants had to provide estimates of two-digit multiplication problems like 54 × 78. In the choice condition, participants had to select the better of two available rounding strategies, the rounding-up (RU) strategy (i.e., doing 60 × 80 = 4,800) or the rounding-down (RD) strategy (i.e., doing 50 × 70 = 3,500) to estimate the product of 54 × 78. In the no-choice condition, participants did not have to select a strategy on each problem but were told which strategy to use; they executed RU and RD strategies each on a series of problems. Participants also had a control task (i.e., providing correct products of multiplication problems like 40 × 50). Brain activations and performance were analyzed as a function of these conditions. Participants were able to frequently choose the better strategy in the choice condition; they were also slower when they executed the difficult RU than the easier RD. Neuroimaging data showed greater brain activations in right anterior cingulate cortex (ACC), dorso-lateral prefrontal cortex (DLPFC), and angular gyrus (ANG) when selecting (relative to executing) the better strategy on each problem. Moreover, RU was associated with more parietal cortex activation than RD. These results suggest an important role of the fronto-parietal network in strategy selection and have important implications for our further understanding and modelling of cognitive processes underlying strategy selection.

  17. Processes in arithmetic strategy selection: a fMRI study.

    Science.gov (United States)

    Taillan, Julien; Ardiale, Eléonore; Anton, Jean-Luc; Nazarian, Bruno; Félician, Olivier; Lemaire, Patrick

    2015-01-01

    This neuroimaging (functional magnetic resonance imaging) study investigated neural correlates of strategy selection. Young adults performed an arithmetic task in two different conditions. In both conditions, participants had to provide estimates of two-digit multiplication problems like 54 × 78. In the choice condition, participants had to select the better of two available rounding strategies, rounding-up (RU) strategy (i.e., doing 60 × 80 = 4,800) or rounding-down (RD) strategy (i.e., doing 50 × 70 = 3,500 to estimate product of 54 × 78). In the no-choice condition, participants did not have to select strategy on each problem but were told which strategy to use; they executed RU and RD strategies each on a series of problems. Participants also had a control task (i.e., providing correct products of multiplication problems like 40 × 50). Brain activations and performance were analyzed as a function of these conditions. Participants were able to frequently choose the better strategy in the choice condition; they were also slower when they executed the difficult RU than the easier RD. Neuroimaging data showed greater brain activations in right anterior cingulate cortex (ACC), dorso-lateral prefrontal cortex (DLPFC), and angular gyrus (ANG), when selecting (relative to executing) the better strategy on each problem. Moreover, RU was associated with more parietal cortex activation than RD. These results suggest an important role of fronto-parietal network in strategy selection and have important implications for our further understanding and modeling cognitive processes underlying strategy selection.

  18. Engineered Barrier System Degradation, Flow, and Transport Process Model Report

    Energy Technology Data Exchange (ETDEWEB)

    E.L. Hardin

    2000-07-17

    The Engineered Barrier System Degradation, Flow, and Transport Process Model Report (EBS PMR) is one of nine PMRs supporting the Total System Performance Assessment (TSPA) being developed by the Yucca Mountain Project for the Site Recommendation Report (SRR). The EBS PMR summarizes the development and abstraction of models for processes that govern the evolution of conditions within the emplacement drifts of a potential high-level nuclear waste repository at Yucca Mountain, Nye County, Nevada. Details of these individual models are documented in 23 supporting Analysis/Model Reports (AMRs). Nineteen of these AMRs are for process models, and the remaining 4 describe the abstraction of results for application in TSPA. The process models themselves cluster around four major topics: "Water Distribution and Removal Model, Physical and Chemical Environment Model, Radionuclide Transport Model, and Multiscale Thermohydrologic Model". One AMR (Engineered Barrier System-Features, Events, and Processes/Degradation Modes Analysis) summarizes the formal screening analysis used to select the Features, Events, and Processes (FEPs) included in TSPA and those excluded from further consideration. Performance of a potential Yucca Mountain high-level radioactive waste repository depends on both the natural barrier system (NBS) and the engineered barrier system (EBS) and on their interactions. Although the waste packages are generally considered as components of the EBS, the EBS as defined in the EBS PMR includes all engineered components outside the waste packages. The principal function of the EBS is to complement the geologic system in limiting the amount of water contacting nuclear waste. A number of alternatives were considered by the Project for different EBS designs that could provide better performance than the design analyzed for the Viability Assessment. The design concept selected was Enhanced Design Alternative II (EDA II).

  19. Engineered Barrier System Degradation, Flow, and Transport Process Model Report

    International Nuclear Information System (INIS)

    E.L. Hardin

    2000-01-01

    The Engineered Barrier System Degradation, Flow, and Transport Process Model Report (EBS PMR) is one of nine PMRs supporting the Total System Performance Assessment (TSPA) being developed by the Yucca Mountain Project for the Site Recommendation Report (SRR). The EBS PMR summarizes the development and abstraction of models for processes that govern the evolution of conditions within the emplacement drifts of a potential high-level nuclear waste repository at Yucca Mountain, Nye County, Nevada. Details of these individual models are documented in 23 supporting Analysis/Model Reports (AMRs). Nineteen of these AMRs are for process models, and the remaining 4 describe the abstraction of results for application in TSPA. The process models themselves cluster around four major topics: "Water Distribution and Removal Model, Physical and Chemical Environment Model, Radionuclide Transport Model, and Multiscale Thermohydrologic Model". One AMR (Engineered Barrier System-Features, Events, and Processes/Degradation Modes Analysis) summarizes the formal screening analysis used to select the Features, Events, and Processes (FEPs) included in TSPA and those excluded from further consideration. Performance of a potential Yucca Mountain high-level radioactive waste repository depends on both the natural barrier system (NBS) and the engineered barrier system (EBS) and on their interactions. Although the waste packages are generally considered as components of the EBS, the EBS as defined in the EBS PMR includes all engineered components outside the waste packages. The principal function of the EBS is to complement the geologic system in limiting the amount of water contacting nuclear waste. A number of alternatives were considered by the Project for different EBS designs that could provide better performance than the design analyzed for the Viability Assessment. The design concept selected was Enhanced Design Alternative II (EDA II).

  20. Physician-patient argumentation and communication, comparing Toulmin's model, pragma-dialectics, and American sociolinguistics.

    Science.gov (United States)

    Rivera, Francisco Javier Uribe; Artmann, Elizabeth

    2015-12-01

    This article discusses the application of theories of argumentation and communication to the field of medicine. Based on a literature review, the authors compare Toulmin's model, pragma-dialectics, and the work of Todd and Fisher, derived from American sociolinguistics. These approaches were selected because they belong to the pragmatic field of language. The main results were: pragma-dialectics characterizes medical reasoning more comprehensively, highlighting specific elements of the three disciplines of argumentation: dialectics, rhetoric, and logic; Toulmin's model helps substantiate the declaration of diagnostic and therapeutic hypotheses, and as part of an interpretive medicine, approximates the pragma-dialectical approach by including dialectical elements in the process of formulating arguments; Fisher and Todd's approach allows characterizing, from a pragmatic analysis of speech acts, the degree of symmetry/asymmetry in the doctor-patient relationship, while arguing the possibility of negotiating treatment alternatives.

  1. Comprehensive School Reform Models: A Study Guide for Comparing CSR Models (and How Well They Meet Minnesota's Learning Standards).

    Science.gov (United States)

    St. John, Edward P.; Loescher, Siri; Jacob, Stacy; Cekic, Osman; Kupersmith, Leigh; Musoba, Glenda Droogsma

    A growing number of schools are exploring the prospect of applying for funding to implement a Comprehensive School Reform (CSR) model. But the process of selecting a CSR model can be complicated because it frequently involves self-study and a review of models to determine which models best meet the needs of the school. This study guide is intended…

  2. Analytic hierarchy process helps select site for limestone quarry expansion in Barbados.

    Science.gov (United States)

    Dey, Prasanta Kumar; Ramcharan, Eugene K

    2008-09-01

    Site selection is a key activity for quarry expansion to support cement production, and is governed by factors such as resource availability, logistics, costs, and socio-economic-environmental factors. Adequate consideration of all the factors facilitates both industrial productivity and sustainable economic growth. This study illustrates the site selection process that was undertaken for the expansion of limestone quarry operations to support cement production in Barbados. First, alternate sites with adequate resources to support a 25-year development horizon were identified. Second, technical and socio-economic-environmental factors were identified. Third, a database was developed for each site with respect to each factor. Fourth, a hierarchical model in an analytic hierarchy process (AHP) framework was developed. Fifth, the relative ranking of the alternate sites was derived through pairwise comparison at all levels and subsequent synthesis of the results across the hierarchy using computer software (Expert Choice). The study reveals that an integrated framework using the AHP can help select a site for the quarry expansion project in Barbados.
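    The pairwise-comparison step of the AHP reduces to extracting the principal eigenvector of the comparison matrix. A minimal sketch follows, with an illustrative 3×3 matrix (not the Barbados data) and Saaty's consistency check.

    ```python
    import numpy as np

    def ahp_priorities(A):
        """Priority weights from a pairwise comparison matrix A: the
        principal right eigenvector, normalized to sum to one."""
        eigvals, eigvecs = np.linalg.eig(A)
        k = np.argmax(eigvals.real)
        w = np.abs(eigvecs[:, k].real)
        return w / w.sum(), eigvals[k].real

    # Illustrative matrix for three candidate sites on one criterion.
    A = np.array([[1.0, 3.0, 5.0],
                  [1/3, 1.0, 2.0],
                  [1/5, 1/2, 1.0]])
    weights, lam = ahp_priorities(A)
    n = A.shape[0]
    CI = (lam - n) / (n - 1)   # consistency index
    CR = CI / 0.58             # Saaty's random index RI = 0.58 for n = 3
    print(weights, CR)         # CR < 0.1 is conventionally acceptable
    ```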

  3. A Comparative Assessment of Aerodynamic Models for Buffeting and Flutter of Long-Span Bridges

    Directory of Open Access Journals (Sweden)

    Igor Kavrakov

    2017-12-01

    Wind-induced vibrations commonly represent the leading criterion in the design of long-span bridges. The aerodynamic forces in bridge aerodynamics are mainly based on the quasi-steady and linear unsteady theory. This paper aims to investigate different formulations of self-excited and buffeting forces in the time domain by comparing the dynamic response of a multi-span cable-stayed bridge during the critical erection condition. The bridge is selected to represent a typical reference object with a bluff concrete box girder for large river crossings. The models are viewed from a perspective of model complexity, comparing the influence of the aerodynamic properties implied in the aerodynamic models, such as aerodynamic damping and stiffness, fluid memory in the buffeting and self-excited forces, aerodynamic nonlinearity, and aerodynamic coupling on the bridge response. The selected models are studied for a wind-speed range that is typical for the construction stage for two levels of turbulence intensity. Furthermore, a simplified method for the computation of buffeting forces including the aerodynamic admittance is presented, in which rational approximation is avoided. The critical flutter velocities are also compared for the selected models under laminar flow. Keywords: Buffeting, Flutter, Long-span bridges, Bridge aerodynamics, Bridge aeroelasticity, Erection stage

  4. Generation unit selection via capital asset pricing model for generation planning

    Energy Technology Data Exchange (ETDEWEB)

    Cahyadi, Romy; Jo Min, K. [College of Engineering, Ames, IA (United States); Chunghsiao Wang [LG and E Energy Corp., Louisville, KY (United States); Abi-Samra, Nick [Electric Power Research Inst., Palo Alto, CA (United States)

    2003-07-01

    The electric power industry in many parts of the U.S.A. is undergoing substantial regulatory and organizational changes. Such changes introduce substantial financial risk into generation planning. In order to incorporate this financial risk into the capital investment decision process of generation planning, in this paper we develop and analyse a generation unit selection process via the capital asset pricing model (CAPM). In particular, utilizing realistic data on gas-fired, coal-fired, and wind power generation units, we show which concrete steps can be taken for generation planning purposes, and how. It is hoped that the generation unit selection process developed in this paper will help utilities in the area of effective and efficient generation planning when financial risks are considered. (Author)
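    The core CAPM calculation behind such screening is the security market line: a unit's required return rises with its beta. A minimal sketch with hypothetical inputs:

    ```python
    def capm_required_return(r_f, beta, r_m):
        """Security market line: E[R_i] = r_f + beta_i * (E[R_m] - r_f)."""
        return r_f + beta * (r_m - r_f)

    # Hypothetical screening inputs: 4% risk-free rate, 10% expected market
    # return, and a candidate generation unit with beta = 1.3.
    print(capm_required_return(0.04, 1.3, 0.10))   # 0.118 -> 11.8% hurdle rate
    ```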

  5. FINANCIAL FUTURE PROSPECT INVESTIGATION USING BANKRUPTCY FORECASTING MODELS IN HUNGARIAN MEAT PROCESSING INDUSTRY

    Directory of Open Access Journals (Sweden)

    Dalma Peto

    2015-07-01

    Our main research topic is the analysis of leading companies in the Hungarian meat processing industry in terms of liquidity criteria. We examine this subject by applying financial indicators and several important bankruptcy forecasting models. In our thesis the emphasis is placed on the presentation and evaluation of business failure models. The topicality of the research subject is rooted in the economic crisis and recession, which made solvency a key issue. Maintaining a competitive position in the market and the ability to stay in competition depend on the capability to generate an appropriate level of net operative cash flow. The most important research questions are the following: Which financial methods can be used to predict and estimate the situation when a company is facing bankruptcy? Do bankruptcy forecasting models provide accurate forecasts, and what conclusions can be drawn from these results? In our study we present the actual economic situation and the main problems of the sector, select the sample companies, and calculate and compare the applied financial ratios and the most relevant bankruptcy forecasting models. On the basis of annual reports covering the 2010-2013 interval we investigate the financial position of leading pork processing companies. We make a comprehensive and comparative analysis concerning capital structure, liquidity, and profitability, and consequently identify risky processes and companies having a high probability of insolvency. Finally, we demonstrate and evaluate the results of three traditional bankruptcy forecasting models (Altman, Springate, and Fulmer) and four modern models (DA, LR, industrial DA, and industrial LR).
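    Of the traditional models evaluated, the Altman model is the best known; its classic 1968 Z-score form is sketched below (the study may use a variant with different calibration).

    ```python
    def altman_z(wc, re, ebit, mve, sales, ta, tl):
        """Classic Altman (1968) Z-score for public manufacturing firms.
        wc: working capital, re: retained earnings, mve: market value of
        equity, ta: total assets, tl: total liabilities."""
        x1, x2, x3 = wc / ta, re / ta, ebit / ta
        x4, x5 = mve / tl, sales / ta
        return 1.2 * x1 + 1.4 * x2 + 3.3 * x3 + 0.6 * x4 + 1.0 * x5

    # Conventional 1968 cut-offs: Z > 2.99 "safe" zone, Z < 1.81 "distress".
    ```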

  6. Halo models of HI selected galaxies

    Science.gov (United States)

    Paul, Niladri; Choudhury, Tirthankar Roy; Paranjape, Aseem

    2018-06-01

    Modelling the distribution of neutral hydrogen (HI) in dark matter halos is important for studying galaxy evolution in the cosmological context. We use a novel approach to infer the HI-dark matter connection at the massive end (m_HI > 10^9.8 M_⊙) from radio HI emission surveys, using optical properties of low-redshift galaxies as an intermediary. In particular, we use a previously calibrated optical HOD describing the luminosity- and colour-dependent clustering of SDSS galaxies and describe the HI content using a statistical scaling relation between the optical properties and HI mass. This allows us to compute the abundance and clustering properties of HI-selected galaxies and compare with data from the ALFALFA survey. We apply an MCMC-based statistical analysis to constrain the free parameters related to the scaling relation. The resulting best-fit scaling relation identifies massive HI galaxies primarily with optically faint blue centrals, consistent with expectations from galaxy formation models. We compare the HI-stellar mass relation predicted by our model with independent observations from matched HI-optical galaxy samples, finding reasonable agreement. As a further application, we make some preliminary forecasts for future observations of HI and optical galaxies in the expected overlap volume of SKA and Euclid/LSST.

  7. Computer-aided tool for solvent selection in pharmaceutical processes: Solvent swap

    DEFF Research Database (Denmark)

    Papadakis, Emmanouil; K. Tula, Anjan; Gernaey, Krist V.

    …-liquid equilibria). The application of the developed model-based framework is highlighted through several case studies published in the literature. In the current state, the framework is suitable for problems where the original solvent is exchanged by distillation. A solvent selection guide for fast … of suitable … -aided framework with the objective of assisting the pharmaceutical industry in gaining better process understanding. A software interface has also been created to improve the usability of the tool.

  8. Understanding Managers Decision Making Process for Tools Selection in the Core Front End of Innovation

    DEFF Research Database (Denmark)

    Appio, Francesco P.; Achiche, Sofiane; McAloone, Tim C.

    2011-01-01

    New product development (NPD) describes the process of bringing a new product or service to the market. The Fuzzy Front End (FFE) of Innovation is the term describing the activities happening before the product development phase of NPD. In the FFE of innovation, several tools are used to facilitate… hypotheses are tested. A preliminary version of a theoretical model depicting the decision process of managers during tool selection in the FFE is proposed. The theoretical model is built from the constructed hypotheses.

  9. Managing the Public Sector Research and Development Portfolio Selection Process: A Case Study of Quantitative Selection and Optimization

    Science.gov (United States)

    2016-09-01

    Public Sector Research & Development Portfolio Selection Process: A Case Study of Quantitative Selection and Optimization, by Jason A. Schwartz: a case study describing how public sector organizations can implement a research and development (R&D) portfolio optimization strategy to maximize the cost

  10. Comparing Sensory Information Processing and Alexithymia between People with Substance Dependency and Normal.

    Science.gov (United States)

    Bashapoor, Sajjad; Hosseini-Kiasari, Seyyedeh Tayebeh; Daneshvar, Somayeh; Kazemi-Taskooh, Zeinab

    2015-01-01

    Sensory information processing and alexithymia are two important factors in determining behavioral reactions. Some studies explain the effect of the sensitivity of sensory processing and alexithymia on the tendency toward substance abuse. Given that, the aim of the current study was to compare the styles of sensory information processing and alexithymia between substance-dependent people and normal ones. The research method was cross-sectional, and the statistical population comprised all substance-dependent men present in substance quitting camps of Masal, Iran, in October 2013 (n = 78). From this population, 36 persons were selected by simple random sampling as the study group, and 36 persons were selected from the normal population in the same way as the comparison group. Both groups were evaluated using the Toronto alexithymia scale (TAS) and the adult sensory profile, and the multivariate analysis of variance (MANOVA) test was applied to analyze the data. The results showed significant differences between the two groups in low registration (P …) … processing and difficulty in describing emotions (P …). Substance-dependent people process sensory information in a different way than normal people and show more alexithymia features than them.

  11. The site selection process for a spent fuel repository in Finland. Summary report

    Energy Technology Data Exchange (ETDEWEB)

    McEwen, T. [EnvirosQuantiSci (United Kingdom); Aeikaes, T. [Posiva Oy, Helsinki (Finland)

    2000-12-01

    This Summary Report describes the Finnish programme for the selection and characterisation of potential sites for the deep disposal of spent nuclear fuel and explains the process by which Olkiluoto has been selected as the single site proposed for the development of a spent fuel disposal facility. Its aim is to provide an overview of this process, initiated almost twenty years ago, which has entered its final phase. It provides information in three areas: a review of the early site selection criteria, a description of the site selection process, including all the associated site characterisation work, up to the point at which a single site was selected and an outline of the proposed work, in particular that proposed underground, to characterise further the Olkiluoto site. In 1983 the Finnish Government made a policy decision on the management of nuclear waste in which the main goals and milestones for the site selection programme for the deep disposal of spent fuel were presented. According to this decision several site candidates, whose selection was to be based on careful studies of the whole country, should be characterised and the site for the repository selected by the end of the year 2000. This report describes the process by which this policy decision has been achieved. The report begins with a discussion of the definition of the geological and environmental site selection criteria and how they were applied in order to select a small number of sites, five in all, that were to be the subject of the preliminary investigations. The methods used to investigate these sites and the results of these investigations are described, as is the evaluation of the results of these investigations and the process used to discard two of the sites and continue more detailed investigations at the remaining three. The detailed site investigations that commenced in 1993 are described with respect to the overall strategy followed and the investigation techniques applied. The

  12. Process-Improvement Cost Model for the Emergency Department.

    Science.gov (United States)

    Dyas, Sheila R; Greenfield, Eric; Messimer, Sherri; Thotakura, Swati; Gholston, Sampson; Doughty, Tracy; Hays, Mary; Ivey, Richard; Spalding, Joseph; Phillips, Robin

    2015-01-01

    The objective of this report is to present a simplified, activity-based costing approach for hospital emergency departments (EDs) to use with Lean Six Sigma cost-benefit analyses. The cost model complexity is reduced by removing diagnostic and condition-specific costs, thereby revealing the underlying process activities' cost inefficiencies. Examples are provided for evaluating the cost savings from reducing discharge delays and the cost impact of keeping patients in the ED (boarding) after the decision to admit has been made. The process-improvement cost model provides a needed tool in selecting, prioritizing, and validating Lean process-improvement projects in the ED and other areas of patient care that involve multiple dissimilar diagnoses.
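    A toy version of such an activity-based cost calculation, with hypothetical activities and rates, illustrates how a process improvement (here, a faster discharge step) translates into savings per visit:

    ```python
    # Hypothetical activities: (minutes per visit, cost per minute).
    ACTIVITIES = {
        "triage":        (10.0, 1.50),
        "bed_occupancy": (180.0, 0.80),
        "discharge":     (25.0, 1.20),
    }

    def visit_cost(activities):
        """Activity-based cost of one visit: sum of time x unit rate."""
        return sum(minutes * rate for minutes, rate in activities.values())

    baseline = visit_cost(ACTIVITIES)
    improved = dict(ACTIVITIES, discharge=(15.0, 1.20))  # 10 min faster
    print(baseline - visit_cost(improved))               # savings per visit
    ```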

  13. Decision making model design for antivirus software selection using Factor Analysis and Analytical Hierarchy Process

    OpenAIRE

    Nurhayati Ai; Gautama Aditya; Naseer Muchammad

    2018-01-01

    Virus spread increased significantly through the internet in 2017. One protection method is using antivirus software. The wide variety of antivirus software in the market tends to create confusion among consumers, and selecting the right antivirus according to their needs has become difficult. This is the reason we conducted our research. We formulate a decision making model for antivirus software consumers. The model is constructed using factor analysis and the AHP method. First we spread que…

  14. An Introduction to Model Selection: Tools and Algorithms

    Directory of Open Access Journals (Sweden)

    Sébastien Hélie

    2006-03-01

    Model selection is a complicated matter in science, and psychology is no exception. In particular, the high variance in the object of study (i.e., humans) prevents the use of Popper's falsification principle (which is the norm in other sciences). Therefore, the desirability of quantitative psychological models must be assessed by measuring the capacity of the model to fit empirical data. In the present paper, an error measure (likelihood), as well as five methods to compare model fits (the likelihood ratio test, Akaike's information criterion, the Bayesian information criterion, bootstrapping, and cross-validation), are presented. The use of each method is illustrated by an example, and the advantages and weaknesses of each method are also discussed.
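    Two of the five criteria named, AIC and BIC, are one-line formulas given the maximized log-likelihood. A minimal sketch:

    ```python
    import math

    def aic(loglik, k):
        """Akaike information criterion: 2k - 2*ln(L); smaller is better."""
        return 2 * k - 2 * loglik

    def bic(loglik, k, n):
        """Bayesian information criterion: k*ln(n) - 2*ln(L)."""
        return k * math.log(n) - 2 * loglik

    # Two fitted models on the same n = 100 observations: the extra
    # parameters of model 2 must "pay for themselves" in log-likelihood.
    print(aic(-420.3, k=3), aic(-418.9, k=5))
    print(bic(-420.3, k=3, n=100), bic(-418.9, k=5, n=100))
    ```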

  15. Neuroscientific Model of Motivational Process

    Science.gov (United States)

    Kim, Sung-il

    2013-01-01

    Considering the neuroscientific findings on reward, learning, value, decision-making, and cognitive control, motivation can be parsed into three sub processes, a process of generating motivation, a process of maintaining motivation, and a process of regulating motivation. I propose a tentative neuroscientific model of motivational processes which consists of three distinct but continuous sub processes, namely reward-driven approach, value-based decision-making, and goal-directed control. Reward-driven approach is the process in which motivation is generated by reward anticipation and selective approach behaviors toward reward. This process recruits the ventral striatum (reward area) in which basic stimulus-action association is formed, and is classified as an automatic motivation to which relatively less attention is assigned. By contrast, value-based decision-making is the process of evaluating various outcomes of actions, learning through positive prediction error, and calculating the value continuously. The striatum and the orbitofrontal cortex (valuation area) play crucial roles in sustaining motivation. Lastly, the goal-directed control is the process of regulating motivation through cognitive control to achieve goals. This consciously controlled motivation is associated with higher-level cognitive functions such as planning, retaining the goal, monitoring the performance, and regulating action. The anterior cingulate cortex (attention area) and the dorsolateral prefrontal cortex (cognitive control area) are the main neural circuits related to regulation of motivation. These three sub processes interact with each other by sending reward prediction error signals through dopaminergic pathway from the striatum and to the prefrontal cortex. The neuroscientific model of motivational process suggests several educational implications with regard to the generation, maintenance, and regulation of motivation to learn in the learning environment. PMID:23459598

  16. Neuroscientific model of motivational process.

    Science.gov (United States)

    Kim, Sung-Il

    2013-01-01

    Considering the neuroscientific findings on reward, learning, value, decision-making, and cognitive control, motivation can be parsed into three sub processes, a process of generating motivation, a process of maintaining motivation, and a process of regulating motivation. I propose a tentative neuroscientific model of motivational processes which consists of three distinct but continuous sub processes, namely reward-driven approach, value-based decision-making, and goal-directed control. Reward-driven approach is the process in which motivation is generated by reward anticipation and selective approach behaviors toward reward. This process recruits the ventral striatum (reward area) in which basic stimulus-action association is formed, and is classified as an automatic motivation to which relatively less attention is assigned. By contrast, value-based decision-making is the process of evaluating various outcomes of actions, learning through positive prediction error, and calculating the value continuously. The striatum and the orbitofrontal cortex (valuation area) play crucial roles in sustaining motivation. Lastly, the goal-directed control is the process of regulating motivation through cognitive control to achieve goals. This consciously controlled motivation is associated with higher-level cognitive functions such as planning, retaining the goal, monitoring the performance, and regulating action. The anterior cingulate cortex (attention area) and the dorsolateral prefrontal cortex (cognitive control area) are the main neural circuits related to regulation of motivation. These three sub processes interact with each other by sending reward prediction error signals through dopaminergic pathway from the striatum and to the prefrontal cortex. The neuroscientific model of motivational process suggests several educational implications with regard to the generation, maintenance, and regulation of motivation to learn in the learning environment.

  17. Towards the Significance of Decision Aid in Building Information Modeling (BIM Software Selection Process

    Directory of Open Access Journals (Sweden)

    Omar Mohd Faizal

    2014-01-01

    Building Information Modeling (BIM) has been considered as a solution in the construction industry to numerous problems such as delays, increased lead times and increased costs. This is due to the concept and characteristics of BIM, which will reshape the way construction project teams work together to increase productivity and improve the final project outcomes (cost, time, quality, safety, functionality, maintainability, etc.). As a result, the construction industry has witnessed numerous BIM software packages become available in the market, each offering different functions and features. Furthermore, the adoption of BIM requires high investment in software, hardware and training expenses. Thus, a need is identified for decision aid in selecting the BIM software that fulfills the project needs. However, research indicates that only limited studies attempt to guide decisions in the BIM software selection problem. This paper therefore highlights the importance of decision making and support for BIM software selection, as it is vital to increasing the productivity of construction projects throughout the building lifecycle.

  18. Selection of Models for Ingestion Pathway and Relocation

    International Nuclear Information System (INIS)

    Blanchard, A.; Thompson, J.M.

    1998-01-01

    The area in which intermediate phase protective actions (such as food interdiction and relocation) may be needed following postulated accidents at three Savannah River Site nonreactor nuclear facilities will be determined by modeling. The criteria used to select dispersion/deposition models are presented. Several models are considered, including ARAC, MACCS, HOTSPOT, WINDS (coupled with PUFF-PLUME), and UFOTRI. Although ARAC and WINDS are expected to provide more accurate modeling of atmospheric transport following an actual release, analyses consistent with regulatory guidance for planning purposes may be accomplished with comparatively simple dispersion models such as HOTSPOT and UFOTRI. A recommendation is made to use HOTSPOT for non-tritium facilities and UFOTRI for tritium facilities. The most recent Food and Drug Administration Derived Intervention Levels (August 1998) are adopted as evaluation guidelines for ingestion pathways

  19. Selection of Models for Ingestion Pathway and Relocation

    International Nuclear Information System (INIS)

    Blanchard, A.; Thompson, J.M.

    1999-01-01

    The area in which intermediate phase protective actions (such as food interdiction and relocation) may be needed following postulated accidents at three Savannah River Site nonreactor nuclear facilities will be determined by modeling. The criteria used to select dispersion/deposition models are presented. Several models are considered, including ARAC, MACCS, HOTSPOT, WINDS (coupled with PUFF-PLUME), and UFOTRI. Although ARAC and WINDS are expected to provide more accurate modeling of atmospheric transport following an actual release, analyses consistent with regulatory guidance for planning purposes may be accomplished with comparatively simple dispersion models such as HOTSPOT and UFOTRI. A recommendation is made to use HOTSPOT for non-tritium facilities and UFOTRI for tritium facilities. The most recent Food and Drug Administration Derived Intervention Levels (August 1998) are adopted as evaluation guidelines for ingestion pathways

  20. Material and process selection using product examples

    DEFF Research Database (Denmark)

    Lenau, Torben Anker

    2001-01-01

    The objective of the paper is to suggest a different procedure for selecting materials and processes within the product development work. The procedure includes using product examples in order to increase the number of alternative materials and processes that are considered. Product examples can c… a search engine, and relevant materials and processes can be explored through hyperlinks. Realising that designers are very sensitive to user interfaces, all descriptions of materials, processes and products include graphical descriptions, i.e. pictures or computer graphics.

  1. Material and process selection using product examples

    DEFF Research Database (Denmark)

    Lenau, Torben Anker

    2002-01-01

    The objective of the paper is to suggest a different procedure for selecting materials and processes within the product development work. The procedure includes using product examples in order to increase the number of alternative materials and processes that are considered. Product examples can c… a search engine, and relevant materials and processes can be explored through hyperlinks. Realising that designers are very sensitive to user interfaces, all descriptions of materials, processes and products include graphical descriptions, i.e. pictures or computer graphics.

  2. The digital storytelling process: A comparative analysis from various experts

    Science.gov (United States)

    Hussain, Hashiroh; Shiratuddin, Norshuhada

    2016-08-01

    Digital Storytelling (DST) is a method of delivering information to an audience. It combines narrative and digital media content infused with multimedia elements. In order for educators (i.e., the designers) to create a compelling digital story, there are sets of processes introduced by experts. Nevertheless, experts suggest a variety of processes to guide them, some of which are redundant. The main aim of this study is to propose a single guiding process for the creation of DST. A comparative analysis is employed in which ten DST models from various experts are analysed. The process can also be implemented in other multimedia materials that use the concept of DST.

  3. Effects of stimulus order on discrimination processes in comparative and equality judgements: data and models.

    Science.gov (United States)

    Dyjas, Oliver; Ulrich, Rolf

    2014-01-01

    In typical discrimination experiments, participants are presented with a constant standard and a variable comparison stimulus and their task is to judge which of these two stimuli is larger (comparative judgement). In these experiments, discrimination sensitivity depends on the temporal order of these stimuli (Type B effect) and is usually higher when the standard precedes rather than follows the comparison. Here, we outline how two models of stimulus discrimination can account for the Type B effect, namely the weighted difference model (or basic Sensation Weighting model) and the Internal Reference Model. For both models, the predicted psychometric functions for comparative judgements as well as for equality judgements, in which participants indicate whether they perceived the two stimuli to be equal or not equal, are derived and it is shown that the models also predict a Type B effect for equality judgements. In the empirical part, the models' predictions are evaluated. To this end, participants performed a duration discrimination task with comparative judgements and with equality judgements. In line with the models' predictions, a Type B effect was observed for both judgement types. In addition, a time-order error, as indicated by shifts of the psychometric functions, and differences in response times were observed only for the equality judgement. Since both models entail distinct additional predictions, it seems worthwhile for future research to unite the two models into one conceptual framework.

  4. Computational Process Modeling for Additive Manufacturing (OSU)

    Science.gov (United States)

    Bagg, Stacey; Zhang, Wei

    2015-01-01

    Powder-Bed Additive Manufacturing (AM) through Direct Metal Laser Sintering (DMLS) or Selective Laser Melting (SLM) is being used by NASA and the Aerospace industry to "print" parts that traditionally are very complex, high cost, or long schedule lead items. The process spreads a thin layer of metal powder over a build platform, then melts the powder in a series of welds in a desired shape. The next layer of powder is applied, and the process is repeated until, layer by layer, a very complex part can be built. This reduces cost and schedule by eliminating very complex tooling and processes traditionally used in aerospace component manufacturing. To use the process to print end-use items, NASA seeks to understand SLM material well enough to develop a method of qualifying parts for space flight operation. Traditionally, a new material process takes many years and high investment to generate statistical databases and experiential knowledge, but computational modeling can truncate the schedule and cost: many experiments can be run quickly in a model, which would take years and a high material cost to run empirically. This project seeks to optimize material build parameters with reduced time and cost through modeling.

  5. Model Selection with the Linear Mixed Model for Longitudinal Data

    Science.gov (United States)

    Ryoo, Ji Hoon

    2011-01-01

    Model building or model selection with linear mixed models (LMMs) is complicated by the presence of both fixed effects and random effects. The fixed effects structure and random effects structure are codependent, so selection of one influences the other. Most presentations of LMM in psychology and education are based on a multilevel or…

  6. GEOQUIMICO : an interactive tool for comparing sorption conceptual models (surface complexation modeling versus K[D])

    International Nuclear Information System (INIS)

    Hammond, Glenn E.; Cygan, Randall Timothy

    2007-01-01

    Within reactive geochemical transport, several conceptual models exist for simulating sorption processes in the subsurface. Historically, the K_D approach has been the method of choice due to ease of implementation within a reactive transport model and straightforward comparison with experimental data. However, for modeling complex sorption phenomena (e.g. sorption of radionuclides onto mineral surfaces), this approach does not systematically account for variations in location, time, or chemical conditions, and more sophisticated methods such as a surface complexation model (SCM) must be utilized. It is critical to determine which conceptual model to use; that is, when the material variation becomes important to regulatory decisions. The geochemical transport tool GEOQUIMICO has been developed to assist in this decision-making process. GEOQUIMICO provides a user-friendly framework for comparing the accuracy and performance of sorption conceptual models. The model currently supports the K_D and SCM conceptual models. The code is written in the object-oriented Java programming language to facilitate model development and improve code portability. The basic theory underlying geochemical transport and the sorption conceptual models noted above is presented in this report. Explanations are provided of how these physicochemical processes are implemented in GEOQUIMICO, and a brief verification study comparing GEOQUIMICO results to data found in the literature is given.
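    Schematically, the two conceptual models contrast as follows: the K_D approach is a linear isotherm with a constant retardation factor, whereas an SCM ties sorption to a mass-action law on surface sites. The forms below are generic textbook expressions (shown for a hypothetical divalent metal; the exponential term is the usual double-layer electrostatic correction), not GEOQUIMICO's exact equations.

    ```latex
    % Linear-isotherm (K_D) sorption and the resulting retardation factor:
    S = K_D\,C, \qquad R = 1 + \frac{\rho_b}{\theta}\,K_D
    % versus a schematic surface complexation mass-action law:
    {>}\mathrm{SOH} + \mathrm{M}^{2+} \rightleftharpoons {>}\mathrm{SOM}^{+} + \mathrm{H}^{+},
    \qquad
    K_{\mathrm{int}} = \frac{[{>}\mathrm{SOM}^{+}]\,a_{\mathrm{H}^{+}}}
                            {[{>}\mathrm{SOH}]\,a_{\mathrm{M}^{2+}}}\;
                       e^{-\Delta z\,F\psi/RT}
    ```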

  7. A data-driven multi-model methodology with deep feature selection for short-term wind forecasting

    International Nuclear Information System (INIS)

    Feng, Cong; Cui, Mingjian; Hodge, Bri-Mathias; Zhang, Jie

    2017-01-01

    Highlights: • An ensemble model is developed to produce both deterministic and probabilistic wind forecasts. • A deep feature selection framework is developed to optimally determine the inputs to the forecasting methodology. • The developed ensemble methodology has improved the forecasting accuracy by up to 30%. - Abstract: With the growing wind penetration into the power system worldwide, improving wind power forecasting accuracy is becoming increasingly important to ensure continued economic and reliable power system operations. In this paper, a data-driven multi-model wind forecasting methodology is developed with a two-layer ensemble machine learning technique. The first layer is composed of multiple machine learning models that generate individual forecasts. A deep feature selection framework is developed to determine the most suitable inputs to the first layer machine learning models. Then, a blending algorithm is applied in the second layer to create an ensemble of the forecasts produced by first layer models and generate both deterministic and probabilistic forecasts. This two-layer model seeks to utilize the statistically different characteristics of each machine learning algorithm. A number of machine learning algorithms are selected and compared in both layers. This developed multi-model wind forecasting methodology is compared to several benchmarks. The effectiveness of the proposed methodology is evaluated to provide 1-hour-ahead wind speed forecasting at seven locations of the Surface Radiation network. Numerical results show that, compared to the single-algorithm models, the developed multi-model framework with deep feature selection procedure has improved the forecasting accuracy by up to 30%.
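    The two-layer structure described above is essentially stacked generalization. A minimal sketch with scikit-learn's StackingRegressor follows (synthetic data; the deep feature selection and probabilistic layers are omitted).

    ```python
    from sklearn.datasets import make_regression
    from sklearn.ensemble import RandomForestRegressor, StackingRegressor
    from sklearn.linear_model import Ridge
    from sklearn.neural_network import MLPRegressor
    from sklearn.model_selection import train_test_split

    # First layer: statistically different learners; second layer: a linear
    # blender that combines their out-of-fold predictions.
    X, y = make_regression(n_samples=2000, n_features=8, noise=5.0, random_state=0)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

    ens = StackingRegressor(
        estimators=[
            ("rf", RandomForestRegressor(n_estimators=100, random_state=0)),
            ("mlp", MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000,
                                 random_state=0)),
        ],
        final_estimator=Ridge(),
    )
    ens.fit(X_tr, y_tr)
    print(ens.score(X_te, y_te))   # R^2 of the blended forecast
    ```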

  8. Stability of choice in the honey bee nest-site selection process.

    Science.gov (United States)

    Nevai, Andrew L; Passino, Kevin M; Srinivasan, Parthasarathy

    2010-03-07

    We introduce a pair of compartment models for the honey bee nest-site selection process that lend themselves to analytic methods. The first model represents a swarm of bees deciding whether a site is viable, and the second characterizes its ability to select between two viable sites. We find that the one-site assessment process has two equilibrium states: a disinterested equilibrium (DE) in which the bees show no interest in the site and an interested equilibrium (IE) in which bees show interest. In analogy with epidemic models, we define basic and absolute recruitment numbers (R0 and B0) as measures of the swarm's sensitivity to dancing by a single bee. If R0 is less than one then the DE is locally stable, and if B0 is less than one then it is globally stable. If R0 is greater than one then the DE is unstable and the IE is stable under realistic conditions. In addition, there exists a critical site quality threshold Q* above which the site can attract some interest (at equilibrium) and below which it cannot. We also find a second critical site quality threshold Q** above which the site can attract a quorum (at equilibrium) and below which it cannot. The two-site discrimination process, in which we examine a swarm's ability to simultaneously consider two sites differing in both site quality and discovery time, has a stable DE if and only if both sites' individual basic recruitment numbers are less than one. Numerical experiments are performed to study the influences of site quality on quorum time and the outcome of competition between a lower quality site discovered first and a higher quality site discovered second.
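    A caricature of the one-site assessment dynamics can be written as a two-compartment ODE with mass-action recruitment. The sketch below is an assumed SIS-like form chosen for illustration, not the authors' exact equations; the ratio b*N/g plays the role of the basic recruitment number.

    ```python
    import numpy as np
    from scipy.integrate import odeint

    # U = uninterested bees, B = bees committed to (dancing for) the site.
    # Recruitment is mass-action (b*U*B); interest decays at rate g.
    def swarm(y, t, b, g):
        U, B = y
        return [-b * U * B + g * B,
                 b * U * B - g * B]

    N = 100.0
    t = np.linspace(0.0, 50.0, 500)
    sol = odeint(swarm, [N - 1.0, 1.0], t, args=(0.02, 0.5))
    print(sol[-1])   # b*N/g = 4 > 1: settles at an interested equilibrium
    ```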

  9. Distillation modeling for a uranium refining process

    Energy Technology Data Exchange (ETDEWEB)

    Westphal, B.R.

    1996-03-01

    As part of the spent fuel treatment program at Argonne National Laboratory, a vacuum distillation process is being employed for the recovery of uranium following an electrorefining process. Distillation of a salt electrolyte, containing a eutectic mixture of lithium and potassium chlorides, from uranium is achieved by a simple batch operation and is termed "cathode processing". The incremental distillation of electrolyte salt will be modeled by an equilibrium expression and on a molecular basis, since the operation is conducted under moderate vacuum conditions. As processing continues, the two models will be compared and analyzed for correlation with actual operating results. Possible factors that may contribute to aberrations from the models include impurities at the vapor-liquid boundary, distillate reflux, anomalous pressure gradients, and mass transport phenomena at the evaporating surface. Ultimately, the purpose of either process model is to enable the parametric optimization of the process.
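    The two model classes mentioned correspond to textbook limits: an equilibrium (Rayleigh) batch distillation balance and a molecular (Hertz-Knudsen-Langmuir) evaporation flux under vacuum. Schematically (notation assumed, not taken from the report):

    ```latex
    % Equilibrium (Rayleigh) batch distillation balance, L = liquid moles,
    % x = liquid mole fraction, y* = equilibrium vapor mole fraction:
    \ln\frac{L}{L_{0}} \;=\; \int_{x_{0}}^{x} \frac{dx'}{\,y^{*}(x') - x'\,}
    % Molecular evaporation mass flux (Hertz-Knudsen-Langmuir), with
    % evaporation coefficient alpha and molar mass M:
    J \;=\; \alpha\, P_{\mathrm{sat}}(T)\,\sqrt{\frac{M}{2\pi R T}}
    ```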

  10. Dealing with selection bias in educational transition models

    DEFF Research Database (Denmark)

    Holm, Anders; Jæger, Mads Meier

    2011-01-01

    This paper proposes the bivariate probit selection model (BPSM) as an alternative to the traditional Mare model for analyzing educational transitions. The BPSM accounts for selection on unobserved variables by allowing for unobserved variables which affect the probability of making educational transitions to be correlated across transitions. We use simulated and real data to illustrate how the BPSM improves on the traditional Mare model in terms of correcting for selection bias and providing credible estimates of the effect of family background on educational success. We conclude that models which account for selection on unobserved variables and high-quality data are both required in order to estimate credible educational transition models.

  11. A review of channel selection algorithms for EEG signal processing

    Science.gov (United States)

    Alotaiby, Turky; El-Samie, Fathi E. Abd; Alshebeili, Saleh A.; Ahmad, Ishtiaq

    2015-12-01

    Digital processing of electroencephalography (EEG) signals has now been popularly used in a wide variety of applications such as seizure detection/prediction, motor imagery classification, mental task classification, emotion classification, sleep state classification, and drug effects diagnosis. With the large number of EEG channels acquired, it has become apparent that efficient channel selection algorithms are needed with varying importance from one application to another. The main purpose of the channel selection process is threefold: (i) to reduce the computational complexity of any processing task performed on EEG signals by selecting the relevant channels and hence extracting the features of major importance, (ii) to reduce the amount of overfitting that may arise due to the utilization of unnecessary channels, for the purpose of improving the performance, and (iii) to reduce the setup time in some applications. Signal processing tools such as time-domain analysis, power spectral estimation, and wavelet transform have been used for feature extraction and hence for channel selection in most of channel selection algorithms. In addition, different evaluation approaches such as filtering, wrapper, embedded, hybrid, and human-based techniques have been widely used for the evaluation of the selected subset of channels. In this paper, we survey the recent developments in the field of EEG channel selection methods along with their applications and classify these methods according to the evaluation approach.

  12. Feature selection using genetic algorithm for breast cancer diagnosis: experiment on three different datasets

    NARCIS (Netherlands)

    Aalaei, Shokoufeh; Shahraki, Hadi; Rowhanimanesh, Alireza; Eslami, Saeid

    2016-01-01

    This study addresses feature selection for breast cancer diagnosis. The process uses a wrapper approach with GA-based feature selection and a PS-classifier. The results of the experiment show that the proposed model is comparable to the other models on the Wisconsin breast cancer datasets. To

  13. Five Guidelines for Selecting Hydrological Signatures

    Science.gov (United States)

    McMillan, H. K.; Westerberg, I.; Branger, F.

    2017-12-01

    Hydrological signatures are index values derived from observed or modeled series of hydrological data such as rainfall, flow or soil moisture. They are designed to extract relevant information about hydrological behavior, such as identifying dominant processes and determining the strength, speed, and spatiotemporal variability of the rainfall-runoff response. Hydrological signatures play an important role in model evaluation. They allow us to test whether particular model structures or parameter sets accurately reproduce the runoff generation processes within the watershed of interest. Most modeling studies use a selection of different signatures to capture different aspects of the catchment response, for example evaluating overall flow distribution as well as high and low flow extremes and flow timing. Such studies often choose their own set of signatures, or may borrow subsets of signatures used in multiple other works. The link between signature values and hydrological processes is not always straightforward, leading to uncertainty and variability in hydrologists' signature choices. In this presentation, we aim to encourage a more rigorous approach to hydrological signature selection, which considers the ability of signatures to represent hydrological behavior and underlying processes for the catchment and application in question. To this end, we propose a set of guidelines for selecting hydrological signatures. We describe five criteria that any hydrological signature should conform to: Identifiability, Robustness, Consistency, Representativeness, and Discriminatory Power. We describe an example of the design process for a signature, assessing possible signature designs against the guidelines above. Because of their ubiquity, we chose a signature related to the Flow Duration Curve, selecting the FDC mid-section slope as a proposed signature to quantify overall catchment behavior and flashiness. We demonstrate how assessment against each guideline could be used to
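
    For reference, one common formulation of the proposed signature, the mid-section slope of the flow duration curve between the 33% and 66% exceedance probabilities computed on log-transformed flows, can be sketched as follows (the Weibull plotting-position convention is an assumption):

      import numpy as np

      def fdc_midslope(q):
          """Mid-section slope of the flow duration curve from a flow series q."""
          q = np.sort(np.asarray(q, dtype=float))[::-1]       # descending flows
          exceed = np.arange(1, q.size + 1) / (q.size + 1.0)  # exceedance probability
          q33, q66 = np.interp([0.33, 0.66], exceed, q)
          return (np.log(q33) - np.log(q66)) / (0.66 - 0.33)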

  14. Simulation of the selective oxidation process of semiconductors

    International Nuclear Information System (INIS)

    Chahoud, M.

    2012-01-01

    A new approach to simulate the selective oxidation of semiconductors is presented. This approach is based on the so-called "black box" simulation method. This method is usually used to simulate complex processes. The chemical and physical details within the process are not considered. Only the input and output data of the process are relevant for the simulation. A virtual function linking the input and output data has to be found. In the case of selective oxidation the input data are the mask geometry and the oxidation duration, whereas the output data are the oxidation thickness distribution. The virtual function is determined as four virtual diffusion processes between the masked and non-masked areas. Each process delivers one part of the oxidation profile. The method is applied successfully to the oxidation system silicon-silicon nitride (Si-Si3N4). The fitting parameters are determined through two-dimensional comparison of experimental and simulation results. (author)

  15. Process Design Aspects for Scandium-Selective Leaching of Bauxite Residue with Sulfuric Acid

    OpenAIRE

    Konstantinos Hatzilyberis; Theopisti Lymperopoulou; Lamprini-Areti Tsakanika; Klaus-Michael Ochsenkühn; Paraskevas Georgiou; Nikolaos Defteraios; Fotios Tsopelas; Maria Ochsenkühn-Petropoulou

    2018-01-01

    Aiming at the industrial scale development of a Scandium (Sc)-selective leaching process of Bauxite Residue (BR), a set of process design aspects has been investigated. The interpretation of experimental data for Sc leaching yield, with sulfuric acid as the leaching solvent, has shown significant impact from acid feed concentration, mixing time, liquid to solids ratio (L/S), and number of cycles of leachate re-usage onto fresh BR. The thin film diffusion model, as the fundamental theory for l...

  16. Red Queen Processes Drive Positive Selection on Major Histocompatibility Complex (MHC) Genes.

    Directory of Open Access Journals (Sweden)

    Maciej Jan Ejsmond

    2015-11-01

    Full Text Available Major Histocompatibility Complex (MHC) genes code for proteins involved in the initiation of the adaptive immune response in vertebrates, which is achieved through binding oligopeptides (antigens) of pathogenic origin. Across vertebrate species, substitutions of amino acids at sites responsible for the specificity of antigen binding (ABS) are positively selected. This is attributed to pathogen-driven balancing selection, which is also thought to maintain the high polymorphism of MHC genes, and to cause the sharing of allelic lineages between species. However, the nature of this selection remains controversial. We used individual-based computer simulations to investigate the roles of two phenomena capable of maintaining MHC polymorphism: heterozygote advantage and the host-pathogen arms race (Red Queen process). Our simulations revealed that levels of MHC polymorphism were high and driven mostly by the Red Queen process at a high pathogen mutation rate, but were low and driven mostly by heterozygote advantage when the pathogen mutation rate was low. We found that novel mutations at ABSs are strongly favored by the Red Queen process, but not by heterozygote advantage, regardless of the pathogen mutation rate. However, while the strong advantage of novel alleles increased the allele turnover rate, under a high pathogen mutation rate, allelic lineages persisted for a comparable length of time under the Red Queen process and under heterozygote advantage. Thus, when pathogens evolve quickly, the Red Queen is capable of explaining both positive selection and long coalescence times, but the tension between the novel allele advantage and the persistence of alleles deserves further investigation.

  17. Purposeful selection of variables in logistic regression

    Directory of Open Access Journals (Sweden)

    Williams David Keith

    2008-12-01

    Full Text Available Abstract Background The main problem in many model-building situations is to choose from a large set of covariates those that should be included in the "best" model. A decision to keep a variable in the model might be based on the clinical or statistical significance. There are several variable selection algorithms in existence. Those methods are mechanical and as such carry some limitations. Hosmer and Lemeshow describe a purposeful selection of covariates within which an analyst makes a variable selection decision at each step of the modeling process. Methods In this paper we introduce an algorithm which automates that process. We conduct a simulation study to compare the performance of this algorithm with three well documented variable selection procedures in SAS PROC LOGISTIC: FORWARD, BACKWARD, and STEPWISE. Results We show that the advantage of this approach is when the analyst is interested in risk factor modeling and not just prediction. In addition to significant covariates, this variable selection procedure has the capability of retaining important confounding variables, resulting potentially in a slightly richer model. Application of the macro is further illustrated with the Hosmer and Lemeshow Worcester Heart Attack Study (WHAS) data. Conclusion If an analyst is in need of an algorithm that will help guide the retention of significant covariates as well as confounding ones, they should consider this macro as an alternative tool.
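
    The macro itself is written for SAS; as a schematic Python rendering of the same logic, the sketch below screens covariates univariably at p < 0.25, repeatedly drops the least significant covariate while its p-value exceeds 0.1, and retains a dropped covariate as a confounder when its removal shifts any remaining coefficient by more than 20%. The thresholds follow Hosmer and Lemeshow's suggestions, but the assumption that X is a pandas DataFrame and the stopping details are illustrative choices.

      import numpy as np
      import statsmodels.api as sm

      def purposeful_selection(X, y, p_enter=0.25, p_keep=0.10, conf_delta=0.20):
          # Step 1: univariable screening
          kept = [c for c in X.columns
                  if sm.Logit(y, sm.add_constant(X[[c]])).fit(disp=0).pvalues[c] < p_enter]
          # Step 2: backward elimination with a confounding check
          while len(kept) > 1:
              model = sm.Logit(y, sm.add_constant(X[kept])).fit(disp=0)
              worst = model.pvalues.drop("const").idxmax()
              if model.pvalues[worst] <= p_keep:
                  break
              reduced = [c for c in kept if c != worst]
              red = sm.Logit(y, sm.add_constant(X[reduced])).fit(disp=0)
              shift = np.max(np.abs((red.params[reduced] - model.params[reduced])
                                    / model.params[reduced]))
              if shift > conf_delta:      # 'worst' is a confounder: keep it
                  break
              kept = reduced
          return kept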

  18. SU-F-R-10: Selecting the Optimal Solution for Multi-Objective Radiomics Model

    International Nuclear Information System (INIS)

    Zhou, Z; Folkert, M; Wang, J

    2016-01-01

    Purpose: To develop an evidential reasoning approach for selecting the optimal solution from a Pareto solution set obtained by a multi-objective radiomics model for predicting distant failure in lung SBRT. Methods: In the multi-objective radiomics model, both sensitivity and specificity are considered as objective functions simultaneously. A Pareto solution set with many feasible solutions results from the multi-objective optimization. In this work, an optimal solution Selection methodology for Multi-Objective radiomics Learning model using the Evidential Reasoning approach (SMOLER) was proposed to select the optimal solution from the Pareto solution set. The proposed SMOLER method used the evidential reasoning approach to calculate the utility of each solution based on pre-set optimal solution selection rules. The solution with the highest utility was chosen as the optimal solution. In SMOLER, an optimal learning model coupled with a clonal selection algorithm was used to optimize model parameters. In this study, PET and CT image features and clinical parameters were utilized for predicting distant failure in lung SBRT. Results: In total, 126 solution sets were generated by adjusting predictive model parameters. Each Pareto set contains 100 feasible solutions. The solution selected by SMOLER within each Pareto set was compared to the manually selected optimal solution. Five-fold cross-validation was used to evaluate the optimal solution selection accuracy of SMOLER. The selection accuracies for the five folds were 80.00%, 69.23%, 84.00%, 84.00%, and 80.00%, respectively. Conclusion: An optimal solution selection methodology for a multi-objective radiomics learning model using the evidential reasoning approach (SMOLER) was proposed. Experimental results show that the optimal solution can be found in approximately 80% of cases.
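
    The evidential reasoning utility calculation is not reproduced here; as a simple stand-in that conveys the idea of ranking Pareto-optimal (sensitivity, specificity) pairs by pre-set preference rules, a weighted additive utility can be used (the weights below are hypothetical stand-ins for the rule base):

      import numpy as np

      def pick_solution(pareto, w_sens=0.5, w_spec=0.5):
          """pareto: (n, 2) array of (sensitivity, specificity) pairs."""
          utility = w_sens * pareto[:, 0] + w_spec * pareto[:, 1]
          return int(np.argmax(utility))           # index of the chosen solution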

  19. SU-F-R-10: Selecting the Optimal Solution for Multi-Objective Radiomics Model

    Energy Technology Data Exchange (ETDEWEB)

    Zhou, Z; Folkert, M; Wang, J [UT Southwestern Medical Center, Dallas, TX (United States)

    2016-06-15

    Purpose: To develop an evidential reasoning approach for selecting the optimal solution from a Pareto solution set obtained by a multi-objective radiomics model for predicting distant failure in lung SBRT. Methods: In the multi-objective radiomics model, both sensitivity and specificity are considered as objective functions simultaneously. A Pareto solution set with many feasible solutions results from the multi-objective optimization. In this work, an optimal solution Selection methodology for Multi-Objective radiomics Learning model using the Evidential Reasoning approach (SMOLER) was proposed to select the optimal solution from the Pareto solution set. The proposed SMOLER method used the evidential reasoning approach to calculate the utility of each solution based on pre-set optimal solution selection rules. The solution with the highest utility was chosen as the optimal solution. In SMOLER, an optimal learning model coupled with a clonal selection algorithm was used to optimize model parameters. In this study, PET and CT image features and clinical parameters were utilized for predicting distant failure in lung SBRT. Results: In total, 126 solution sets were generated by adjusting predictive model parameters. Each Pareto set contains 100 feasible solutions. The solution selected by SMOLER within each Pareto set was compared to the manually selected optimal solution. Five-fold cross-validation was used to evaluate the optimal solution selection accuracy of SMOLER. The selection accuracies for the five folds were 80.00%, 69.23%, 84.00%, 84.00%, and 80.00%, respectively. Conclusion: An optimal solution selection methodology for a multi-objective radiomics learning model using the evidential reasoning approach (SMOLER) was proposed. Experimental results show that the optimal solution can be found in approximately 80% of cases.

  20. Natural Selection as an Emergent Process: Instructional Implications

    Science.gov (United States)

    Cooper, Robert A.

    2017-01-01

    Student reasoning about cases of natural selection is often plagued by errors that stem from miscategorising selection as a direct, causal process, misunderstanding the role of randomness, and from the intuitive ideas of intentionality, teleology and essentialism. The common thread throughout many of these reasoning errors is a failure to apply…

  1. [Influence of Spectral Pre-Processing on PLS Quantitative Model of Detecting Cu in Navel Orange by LIBS].

    Science.gov (United States)

    Li, Wen-bing; Yao, Lin-tao; Liu, Mu-hua; Huang, Lin; Yao, Ming-yin; Chen, Tian-bing; He, Xiu-wen; Yang, Ping; Hu, Hui-qin; Nie, Jiang-hui

    2015-05-01

    Cu in navel orange was detected rapidly by laser-induced breakdown spectroscopy (LIBS) combined with partial least squares (PLS) quantitative analysis, and the effect of different spectral data pretreatment methods on the detection accuracy of the model was explored. Spectral data for the 52 Gannan navel orange samples were pretreated by different data smoothing methods, mean centering, and standard normal variate transformation. Then the 319~338 nm wavelength section containing characteristic spectral lines of Cu was selected to build PLS models, and the main evaluation indexes of the models, such as the regression coefficient (r), root mean square error of cross validation (RMSECV) and root mean square error of prediction (RMSEP), were compared and analyzed. The three indicators of the PLS model after 13-point smoothing and mean centering reached 0.9928, 3.43 and 3.4, respectively, and the average relative error of the prediction model was only 5.55%; in short, the calibration and prediction quality of this model were the best. The results show that by selecting the appropriate data pre-processing method, the prediction accuracy of PLS quantitative models of fruits and vegetables detected by LIBS can be improved effectively, providing a new method for fast and accurate detection of fruits and vegetables by LIBS.
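
    A minimal sketch of the winning pretreatment pipeline, 13-point smoothing followed by mean centering and a cross-validated PLS model, might look as follows (Savitzky-Golay smoothing stands in for the unspecified smoothing filter; the component count and array shapes are assumptions):

      import numpy as np
      from scipy.signal import savgol_filter
      from sklearn.cross_decomposition import PLSRegression
      from sklearn.model_selection import cross_val_predict

      def pls_with_pretreatment(spectra, cu, n_components=5):
          """spectra: (n_samples, n_wavelengths) LIBS intensities; cu: Cu contents."""
          X = savgol_filter(spectra, window_length=13, polyorder=2, axis=1)  # smoothing
          X = X - X.mean(axis=0)                       # mean centering
          pls = PLSRegression(n_components=n_components)
          pred = cross_val_predict(pls, X, cu, cv=10)  # cross-validated predictions
          rmsecv = float(np.sqrt(np.mean((pred.ravel() - cu) ** 2)))
          return pls.fit(X, cu), rmsecv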

  2. Estimation of a multivariate mean under model selection uncertainty

    Directory of Open Access Journals (Sweden)

    Georges Nguefack-Tsague

    2014-05-01

    Full Text Available Model selection uncertainty would occur if we selected a model based on one data set and subsequently applied it for statistical inferences, because the "correct" model would not be selected with certainty. When the selection and inference are based on the same dataset, some additional problems arise due to the correlation of the two stages (selection and inference). In this paper model selection uncertainty is considered and model averaging is proposed. The proposal is related to the theory of James and Stein for estimating three or more parameters from independent normal observations. We suggest that a model averaging scheme taking into account the selection procedure could be more appropriate than model selection alone. Some properties of this model averaging estimator are investigated; in particular, we show using Stein's results that it is a minimax estimator and can outperform Stein-type estimators.
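
    For concreteness, the classical positive-part James-Stein estimator that motivates the proposed averaging scheme shrinks a p-dimensional (p >= 3) vector of observed means toward zero:

      import numpy as np

      def james_stein(x, sigma2=1.0):
          """Positive-part James-Stein shrinkage of an observed mean vector x."""
          p = x.size
          factor = max(0.0, 1.0 - (p - 2) * sigma2 / float(x @ x))
          return factor * x

      # e.g. james_stein(np.array([1.2, -0.4, 0.8, 2.1])) shrinks every coordinate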

  3. Financial performance as a decision criterion of credit scoring models selection [doi: 10.21529/RECADM.2017004

    Directory of Open Access Journals (Sweden)

    Rodrigo Alves Silva

    2017-09-01

    Full Text Available This paper aims to show the importance of financial metrics in the selection of credit scoring models. To achieve this, we considered an automatic approval system approach and carried out a performance analysis of the financial metrics on the theoretical portfolios generated by seven credit scoring models based on the main statistical learning techniques. The models were estimated on the German Credit dataset and the results were analyzed based on four metrics: total accuracy, error cost, risk-adjusted return on capital, and Sharpe index. The results show that total accuracy, widely used as a criterion for selecting credit scoring models, is unable to select the most profitable model for the company, indicating the need to incorporate financial metrics into the credit scoring model selection process. Keywords: credit risk; model selection; statistical learning.
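
    As an illustration of why total accuracy can mislead, an asymmetric error-cost metric of the kind conventionally paired with the German Credit data (misclassifying a bad borrower as good is costed five times higher than the reverse) can be sketched as:

      import numpy as np

      def error_cost(y_true, y_pred, c_bad_as_good=5.0, c_good_as_bad=1.0):
          """y_true: 1 = good borrower, 0 = bad; y_pred: 1 = approve, 0 = reject."""
          bad_approved = np.sum((y_pred == 1) & (y_true == 0))
          good_rejected = np.sum((y_pred == 0) & (y_true == 1))
          return c_bad_as_good * bad_approved + c_good_as_bad * good_rejected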

  4. Comparative study on the processing of armour steels with various unconventional technologies

    Science.gov (United States)

    Herghelegiu, E.; Schnakovszky, C.; Radu, M. C.; Tampu, N. C.; Zichil, V.

    2017-08-01

    The aim of the current paper is to analyse the suitability of three unconventional technologies - abrasive water jet (AWJ), plasma and laser - for processing armour steels. To this end, two materials (Ramor 400 and Ramor 550) were selected for the experimental tests, and the quality of the cuts was quantified by considering the following characteristics: width of the processed surface at the jet inlet (Li), width of the processed surface at the jet outlet (Lo), inclination angle (a), deviation from perpendicularity (u), surface roughness (Ra) and surface hardness. It was found that in terms of cut quality and environmental impact, the best results are offered by the abrasive water jet technology. However, it has the lowest productivity compared to the other two technologies.

  5. A model for the sustainable selection of building envelope assemblies

    Energy Technology Data Exchange (ETDEWEB)

    Huedo, Patricia, E-mail: huedo@uji.es [Universitat Jaume I (Spain); Mulet, Elena, E-mail: emulet@uji.es [Universitat Jaume I (Spain); López-Mesa, Belinda, E-mail: belinda@unizar.es [Universidad de Zaragoza (Spain)

    2016-02-15

    The aim of this article is to define an evaluation model for the environmental impacts of building envelopes to support planners in the early phases of materials selection. The model is intended to estimate environmental impacts for different combinations of building envelope assemblies based on scientifically recognised sustainability indicators. These indicators will increase the amount of information that existing catalogues show to support planners in the selection of building assemblies. To define the model, first the environmental indicators were selected based on the specific aims of the intended sustainability assessment. Then, a simplified LCA methodology was developed to estimate the impacts applicable to three types of dwellings considering different envelope assemblies, building orientations and climate zones. This methodology takes into account the manufacturing, installation, maintenance and use phases of the building. Finally, the model was validated and a matrix in Excel was created as implementation of the model. - Highlights: • Method to assess the envelope impacts based on a simplified LCA • To be used at an earlier phase than the existing methods in a simple way. • It assigns a score by means of known sustainability indicators. • It estimates data about the embodied and operating environmental impacts. • It compares the investment costs with the costs of the consumed energy.

  6. A model for the sustainable selection of building envelope assemblies

    International Nuclear Information System (INIS)

    Huedo, Patricia; Mulet, Elena; López-Mesa, Belinda

    2016-01-01

    The aim of this article is to define an evaluation model for the environmental impacts of building envelopes to support planners in the early phases of materials selection. The model is intended to estimate environmental impacts for different combinations of building envelope assemblies based on scientifically recognised sustainability indicators. These indicators will increase the amount of information that existing catalogues show to support planners in the selection of building assemblies. To define the model, first the environmental indicators were selected based on the specific aims of the intended sustainability assessment. Then, a simplified LCA methodology was developed to estimate the impacts applicable to three types of dwellings considering different envelope assemblies, building orientations and climate zones. This methodology takes into account the manufacturing, installation, maintenance and use phases of the building. Finally, the model was validated and a matrix in Excel was created as implementation of the model. - Highlights: • Method to assess the envelope impacts based on a simplified LCA • To be used at an earlier phase than the existing methods in a simple way. • It assigns a score by means of known sustainability indicators. • It estimates data about the embodied and operating environmental impacts. • It compares the investment costs with the costs of the consumed energy.

  7. Selection Process of ERP Systems

    OpenAIRE

    Molnár, Bálint; Szabó, Gyula; Benczúr, András

    2013-01-01

    Background: The application and introduction of ERP systems have become a central issue for the management and operation of enterprises. Market competition forces enterprises to improve and optimize their business processes to increase efficiency and effectiveness and to better manage resources outside the company. The primary task of ERP systems is to achieve these objectives. Objective: The selection of a particular ERP system has a decisive effect on th...

  8. Selected Parameters of Micro-Jet Cooling Gases in Hybrid Spraying Process

    Directory of Open Access Journals (Sweden)

    Szczucka-Lasota B.

    2016-06-01

    Full Text Available Innovative technologies such as thermal spraying with micro-jet cooling are important modifications of classical ultrasonic spraying methods. Using a micro-stream of gases such as argon or nitrogen allows the coating to be cooled immediately after spraying, and thereby reduces the transition time during the injection of each layer. As a result of the process, a finely dispersed coating structure is obtained in a shorter time compared to the classical high velocity oxygen fuel process (HVOF). The process parameters and the type of stream equipment determine the quality of the obtained structure and the thermal stress in the coating. The article presents the relationship between selected parameters of the hybrid process and the properties of the coatings. The presented technology should be adapted to the actual production of protective coatings for machines and constructions working in wear conditions.

  9. A new general methodology for incorporating physico-chemical transformations into multi-phase wastewater treatment process models.

    Science.gov (United States)

    Lizarralde, I; Fernández-Arévalo, T; Brouckaert, C; Vanrolleghem, P; Ikumi, D S; Ekama, G A; Ayesa, E; Grau, P

    2015-05-01

    This paper introduces a new general methodology for incorporating physico-chemical and chemical transformations into multi-phase wastewater treatment process models in a systematic and rigorous way under a Plant-Wide modelling (PWM) framework. The methodology presented in this paper requires the selection of the relevant biochemical, chemical and physico-chemical transformations taking place and the definition of the mass transport for the co-existing phases. As an example a mathematical model has been constructed to describe a system for biological COD, nitrogen and phosphorus removal, liquid-gas transfer, precipitation processes, and chemical reactions. The capability of the model has been tested by comparing simulated and experimental results for a nutrient removal system with sludge digestion. Finally, a scenario analysis has been undertaken to show the potential of the obtained mathematical model to study phosphorus recovery. Copyright © 2015 Elsevier Ltd. All rights reserved.

  10. A comparative study on effective dynamic modeling methods for flexible pipe

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Chang Ho; Hong, Sup; Kim, Hyung Woo [Korea Research Institute of Ships and Ocean Engineering, Daejeon (Korea, Republic of); Kim, Sung Soo [Chungnam National University, Daejeon (Korea, Republic of)

    2015-07-15

    In this paper, in order to select a suitable method applicable to the large-deflection, small-strain problem of pipe systems in the deep seabed mining system, the finite difference method with lumped mass from the field of cable dynamics and the substructure method from the field of flexible multibody dynamics were compared. Due to the difficulty of obtaining experimental results from an actual pipe system in the deep seabed mining system, a thin cantilever beam model with experimental results was employed for the comparative study. The accuracy of the methods was investigated by comparing the experimental results and simulation results from the cantilever beam model with different numbers of elements. The efficiency of the methods was also examined by comparing the operation counts required for solving the equations of motion. Finally, this cantilever beam model, together with the comparative study results, can serve as a benchmark problem for flexible multibody dynamics.

  11. Comparing Non-Medical Sex Selection and Saviour Sibling Selection in the Case of JS and LS v Patient Review Panel: Beyond the Welfare of the Child?

    Science.gov (United States)

    Smith, Malcolm K; Taylor-Sands, Michelle

    2018-03-01

    The national ethical guidelines relevant to assisted reproductive technology (ART) have recently been reviewed by the National Health and Medical Research Council (NHMRC). The review process paid particular attention to the issue of non-medical sex selection, although ultimately, the updated ethical guidelines maintain the pre-consultation position of a prohibition on non-medical sex selection. Whilst this recent review process provided a public forum for debate and discussion of this ethically contentious issue, the Victorian case of JS and LS v Patient Review Panel (Health and Privacy) [2011] VCAT 856 provides a rare instance where the prohibition on non-medical sex selection has been explored by a court or tribunal in Australia. This paper analyses the reasoning in that decision, focusing specifically on how the Victorian Civil and Administrative Tribunal applied the statutory framework relevant to ART and its comparison to other uses of embryo selection technologies. The Tribunal relied heavily upon the welfare-of-the-child principle under the Assisted Reproductive Treatment Act 2008 (Vic). The Tribunal also compared non-medical sex selection with saviour sibling selection (that is, where a child is purposely conceived as a matched tissue donor for an existing child of the family). Our analysis leads us to conclude that the Tribunal's reasoning fails to adequately justify the denial of the applicants' request to utilize ART services to select the sex of their prospective child.

  12. [GSH fermentation process modeling using entropy-criterion based RBF neural network model].

    Science.gov (United States)

    Tan, Zuoping; Wang, Shitong; Deng, Zhaohong; Du, Guocheng

    2008-05-01

    The prediction accuracy and generalization of GSH fermentation process modeling are often deteriorated by noise in the corresponding experimental data. To avoid this problem, we present a novel RBF neural network modeling approach based on an entropy criterion. Compared with traditional MSE-criterion-based parameter learning, it considers the whole distribution structure of the training data set in the parameter learning process, and thus effectively avoids weak generalization and over-learning. The proposed approach is then applied to GSH fermentation process modeling. Our results demonstrate that the proposed method has better prediction accuracy, generalization, and robustness, offering potential merit for application in GSH fermentation process modeling.

  13. Comparative study on aerosol removal by natural processes in containment in severe accident for AP1000 reactor

    International Nuclear Information System (INIS)

    Sun, Xiaohui; Cao, Xinrong; Shi, Xingwei; Yan, Jin

    2017-01-01

    Highlights: • Characteristics of aerosol distribution in containment are obtained. • Aerosol removal by natural processes is comparatively studied by two methods. • The traditional rapid assessment method is conservative and can be applied to the AP1000 reactor. - Abstract: Focusing on aerosol removal by naturally occurring processes in containment during a severe accident for AP1000, the integral severe accident code MELCOR and the rapid assessment method described in NUREG/CR-6189 are used to study aerosol removal by natural processes. Three typical severe accidents, induced by large break loss of coolant accident (LBLOCA), small break loss of coolant accident (SBLOCA) and steam generator tube rupture (SGTR), respectively, are selected for the study. The results obtained by the two methods were further compared in the following aspects: efficiency of aerosol removal by natural processes, peak time of aerosol suspended in the containment atmosphere, peak amount of aerosol suspended in the containment atmosphere, and the time at which the aerosol removal efficiency by natural processes reaches 99.9%. It was further concluded that the rapid assessment method, with its shorter calculation process, gives more conservative results. The analysis results provide a reference for the selection of severe accident source term assessment methods for the AP1000 nuclear emergency.

  14. Can multi-criteria analysis models support the site selection for a repository for heat-generating waste?

    International Nuclear Information System (INIS)

    Gutberlet, Daniela

    2015-01-01

    The decision for or against a potential site for a nuclear waste repository is highly complex and requires decision-makers to consider multiple assessment criteria. The complexity of each site and its characteristics, and the differing opinions among members of the public and advocacy groups, mean that conflicts of interest are likely to arise. In this paper, the author suggests that multi-criteria analysis models could be used to provide methodological support during the selection process. The models can map these types of decision situations and suggest coherent solutions with relatively little formal effort. They allow users to compare different options simultaneously and ensure that their decision-making is conscious rather than arbitrary.

  15. Review and selection of unsaturated flow models

    Energy Technology Data Exchange (ETDEWEB)

    Reeves, M.; Baker, N.A.; Duguid, J.O. [INTERA, Inc., Las Vegas, NV (United States)

    1994-04-04

    Since the 1960s, ground-water flow models have been used for analysis of water resources problems. In the 1970s, emphasis began to shift to analysis of waste management problems. This shift in emphasis was largely brought about by site selection activities for geologic repositories for disposal of high-level radioactive wastes. Model development during the 1970s and well into the 1980s focused primarily on saturated ground-water flow because geologic repositories in salt, basalt, granite, shale, and tuff were envisioned to be below the water table. Selection of the unsaturated zone at Yucca Mountain, Nevada, for potential disposal of waste began to shift model development toward unsaturated flow models. Under the US Department of Energy (DOE), the Civilian Radioactive Waste Management System Management and Operating Contractor (CRWMS M&O) has the responsibility to review, evaluate, and document existing computer models; to conduct performance assessments; and to develop performance assessment models, where necessary. This document describes the CRWMS M&O approach to model review and evaluation (Chapter 2), and the requirements for unsaturated flow models which are the bases for selection from among the current models (Chapter 3). Chapter 4 identifies existing models, and their characteristics. Through a detailed examination of characteristics, Chapter 5 presents the selection of models for testing. Chapter 6 discusses the testing and verification of selected models. Chapters 7 and 8 give conclusions and make recommendations, respectively. Chapter 9 records the major references for each of the models reviewed. Appendix A, a collection of technical reviews for each model, contains a more complete list of references. Finally, Appendix B characterizes the problems used for model testing.

  16. Review and selection of unsaturated flow models

    International Nuclear Information System (INIS)

    Reeves, M.; Baker, N.A.; Duguid, J.O.

    1994-01-01

    Since the 1960's, ground-water flow models have been used for analysis of water resources problems. In the 1970's, emphasis began to shift to analysis of waste management problems. This shift in emphasis was largely brought about by site selection activities for geologic repositories for disposal of high-level radioactive wastes. Model development during the 1970's and well into the 1980's focused primarily on saturated ground-water flow because geologic repositories in salt, basalt, granite, shale, and tuff were envisioned to be below the water table. Selection of the unsaturated zone at Yucca Mountain, Nevada, for potential disposal of waste began to shift model development toward unsaturated flow models. Under the US Department of Energy (DOE), the Civilian Radioactive Waste Management System Management and Operating Contractor (CRWMS M ampersand O) has the responsibility to review, evaluate, and document existing computer models; to conduct performance assessments; and to develop performance assessment models, where necessary. This document describes the CRWMS M ampersand O approach to model review and evaluation (Chapter 2), and the requirements for unsaturated flow models which are the bases for selection from among the current models (Chapter 3). Chapter 4 identifies existing models, and their characteristics. Through a detailed examination of characteristics, Chapter 5 presents the selection of models for testing. Chapter 6 discusses the testing and verification of selected models. Chapters 7 and 8 give conclusions and make recommendations, respectively. Chapter 9 records the major references for each of the models reviewed. Appendix A, a collection of technical reviews for each model, contains a more complete list of references. Finally, Appendix B characterizes the problems used for model testing

  17. Comparative analysis of crayfish marketing in selected markets of ...

    African Journals Online (AJOL)

    Comparative analysis of crayfish marketing in selected markets of Akwa Ibom and Abia States, Nigeria. It specifically looked at market integration, costs and return, marketing margin, marketing ...

  18. Method for Business Process Management System Selection

    OpenAIRE

    Westelaken, van de, Thijs; Terwee, Bas; Ravesteijn, Pascal

    2013-01-01

    In recent years business process management (BPM) and specifically information systems that support the analysis, design and execution of processes (also called business process management systems (BPMS)) are getting more attention. This has led to an increase in research on BPM and BPMS. However the research on BPMS is mostly focused on the architecture of the system and how to implement such systems. How to select a BPM system that fits the strategy and goals of a specific organization is ...

  19. Supplier Selection by Coupling-Attribute Combinatorial Analysis

    Directory of Open Access Journals (Sweden)

    Xinyu Sun

    2017-01-01

    Full Text Available Increasing reliance on outsourcing has made supplier selection a critical success factor for a supply chain/network. In addition to cost, the synergy among product components and supplier selection criteria should be considered holistically during the supplier selection process. This paper shows this synergy using coupled-attribute analysis. The key coupling attributes, including total cost, quality, delivery reliability, and delivery lead time of the final product, are identified and formulated. A max-max model is designed to assist the selection of the optimal combination of suppliers. The results are compared with the individual supplier selection. Management insights are also discussed.

  20. Modelling and measurements of urban aerosol processes on the neighborhood scale in Rotterdam, Oslo and Helsinki

    Science.gov (United States)

    Karl, M.; Kukkonen, J.; Keuken, M. P.; Lützenkirchen, S.; Pirjola, L.; Hussein, T.

    2015-12-01

    This study evaluates the influence of aerosol processes on the particle number (PN) concentrations in three major European cities on the temporal scale of one hour, i.e. on the neighborhood and city scales. We have used selected measured data of particle size distributions from previous campaigns in the cities of Helsinki, Oslo and Rotterdam. The aerosol transformation processes were evaluated using the aerosol dynamics model MAFOR, combined with a simplified treatment of roadside and urban atmospheric dispersion. We have compared the model predictions of particle number size distributions with the measured data, and conducted sensitivity analyses regarding the influence of various model input variables. We also present a simplified parameterization for aerosol processes, which is based on the more complex aerosol process computations; this simple model can easily be implemented in both Gaussian and Eulerian urban dispersion models. Aerosol processes considered in this study were (i) the coagulation of particles, (ii) the condensation and evaporation of n-alkanes, and (iii) dry deposition. The chemical transformation of gas-phase compounds was not taken into account. It was not necessary to model the nucleation of gas-phase vapors, as the computations were started with roadside conditions. Dry deposition and coagulation of particles were identified to be the most important aerosol dynamic processes that control the evolution and removal of particles. The effect of condensation and evaporation of organic vapors emitted by vehicles on particle numbers and on particle size distributions was examined. Under inefficient dispersion conditions, condensational growth contributed significantly to the evolution of PN from roadside to the neighborhood scale. The simplified parameterization of aerosol processes can predict particle number concentrations between roadside and the urban background with an inaccuracy of ∼ 10 %, compared to the fully size-resolved MAFOR model.
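
    The dominance of deposition and coagulation can be illustrated with a monodisperse box-model sketch; this is not the MAFOR parameterization, and the kernel, deposition rate, and initial number concentration below are hypothetical:

      import numpy as np
      from scipy.integrate import solve_ivp

      K = 8e-16      # coagulation kernel, m3/s (hypothetical, monodisperse)
      K_DEP = 1e-4   # deposition-loss rate = deposition velocity / mixing height, 1/s

      def dndt(t, n):
          return -K_DEP * n - K * n ** 2   # dry deposition + self-coagulation

      sol = solve_ivp(dndt, (0.0, 3600.0), [1e11], max_step=10.0)
      print(sol.y[0, -1])                  # number concentration after one hour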

  1. First-Principles Integrated Adsorption Modeling for Selective Capture of Uranium from Seawater by Polyamidoxime Sorbent Materials.

    Science.gov (United States)

    Ladshaw, Austin P; Ivanov, Alexander S; Das, Sadananda; Bryantsev, Vyacheslav S; Tsouris, Costas; Yiacoumi, Sotira

    2018-04-18

    Nuclear power is a relatively carbon-free energy source that has the capacity to be utilized today in an effort to stem the tides of global warming. The growing demand for nuclear energy, however, could put significant strain on our uranium ore resources, and the mining activities utilized to extract that ore can leave behind long-term environmental damage. A potential solution to enhance the supply of uranium fuel is to recover uranium from seawater using amidoximated adsorbent fibers. This technology has been studied for decades but is currently plagued by the material's relatively poor selectivity of uranium over its main competitor vanadium. In this work, we investigate the binding schemes between uranium, vanadium, and the amidoxime functional groups on the adsorbent surface. Using quantum chemical methods, binding strengths are approximated for a set of complexation reactions between uranium and vanadium with amidoxime functionalities. Those approximations are then coupled with a comprehensive aqueous adsorption model developed in this work to simulate the adsorption of uranium and vanadium under laboratory conditions. Experimental adsorption studies with uranium and vanadium over a wide pH range are performed, and the data collected are compared against simulation results to validate the model. It was found that coupling ab initio calculations with process level adsorption modeling provides accurate predictions of the adsorption capacity and selectivity of the sorbent materials. Furthermore, this work demonstrates that this multiscale modeling paradigm could be utilized to aid in the selection of superior ligands or ligand compositions for the selective capture of metal ions. Therefore, this first-principles integrated modeling approach opens the door to the in silico design of next-generation adsorbents with potentially superior efficiency and selectivity for uranium over vanadium in seawater.

  2. Distillation modeling for a uranium refining process

    International Nuclear Information System (INIS)

    Westphal, B.R.

    1996-01-01

    As part of the spent fuel treatment program at Argonne National Laboratory, a vacuum distillation process is being employed for the recovery of uranium following an electrorefining process. Distillation of a salt electrolyte, containing a eutectic mixture of lithium and potassium chlorides, from uranium is achieved by a simple batch operation and is termed "cathode processing". The incremental distillation of electrolyte salt will be modeled by an equilibrium expression and on a molecular basis since the operation is conducted under moderate vacuum conditions. As processing continues, the two models will be compared and analyzed for correlation with actual operating results. Possible factors that may contribute to aberrations from the models include impurities at the vapor-liquid boundary, distillate reflux, anomalous pressure gradients, and mass transport phenomena at the evaporating surface. Ultimately, the purpose of either process model is to enable the parametric optimization of the process.

  3. Modeling of plant in vitro cultures: overview and estimation of biotechnological processes.

    Science.gov (United States)

    Maschke, Rüdiger W; Geipel, Katja; Bley, Thomas

    2015-01-01

    Plant cell and tissue cultivations are of growing interest for the production of structurally complex and expensive plant-derived products, especially in pharmaceutical production. Problems with up-scaling, low yields, and high-priced process conditions result in an increased demand for models to provide comprehension, simulation, and optimization of production processes. In the last 25 years, many models have evolved in plant biotechnology; the majority of them are specialized models for a few selected products or nutritional conditions. In this article we review, delineate, and discuss the concepts and characteristics of the most commonly used models. Therefore, the authors focus on models for plant suspension and submerged hairy root cultures. The article includes a short overview of modeling and mathematics and integrated parameters, as well as the application scope for each model. The review is meant to help researchers better understand and utilize the numerous models published for plant cultures, and to select the most suitable model for their purposes. © 2014 Wiley Periodicals, Inc.

  4. Neuroscientific Model of Motivational Process

    Directory of Open Access Journals (Sweden)

    Sung-Il Kim

    2013-03-01

    Full Text Available Considering the neuroscientific findings on reward, learning, value, decision-making, and cognitive control, motivation can be parsed into three subprocesses: a process of generating motivation, a process of maintaining motivation, and a process of regulating motivation. I propose a tentative neuroscientific model of motivational processes which consists of three distinct but continuous subprocesses, namely reward-driven approach, value-based decision making, and goal-directed control. Reward-driven approach is the process in which motivation is generated by reward anticipation and selective approach behaviors toward reward. This process recruits the ventral striatum (reward area), in which basic stimulus-action associations are formed, and is classified as an automatic motivation to which relatively less attention is assigned. By contrast, value-based decision making is the process of evaluating various outcomes of actions, learning through positive prediction error, and calculating the value continuously. The striatum and the orbitofrontal cortex (valuation area) play crucial roles in sustaining motivation. Lastly, goal-directed control is the process of regulating motivation through cognitive control to achieve goals. This consciously controlled motivation is associated with higher-level cognitive functions such as planning, retaining the goal, monitoring the performance, and regulating action. The anterior cingulate cortex (attention area) and the dorsolateral prefrontal cortex (cognitive control area) are the main neural circuits related to the regulation of motivation. These three subprocesses interact with each other by sending reward prediction error signals through the dopaminergic pathway from the striatum to the prefrontal cortex. The neuroscientific model of motivational process suggests several educational implications with regard to the generation, maintenance, and regulation of motivation to learn in the learning environment.

  5. A multidimensional analysis and modelling of flotation process for selected Polish lithological copper ore types

    Directory of Open Access Journals (Sweden)

    Niedoba Tomasz

    2017-01-01

    Full Text Available The flotation of copper ore is a complex technological process that depends on many parameters. Therefore, it is necessary to take into account the complexity of this phenomenon by choosing a multidimensional data analysis. The paper presents the results of modelling and analysis of the beneficiation process of sandstone copper ore. Considering the implementation of multidimensional statistical methods, it was necessary to carry out a multi-level experiment, which included 4 parameters (size fraction, collector type and dosage, flotation time). The main aim of the paper was the preparation of flotation process models for the recovery and the content of the metal in products. A MANOVA was implemented to explore the relationship between dependent (β, ϑ, ε, η) and independent (d, t, cd, ct) variables. The design of the models was based on linear and nonlinear regression. The results of the analysis of variance indicated the high significance of all parameters for the process. The average degree of matching of the linear models to the experimental data was 49% and 33% for the copper content in the concentrate and tailings, respectively, and 47% for the recovery of copper minerals in both. The results confirm the complexity and stochasticity of the flotation of Polish copper ore.

  6. Integration of Fast Predictive Model and SLM Process Development Chamber, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — This STTR project seeks to develop a fast predictive model for selective laser melting (SLM) processes and then integrate that model with an SLM chamber that allows...

  7. Comparative analyses of diffusion coefficients for different extraction processes from thyme

    Directory of Open Access Journals (Sweden)

    Petrovic Slobodan S.

    2012-01-01

    Full Text Available This work aimed to analyze the kinetics and mass transfer phenomena for different extraction processes from thyme (Thymus vulgaris L.) leaves. Different extraction processes with ethanol were studied: Soxhlet extraction and ultrasound-assisted batch extraction on the laboratory scale, as well as pilot plant batch extraction with mixing. The extraction processes with ethanol were compared to supercritical carbon dioxide extraction performed at 10 MPa and 40°C. Experimental data were analyzed with a mathematical model derived from Fick's second law to determine and compare diffusion coefficients in the periods of constant and decreasing extraction rate. In the fast extraction period, the values of the diffusion coefficients were one to three orders of magnitude higher than those determined for the period of slow extraction. The highest diffusion coefficient was reported for the fast extraction period of supercritical fluid extraction. In the case of the extraction processes with ethanol, ultrasound, stirring and an increase in extraction temperature enhanced the mass transfer rate in the washing phase. On the other hand, ultrasound contributed the most to the increase of the mass transfer rate in the period of slow extraction.
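
    A typical way to extract an effective diffusion coefficient from such batch kinetics is to fit the leading term of the Fick's-second-law series solution for a slab to the measured yield curve; the sketch below assumes slab geometry and a hypothetical half-thickness, and is not the paper's exact model:

      import numpy as np
      from scipy.optimize import curve_fit

      L = 0.5e-3  # leaf half-thickness, m (hypothetical)

      def slab_model(t, D, e_inf):
          """Leading term of the slab solution: extraction yield E(t) for long times."""
          return e_inf * (1.0 - (8.0 / np.pi ** 2)
                          * np.exp(-np.pi ** 2 * D * t / (4.0 * L ** 2)))

      # t: extraction times (s), e: measured yields -- hypothetical data arrays
      # (D_eff, e_inf), _ = curve_fit(slab_model, t, e, p0=[1e-11, e.max()])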

  8. Island-Model Genomic Selection for Long-Term Genetic Improvement of Autogamous Crops.

    Science.gov (United States)

    Yabe, Shiori; Yamasaki, Masanori; Ebana, Kaworu; Hayashi, Takeshi; Iwata, Hiroyoshi

    2016-01-01

    Acceleration of genetic improvement of autogamous crops such as wheat and rice is necessary to increase cereal production in response to the global food crisis. Population and pedigree methods of breeding, which are based on inbred line selection, are used commonly in the genetic improvement of autogamous crops. These methods, however, produce a few novel combinations of genes in a breeding population. Recurrent selection promotes recombination among genes and produces novel combinations of genes in a breeding population, but it requires inaccurate single-plant evaluation for selection. Genomic selection (GS), which can predict genetic potential of individuals based on their marker genotype, might have high reliability of single-plant evaluation and might be effective in recurrent selection. To evaluate the efficiency of recurrent selection with GS, we conducted simulations using real marker genotype data of rice cultivars. Additionally, we introduced the concept of an "island model" inspired by evolutionary algorithms that might be useful to maintain genetic variation through the breeding process. We conducted GS simulations using real marker genotype data of rice cultivars to evaluate the efficiency of recurrent selection and the island model in an autogamous species. Results demonstrated the importance of producing novel combinations of genes through recurrent selection. An initial population derived from admixture of multiple bi-parental crosses showed larger genetic gains than a population derived from a single bi-parental cross in whole cycles, suggesting the importance of genetic variation in an initial population. The island-model GS better maintained genetic improvement in later generations than the other GS methods, suggesting that the island-model GS can utilize genetic variation in breeding and can retain alleles with small effects in the breeding population. The island-model GS will become a new breeding method that enhances the potential of genomic
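
    The island-model idea can be sketched generically as isolated subpopulations evolving under their own selection with periodic ring migration. The skeleton below is an illustrative evolutionary-algorithm loop, not the authors' breeding simulator; fitness would be, for example, a genomic estimated breeding value from a GS model.

      import numpy as np

      rng = np.random.default_rng(0)

      def island_gs(fitness, n_islands=4, pop=40, n_loci=200,
                    gens=60, migrate_every=5, n_migrants=2):
          islands = [rng.integers(0, 2, (pop, n_loci)) for _ in range(n_islands)]
          for g in range(1, gens + 1):
              for i, P in enumerate(islands):
                  f = np.array([fitness(ind) for ind in P])
                  parents = P[np.argsort(f)[-pop // 2:]]        # truncation selection
                  pairs = parents[rng.integers(0, len(parents), size=(pop, 2))]
                  mask = rng.random((pop, n_loci)) < 0.5        # uniform crossover
                  islands[i] = np.where(mask, pairs[:, 0], pairs[:, 1])
              if g % migrate_every == 0:                        # ring migration
                  emigrants = [isl[:n_migrants].copy() for isl in islands]
                  for i in range(n_islands):
                      islands[(i + 1) % n_islands][:n_migrants] = emigrants[i]
          return islands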

  9. Island-Model Genomic Selection for Long-Term Genetic Improvement of Autogamous Crops.

    Directory of Open Access Journals (Sweden)

    Shiori Yabe

    Full Text Available Acceleration of genetic improvement of autogamous crops such as wheat and rice is necessary to increase cereal production in response to the global food crisis. Population and pedigree methods of breeding, which are based on inbred line selection, are used commonly in the genetic improvement of autogamous crops. These methods, however, produce a few novel combinations of genes in a breeding population. Recurrent selection promotes recombination among genes and produces novel combinations of genes in a breeding population, but it requires inaccurate single-plant evaluation for selection. Genomic selection (GS), which can predict genetic potential of individuals based on their marker genotype, might have high reliability of single-plant evaluation and might be effective in recurrent selection. To evaluate the efficiency of recurrent selection with GS, we conducted simulations using real marker genotype data of rice cultivars. Additionally, we introduced the concept of an "island model" inspired by evolutionary algorithms that might be useful to maintain genetic variation through the breeding process. We conducted GS simulations using real marker genotype data of rice cultivars to evaluate the efficiency of recurrent selection and the island model in an autogamous species. Results demonstrated the importance of producing novel combinations of genes through recurrent selection. An initial population derived from admixture of multiple bi-parental crosses showed larger genetic gains than a population derived from a single bi-parental cross in whole cycles, suggesting the importance of genetic variation in an initial population. The island-model GS better maintained genetic improvement in later generations than the other GS methods, suggesting that the island-model GS can utilize genetic variation in breeding and can retain alleles with small effects in the breeding population. The island-model GS will become a new breeding method that enhances the

  10. Refining processes of selected copper alloys

    Directory of Open Access Journals (Sweden)

    S. Rzadkosz

    2009-04-01

    Full Text Available The analysis of the refining effectiveness of liquid copper and selected copper alloys by various micro-additions and special refining substances was performed. The influence of purifying, modifying and deoxidation operations performed in the metal bath on the properties of selected copper-matrix alloys was examined. Refining substances, protecting-purifying slags, and deoxidation and modifying substances containing micro-additions of elements such as zirconium, boron, phosphorus, sodium and lithium, or their compounds, introduced in order to change the microstructures and properties of the alloys, were applied in the examinations. Special attention was directed to the macro- and microstructures of the alloys, their tensile strength and elongation, and their hot-cracking sensitivity. Refining effects were estimated by comparing the microstructure changes with the property changes of copper and selected alloys from the group of tin bronzes.

  11. A CONCEPTUAL MODEL FOR IMPROVED PROJECT SELECTION AND PRIORITISATION

    Directory of Open Access Journals (Sweden)

    P. J. Viljoen

    2012-01-01

    Full Text Available

    Project portfolio management processes are often designed and operated as a series of stages (or project phases) and gates. However, the flow of such a process is often slow, characterised by queues waiting for a gate decision and by repeated work from previous stages waiting for additional information or for re-processing. In this paper the authors propose a conceptual model that applies supply chain and constraint management principles to the project portfolio management process. An advantage of the proposed model is that it provides the ability to select and prioritise projects without undue changes to project schedules. This should result in faster flow through the system.


  12. Development of site selection process for an LILW repository in Slovenia

    International Nuclear Information System (INIS)

    Zeleznik, N.; Kralj, M.; Mele, I.; Veselic, M.

    2005-01-01

    The activities regarding the LILW repository site selection in Slovenia are planned to meet the requirements of the Act on Ionising Radiation Protection and Nuclear Safety, especially the requirement that the site for a repository should be selected by 2008 and the repository should be in operation by 2013. In November 2004, the official administrative procedure for the siting of the repository started with the first spatial public conference on the spatial planning procedure. It was carried out by the Ministry of the Environment and Spatial Planning and ARAO. Immediately after the conference the Program for the preparation of the detailed plan of national importance for the LILW repository was accepted by the Ministry. At the beginning of December 2004, ARAO invited all Slovenian local communities to participate in the site selection process and volunteer a site or area in their local community for further investigation. At the beginning of April 2005 the first phase of the bidding process was concluded. ARAO received applications from eight local communities. A pre-feasibility study to define three of the most promising locations was conducted because only three locations are foreseen by the Program for the preparation of the detailed plan of national importance. Methodologies were prepared for the assessment of different parameters of technical, financial, environmental and spatial suitability as well as public acceptability. Comparative, preferential and also exclusion criteria for the respective parameters were defined. The results of the desk and field research were compared and further assessed in order to obtain a maximum of three local communities with three potential sites in which the probability of siting the LILW repository seems to be the highest. Detailed plans of national importance will be prepared for these sites. (author)

  13. A comparative study of covariance selection models for the inference of gene regulatory networks.

    Science.gov (United States)

    Stifanelli, Patrizia F; Creanza, Teresa M; Anglani, Roberto; Liuzzi, Vania C; Mukherjee, Sayan; Schena, Francesco P; Ancona, Nicola

    2013-10-01

The inference, or 'reverse-engineering', of gene regulatory networks from expression data and the description of the complex dependency structures among genes are open issues in modern molecular biology. In this paper we compared three regularized methods of covariance selection for the inference of gene regulatory networks, developed to circumvent the problems arising when the number of observations n is smaller than the number of genes p. The examined approaches provided three alternative estimates of the inverse covariance matrix: (a) the 'PINV' method is based on the Moore-Penrose pseudoinverse, (b) the 'RCM' method performs correlation between regression residuals and (c) the 'ℓ2C' method maximizes a properly regularized log-likelihood function. Our extensive simulation studies showed that ℓ2C outperformed the other two methods, having the most predictive partial correlation estimates and the highest sensitivity in inferring conditional dependencies between genes, even when only a few observations were available. The application of this method to inferring gene networks of the isoprenoid biosynthesis pathways in Arabidopsis thaliana revealed a negative partial correlation coefficient between the two hubs of the two isoprenoid pathways and, more importantly, provided evidence of cross-talk between genes in the plastidial and cytosolic pathways. When applied to gene expression data relative to a signature of the HRAS oncogene in human cell cultures, the method revealed 9 genes (p-value<0.0005) directly interacting with HRAS and sharing the same Ras-responsive binding site for the transcription factor RREB1. This result suggests that the transcriptional activation of these genes is mediated by a common transcription factor downstream of Ras signaling. Software implementing the methods, in the form of Matlab scripts, is available at: http://users.ba.cnr.it/issia/iesina18/CovSelModelsCodes.zip. Copyright © 2013 The Authors.
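For readers who want to experiment, below is a minimal Python sketch of the 'PINV' idea described above: partial correlations are read off the Moore-Penrose pseudoinverse of the (rank-deficient) sample covariance when n < p. The function name and the random data are illustrative; this is not the authors' Matlab code.

```python
import numpy as np

def pinv_partial_correlations(X):
    """X: (n samples, p genes). Returns a (p, p) partial correlation matrix."""
    S = np.cov(X, rowvar=False)      # sample covariance (rank-deficient if n < p)
    K = np.linalg.pinv(S)            # Moore-Penrose pseudoinverse as precision estimate
    d = np.sqrt(np.diag(K))
    P = -K / np.outer(d, d)          # standardize: rho_ij = -k_ij / sqrt(k_ii * k_jj)
    np.fill_diagonal(P, 1.0)
    return P

rng = np.random.default_rng(0)
X = rng.normal(size=(30, 100))       # n = 30 observations, p = 100 genes
P = pinv_partial_correlations(X)
print(P.shape, round(P[0, 1], 4))
```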

  14. The Ideal Criteria of Supplier Selection for SMEs Food Processing Industry

    OpenAIRE

    Ramlan Rohaizan; Engku Abu Bakar Engku Muhammad Nazri; Mahmud Fatimah; Ng Hooi Keng

    2016-01-01

Selection of a good supplier is important in determining the performance and profitability of SMEs in the food processing industry. A lack of managerial capability in supplier selection affects the competitiveness of these SMEs. This research aims to determine the ideal criteria of supplier selection for the food processing industry using the Analytical Hierarchy Process (AHP). The research was carried out in a quantitative method by distributing questionnaires to 50 ...

  15. Management Model for Evaluation and Selection of Engineering Equipment Suppliers for Construction Projects in Iraq

    Directory of Open Access Journals (Sweden)

    Kadhim Raheem Erzaij

    2016-06-01

Full Text Available Engineering equipment is an essential part of a construction project and is usually manufactured with long lead times, large costs and special engineering requirements. The construction manager requires that equipment be delivered by the site-need date, in the right quantity, at an appropriate cost and with the required quality, and this entails an efficient supplier who can satisfy these targets. Selection of an engineering equipment supplier is a crucial managerial process: it requires evaluation of multiple suppliers according to multiple criteria. This process is usually performed manually and based on only a limited set of evaluation criteria, so better alternatives may be neglected. A three-stage survey comprising a number of public and private companies in the Iraqi construction sector was employed to identify the main criteria and sub-criteria for supplier selection and their priorities. The main criteria identified were quality of product, commercial aspect, delivery, reputation and position, and system quality. An effective multiple-criteria decision-making (MCDM) technique, the analytic hierarchy process (AHP), has been used to derive importance weights for the criteria based on expert judgment. Thereafter, a management software system for Evaluation and Selection of Engineering Equipment Suppliers (ESEES) has been developed based on the results obtained from the AHP. This model was validated in a case study at the Municipality of Baghdad involving actual cases of selecting pump suppliers for infrastructure projects. According to experts, this model can improve the current supplier selection process and aid decision makers in making better choices in the domain of selecting engineering equipment suppliers.
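As an illustration of the AHP step used in studies like this one, the following Python sketch derives criterion weights from a pairwise comparison matrix via the principal eigenvector and checks Saaty's consistency ratio. The judgment matrix is invented for demonstration.

```python
import numpy as np

def ahp_weights(A):
    """A: reciprocal pairwise comparison matrix (Saaty's 1-9 scale)."""
    eigvals, eigvecs = np.linalg.eig(A)
    k = np.argmax(eigvals.real)
    w = np.abs(eigvecs[:, k].real)
    w /= w.sum()                              # priority vector (criterion weights)
    n = A.shape[0]
    ci = (eigvals.real[k] - n) / (n - 1)      # consistency index
    ri = {3: 0.58, 4: 0.90, 5: 1.12}[n]       # Saaty's random index (small n only)
    return w, ci / ri                         # weights, consistency ratio

A = np.array([[1,   3,   5],
              [1/3, 1,   2],
              [1/5, 1/2, 1]], dtype=float)    # hypothetical expert judgments
w, cr = ahp_weights(A)
print(w.round(3), "CR =", round(cr, 3))       # CR < 0.1 is conventionally acceptable
```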

  16. Modeling styles in business process modeling

    NARCIS (Netherlands)

    Pinggera, J.; Soffer, P.; Zugal, S.; Weber, B.; Weidlich, M.; Fahland, D.; Reijers, H.A.; Mendling, J.; Bider, I.; Halpin, T.; Krogstie, J.; Nurcan, S.; Proper, E.; Schmidt, R.; Soffer, P.; Wrycza, S.

    2012-01-01

    Research on quality issues of business process models has recently begun to explore the process of creating process models. As a consequence, the question arises whether different ways of creating process models exist. In this vein, we observed 115 students engaged in the act of modeling, recording

  17. Energy distribution in selected fragment vibrations in dissociation processes in polyatomic molecules

    International Nuclear Information System (INIS)

    Band, Y.B.; Freed, K.F.

    1977-01-01

The full quantum theory of dissociation processes in polyatomic molecules is converted to a form enabling the isolation of a selected fragment vibration. This form enables the easy evaluation of the probability distribution for energy partitioning between this vibration and all other degrees of freedom that results from the sudden Franck-Condon rearrangement process. The resultant Franck-Condon factors involve the square of the one-dimensional overlap integral between effective oscillator wavefunctions and the wavefunctions for the selected fragment vibration, a form that resembles the simple golden rule model for polyatomic dissociation and reaction processes. The full quantum theory can, therefore, be viewed as providing both a rigorous justification for certain generic aspects of the simple golden rule model as well as providing a number of important generalizations thereof. Some of these involve dealing with initial bound state vibrational excitation, explicit molecule, fragment and energy dependence of the effective oscillator, and the incorporation of all isotopic dependence. In certain limiting situations the full quantum theory yields simple, readily usable analytic expressions for the frequency and equilibrium position of the effective oscillator. Specific applications are presented for the direct photodissociation of HCN, DCN, and CO2, where comparisons between the full theory and the simple golden rule are presented. We also discuss the generalizations of the previous theory to enable the incorporation of effects of distortion in the normal modes as a function of the reaction coordinate on the repulsive potential energy surface

  18. Pavement maintenance optimization model using Markov Decision Processes

    Science.gov (United States)

    Mandiartha, P.; Duffield, C. F.; Razelan, I. S. b. M.; Ismail, A. b. H.

    2017-09-01

This paper presents an optimization model for the selection of pavement maintenance interventions using the theory of Markov decision processes (MDP). Some particular characteristics of the MDP developed in this paper distinguish it from other similar studies or optimization models intended for pavement maintenance policy development. These unique characteristics include the direct inclusion of constraints in the formulation of the MDP, the use of an average-cost MDP, and a policy development process based on the dual linear programming solution. The limited information and discussion available on these matters in terms of stochastic optimization models in road network management motivates this study. The paper uses a data set acquired from the road authority of the state of Victoria, Australia, to test the model, and recommends steps in the computation of the MDP-based stochastic optimization model, leading to the development of an optimum pavement maintenance policy.
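A toy Python version of the dual linear programming formulation mentioned above is sketched below: stationary state-action frequencies are chosen to minimize average cost subject to flow-balance constraints, and the policy is read off the optimal frequencies. The three pavement states, two actions, transition matrices and costs are all hypothetical.

```python
import numpy as np
from scipy.optimize import linprog

S, A = 3, 2                      # states: good/fair/poor; actions: do-nothing/repair
# P[a][s, s2] = transition probability; c[s, a] = per-step cost (invented numbers)
P = [np.array([[0.7, 0.3, 0.0], [0.0, 0.6, 0.4], [0.0, 0.0, 1.0]]),   # do nothing
     np.array([[0.9, 0.1, 0.0], [0.8, 0.2, 0.0], [0.7, 0.3, 0.0]])]   # repair
c = np.array([[0.0, 5.0], [2.0, 6.0], [10.0, 8.0]])

# Flow balance for each state s2: sum_a x(s2,a) = sum_{s,a} P(s2|s,a) x(s,a),
# plus normalization sum x = 1. Variables x(s,a) are flattened as s*A + a.
A_eq = np.zeros((S + 1, S * A))
for s2 in range(S):
    for s in range(S):
        for a in range(A):
            A_eq[s2, s * A + a] = (s == s2) - P[a][s, s2]
A_eq[S, :] = 1.0
b_eq = np.r_[np.zeros(S), 1.0]

res = linprog(c.ravel(), A_eq=A_eq, b_eq=b_eq, bounds=[(0, None)] * (S * A))
x = res.x.reshape(S, A)
print("optimal action per state:", x.argmax(axis=1), "average cost:", round(res.fun, 3))
```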

  19. Heat transfer model and finite element formulation for simulation of selective laser melting

    Science.gov (United States)

    Roy, Souvik; Juha, Mario; Shephard, Mark S.; Maniatty, Antoinette M.

    2017-10-01

A novel approach and finite element formulation for modeling the melting, consolidation, and re-solidification process that occurs in selective laser melting additive manufacturing is presented. Two state variables are introduced to track the phase (melt/solid) and the degree of consolidation (powder/fully dense). The effect of the consolidation on the absorption of the laser energy into the material, as it transforms from a porous powder to a dense melt, is considered. A Lagrangian finite element formulation, which solves the governing equations on the unconsolidated reference configuration, is derived; it naturally considers the effect of the changing geometry as the powder melts, without needing to update the simulation domain. The finite element model is implemented in a general-purpose parallel finite element solver. Results are presented that compare well with experimental results from the literature for a single laser track. Predictions for a spiral laser pattern are also shown.
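The two state variables described above can be illustrated with a minimal sketch: the phase indicator follows temperature reversibly, while consolidation is irreversible once the powder has melted. The melting temperature and update rule below are assumptions for illustration, not the paper's finite element implementation.

```python
def update_state(T, consolidated, T_melt=1923.0):
    """T: current temperature [K]; consolidated: bool carried over from earlier steps.
    T_melt is an illustrative value (roughly a Ti alloy melting point)."""
    melted = T >= T_melt                    # phase switches reversibly with temperature
    consolidated = consolidated or melted   # consolidation never reverses
    return melted, consolidated

history = [300.0, 2100.0, 800.0]            # heating past melt, then cooling
melted, consolidated = False, False
for T in history:
    melted, consolidated = update_state(T, consolidated)
print(melted, consolidated)                 # False True: re-solidified but now dense
```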

  20. Application of mechanistic models to fermentation and biocatalysis for next-generation processes

    DEFF Research Database (Denmark)

    Gernaey, Krist; Eliasson Lantz, Anna; Tufvesson, Pär

    2010-01-01

Mechanistic models are based on deterministic principles, and recently, interest in them has grown substantially. Herein we present an overview of mechanistic models and their applications in biotechnology, including future perspectives. Model utility is highlighted with respect to the selection of variables required for measurement, control and process design. In the near future, mechanistic models with a higher degree of detail will play key roles in the development of efficient next-generation fermentation and biocatalytic processes. Moreover, mechanistic models will be used increasingly ...

1. The role of interviewers in effective job recruitment and selection processes

    Directory of Open Access Journals (Sweden)

    Kola O. Odeku

    2015-04-01

Full Text Available Interview processes are dynamic and sometimes very sensitive, and as such they need to be managed effectively and efficiently by evaluating applicants equally, without showing favour or prejudice before, during and after all processes have been completed. Many interview processes for the purpose of appointment selection have been tainted by unethical practices, with the panellists taking part displaying various forms of partisanship and prejudice. Sometimes a selector may hold a premeditated negative mindset towards an applicant, which may become evident during the interview. This may impact the reasoning and judgement of the selector and the panellists, thus influencing their decisions, and a brilliant, well-performing applicant may be found unqualified. Ineffective selection and recruitment processes increasingly affect employers by denting their corporate image and sometimes exposing them to vicious legal battles in the courts. This article examines the problems associated with prejudice and unethical practices during selection processes, particularly on the part of recruiters and selectors. It points out that panellists must be properly scrutinised before they are appointed to any selection panel and that they should disclose any interest, prejudice or bias that could affect the outcome of the process. It is argued that any member of a panel who is found to have compromised his or her position in a selection process should be punitively sanctioned.

  2. Development of Electrically Switched Ion Exchange Process for Selective Ion Separations

    International Nuclear Information System (INIS)

    Rassat, Scot D.; Sukamto, Johanes H.; Orth, Rick J.; Lilga, Michael A.; Hallen, Richard T.

    1999-01-01

The electrically switched ion exchange (ESIX) process, being developed at Pacific Northwest National Laboratory, provides an alternative separation method for selectively removing ions from process and waste streams. In the ESIX process, in which an electroactive ion exchange film is deposited onto a high-surface-area electrode, uptake and elution are controlled directly by modulating the electrochemical potential of the film. This paper addresses the engineering issues that must be resolved to fully develop ESIX for specific industrial alkali cation separation challenges. The cycling stability, chemical stability and alkali cation selectivity of nickel hexacyanoferrate (NiHCF) electroactive films were investigated. The selectivity of NiHCF was determined using cyclic voltammetry and a quartz crystal microbalance to quantify ion uptake in the film. Separation factors indicated a high selectivity for cesium and a moderate selectivity for potassium in high-sodium-content solutions. A NiHCF film with improved redox cycling and chemical stability in a simulated pulp mill process stream, a targeted application for ESIX, was also prepared and tested.

  3. Feature-selective attention in healthy old age: a selective decline in selective attention?

    Science.gov (United States)

    Quigley, Cliodhna; Müller, Matthias M

    2014-02-12

Deficient selection against irrelevant information has been proposed to underlie age-related cognitive decline. We recently reported evidence for maintained early sensory selection when older and younger adults used spatial selective attention to perform a challenging task. Here we explored age-related differences when spatial selection is not possible and feature-selective attention must be deployed. We additionally compared the integrity of feedforward processing by exploiting the well-established phenomenon of suppression of visual cortical responses attributable to interstimulus competition. The electroencephalogram was measured while older and younger human adults responded to brief occurrences of coherent motion in an attended stimulus composed of randomly moving, orientation-defined, flickering bars. Attention was directed to horizontal or vertical bars by a pretrial cue, after which two orthogonally oriented, overlapping stimuli or a single stimulus were presented. Horizontal and vertical bars flickered at different frequencies and thereby elicited separable steady-state visual-evoked potentials, which were used to examine the effect of feature-based selection and the competitive influence of a second stimulus on ongoing visual processing. Age differences were found in feature-selective attentional modulation of visual responses: older adults did not show consistent modulation of magnitude or phase. In contrast, the suppressive effect of a second stimulus was robust and comparable in magnitude across age groups, suggesting that bottom-up processing of the current stimuli is essentially unchanged in healthy old age. Thus, it seems that visual processing per se is unchanged, but top-down attentional control is compromised in older adults when space cannot be used to guide selection.
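The frequency-tagging logic behind this design can be sketched in a few lines of Python: because each stimulus flickers at its own rate, the SSVEP it drives can be read out as spectral amplitude at the tagged frequency. The flicker frequencies and the simulated signal below are illustrative, not the study's actual parameters.

```python
import numpy as np

fs, dur = 500.0, 4.0                       # sampling rate [Hz], epoch length [s]
t = np.arange(0, dur, 1 / fs)
f_horiz, f_vert = 10.0, 12.0               # hypothetical flicker frequencies
eeg = (1.5 * np.sin(2 * np.pi * f_horiz * t)   # attended stimulus -> larger SSVEP
       + 0.7 * np.sin(2 * np.pi * f_vert * t)  # unattended stimulus
       + np.random.default_rng(1).normal(0, 1, t.size))

spectrum = np.abs(np.fft.rfft(eeg)) * 2 / t.size
freqs = np.fft.rfftfreq(t.size, 1 / fs)
for f in (f_horiz, f_vert):
    amp = spectrum[np.argmin(np.abs(freqs - f))]   # amplitude at the tagged frequency
    print(f"{f:.0f} Hz SSVEP amplitude: {amp:.2f}")
```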

  4. Information-theoretic model selection for optimal prediction of stochastic dynamical systems from data

    Science.gov (United States)

    Darmon, David

    2018-03-01

    In the absence of mechanistic or phenomenological models of real-world systems, data-driven models become necessary. The discovery of various embedding theorems in the 1980s and 1990s motivated a powerful set of tools for analyzing deterministic dynamical systems via delay-coordinate embeddings of observations of their component states. However, in many branches of science, the condition of operational determinism is not satisfied, and stochastic models must be brought to bear. For such stochastic models, the tool set developed for delay-coordinate embedding is no longer appropriate, and a new toolkit must be developed. We present an information-theoretic criterion, the negative log-predictive likelihood, for selecting the embedding dimension for a predictively optimal data-driven model of a stochastic dynamical system. We develop a nonparametric estimator for the negative log-predictive likelihood and compare its performance to a recently proposed criterion based on active information storage. Finally, we show how the output of the model selection procedure can be used to compare candidate predictors for a stochastic system to an information-theoretic lower bound.
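A crude stand-in for the criterion described above can be sketched as follows: each candidate embedding dimension is scored by the negative log-likelihood of held-out one-step predictions from a nearest-neighbor predictor, with Gaussian errors assumed for simplicity (the paper's estimator is nonparametric). The AR(1) test signal and all settings are illustrative.

```python
import numpy as np
from sklearn.neighbors import KNeighborsRegressor

rng = np.random.default_rng(2)
x = np.zeros(2000)
for t in range(1, 2000):                   # noisy AR(1): the true memory is 1 step
    x[t] = 0.8 * x[t - 1] + rng.normal(0, 0.5)

def nll(p, split=1500):
    """Negative log-predictive likelihood for embedding dimension p."""
    X = np.column_stack([x[p - 1 - i:len(x) - 1 - i] for i in range(p)])  # lags 1..p
    y = x[p:]
    model = KNeighborsRegressor(n_neighbors=20).fit(X[:split], y[:split])
    sigma2 = (y[:split] - model.predict(X[:split])).var()   # training residual variance
    err = y[split:] - model.predict(X[split:])               # held-out errors
    return 0.5 * np.mean(np.log(2 * np.pi * sigma2) + err**2 / sigma2)

for p in (1, 2, 3, 5):
    print(p, round(nll(p), 4))             # the smallest NLL selects the dimension
```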

  5. Physical and mathematical modelling of extrusion processes

    DEFF Research Database (Denmark)

    Arentoft, Mogens; Gronostajski, Z.; Niechajowics, A.

    2000-01-01

    The main objective of the work is to study the extrusion process using physical modelling and to compare the findings of the study with finite element predictions. The possibilities and advantages of the simultaneous application of both of these methods for the analysis of metal forming processes...

  6. Mathematical modeling of biological processes

    CERN Document Server

    Friedman, Avner

    2014-01-01

    This book on mathematical modeling of biological processes includes a wide selection of biological topics that demonstrate the power of mathematics and computational codes in setting up biological processes with a rigorous and predictive framework. Topics include: enzyme dynamics, spread of disease, harvesting bacteria, competition among live species, neuronal oscillations, transport of neurofilaments in axon, cancer and cancer therapy, and granulomas. Complete with a description of the biological background and biological question that requires the use of mathematics, this book is developed for graduate students and advanced undergraduate students with only basic knowledge of ordinary differential equations and partial differential equations; background in biology is not required. Students will gain knowledge on how to program with MATLAB without previous programming experience and how to use codes in order to test biological hypothesis.

  7. Calibration model maintenance in melamine resin production: Integrating drift detection, smart sample selection and model adaptation.

    Science.gov (United States)

    Nikzad-Langerodi, Ramin; Lughofer, Edwin; Cernuda, Carlos; Reischer, Thomas; Kantner, Wolfgang; Pawliczek, Marcin; Brandstetter, Markus

    2018-07-12

The physico-chemical properties of Melamine Formaldehyde (MF) based thermosets are largely influenced by the degree of polymerization (DP) in the underlying resin. On-line supervision of the turbidity point by means of vibrational spectroscopy has recently emerged as a promising technique to monitor the DP of MF resins. However, spectroscopic determination of the DP relies on chemometric models, which are usually sensitive to drifts caused by instrumental and/or sample-associated changes occurring over time. In order to detect the time point when drifts start causing prediction bias, we here explore a universal drift detector based on a faded version of the Page-Hinkley (PH) statistic, which we test in three data streams from an industrial MF resin production process. We employ committee disagreement (CD), computed as the variance of model predictions from an ensemble of partial least squares (PLS) models, as a measure for sample-wise prediction uncertainty and use the PH statistic to detect changes in this quantity. We further explore supervised and unsupervised strategies for (semi-)automatic model adaptation upon detection of a drift. For the former, manual reference measurements are requested whenever statistical thresholds on Hotelling's T² and/or Q-residuals are violated. Models are subsequently re-calibrated using weighted partial least squares in order to increase the influence of newer samples, which increases the flexibility when adapting to new (drifted) states. Unsupervised model adaptation is carried out exploiting the dual antecedent-consequent structure of a recently developed fuzzy systems variant of PLS termed FLEXFIS-PLS. In particular, antecedent parts are updated while maintaining the internal structure of the local linear predictors (i.e. the consequents). We found improved drift detection capability of the CD compared to Hotelling's T² and Q-residuals when used in combination with the proposed PH test. Furthermore, we found that active
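The classic Page-Hinkley statistic at the core of the detector can be sketched as follows; the paper uses a faded variant, and the stream here is a synthetic stand-in for the committee-disagreement values. The delta and threshold parameters are illustrative.

```python
def page_hinkley(stream, delta=0.005, threshold=1.0):
    """Return the index at which an upward drift is flagged, else None."""
    mean, cum, cum_min = 0.0, 0.0, 0.0
    for t, v in enumerate(stream, start=1):
        mean += (v - mean) / t                 # running mean of the stream
        cum += v - mean - delta                # cumulative deviation above the mean
        cum_min = min(cum_min, cum)
        if cum - cum_min > threshold:          # PH test: flag the drift
            return t
    return None

import random
random.seed(3)
stream = [random.gauss(0.1, 0.02) for _ in range(200)]    # stable uncertainty
stream += [random.gauss(0.3, 0.02) for _ in range(100)]   # drift sets in
print("drift flagged at sample", page_hinkley(stream))
```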

  8. Multivariate fault isolation of batch processes via variable selection in partial least squares discriminant analysis.

    Science.gov (United States)

    Yan, Zhengbing; Kuang, Te-Hui; Yao, Yuan

    2017-09-01

In recent years, multivariate statistical monitoring of batch processes has become a popular research topic, wherein multivariate fault isolation is an important step aiming at the identification of the faulty variables contributing most to the detected process abnormality. Although contribution plots have been commonly used in statistical fault isolation, such methods suffer from the smearing effect between correlated variables. In particular, in batch process monitoring, the high autocorrelations and cross-correlations that exist in variable trajectories make the smearing effect unavoidable. To address this problem, a variable selection-based fault isolation method is proposed in this research, which transforms the fault isolation problem into a variable selection problem in partial least squares discriminant analysis and solves it by calculating a sparse partial least squares model. Unlike traditional methods, the proposed method emphasizes the relative importance of each process variable. Such information may help process engineers in conducting root-cause diagnosis. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.

  9. Strategy for design NIR calibration sets based on process spectrum and model space: An innovative approach for process analytical technology.

    Science.gov (United States)

    Cárdenas, V; Cordobés, M; Blanco, M; Alcalà, M

    2015-10-10

The pharmaceutical industry is under stringent regulations on quality control of its products, because quality is critical for both the production process and consumer safety. According to the framework of "process analytical technology" (PAT), a complete understanding of the process and stepwise monitoring of manufacturing are required. Near infrared spectroscopy (NIRS) combined with chemometrics has lately proved efficient, useful and robust for pharmaceutical analysis. One crucial step in developing effective NIRS-based methodologies is selecting an appropriate calibration set with which to construct models affording accurate predictions. In this work, we developed calibration models for a pharmaceutical formulation during its three manufacturing stages: blending, compaction and coating. A novel methodology is proposed for selecting the calibration set - the "process spectrum" - into which physical changes in the samples at each stage are algebraically incorporated. We also established a "model space" defined by Hotelling's T² and Q-residual statistics for outlier identification - inside/outside the defined space - in order to select objectively the factors to be used in calibration set construction. The results obtained confirm the efficacy of the proposed methodology for stepwise pharmaceutical quality control, and the relevance of the study as a guideline for the implementation of this easy and fast methodology in the pharma industry. Copyright © 2015 Elsevier B.V. All rights reserved.
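A minimal sketch of such a "model space" check follows: Hotelling's T² and Q-residual statistics from a PCA model flag samples that fall outside the space spanned by the calibration set. The synthetic spectra and percentile-based limits below are assumptions for illustration.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(4)
X_cal = rng.normal(size=(60, 200))            # calibration spectra (synthetic)
pca = PCA(n_components=5).fit(X_cal)

def t2_q(X):
    scores = pca.transform(X)
    t2 = np.sum(scores**2 / pca.explained_variance_, axis=1)   # Hotelling's T²
    resid = X - pca.inverse_transform(scores)
    q = np.sum(resid**2, axis=1)                               # Q residuals (SPE)
    return t2, q

t2, q = t2_q(X_cal)
t2_lim, q_lim = np.percentile(t2, 95), np.percentile(q, 95)    # empirical 95% limits
x_new = rng.normal(0.5, 1.2, size=(1, 200))                    # a drifted sample
t2n, qn = t2_q(x_new)
print("outside model space:", bool(((t2n > t2_lim) | (qn > q_lim)).any()))
```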

  10. Comparing the Goodness of Different Statistical Criteria for Evaluating the Soil Water Infiltration Models

    Directory of Open Access Journals (Sweden)

    S. Mirzaee

    2016-02-01

Full Text Available Introduction: The infiltration process is one of the most important components of the hydrologic cycle. Quantifying the infiltration of water into soil is of great importance in watershed management. Prediction of flooding, erosion and pollutant transport all depend on the rate of runoff, which is directly affected by the rate of infiltration. Quantification of infiltration is also necessary to determine the availability of water for crop growth and to estimate the amount of additional water needed for irrigation. Thus, an accurate model is required to estimate the infiltration of water into soil. The ability of physical and empirical models to simulate soil processes is commonly measured through comparisons of simulated and observed values. For these reasons, a large variety of indices have been proposed and used over the years for comparing soil water infiltration models. Among the proposed indices, some are absolute criteria, such as the widely used root mean square error (RMSE), while others are relative (i.e. normalized) criteria, such as the Nash and Sutcliffe (1970) efficiency criterion (NSE). Selecting and using appropriate statistical criteria to evaluate and interpret the results of infiltration models is essential, because each criterion focuses on specific types of errors. Descriptions of the various goodness-of-fit indices, including their advantages and shortcomings, and rigorous discussion of the suitability of each index are also very important. The objective of this study is to compare the goodness of different statistical criteria for evaluating soil water infiltration models. The comparison techniques considered to define the best models were: the coefficient of determination (R2), the root mean square error (RMSE), efficiency criteria (NSEI) and modified forms (such as NSEjI, NSESQRTI, NSElnI and NSEiI). Comparatively little work has been carried out on the meaning and
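Two of the criteria compared in the study, one absolute (RMSE) and one normalized (NSE), are shown in code form below on invented observed/simulated values; the modified NSE forms apply the same formula to transformed (e.g. square-root or log) values.

```python
import numpy as np

def rmse(obs, sim):
    """Root mean square error: absolute criterion, in the units of the data."""
    return np.sqrt(np.mean((obs - sim) ** 2))

def nse(obs, sim):
    """Nash-Sutcliffe efficiency: 1 is perfect, 0 matches the mean of observations."""
    return 1 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

obs = np.array([2.1, 3.4, 4.8, 6.0, 7.5])    # observed infiltration values (invented)
sim = np.array([2.0, 3.6, 4.5, 6.3, 7.2])    # model predictions (invented)
print(f"RMSE = {rmse(obs, sim):.3f}, NSE = {nse(obs, sim):.3f}")
```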

  11. Selecting public relations personnel of hospitals by analytic network process.

    Science.gov (United States)

    Liao, Sen-Kuei; Chang, Kuei-Lun

    2009-01-01

This study describes the use of the analytic network process (ANP) in the selection of hospital public relations personnel in Taiwan. Starting with interviews of 48 practitioners and executives in northern Taiwan, we collected selection criteria. We then retained the 12 critical criteria that were mentioned more than 40 times by these respondents: interpersonal skill, experience, negotiation, language, ability to follow orders, cognitive ability, adaptation to environment, adaptation to company, emotion, loyalty, attitude, and response. Finally, we discussed with the 20 executives how to group these important criteria into three perspectives to structure the hierarchy for hospital public relations personnel selection. After discussion with practitioners and executives, we find that the selection criteria are interrelated. The ANP, which incorporates interdependence relationships, is a new approach for multi-criteria decision-making. Thus, we apply the ANP to select the optimal public relations personnel for hospitals. An empirical study of public relations personnel selection problems in Taiwanese hospitals is conducted to illustrate how the selection procedure works.
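The computational core of ANP can be illustrated briefly: interdependencies between criteria are encoded in a column-stochastic supermatrix, which is raised to successive powers until it converges, and the limit columns give the overall priorities. The 3x3 matrix below is invented for demonstration.

```python
import numpy as np

def limit_supermatrix(W, tol=1e-9, max_iter=10_000):
    """W: column-stochastic supermatrix of criteria/alternative influences."""
    M = W.copy()
    for _ in range(max_iter):
        M2 = M @ W
        if np.abs(M2 - M).max() < tol:   # powers have converged to the limit matrix
            return M2
        M = M2
    return M

W = np.array([[0.2, 0.5, 0.3],
              [0.5, 0.1, 0.4],
              [0.3, 0.4, 0.3]])          # hypothetical weighted supermatrix
L = limit_supermatrix(W)
print(L[:, 0].round(3))                  # steady-state priorities (all columns agree)
```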

  12. Process modeling style

    CERN Document Server

    Long, John

    2014-01-01

Process Modeling Style focuses on aspects of process modeling beyond notation that are very important to practitioners. Many people who model processes focus on the specific notation used to create their drawings. While that is important, there are many other aspects to modeling, such as naming, creating identifiers, descriptions, interfaces, patterns, and creating useful process documentation. Experienced author John Long focuses on those non-notational aspects of modeling, which practitioners will find invaluable. Gives solid advice for creating roles and work products.

  13. Experimental Investigation of Comparative Process Capabilities of Metal and Ceramic Injection Molding for Precision Applications

    DEFF Research Database (Denmark)

    Islam, Aminul; Giannekas, Nikolaos; Marhöfer, David Maximilian

    2016-01-01

The purpose of this paper is to make a comparative study of the process capabilities of the two branches of the powder injection molding (PIM) process - metal injection molding (MIM) and ceramic injection molding (CIM) - for high-end precision applications. The state-of-the-art literature does ... The results and discussion presented in the paper will be useful for a thorough understanding of the MIM and CIM processes and for selecting the right material and process for the right application, or even for combining metal and ceramic materials by molding to produce metal–ceramic hybrid components.

  14. Intermediate product selection and blending in the food processing industry

    DEFF Research Database (Denmark)

    Kilic, Onur A.; Akkerman, Renzo; van Donk, Dirk Pieter

    2013-01-01

    This study addresses a capacitated intermediate product selection and blending problem typical for two-stage production systems in the food processing industry. The problem involves the selection of a set of intermediates and end-product recipes characterising how those selected intermediates...

  15. A Graphic Overlay Method for Selection of Osteotomy Site in Chronic Radial Head Dislocation: An Evaluation of 3D-printed Bone Models.

    Science.gov (United States)

    Kim, Hui Taek; Ahn, Tae Young; Jang, Jae Hoon; Kim, Kang Hee; Lee, Sung Jae; Jung, Duk Young

    2017-03-01

Three-dimensional (3D) computed tomography (CT) imaging is now being used to generate 3D models for planning orthopaedic surgery, but the process remains time consuming and expensive. For chronic radial head dislocation, we have designed a graphic overlay approach that employs selected 3D computer images and widely available software to simplify the process of osteotomy site selection. We studied 5 patients (2 traumatic and 3 congenital) with unilateral radial head dislocation. These patients were treated with surgery planned on traditional radiographs, but full sets of 3D CT images were also acquired both before and after surgery; these 3D CT images form the basis of this study. From the 3D CT images, 3 sets of 3D-printed bone models were generated for each patient: 2 copies of the preoperative condition and 1 copy of the postoperative condition. One set of the preoperative models was then actually osteotomized and fixed in the manner suggested by our graphic technique. Arcs of rotation of the 3 sets of 3D-printed bone models were then compared. The arcs of rotation of the 3 groups of bone models were significantly different, with the models osteotomized according to our graphic technique having the widest arcs. For chronic radial head dislocation, our graphic overlay approach simplifies the selection of the osteotomy site(s). The 3D-printed bone models suggest that this approach could improve the range of motion of the forearm in actual surgical practice. Level IV - therapeutic study.

  16. Method for Automatic Selection of Parameters in Normal Tissue Complication Probability Modeling.

    Science.gov (United States)

    Christophides, Damianos; Appelt, Ane L; Gusnanto, Arief; Lilley, John; Sebag-Montefiore, David

    2018-07-01

To present a fully automatic method to generate multiparameter normal tissue complication probability (NTCP) models and to compare its results with those of a published model, using the same patient cohort. Data were analyzed from 345 rectal cancer patients treated with external radiation therapy to predict the risk of patients developing grade 1 or ≥2 cystitis. In total, 23 clinical factors were included in the analysis as candidate predictors of cystitis. Principal component analysis was used to decompose the bladder dose-volume histogram into 8 principal components explaining more than 95% of the variance. The data set of clinical factors and principal components was divided into training (70%) and test (30%) data sets, with the training data set used by the algorithm to compute an NTCP model. The first step of the algorithm was to obtain a bootstrap sample, followed by multicollinearity reduction using the variance inflation factor and genetic algorithm optimization to determine an ordinal logistic regression model that minimizes the Bayesian information criterion (BIC). The process was repeated 100 times, and the model with the minimum BIC was recorded on each iteration. The most frequent model was selected as the final "automatically generated model" (AGM). The published model and the AGM were fitted on the training data sets, and the risk of cystitis was calculated. The 2 models had no significant differences in predictive performance, both for the training and test data sets (P > .05), and identified similar clinical and dosimetric factors as predictors. Both models exhibited good explanatory performance on the training data set (P > .44), which was reduced on the test data sets (P < .05). The predictive value of the AGM is equivalent to that of the expert-derived published model. It demonstrates potential for saving time, tackling problems with a large number of parameters, and standardizing variable selection in NTCP modeling.
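A greatly simplified sketch of the selection loop follows: candidate predictor subsets of a logistic model are scored by BIC and the best subset is kept. The paper's full algorithm adds bootstrapping, variance-inflation-factor screening, a genetic algorithm and an ordinal outcome; a plain binary logistic model on synthetic data is used here.

```python
import itertools
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(5)
n, p = 345, 6
X = rng.normal(size=(n, p))
y = (X[:, 0] - 0.8 * X[:, 2] + rng.normal(0, 1, n) > 0).astype(int)  # 2 true predictors

def bic(features):
    Xs = X[:, features]
    m = LogisticRegression(C=1e6, max_iter=1000).fit(Xs, y)   # effectively unpenalized
    loglik = np.log(m.predict_proba(Xs)[np.arange(n), y]).sum()
    k = len(features) + 1                                     # coefficients + intercept
    return k * np.log(n) - 2 * loglik

best = min((s for r in range(1, 4) for s in itertools.combinations(range(p), r)),
           key=bic)                                           # exhaustive over small subsets
print("selected predictors:", best, "BIC:", round(bic(best), 1))
```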

  17. A comparative analysis for multiattribute selection among renewable energy alternatives using fuzzy axiomatic design and fuzzy analytic hierarchy process

    Energy Technology Data Exchange (ETDEWEB)

    Kahraman, Cengiz; Kaya, Ihsan; Cebi, Selcuk [Istanbul Technical University, Department of Industrial Engineering, 34367, Macka-Istanbul (Turkey)

    2009-10-15

Renewable energy is energy generated from natural resources, such as sunlight, wind, rain, tides and geothermal heat, which are naturally replenished. Energy resources are very important from the perspective of economics and politics for all countries; hence, the selection of the best alternative plays an important role in any country's energy investments. Among decision-making methodologies, axiomatic design (AD) and the analytic hierarchy process (AHP) are often used in the literature. Fuzzy set theory is a powerful tool for treating the uncertainty arising from incomplete or vague information. In this paper, fuzzy multicriteria decision-making methodologies are suggested for the selection among renewable energy alternatives. The first methodology is based on the AHP, which allows the evaluation scores from experts to be linguistic expressions, crisp values, or fuzzy numbers, while the second is based on AD principles under fuzziness, which evaluates the alternatives under objective or subjective criteria with respect to the functional requirements obtained from experts. The originality of the paper comes from the application of fuzzy AD to the selection of the best renewable energy alternative and its comparison with fuzzy AHP. In the application of the proposed methodologies, the most appropriate renewable energy alternative is determined for Turkey. (author)

  18. Image based 3D city modeling : Comparative study

    Directory of Open Access Journals (Sweden)

    S. P. Singh

    2014-06-01

Full Text Available A 3D city model is a digital representation of the Earth's surface and its related objects, such as buildings, trees, vegetation, and man-made features belonging to urban areas. The demand for 3D city modeling is increasing rapidly for various engineering and non-engineering applications. Generally, four main image-based approaches are used for virtual 3D city model generation: sketch-based modeling, procedural-grammar-based modeling, close-range-photogrammetry-based modeling, and modeling based on computer vision techniques. SketchUp, CityEngine, Photomodeler and Agisoft Photoscan are the main software packages representing these approaches, respectively. These packages take different approaches and offer different methods for image-based 3D city modeling. A literature study shows that, to date, no complete comparative study of this kind is available for creating complete 3D city models from images. This paper gives a comparative assessment of these four image-based 3D modeling approaches, based mainly on data acquisition methods, data processing techniques and output 3D model products. For this research work, the study area is the campus of the civil engineering department, Indian Institute of Technology, Roorkee (India); this 3D campus acts as a prototype for a city. The study also explains various governing parameters, factors and work experiences, gives a brief introduction to the strengths and weaknesses of these four image-based techniques, and comments on what can and cannot be done with each software package. The study concludes that each software package has advantages and limitations, and that the choice of software depends on the user's requirements for the 3D project. For normal visualization projects, SketchUp is a good option. For 3D documentation records, Photomodeler gives good

  19. Transitional processes: Territorial organization of authorities and the future constitution of Serbia comparative analysis of five constitutional models

    Directory of Open Access Journals (Sweden)

    Despotović Ljubiša M.

    2004-01-01

Full Text Available In this paper the authors give a comparative analysis of the territorial organization of authorities in five constitutional models for Serbia. The paper consists of the following chapters: Introduction; Outline of the Constitution of the Kingdom of Serbia; Basic Principles of the New Constitution of Serbia - DSS; Outline of the Constitution of the Republic of Serbia - DS; Constitutional Solutions for Serbia - BCLJP; Project of the Constitution of the Republic of Serbia - Forum iuris; Conclusion. The analysis of the territorial organization of authorities is set in the context of the processes of transition and of achieving the important principles of civil society and civil autonomy.

  20. Intermediate product selection and blending in the food processing industry

    NARCIS (Netherlands)

    Kilic, Onur A.; Akkerman, Renzo; van Donk, Dirk Pieter; Grunow, Martin

    2013-01-01

    This study addresses a capacitated intermediate product selection and blending problem typical for two-stage production systems in the food processing industry. The problem involves the selection of a set of intermediates and end-product recipes characterising how those selected intermediates are

  1. Comparing single-tree selection, group selection, and clearcutting for regenerating oaks and pines in the Missouri Ozarks

    Science.gov (United States)

    Randy G. Jensen; John M. Kabrick

    2008-01-01

    In the Missouri Ozarks, there is considerable concern about the effectiveness of the uneven-aged methods of single-tree selection and group selection for oak (Quercus L.) and shortleaf pine (Pinus echinata Mill.) regeneration. We compared the changes in reproduction density of oaks and pine following harvesting by single-tree...

  2. Forward and Reverse Process Models for the Squeeze Casting Process Using Neural Network Based Approaches

    Directory of Open Access Journals (Sweden)

    Manjunath Patel Gowdru Chandrashekarappa

    2014-01-01

Full Text Available The present research work is focused on developing an intelligent system to establish input-output relationships utilizing forward and reverse mappings of artificial neural networks. Forward mapping aims at predicting the density and secondary dendrite arm spacing (SDAS) from a known set of squeeze-cast process parameters: time delay, pressure duration, squeeze pressure, pouring temperature, and die temperature. An attempt is also made to meet the industrial requirement of developing a reverse model to predict the recommended squeeze-cast parameters for a desired density and SDAS. Two different neural network based approaches have been proposed to carry out this task, namely, a back propagation neural network (BPNN) and a genetic algorithm neural network (GA-NN). Batch-mode training is employed for both supervised learning networks and requires a large amount of training data; this training data set is generated artificially at random using regression equations derived from real experiments carried out earlier by the same authors. The performances of the BPNN and GA-NN models are compared with each other and with regression for ten test cases. The results show that both models are capable of making better predictions, and that the models can be effectively used on the shop floor in the selection of the most influential parameters for the desired outputs.
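As a sketch of the forward mapping idea, the following Python snippet trains a small neural network to predict density and SDAS from the five process parameters. The synthetic training data stand in for the authors' regression-generated data; all ranges and coefficients are invented.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(6)
# inputs: time delay, pressure duration, squeeze pressure, pouring T, die T (invented ranges)
X = rng.uniform([5, 20, 40, 650, 150], [25, 60, 120, 750, 300], size=(500, 5))
density = 2.6 + 0.002 * X[:, 2] - 0.0005 * X[:, 3] + rng.normal(0, 0.01, 500)
sdas = 40 - 0.1 * X[:, 2] + 0.05 * X[:, 4] + rng.normal(0, 0.5, 500)
Y = np.column_stack([density, sdas])          # two outputs: density and SDAS

Xs = StandardScaler().fit_transform(X)        # scale inputs before network training
net = MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000, random_state=0)
net.fit(Xs[:400], Y[:400])                    # train on 400 samples, test on 100
print("test R^2:", round(net.score(Xs[400:], Y[400:]), 3))
```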

  3. Braze alloy process and strength characterization studies for 18 nickel grade 200 maraging steel with application to wind tunnel models

    Science.gov (United States)

    Bradshaw, James F.; Sandefur, Paul G., Jr.; Young, Clarence P., Jr.

    1991-01-01

A comprehensive study of the braze alloy selection process and strength characterization, with application to wind tunnel models, is presented. The applications for this study include the installation of stainless steel pressure tubing in model airfoil sections made of 18 Ni 200 grade maraging steel and the joining of wing structural components by brazing. Acceptable braze alloys for these applications are identified, along with process and thermal braze cycle data and thermal management procedures. Shear specimens are used to evaluate comparative shear strength properties for the various alloys at both room and cryogenic (-300 F) temperatures, and include the effects of electroless nickel plating. Nickel plating was found to significantly enhance both the wettability and strength properties of the various braze alloys studied. The data are provided for use in selecting braze alloys for use with 18 Ni grade 200 steel in the design of wind tunnel models to be tested in ambient or cryogenic environments.

  4. Integrated Site Model Process Model Report

    International Nuclear Information System (INIS)

    Booth, T.

    2000-01-01

    The Integrated Site Model (ISM) provides a framework for discussing the geologic features and properties of Yucca Mountain, which is being evaluated as a potential site for a geologic repository for the disposal of nuclear waste. The ISM is important to the evaluation of the site because it provides 3-D portrayals of site geologic, rock property, and mineralogic characteristics and their spatial variabilities. The ISM is not a single discrete model; rather, it is a set of static representations that provide three-dimensional (3-D), computer representations of site geology, selected hydrologic and rock properties, and mineralogic-characteristics data. These representations are manifested in three separate model components of the ISM: the Geologic Framework Model (GFM), the Rock Properties Model (RPM), and the Mineralogic Model (MM). The GFM provides a representation of the 3-D stratigraphy and geologic structure. Based on the framework provided by the GFM, the RPM and MM provide spatial simulations of the rock and hydrologic properties, and mineralogy, respectively. Functional summaries of the component models and their respective output are provided in Section 1.4. Each of the component models of the ISM considers different specific aspects of the site geologic setting. Each model was developed using unique methodologies and inputs, and the determination of the modeled units for each of the components is dependent on the requirements of that component. Therefore, while the ISM represents the integration of the rock properties and mineralogy into a geologic framework, the discussion of ISM construction and results is most appropriately presented in terms of the three separate components. This Process Model Report (PMR) summarizes the individual component models of the ISM (the GFM, RPM, and MM) and describes how the three components are constructed and combined to form the ISM

  5. The application of the analytic hierarchy process (AHP) in uranium mine mining method of the optimal selection

    International Nuclear Information System (INIS)

    Tan Zhongyin; Kuang Zhengping; Qiu Huiyuan

    2014-01-01

The analytic hierarchy process (AHP) is a systematic, hierarchical analysis method that combines qualitative and quantitative assessment. In this article, the basic decision theory of the AHP is applied to a project example in the northern Guangdong region: a hierarchical analysis model for the optimal selection of the in-situ mining method is established and analysed. The results show that the AHP-based mining method selection model is reliable, that the optimization results conform to the in-situ mining method actually in use, and that the approach has good practicability. (authors)

  6. Use of strategic environmental assessment in the site selection process for a radioactive waste disposal facility in Slovenia.

    Science.gov (United States)

    Dermol, Urška; Kontić, Branko

    2011-01-01

    The benefits of strategic environmental considerations in the process of siting a repository for low- and intermediate-level radioactive waste (LILW) are presented. The benefits have been explored by analyzing differences between the two site selection processes. One is a so-called official site selection process, which is implemented by the Agency for radwaste management (ARAO); the other is an optimization process suggested by experts working in the area of environmental impact assessment (EIA) and land-use (spatial) planning. The criteria on which the comparison of the results of the two site selection processes has been based are spatial organization, environmental impact, safety in terms of potential exposure of the population to radioactivity released from the repository, and feasibility of the repository from the technical, financial/economic and social point of view (the latter relates to consent by the local community for siting the repository). The site selection processes have been compared with the support of the decision expert system named DEX. The results of the comparison indicate that the sites selected by ARAO meet fewer suitability criteria than those identified by applying strategic environmental considerations in the framework of the optimization process. This result stands when taking into account spatial, environmental, safety and technical feasibility points of view. Acceptability of a site by a local community could not have been tested, since the formal site selection process has not yet been concluded; this remains as an uncertain and open point of the comparison. Copyright © 2010 Elsevier Ltd. All rights reserved.

  7. Communication activities for NUMO's site selection process

    International Nuclear Information System (INIS)

    Takeuchi, Mitsuo; Okuyama, Shigeru; Kitayama, Kazumi; Kuba, Michiyoshi

    2004-01-01

    A siting program for geological disposal of high-level radioactive waste (HLW) in Japan has just started and is moving into a new stage of communication with the public. A final repository site will be selected via a stepwise process, as stipulated in the Specified Radioactive Waste Final Disposal Act promulgated in June 2000. Based on the Act, the site selection process of the Nuclear Waste Management Organization of Japan (NUMO, established in October 2000) will be carried out in the three steps: selection of Preliminary Investigation Areas (PIAs), selection of Detailed Investigation Areas (DIAs) and selection of the Repository Site. The Act also defines NUMO's responsibilities in terms of implementing the HLW disposal program in an open and transparent manner. NUMO fully understands the importance of public participation in its activities and is aiming to promote public involvement in the process of site selection based on a fundamental policy, which consists of 'adopting a stepwise approach', 'respecting the initiative of municipalities' and 'ensuring transparency in information disclosure'. This policy is clearly reflected in the adoption of an open solicitation approach for volunteer municipalities for Preliminary Investigation Areas (PIAs). NUMO made the official announcement of the start of its open solicitation program on 19 December 2002. This paper outlines how NUMO's activities are currently carried out with a view to encouraging municipalities to volunteer as PIAs and how public awareness of the safety of the HLW disposal is evaluated at this stage

  8. Modeling Natural Selection

    Science.gov (United States)

    Bogiages, Christopher A.; Lotter, Christine

    2011-01-01

    In their research, scientists generate, test, and modify scientific models. These models can be shared with others and demonstrate a scientist's understanding of how the natural world works. Similarly, students can generate and modify models to gain a better understanding of the content, process, and nature of science (Kenyon, Schwarz, and Hug…

  9. From scenarios to domain models: processes and representations

    Science.gov (United States)

    Haddock, Gail; Harbison, Karan

    1994-03-01

    The domain specific software architectures (DSSA) community has defined a philosophy for the development of complex systems. This philosophy improves productivity and efficiency by increasing the user's role in the definition of requirements, increasing the systems engineer's role in the reuse of components, and decreasing the software engineer's role to the development of new components and component modifications only. The scenario-based engineering process (SEP), the first instantiation of the DSSA philosophy, has been adopted by the next generation controller project. It is also the chosen methodology of the trauma care information management system project, and the surrogate semi-autonomous vehicle project. SEP uses scenarios from the user to create domain models and define the system's requirements. Domain knowledge is obtained from a variety of sources including experts, documents, and videos. This knowledge is analyzed using three techniques: scenario analysis, task analysis, and object-oriented analysis. Scenario analysis results in formal representations of selected scenarios. Task analysis of the scenario representations results in descriptions of tasks necessary for object-oriented analysis and also subtasks necessary for functional system analysis. Object-oriented analysis of task descriptions produces domain models and system requirements. This paper examines the representations that support the DSSA philosophy, including reference requirements, reference architectures, and domain models. The processes used to create and use the representations are explained through use of the scenario-based engineering process. Selected examples are taken from the next generation controller project.

  10. Is it Worth Comparing Different Bankruptcy Models?

    Directory of Open Access Journals (Sweden)

    Miroslava Dolejšová

    2015-01-01

Full Text Available The aim of this paper is to compare the performance of small enterprises in the Zlín and Olomouc Regions. These enterprises were assessed using the Altman Z-Score model, the IN05 model, the Zmijewski model and the Springate model. The batch selected for this analysis included 16 enterprises from the Zlín Region and 16 enterprises from the Olomouc Region. Financial statements subjected to the analysis are from 2006 and 2010. The statistical data analysis was performed using the one-sample z-test for proportions and the paired t-test. The outcomes of the evaluation run using the Altman Z-Score model, the IN05 model and the Springate model revealed the enterprises to be financially sound, but the Zmijewski model identified them as being insolvent. The one-sample z-test for proportions confirmed that at least 80% of these enterprises show a sound financial condition. A comparison of all models has emphasized the substantial difference produced by the Zmijewski model. The paired t-test showed that the financial performance of small enterprises had remained the same during the years involved. It is recommended that small enterprises assess their financial performance using two different bankruptcy models. They may wish to combine the Zmijewski model with any bankruptcy model (the Altman Z-Score model, the IN05 model or the Springate model) to ensure a proper method of analysis.
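For reference, the classic (1968) Altman Z-Score used in such comparisons is easy to compute; the study may have used a variant suited to private firms, so the coefficients and cut-offs below are the original public-firm ones, and the figures are invented.

```python
def altman_z(wc, re, ebit, mve, sales, ta, tl):
    """wc: working capital, re: retained earnings, mve: market value of equity,
    ta: total assets, tl: total liabilities (all in the same currency units)."""
    return (1.2 * wc / ta + 1.4 * re / ta + 3.3 * ebit / ta
            + 0.6 * mve / tl + 1.0 * sales / ta)

z = altman_z(wc=120, re=300, ebit=90, mve=500, sales=800, ta=1000, tl=400)
# Classic cut-offs: > 2.99 safe zone, 1.81-2.99 grey zone, < 1.81 distress zone.
zone = "safe" if z > 2.99 else "grey" if z > 1.81 else "distress"
print(round(z, 2), zone)
```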

11. Application of the AHP method in the partner selection process for supply chain development

    Directory of Open Access Journals (Sweden)

    Barac Nada

    2012-06-01

Full Text Available The process of developing a supply chain is long and complex, with many restrictions and obstacles along the way. In this paper the authors focus on the first stage in developing a supply chain: the process of selecting partners. This phase of development significantly affects the competitive position of the supply chain and the value created for the consumer. The selected partners, or 'links', of the supply chain influence its future performance, which points to the necessity of full commitment to this process. The process of partner selection is conditioned by the key criteria used on that occasion; the use of inadequate criteria may endanger the whole supply chain by selecting partners that do not match its future needs. This paper analyses partner selection based on the key criteria used by managers in Serbia. For this purpose we used the AHP method. The results show which criteria managers rank highest.

  12. Maximum entropy perception-action space: a Bayesian model of eye movement selection

    OpenAIRE

Colas, Francis; Bessière, Pierre; Girard, Benoît

    2010-01-01

In this article, we investigate the issue of the selection of eye movements in a free-eye Multiple Object Tracking task. We propose a Bayesian model of retinotopic maps with a complex logarithmic mapping. This model is structured in two parts: a representation of the visual scene, and a decision model based on the representation. We compare different decision models based on different features of the representation and we show that taking into account uncertainty helps...

  13. Continuous-Time Mean-Variance Portfolio Selection under the CEV Process

    Directory of Open Access Journals (Sweden)

    Hui-qiang Ma

    2014-01-01

    Full Text Available We consider a continuous-time mean-variance portfolio selection model when stock price follows the constant elasticity of variance (CEV process. The aim of this paper is to derive an optimal portfolio strategy and the efficient frontier. The mean-variance portfolio selection problem is formulated as a linearly constrained convex program problem. By employing the Lagrange multiplier method and stochastic optimal control theory, we obtain the optimal portfolio strategy and mean-variance efficient frontier analytically. The results show that the mean-variance efficient frontier is still a parabola in the mean-variance plane, and the optimal strategies depend not only on the total wealth but also on the stock price. Moreover, some numerical examples are given to analyze the sensitivity of the efficient frontier with respect to the elasticity parameter and to illustrate the results presented in this paper. The numerical results show that the price of risk decreases as the elasticity coefficient increases.
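The CEV dynamics assumed in the paper, dS = mu*S dt + k*S^beta dW, can be sanity-checked numerically with an Euler-Maruyama simulation, which makes the role of the elasticity parameter tangible. Parameter values below are illustrative, not those of the paper.

```python
import numpy as np

def simulate_cev(s0=1.0, mu=0.08, k=0.3, beta=0.5, T=1.0, n=252, paths=10_000, seed=7):
    """Euler-Maruyama paths of the CEV SDE dS = mu*S dt + k*S**beta dW."""
    rng = np.random.default_rng(seed)
    dt = T / n
    S = np.full(paths, s0)
    for _ in range(n):
        dW = rng.normal(0, np.sqrt(dt), paths)
        S = np.maximum(S + mu * S * dt + k * S**beta * dW, 1e-12)  # keep S nonnegative
    return S

for beta in (0.5, 1.0):                      # beta = 1 recovers the lognormal-like case
    S = simulate_cev(beta=beta)
    print(f"beta={beta}: mean={S.mean():.3f}, std={S.std():.3f}")
```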

  14. Continuing professional education and the selection of candidates: the case for a tripartite model.

    Science.gov (United States)

    Ellis, L B

    2000-02-01

    This paper argues the case for a tripartite model involving the manager educator and practitioner in the selection of candidates to programmes of continuing professional education (CPE). Nurse educators are said to play a key link in the education practice chain (Pendleton & Myles 1991), yet with the introduction of a market philosophy for education, the educator appears to have little, if any, influence over the selection of CPE candidates. Empirical studies on the value of an effective system for identifying the educational needs of the individual and the locality are unequivocal in specifying the benefits of a collaborative selection process (Larcombe & Maggs 1991). However, there are few studies that offer a model of collaboration and fewer still on how to operationalize such a model. This paper presents the policy and legislative context of CPE leading to the development of a market philosophy. The tension between educational reforms such as life-long learning and diminishing and finite resources are highlighted. These strategic issues provide the backdrop and rationale for considering the process for identifying CPE needs, and the characteristics of an effective system as suggested in the literature. Finally, this paper outlines recommendations for a partnership between the manager practitioner and educationalist in the selection of CPE candidates.

  15. Statistical power of model selection strategies for genome-wide association studies.

    Directory of Open Access Journals (Sweden)

    Zheyang Wu

    2009-07-01

    Full Text Available Genome-wide association studies (GWAS) aim to identify genetic variants related to diseases by examining the associations between phenotypes and hundreds of thousands of genotyped markers. Because many genes are potentially involved in common diseases and a large number of markers are analyzed, it is crucial to devise an effective strategy to identify truly associated variants that have individual and/or interactive effects, while controlling false positives at the desired level. Although a number of model selection methods have been proposed in the literature, including marginal search, exhaustive search, and forward search, their relative performance has only been evaluated through limited simulations due to the lack of an analytical approach to calculating the power of these methods. This article develops a novel statistical approach for power calculation, derives accurate formulas for the power of different model selection strategies, and then uses the formulas to evaluate and compare these strategies in genetic model spaces. In contrast to previous studies, our theoretical framework allows for random genotypes, correlations among test statistics, and a false-positive control based on GWAS practice. After the accuracy of our analytical results is validated through simulations, they are utilized to systematically evaluate and compare the performance of these strategies in a wide class of genetic models. For a specific genetic model, our results clearly reveal how different factors, such as effect size, allele frequency, and interaction, jointly affect the statistical power of each strategy. An example is provided for the application of our approach to empirical research. The statistical approach used in our derivations is general and can be employed to address the model selection problems in other random predictor settings. We have developed an R package markerSearchPower to implement our formulas, which can be downloaded from the
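
    The power computed analytically in this work can also be approximated by brute-force simulation. The toy sketch below (not the authors' formulas or their markerSearchPower package) estimates the power of a marginal single-marker search at a Bonferroni-corrected genome-wide threshold, for one causal variant under an additive model.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

def marginal_power(n=2000, m=100_000, maf=0.3, effect=0.15, reps=200):
    """Empirical power of a marginal scan to detect one causal SNP among m
    markers, testing at a Bonferroni-corrected 0.05/m threshold. Only the
    causal marker is simulated: null markers almost never pass the
    threshold, so this approximates the full scan's power."""
    alpha = 0.05 / m
    hits = 0
    for _ in range(reps):
        g = rng.binomial(2, maf, n)           # additive genotype coding 0/1/2
        y = effect * g + rng.normal(size=n)   # phenotype with one causal effect
        r = np.corrcoef(g, y)[0, 1]
        tstat = r * np.sqrt((n - 2) / (1 - r * r))
        p = 2 * stats.t.sf(abs(tstat), n - 2)
        hits += p < alpha
    return hits / reps

print("power of marginal search:", marginal_power())
```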

  16. [Location selection for Shenyang urban parks based on GIS and multi-objective location allocation model].

    Science.gov (United States)

    Zhou, Yuan; Shi, Tie-Mao; Hu, Yuan-Man; Gao, Chang; Liu, Miao; Song, Lin-Qi

    2011-12-01

    Based on geographic information system (GIS) technology and a multi-objective location-allocation (LA) model, and considering four relatively independent objective factors (population density level, air pollution level, urban heat island effect level, and urban land use pattern), an optimized location selection for the urban parks within the Third Ring of Shenyang was conducted, and the selection results were compared with the spatial distribution of existing parks in order to evaluate the rationality of the spatial distribution of urban green spaces. In the location selection of urban green spaces in the study area, air pollution was the most important factor, and, compared with any single objective factor, the weighted analysis of multi-objective factors could provide an optimized spatial location selection of new urban green spaces. The combination of GIS technology with the LA model offers a new approach to the spatial optimization of urban green spaces.
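
    The weighted multi-objective analysis can be sketched as a raster overlay: each factor layer is normalized, multiplied by its weight, and summed, and the highest-scoring cells become candidate park sites. The layers and weights below are hypothetical stand-ins for the study's GIS data (with air pollution weighted highest, as the paper reports).

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy 100x100 rasters standing in for the four factor layers; in the study
# these would come from GIS data. Values are normalized to [0, 1], with
# higher values meaning greater need for a new park at that cell.
layers = {
    "population": rng.random((100, 100)),
    "air_pollution": rng.random((100, 100)),
    "heat_island": rng.random((100, 100)),
    "land_use": rng.random((100, 100)),
}

# Hypothetical weights; air pollution is given the largest weight.
weights = {"population": 0.20, "air_pollution": 0.40,
           "heat_island": 0.25, "land_use": 0.15}

# Weighted overlay: elementwise weighted sum of the factor layers.
suitability = sum(weights[name] * grid for name, grid in layers.items())

# Candidate sites = the k cells with the highest combined score.
k = 5
flat = np.argsort(suitability.ravel())[-k:]
rows, cols = np.unravel_index(flat, suitability.shape)
print("top candidate cells:", list(zip(rows.tolist(), cols.tolist())))
```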

  17. Parameters in selective laser melting for processing metallic powders

    Science.gov (United States)

    Kurzynowski, Tomasz; Chlebus, Edward; Kuźnicka, Bogumiła; Reiner, Jacek

    2012-03-01

    The paper presents results of studies on Selective Laser Melting. SLM is an additive manufacturing technology which may be used to process almost all metallic materials in the form of powder. Types of energy emission sources, mainly fiber lasers and/or Nd:YAG lasers with similar characteristics and a wavelength of 1.06–1.08 μm, are provided primarily for processing metallic powder materials with high absorption of laser radiation. The paper presents results of varying selected parameters (laser power, scanning time, scanning strategy) while holding fixed parameters such as the protective atmosphere (argon, nitrogen, helium), temperature, and the type and shape of the powder material. The thematic scope is very broad, so the work focused on optimizing the process of selective laser micrometallurgy for producing fully dense parts. Density is closely linked with two other conditions: discontinuity of the microstructure (microcracks) and stability (repeatability) of the process. Materials used for the research were stainless steel 316L (AISI), tool steel H13 (AISI), and titanium alloy Ti6Al7Nb (ISO 5832-11). Studies were performed with a scanning electron microscope, a light microscope, a confocal microscope and a μCT scanner.
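
    The interplay of such process parameters in SLM optimization studies is often summarized by the volumetric energy density E = P/(v·h·t), a common rule of thumb in the SLM literature rather than a formula given in this record; it uses scan speed rather than scanning time, and the parameter values below are hypothetical.

```python
def volumetric_energy_density(power_w, scan_speed_mm_s, hatch_mm, layer_mm):
    """E = P / (v * h * t) in J/mm^3: a first-order screening quantity
    when searching for parameter sets that yield fully dense parts."""
    return power_w / (scan_speed_mm_s * hatch_mm * layer_mm)

# Hypothetical 316L-like parameter window: sweep laser power at fixed
# scan speed, hatch spacing and layer thickness.
for p in (100, 150, 200):  # laser power, W
    e = volumetric_energy_density(p, scan_speed_mm_s=800,
                                  hatch_mm=0.1, layer_mm=0.03)
    print(f"P={p} W -> E={e:.0f} J/mm^3")
```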

  18. Transforming Collaborative Process Models into Interface Process Models by Applying an MDA Approach

    Science.gov (United States)

    Lazarte, Ivanna M.; Chiotti, Omar; Villarreal, Pablo D.

    Collaborative business models among enterprises require defining collaborative business processes. Enterprises implement B2B collaborations to execute these processes. In B2B collaborations the integration and interoperability of processes and systems of the enterprises are required to support the execution of collaborative processes. From a collaborative process model, which describes the global view of the enterprise interactions, each enterprise must define the interface process that represents the role it performs in the collaborative process in order to implement the process in a Business Process Management System. Hence, in this work we propose a method for the automatic generation of the interface process model of each enterprise from a collaborative process model. This method is based on a Model-Driven Architecture to transform collaborative process models into interface process models. By applying this method, interface processes are guaranteed to be interoperable and defined according to a collaborative process.

  19. Selection of Models for Ingestion Pathway and Relocation Radii Determination

    International Nuclear Information System (INIS)

    Blanchard, A.

    1998-01-01

    The distance at which intermediate phase protective actions (such as food interdiction and relocation) may be needed following postulated accidents at three Savannah River Site nonreactor nuclear facilities will be determined by modeling. The criteria used to select dispersion/deposition models are presented. Several models were considered, including ARAC, MACCS, HOTSPOT, WINDS (coupled with PUFF-PLUME), and UFOTRI. Although ARAC and WINDS are expected to provide more accurate modeling of atmospheric transport following an actual release, analyses consistent with regulatory guidance for planning purposes may be accomplished with comparatively simple dispersion models such as HOTSPOT and UFOTRI. A recommendation is made to use HOTSPOT for non-tritium facilities and UFOTRI for tritium facilities.

  20. Mutation-selection models of codon substitution and their use to estimate selective strengths on codon usage

    DEFF Research Database (Denmark)

    Yang, Ziheng; Nielsen, Rasmus

    2008-01-01

    Current models of codon substitution are formulated at the level of nucleotide substitution and do not explicitly consider the separate effects of mutation and selection. They are thus incapable of inferring whether mutation or selection is responsible for evolution at silent sites. Here we implement a few population genetics models of codon substitution that explicitly consider mutation bias and natural selection at the DNA level. Selection on codon usage is modeled by introducing codon-fitness parameters, which, together with mutation-bias parameters, predict optimal codon frequencies ... codon usage in mammals. Estimates of selection coefficients nevertheless suggest that selection on codon usage is weak and most mutations are nearly neutral. The sensitivity of the analysis to the assumed mutation model is discussed.
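
    The mutation-selection construction can be sketched with the standard population-genetics fixation factor: a mutation with scaled selection coefficient S substitutes at the mutation rate multiplied by S/(1 − e^(−S)), which tends to 1 for neutral changes. The toy code below is a sketch in the spirit of the models described, with hypothetical fitness values, not the authors' implementation.

```python
import math

def fixation_factor(s):
    """Relative fixation rate for a mutation with scaled selection
    coefficient S = 2Ns: S / (1 - exp(-S)), tending to 1 as S -> 0."""
    if abs(s) < 1e-8:
        return 1.0
    return s / (1.0 - math.exp(-s))

def substitution_rate(mu_ij, f_i, f_j):
    """Rate from codon i to codon j: mutation rate times the fixation
    factor for the codon-fitness difference S_ij = f_j - f_i."""
    return mu_ij * fixation_factor(f_j - f_i)

# Hypothetical synonymous pair with symmetric mutation: the flux toward
# the fitter (preferred) codon exceeds the reverse flux by exp(S).
mu = 1.0
to_pref = substitution_rate(mu, f_i=0.0, f_j=0.5)
to_unpref = substitution_rate(mu, f_i=0.5, f_j=0.0)
print(f"to preferred: {to_pref:.3f}, to unpreferred: {to_unpref:.3f}")
print(f"ratio: {to_pref / to_unpref:.3f}  (= exp(0.5) = {math.exp(0.5):.3f})")
```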

  1. Key Process Uncertainties in Soil Carbon Dynamics: Comparing Multiple Model Structures and Observational Meta-analysis

    Science.gov (United States)

    Sulman, B. N.; Moore, J.; Averill, C.; Abramoff, R. Z.; Bradford, M.; Classen, A. T.; Hartman, M. D.; Kivlin, S. N.; Luo, Y.; Mayes, M. A.; Morrison, E. W.; Riley, W. J.; Salazar, A.; Schimel, J.; Sridhar, B.; Tang, J.; Wang, G.; Wieder, W. R.

    2016-12-01

    Soil carbon (C) dynamics are crucial to understanding and predicting C cycle responses to global change and soil C modeling is a key tool for understanding these dynamics. While first order model structures have historically dominated this area, a recent proliferation of alternative model structures representing different assumptions about microbial activity and mineral protection is providing new opportunities to explore process uncertainties related to soil C dynamics. We conducted idealized simulations of soil C responses to warming and litter addition using models from five research groups that incorporated different sets of assumptions about processes governing soil C decomposition and stabilization. We conducted a meta-analysis of published warming and C addition experiments for comparison with simulations. Assumptions related to mineral protection and microbial dynamics drove strong differences among models. In response to C additions, some models predicted long-term C accumulation while others predicted transient increases that were counteracted by accelerating decomposition. In experimental manipulations, doubling litter addition did not change soil C stocks in studies spanning as long as two decades. This result agreed with simulations from models with strong microbial growth responses and limited mineral sorption capacity. In observations, warming initially drove soil C loss via increased CO2 production, but in some studies soil C rebounded and increased over decadal time scales. In contrast, all models predicted sustained C losses under warming. The disagreement with experimental results could be explained by physiological or community-level acclimation, or by warming-related changes in plant growth. In addition to the role of microbial activity, assumptions related to mineral sorption and protected C played a key role in driving long-term model responses. In general, simulations were similar in their initial responses to perturbations but diverged over

  2. Multi-indication Pharmacotherapeutic Multicriteria Decision Analytic Model for the Comparative Formulary Inclusion of Proton Pump Inhibitors in Qatar.

    Science.gov (United States)

    Al-Badriyeh, Daoud; Alabbadi, Ibrahim; Fahey, Michael; Al-Khal, Abdullatif; Zaidan, Manal

    2016-05-01

    The formulary inclusion of proton pump inhibitors (PPIs) in the government hospital health services in Qatar is not comparative or restricted. Requests to include a PPI in the formulary are typically accepted if evidence of efficacy and tolerability is presented. There are no literature reports of a PPI scoring model that is based on comparatively weighted multiple indications and no reports of PPI selection in Qatar or the Middle East. This study aims to compare first-line use of the PPIs that exist in Qatar. The economic effect of the study recommendations was also quantified. A comparative, evidence-based multicriteria decision analysis (MCDA) model was constructed to follow the multiple indications and pharmacotherapeutic criteria of PPIs. Literature and an expert panel informed the selection criteria of PPIs. Input from the relevant local clinician population steered the relative weighting of selection criteria. Comparatively scored PPIs, exceeding a defined score threshold, were recommended for selection. Weighted model scores were successfully developed, with 95% CI and 5% margin of error. The model comprised 7 main criteria and 38 subcriteria. Main criteria are indication, dosage frequency, treatment duration, best published evidence, available formulations, drug interactions, and pharmacokinetic and pharmacodynamic properties. The indication criteria received the greatest weight. Esomeprazole and rabeprazole were suggested as formulary options, followed by lansoprazole for nonformulary use. The estimated effect of the study recommendations was up to a 15.3% reduction in the annual PPI expenditure. Robustness of study conclusions against variabilities in study inputs was confirmed via sensitivity analyses. The implementation of a locally developed, PPI-specific comparative MCDA scoring model, based on multiple weighted indications and criteria, into Qatari formulary selection practices is a successful evidence-based cost-cutting exercise.
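
    The scoring logic of such an MCDA model reduces to a weighted sum of criterion scores compared against an inclusion threshold. The sketch below uses hypothetical weights, scores and threshold (not the study's validated 7-criteria, 38-subcriteria model), arranged so that the outcome mirrors the reported recommendation.

```python
# Minimal weighted-sum MCDA scoring sketch with hypothetical inputs.
weights = {"indication": 0.35, "dosage_frequency": 0.10, "duration": 0.10,
           "evidence": 0.20, "formulations": 0.10, "interactions": 0.10,
           "pk_pd": 0.05}

# Scores on a 0-10 scale for each candidate PPI against each criterion.
candidates = {
    "esomeprazole": {"indication": 9, "dosage_frequency": 8, "duration": 7,
                     "evidence": 9, "formulations": 8, "interactions": 6, "pk_pd": 8},
    "rabeprazole":  {"indication": 8, "dosage_frequency": 8, "duration": 7,
                     "evidence": 8, "formulations": 7, "interactions": 8, "pk_pd": 7},
    "lansoprazole": {"indication": 7, "dosage_frequency": 7, "duration": 7,
                     "evidence": 7, "formulations": 8, "interactions": 7, "pk_pd": 7},
}

threshold = 7.5  # hypothetical formulary-inclusion cutoff
for drug, scores in candidates.items():
    total = sum(weights[c] * scores[c] for c in weights)
    verdict = "formulary" if total >= threshold else "non-formulary"
    print(f"{drug:13s} {total:4.2f} -> {verdict}")
```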

  3. Optimization‐based framework for resin selection strategies in biopharmaceutical purification process development

    Science.gov (United States)

    Liu, Songsong; Gerontas, Spyridon; Gruber, David; Turner, Richard; Titchener‐Hooker, Nigel J.

    2017-01-01

    This work addresses rapid resin selection for integrated chromatographic separations when conducted as part of a high‐throughput screening exercise during the early stages of purification process development. An optimization‐based decision support framework is proposed to process the data generated from microscale experiments to identify the best resins to maximize key performance metrics for a biopharmaceutical manufacturing process, such as yield and purity. A multiobjective mixed integer nonlinear programming model is developed and solved using the ε‐constraint method. Dinkelbach's algorithm is used to solve the resulting mixed integer linear fractional programming model. The proposed framework is successfully applied to an industrial case study of a process to purify recombinant Fc Fusion protein from low molecular weight and high molecular weight product related impurities, involving two chromatographic steps with eight and three candidate resins for each step, respectively. The computational results show the advantage of the proposed framework in terms of computational efficiency and flexibility. © 2017 The Authors Biotechnology Progress published by Wiley Periodicals, Inc. on behalf of American Institute of Chemical Engineers Biotechnol. Prog., 33:1116–1126, 2017 PMID:28393478
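
    Dinkelbach's algorithm, which the framework uses for the fractional program, can be sketched generically: to maximize a ratio N(x)/D(x) with D(x) > 0, repeatedly solve max N(x) − λD(x) and update λ to the ratio at the maximizer, stopping when the parametric optimum reaches zero. The toy box-constrained instance below is hypothetical, not the paper's resin-selection model.

```python
import numpy as np

def dinkelbach(c, c0, d, d0, tol=1e-9, max_iter=50):
    """Maximize (c.x + c0) / (d.x + d0) over the box 0 <= x <= 1, with
    d.x + d0 > 0, by Dinkelbach's parametric method. Each linear
    subproblem max (c - lam*d).x is solved coordinate-wise on the box."""
    lam = 0.0
    for _ in range(max_iter):
        coef = c - lam * d
        x = (coef > 0).astype(float)       # subproblem maximizer on the box
        f = coef @ x + (c0 - lam * d0)     # parametric objective value
        if abs(f) < tol:                   # zero -> lam is the optimal ratio
            return x, lam
        lam = (c @ x + c0) / (d @ x + d0)  # update ratio estimate
    return x, lam

c = np.array([3.0, 1.0, -2.0])
d = np.array([1.0, 2.0, 1.0])
x, lam = dinkelbach(c, c0=1.0, d=d, d0=2.0)
print("x* =", x, " optimal ratio =", round(lam, 6))  # converges to 4/3
```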

  4. Scalable gastroscopic video summarization via similar-inhibition dictionary selection.

    Science.gov (United States)

    Wang, Shuai; Cong, Yang; Cao, Jun; Yang, Yunsheng; Tang, Yandong; Zhao, Huaici; Yu, Haibin

    2016-01-01

    This paper aims at developing an automated gastroscopic video summarization algorithm to assist clinicians in going through the abnormal contents of the video more effectively. To select the most representative frames from the original video sequence, we formulate the problem of gastroscopic video summarization as a dictionary selection issue. Different from traditional dictionary selection methods, which take into account only the number and reconstruction ability of selected key frames, our model introduces the similar-inhibition constraint to reinforce the diversity of selected key frames. We calculate the attention cost by merging both gaze and content change into a prior cue to help select the frames with more high-level semantic information. Moreover, we adopt an image quality evaluation process to eliminate the interference of poor-quality images and a segmentation process to reduce the computational complexity. For experiments, we build a new gastroscopic video dataset captured from 30 volunteers with more than 400k images and compare our method with the state of the art using content consistency, index consistency and content-index consistency against the ground truth. Compared with all competitors, our method obtains the best results in 23 of 30 videos evaluated based on content consistency, 24 of 30 videos evaluated based on index consistency and all videos evaluated based on content-index consistency. For gastroscopic video summarization, we propose an automated annotation method via similar-inhibition dictionary selection. Our model can achieve better performance compared with other state-of-the-art models and supplies more suitable key frames for diagnosis. The developed algorithm can be automatically adapted to various real applications, such as the training of young clinicians, computer-aided diagnosis or medical report generation. Copyright © 2015 Elsevier B.V. All rights reserved.
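
    The similar-inhibition idea can be sketched greedily: each added key frame should represent the video well while being penalized for resembling frames already selected. The code below is a simplified illustration of that trade-off on random features, not the paper's dictionary-selection optimization.

```python
import numpy as np

rng = np.random.default_rng(3)

def select_keyframes(features, k, inhibition=1.0):
    """Greedy sketch of similar-inhibition selection: repeatedly add the
    frame with the best (representativeness - inhibition * max similarity
    to already-selected frames) score. features: (n_frames, dim) array
    with L2-normalized rows."""
    sim = features @ features.T            # cosine similarities
    representativeness = sim.mean(axis=1)  # how well a frame covers the video
    selected = []
    for _ in range(k):
        best, best_score = None, -np.inf
        for i in range(len(features)):
            if i in selected:
                continue
            redundancy = max((sim[i, j] for j in selected), default=0.0)
            score = representativeness[i] - inhibition * redundancy
            if score > best_score:
                best, best_score = i, score
        selected.append(best)
    return selected

feats = rng.normal(size=(200, 64))
feats /= np.linalg.norm(feats, axis=1, keepdims=True)
print("key frames:", select_keyframes(feats, k=5))
```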

  5. A Biologically Inspired Computational Model of Basal Ganglia in Action Selection.

    Science.gov (United States)

    Baston, Chiara; Ursino, Mauro

    2015-01-01

    The basal ganglia (BG) are a subcortical structure implicated in action selection. The aim of this work is to present a new cognitive neuroscience model of the BG, which aspires to represent a parsimonious balance between simplicity and completeness. The model includes the 3 main pathways operating in the BG circuitry, that is, the direct (Go), indirect (NoGo), and hyperdirect pathways. The main original aspects, compared with previous models, are the use of a two-term Hebb rule to train synapses in the striatum, based exclusively on neuronal activity changes caused by dopamine peaks or dips, and the role of the cholinergic interneurons (affected by dopamine themselves) during learning. Some examples are displayed, concerning a few paradigmatic cases: action selection in basal conditions, action selection in the presence of a strong conflict (where the role of the hyperdirect pathway emerges), synapse changes induced by phasic dopamine, and learning new actions based on a previous history of rewards and punishments. Finally, some simulations show the model working in conditions of altered dopamine levels, to illustrate pathological cases (dopamine depletion in parkinsonian subjects or dopamine hypermedication). Due to its parsimonious approach, the model may represent a straightforward tool to analyze BG functionality in behavioral experiments.

  6. A Biologically Inspired Computational Model of Basal Ganglia in Action Selection

    Directory of Open Access Journals (Sweden)

    Chiara Baston

    2015-01-01

    Full Text Available The basal ganglia (BG) are a subcortical structure implicated in action selection. The aim of this work is to present a new cognitive neuroscience model of the BG, which aspires to represent a parsimonious balance between simplicity and completeness. The model includes the 3 main pathways operating in the BG circuitry, that is, the direct (Go), indirect (NoGo), and hyperdirect pathways. The main original aspects, compared with previous models, are the use of a two-term Hebb rule to train synapses in the striatum, based exclusively on neuronal activity changes caused by dopamine peaks or dips, and the role of the cholinergic interneurons (affected by dopamine themselves) during learning. Some examples are displayed, concerning a few paradigmatic cases: action selection in basal conditions, action selection in the presence of a strong conflict (where the role of the hyperdirect pathway emerges), synapse changes induced by phasic dopamine, and learning new actions based on a previous history of rewards and punishments. Finally, some simulations show the model working in conditions of altered dopamine levels, to illustrate pathological cases (dopamine depletion in parkinsonian subjects or dopamine hypermedication). Due to its parsimonious approach, the model may represent a straightforward tool to analyze BG functionality in behavioral experiments.

  7. Model of diffusers / permeators for hydrogen processing

    International Nuclear Information System (INIS)

    Jacobs, W. D.; Hang, T.

    2008-01-01

    Palladium-silver (Pd-Ag) diffusers are mainstays of hydrogen processing. Diffusers separate hydrogen from inert species such as nitrogen, argon or helium. The tubing becomes permeable to hydrogen when heated to more than 250 C and a differential pressure is created across the membrane. The hydrogen diffuses better at higher temperatures. Experimental or experiential results have been the basis for determining or predicting a diffuser's performance. However, the process can be mathematically modeled, and comparison to experimental or other operating data can be utilized to improve the fit of the model. A reliable model-based diffuser system design is the goal which will have impacts on tritium and hydrogen processing. A computer model has been developed to solve the differential equations for diffusion given the operating boundary conditions. The model was compared to operating data for a low pressure diffuser system. The modeling approach and the results are presented in this paper. (authors)
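
    The permeation physics underlying such a model can be illustrated with the textbook Sieverts'-law flux through the Pd-Ag wall, Q = Φ·A/d·(√P_feed − √P_perm). The relation and the hypothetical parameter values below are generic, not the authors' validated diffuser model.

```python
import math

def permeation_rate(phi, area_m2, thickness_m, p_feed_pa, p_perm_pa):
    """Hydrogen flow through a Pd-Ag membrane by Sieverts' law:
    Q = phi * A / d * (sqrt(P_feed) - sqrt(P_perm)),
    with permeability phi in mol m^-1 s^-1 Pa^-0.5."""
    return phi * area_m2 / thickness_m * (
        math.sqrt(p_feed_pa) - math.sqrt(p_perm_pa))

# Hypothetical diffuser segment: a larger differential pressure across
# the membrane yields a higher hydrogen throughput.
for p_feed in (50e3, 100e3, 200e3):
    q = permeation_rate(phi=1e-8, area_m2=0.05, thickness_m=2e-4,
                        p_feed_pa=p_feed, p_perm_pa=1e3)
    print(f"P_feed={p_feed / 1e3:5.0f} kPa -> Q={q:.3e} mol/s")
```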

  8. A Computational Model of Selection by Consequences

    Science.gov (United States)

    McDowell, J. J.

    2004-01-01

    Darwinian selection by consequences was instantiated in a computational model that consisted of a repertoire of behaviors undergoing selection, reproduction, and mutation over many generations. The model in effect created a digital organism that emitted behavior continuously. The behavior of this digital organism was studied in three series of…

  9. Selection of Representative Models for Decision Analysis Under Uncertainty

    Science.gov (United States)

    Meira, Luis A. A.; Coelho, Guilherme P.; Santos, Antonio Alberto S.; Schiozer, Denis J.

    2016-03-01

    The decision-making process in oil fields includes a step of risk analysis associated with the uncertainties present in the variables of the problem. Such uncertainties lead to hundreds, even thousands, of possible scenarios that are supposed to be analyzed so an effective production strategy can be selected. Given this high number of scenarios, a technique to reduce this set to a smaller, feasible subset of representative scenarios is imperative. The selected scenarios must be representative of the original set and also free of optimistic and pessimistic bias. This paper proposes an assisted methodology for identifying representative models in oil fields. To do so, first a mathematical function was developed to model the representativeness of a subset of models with respect to the full set that characterizes the problem. Then, an optimization tool was implemented to identify the representative models of any problem, considering not only the cross-plots of the main output variables, but also the risk curves and the probability distribution of the attribute-levels of the problem. The proposed technique was applied to two benchmark cases and the results, evaluated by experts in the field, indicate that the obtained solutions are richer than those identified by previously adopted manual approaches. The program bytecode is available upon request.

  10. Conflict between public perceptions and technical processes in site selection

    International Nuclear Information System (INIS)

    Avant, R.V. Jr.; Jacobi, L.R.

    1985-01-01

    U.S. Nuclear Regulatory Commission regulations and guidance on site selection are based on sound technical reasoning. Geology, hydrology, flora and fauna, transportation, demographics, and sociopolitical concerns, to name a few, have been factored into the process. Regardless of the technical objectivity of a site selection process, local opposition groups will challenge technical decisions using technical, nontechnical, and emotional arguments. This paper explores the many conflicts between public perceptions, technical requirements designed to protect the general public, and common arguments against site selection. Ways to deal with opposition are also discussed, with emphasis placed on developing effective community relations.

  11. Fabrication of Li_2TiO_3 pebbles by a selective laser sintering process

    International Nuclear Information System (INIS)

    Zhou, Qilai; Gao, Yue; Liu, Kai; Xue, Lihong; Yan, Youwei

    2015-01-01

    Highlights: • Selective laser sintering (SLS) is employed to fabricate ceramic pebbles. • Quantities and diameter of the pebbles could be easily controlled by adjusting the model of pebbles. • All the pebbles could be prepared at a time within several minutes. • The Li_2TiO_3 pebbles sintered at 1100 °C show a notable crush load of 43 N. - Abstract: Lithium titanate, Li_2TiO_3, is an important tritium breeding material for deuterium (D)–tritium (T) fusion reactor. In test blanket module (TBM) design of China, Li_2TiO_3 is considered as one candidate material of tritium breeders. In this study, selective laser sintering (SLS) technology was introduced to fabricate Li_2TiO_3 ceramic pebbles. This fabrication process is computer assisted and has a high level of flexibility. Li_2TiO_3 powder with a particle size of 1–3 μm was used as the raw material, whilst epoxy resin E06 was adopted as a binder. Green Li_2TiO_3 pebbles with certain strengths were successfully prepared via SLS. Density of the green pebbles was subsequently increased by cold isostatic pressing (CIP) process. Li_2TiO_3 pebbles with a diameter of about 2 mm were obtained after high temperature sintering. Density of the pebbles reaches 80% of theoretical density (TD) with a comparable crush load of 43 N. This computer assisted approach provides a new efficient route for the production of Li_2TiO_3 ceramic pebbles.

  12. Attribute based selection of thermoplastic resin for vacuum infusion process

    DEFF Research Database (Denmark)

    Prabhakaran, R.T. Durai; Lystrup, Aage; Løgstrup Andersen, Tom

    2011-01-01

    The composite industry looks toward a new material system (resins) based on thermoplastic polymers for the vacuum infusion process, similar to the infusion process using thermosetting polymers. A large number of thermoplastics are available in the market with a variety of properties suitable for different engineering applications, and few of those are available in a not yet polymerised form suitable for resin infusion. The proper selection of a new resin system among these thermoplastic polymers is a concern for manufacturers in the current scenario, and a special mathematical tool would be beneficial. In this paper, the authors introduce a new decision making tool for resin selection based on significant attributes. This article provides a broad overview of suitable thermoplastic material systems for the vacuum infusion process available in today’s market. An illustrative example—resin selection

  13. Cut Based Method for Comparing Complex Networks.

    Science.gov (United States)

    Liu, Qun; Dong, Zhishan; Wang, En

    2018-03-23

    Revealing the underlying similarity of various complex networks has become both a popular and interdisciplinary topic, with a plethora of relevant application domains. The essence of the similarity here is that network features of the same network type are highly similar, while the features of different kinds of networks present low similarity. In this paper, we introduce and explore a new method for comparing various complex networks based on the cut distance. We show correspondence between the cut distance and the similarity of two networks. This correspondence allows us to consider a broad range of complex networks and explicitly compare various networks with high accuracy. Various machine learning technologies such as genetic algorithms, nearest neighbor classification, and model selection are employed during the comparison process. Our cut method is shown to be suited for comparisons of undirected networks and directed networks, as well as weighted networks. In the model selection process, the results demonstrate that our approach outperforms other state-of-the-art methods with respect to accuracy.
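
    Evaluating the cut distance exactly requires maximizing an edge-density difference over all pairs of vertex subsets, which is exponential in the number of vertices; a simple workaround is randomized subset search. The sketch below estimates the cut distance between two aligned graphs this way, as an illustration of the definition rather than the authors' method.

```python
import numpy as np

rng = np.random.default_rng(4)

def cut_norm_estimate(m, trials=5000):
    """Lower-bound estimate of the cut norm
    max_{S,T} |sum_{i in S, j in T} M_ij| / n^2.
    Exact maximization over all subset pairs is exponential, so random
    vertex subsets are sampled instead."""
    n = m.shape[0]
    best = 0.0
    for _ in range(trials):
        s = (rng.random(n) < 0.5).astype(float)  # indicator of subset S
        t = (rng.random(n) < 0.5).astype(float)  # indicator of subset T
        best = max(best, abs(s @ m @ t) / n**2)
    return best

def cut_distance(a, b, trials=5000):
    """Cut distance between two graphs on the same, aligned vertex set."""
    return cut_norm_estimate(a - b, trials)

# Two same-density random graphs are close; a dense and a sparse one are not.
n = 60
g1 = (rng.random((n, n)) < 0.5).astype(float)
g2 = (rng.random((n, n)) < 0.5).astype(float)
g3 = (rng.random((n, n)) < 0.1).astype(float)
print("same density:     ", round(cut_distance(g1, g2), 3))
print("different density:", round(cut_distance(g1, g3), 3))
```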

  14. An Evaluation Model To Select an Integrated Learning System in a Large, Suburban School District.

    Science.gov (United States)

    Curlette, William L.; And Others

    The systematic evaluation process used in Georgia's DeKalb County School System to purchase comprehensive instructional software--an integrated learning system (ILS)--is described, and the decision-making model for selection is presented. Selection and implementation of an ILS were part of an instructional technology plan for the DeKalb schools…

  15. Orientation selection process during the early stage of cubic dendrite growth: A phase-field crystal study

    International Nuclear Information System (INIS)

    Tang Sai; Wang Zhijun; Guo Yaolin; Wang Jincheng; Yu Yanmei; Zhou Yaohe

    2012-01-01

    Using the phase-field crystal model, we investigate the orientation selection of the cubic dendrite growth at the atomic scale. Our simulation results reproduce how a face-centered cubic (fcc) octahedral nucleus and a body-centered cubic (bcc) truncated-rhombic dodecahedral nucleus choose the preferred growth direction and then evolve into the dendrite pattern. The interface energy anisotropy inherent in the fcc crystal structure leads to the fastest growth velocity in the 〈1 0 0〉 directions. New {1 1 1} atomic layers prefer to nucleate at positions near the tips of the fcc octahedron, which leads to the directed growth of the fcc dendrite tips in the 〈1 0 0〉 directions. A similar orientation selection process is also found during the early stage of bcc dendrite growth. The orientation selection regime obtained by phase-field crystal simulation is helpful for understanding the orientation selection processes of real dendrite growth.

  16. A computational model of selection by consequences.

    OpenAIRE

    McDowell, J J

    2004-01-01

    Darwinian selection by consequences was instantiated in a computational model that consisted of a repertoire of behaviors undergoing selection, reproduction, and mutation over many generations. The model in effect created a digital organism that emitted behavior continuously. The behavior of this digital organism was studied in three series of computational experiments that arranged reinforcement according to random-interval (RI) schedules. The quantitative features of the model were varied o...

  17. Leukocyte Motility Models Assessed through Simulation and Multi-objective Optimization-Based Model Selection.

    Directory of Open Access Journals (Sweden)

    Mark N Read

    2016-09-01

    Full Text Available The advent of two-photon microscopy now reveals unprecedented, detailed spatio-temporal data on cellular motility and interactions in vivo. Understanding cellular motility patterns is key to gaining insight into the development and possible manipulation of the immune response. Computational simulation has become an established technique for understanding immune processes and evaluating hypotheses in the context of experimental data, and there is clear scope to integrate microscopy-informed motility dynamics. However, determining which motility model best reflects in vivo motility is non-trivial: 3D motility is an intricate process requiring several metrics to characterize. This complicates model selection and parameterization, which must be performed against several metrics simultaneously. Here we evaluate Brownian motion, Lévy walk and several correlated random walks (CRWs against the motility dynamics of neutrophils and lymph node T cells under inflammatory conditions by simultaneously considering cellular translational and turn speeds, and meandering indices. Heterogeneous cells exhibiting a continuum of inherent translational speeds and directionalities comprise both datasets, a feature significantly improving capture of in vivo motility when simulated as a CRW. Furthermore, translational and turn speeds are inversely correlated, and the corresponding CRW simulation again improves capture of our in vivo data, albeit to a lesser extent. In contrast, Brownian motion poorly reflects our data. Lévy walk is competitive in capturing some aspects of neutrophil motility, but T cell directional persistence only, therein highlighting the importance of evaluating models against several motility metrics simultaneously. This we achieve through novel application of multi-objective optimization, wherein each model is independently implemented and then parameterized to identify optimal trade-offs in performance against each metric. The resultant Pareto
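
    The motility models compared here are straightforward to simulate. The sketch below is a 2D (for brevity; the study is 3D) correlated random walk with heterogeneous cell speeds and tunable directional persistence, together with the meandering index (net displacement over path length) used as one of the evaluation metrics; parameter values are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(5)

def correlated_random_walk(steps=200, speed_mean=8.0, persistence=0.7):
    """2D correlated random walk sketch: each turn angle is drawn with
    spread (1 - persistence) * pi, so persistence near 0 is Brownian-like
    and persistence near 1 is nearly ballistic. Speeds vary per cell."""
    heading = rng.uniform(0, 2 * np.pi)
    speed = max(rng.normal(speed_mean, 2.0), 0.5)  # heterogeneous cells
    pos = np.zeros((steps + 1, 2))
    for i in range(steps):
        heading += rng.normal(0.0, (1.0 - persistence) * np.pi)
        step = max(rng.normal(speed, 1.0), 0.0)
        pos[i + 1] = pos[i] + step * np.array([np.cos(heading), np.sin(heading)])
    return pos

def meandering_index(track):
    """Net displacement divided by total path length (1 = straight line)."""
    path = np.linalg.norm(np.diff(track, axis=0), axis=1).sum()
    return np.linalg.norm(track[-1] - track[0]) / path

for p in (0.1, 0.5, 0.9):
    mi = np.mean([meandering_index(correlated_random_walk(persistence=p))
                  for _ in range(200)])
    print(f"persistence={p}: mean meandering index = {mi:.2f}")
```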

  18. MATHEMATICAL MODELLING OF SELECTING INFORMATIVE FEATURES FOR ANALYZING THE LIFE CYCLE PROCESSES OF RADIO-ELECTRONIC MEANS

    Directory of Open Access Journals (Sweden)

    Николай Григорьевич Стародубцев

    2017-09-01

    Full Text Available The subjects of the study are methods and models for extracting information about the life cycle processes of radio-electronic means at the design, production and operation stages. The goal is to develop the fundamentals of a theory of holistic monitoring of the life cycle of radio-electronic means at the stages of their design, production and operation, in particular the development of information models for monitoring life cycle indicators in the production of radio-electronic means. This goal is attained by solving the following problems: research and development of a methodology for selecting informative features that characterize the state of the life cycle of radio-electronic means; choice of informative features characterizing the state of the life cycle processes of radio-electronic means; and identification of the state of the life cycle processes of radio-electronic means. To solve these problems, general scientific methods were used: the main provisions of functional analysis, nonequilibrium thermodynamics, estimation and prediction of random processes, optimization methods, and pattern recognition. The following results were obtained. Methods for selecting informative features for monitoring the life cycle of radio-electronic means were developed by classifying the states of radio-electronic means and their life cycle processes in a space of characteristics, each of which has a certain significance; this allowed a complex criterion to be found and the selection procedures to be formalized. When the amount of a priori data is insufficient for a correct classification, heuristic methods of selection according to the criteria of using basic prototypes and information priorities are proposed. Conclusions. The solution of the problem of mathematical modeling of the efficiency functions of the life cycle processes of radio-electronic means and the choice of informative features for

  19. Statistical model selection with “Big Data”

    Directory of Open Access Journals (Sweden)

    Jurgen A. Doornik

    2015-12-01

    Full Text Available Big Data offer potential benefits for statistical modelling, but confront problems including an excess of false positives, mistaking correlations for causes, ignoring sampling biases and selecting by inappropriate methods. We consider the many important requirements when searching for a data-based relationship using Big Data, and the possible role of Autometrics in that context. Paramount considerations include embedding relationships in general initial models, possibly restricting the number of variables to be selected over by non-statistical criteria (the formulation problem); using good quality data on all variables, analyzed with tight significance levels by a powerful selection procedure, retaining available theory insights (the selection problem); while testing for relationships being well specified and invariant to shifts in explanatory variables (the evaluation problem); using a viable approach that resolves the computational problem of immense numbers of possible models.

  20. The Orexin Component of Fasting Triggers Memory Processes Underlying Conditioned Food Selection in the Rat

    Science.gov (United States)

    Ferry, Barbara; Duchamp-Viret, Patricia

    2014-01-01

    To test the selectivity of the orexin A (OXA) system in olfactory sensitivity, the present study compared the effects of fasting and of central infusion of OXA on the memory processes underlying odor-malaise association during the conditioned odor aversion (COA) paradigm. Animals implanted with a cannula in the left ventricle received ICV infusion…

  1. Application of Bayesian methods to habitat selection modeling of the northern spotted owl in California: new statistical methods for wildlife research

    Science.gov (United States)

    Howard B. Stauffer; Cynthia J. Zabel; Jeffrey R. Dunk

    2005-01-01

    We compared a set of competing logistic regression habitat selection models for Northern Spotted Owls (Strix occidentalis caurina) in California. The habitat selection models were estimated, compared, evaluated, and tested using multiple sample datasets collected on federal forestlands in northern California. We used Bayesian methods in interpreting...

  2. Model for assessing the success of SMEs in the internationalization process

    Directory of Open Access Journals (Sweden)

    Lea Kubíčková

    2010-01-01

    Full Text Available The paper deals with evaluating the success of small and medium-sized companies in the internationalization process. The process of internationalization is defined in the literature in many ways; there is a countless variety of approaches to and models of the internationalization process of firms. Like all processes in the firm, the internationalization process is accompanied by risks. For risk management it is important to know what the key factors of success in the international arena are. This article presents a simple evaluation model that could be used by SMEs to determine not only how strong they are compared to competitors, but also at what level their key success factors in the internationalization process are. The aim was to find a simple method to help small and medium enterprises assess their situation in the field of internationalization and identify their strengths and weaknesses in this area. The proposed evaluation model has a graphic output from which it can be seen in which areas the company is doing well in the internationalization process and in which areas it is doing badly and there is room for further improvement. In creating the model it was essential to divide the various factors into several groups and, for further evaluation, to determine the range by which SMEs can quantify their level of success in the internationalization process. Before the model was constructed it was necessary to collect data among small and mid-sized firms and to process the outputs of the survey. After confirmation or rejection of certain hypotheses, the key success factors of SMEs in the internationalization process were selected and then aggregated into 4 groups. The model was then applied to data obtained from a survey of 40 SMEs, and the paper presents specific examples of the graphical output of the model for the best and worst rated company. The authors are aware that the model is

  3. 3D physical modeling for patterning process development

    Science.gov (United States)

    Sarma, Chandra; Abdo, Amr; Bailey, Todd; Conley, Will; Dunn, Derren; Marokkey, Sajan; Talbi, Mohamed

    2010-03-01

    In this paper we will demonstrate how a 3D physical patterning model can act as a forensic tool for OPC and ground-rule development. We discuss examples where 2D modeling shows no issues in printing gate lines but 3D modeling shows severe resist loss in the middle. In the absence of corrective measures, there is a high likelihood of line discontinuity post etch. Such early insight into the process limitations of prospective ground rules can be invaluable for early technology development. We will also demonstrate how the root cause of a broken poly-line after etch could be traced to resist necking in the region of the STI step with the help of 3D models. We discuss different cases of metal and contact layouts where 3D modeling gives early insight into technology limitations. In addition, such a 3D physical model could be used for early resist evaluation and selection for required ground-rule challenges, which can substantially reduce the cycle time for process development.

  4. A robust multi-objective global supplier selection model under currency fluctuation and price discount

    Science.gov (United States)

    Zarindast, Atousa; Seyed Hosseini, Seyed Mohamad; Pishvaee, Mir Saman

    2017-06-01

    A robust supplier selection problem is proposed in a scenario-based approach, for the case in which demand and exchange rates are subject to uncertainty. First, a deterministic multi-objective mixed integer linear programming model is developed; then, the robust counterpart of the proposed mixed integer linear programming model is presented using recent extensions in robust optimization theory. We determine the decision variables, respectively, by a two-stage stochastic planning model, by a robust stochastic optimization planning model which integrates the worst-case scenario into the modeling approach, and finally by an equivalent deterministic planning model. An experimental study is carried out to compare the performances of the three models. The robust model resulted in remarkable cost savings, illustrating that to cope with such uncertainties we should consider them in advance in our planning. In our case study, different suppliers were selected because of these uncertainties, and since supplier selection is a strategic decision, it is crucial to account for them in the planning approach.

  5. Handling equipment Selection in open pit mines by using an integrated model based on group decision making

    Directory of Open Access Journals (Sweden)

    Abdolreza Yazdani-Chamzini

    2012-10-01

    Full Text Available The process of handling equipment selection is one of the most important and basic parts of project planning, particularly in mining projects, since it accounts for a high share of the total project cost. Different criteria affect handling equipment selection, and these criteria are often in conflict with each other. Therefore, the process of handling equipment selection is a complex and multi-criteria decision making problem. There are a variety of methods for selecting the most appropriate equipment among a set of alternatives. Likewise, given the sophisticated structure of the problem, imprecise data, scarcity of information, and inherent uncertainty, the use of fuzzy sets can be helpful. In this study a new integrated model based on fuzzy analytic hierarchy process (FAHP) and fuzzy technique for order preference by similarity to ideal solution (FTOPSIS) is proposed, which uses group decision making to reduce individual errors. In order to calculate the weights of the evaluation criteria, FAHP is utilized in the process of handling equipment selection, and then these weights are inserted into the FTOPSIS computations to select the most appropriate handling system among a pool of alternatives. The results of this study demonstrate the potential application and effectiveness of the proposed model, which can be applied to different types of sophisticated problems in real-world settings.
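
    The TOPSIS half of such a hybrid can be sketched in its crisp form (the paper uses fuzzy variants): alternatives are ranked by relative closeness to an ideal solution built from the weighted, normalized decision matrix. The handling systems, scores and weights below are hypothetical.

```python
import numpy as np

def topsis(matrix, weights, benefit):
    """Crisp TOPSIS sketch: rank alternatives by closeness to the ideal
    solution. matrix: (alternatives x criteria); benefit[j] is True if
    higher is better for criterion j."""
    norm = matrix / np.linalg.norm(matrix, axis=0)  # vector normalization
    v = norm * weights                              # weighted normalized matrix
    ideal = np.where(benefit, v.max(axis=0), v.min(axis=0))
    anti = np.where(benefit, v.min(axis=0), v.max(axis=0))
    d_plus = np.linalg.norm(v - ideal, axis=1)
    d_minus = np.linalg.norm(v - anti, axis=1)
    return d_minus / (d_plus + d_minus)             # closeness coefficient

# Hypothetical handling systems scored on capacity, cost, reliability,
# flexibility (cost is the only cost-type criterion).
m = np.array([[120.0, 4.5, 0.90, 7.0],   # truck-shovel
              [150.0, 6.0, 0.85, 5.0],   # in-pit crusher + conveyor
              [100.0, 3.5, 0.80, 8.0]])  # smaller truck fleet
w = np.array([0.35, 0.30, 0.20, 0.15])   # e.g., weights from (fuzzy) AHP
cc = topsis(m, w, benefit=np.array([True, False, True, True]))
print("closeness:", cc.round(3), " best alternative:", int(cc.argmax()))
```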

  6. Training Self-Regulated Learning Skills with Video Modeling Examples: Do Task-Selection Skills Transfer?

    Science.gov (United States)

    Raaijmakers, Steven F.; Baars, Martine; Schaap, Lydia; Paas, Fred; van Merriënboer, Jeroen; van Gog, Tamara

    2018-01-01

    Self-assessment and task-selection skills are crucial in self-regulated learning situations in which students can choose their own tasks. Prior research suggested that training with video modeling examples, in which another person (the model) demonstrates and explains the cyclical process of problem-solving task performance, self-assessment, and…

  7. A generalized logarithmic image processing model based on the gigavision sensor model.

    Science.gov (United States)

    Deng, Guang

    2012-03-01

    The logarithmic image processing (LIP) model is a mathematical theory providing generalized linear operations for image processing. The gigavision sensor (GVS) is a new imaging device that can be described by a statistical model. In this paper, by studying these two seemingly unrelated models, we develop a generalized LIP (GLIP) model. With the LIP model being its special case, the GLIP model not only provides new insights into the LIP model but also defines new image representations and operations for solving general image processing problems that are not necessarily related to the GVS. A new parametric LIP model is also developed. To illustrate the application of the new scalar multiplication operation, we propose an energy-preserving algorithm for tone mapping, which is a necessary step in image dehazing. By comparing with results using two state-of-the-art algorithms, we show that the new scalar multiplication operation is an effective tool for tone mapping.
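
    The classical LIP operations that the GLIP model generalizes can be stated compactly: with gray tones f in [0, M), addition is f ⊕ g = f + g − fg/M and scalar multiplication is λ ⊗ f = M − M(1 − f/M)^λ. The sketch below implements these classical operations (not the paper's GLIP extension or its tone-mapping algorithm); note how both keep values inside [0, M).

```python
import numpy as np

M = 256.0  # upper bound of the gray-tone range

def lip_add(f, g):
    """Classical LIP addition: f (+) g = f + g - f*g/M."""
    return f + g - f * g / M

def lip_scalar(lam, f):
    """Classical LIP scalar multiplication:
    lam (x) f = M - M * (1 - f/M)**lam."""
    return M - M * (1.0 - f / M) ** lam

img = np.linspace(0, 250, 6)  # toy 1-D "image"
print("f        :", img)
print("f (+) f  :", lip_add(img, img).round(1))      # stays below M
print("0.5 (x) f:", lip_scalar(0.5, img).round(1))   # brightness-type remap
```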

  8. Seeking inclusion in an exclusive process: discourses of medical school student selection.

    Science.gov (United States)

    Razack, Saleem; Hodges, Brian; Steinert, Yvonne; Maguire, Mary

    2015-01-01

    Calls to increase medical class representativeness to better reflect the diversity of society represent a growing international trend. There is an inherent tension between these calls and competitive student selection processes driven by academic achievement. How is this tension manifested? Our three-phase interdisciplinary research programme focused on the discourses of excellence, equity and diversity in the medical school selection process, as conveyed by key stakeholders: (i) institutions and regulatory bodies (the websites of 17 medical schools and 15 policy documents from national regulatory bodies); (ii) admissions committee members (ACMs) (according to semi-structured interviews [n = 9]), and (iii) successful applicants (according to semi-structured interviews [n = 14]). The work is theoretically situated within the works of Foucault, Bourdieu and Bakhtin. The conceptual framework is supplemented by critical hermeneutics and the performance theories of Goffman. Academic excellence discourses consistently predominate over discourses calling for greater representativeness in medical classes. Policy addressing demographic representativeness in medicine may unwittingly contribute to the reproduction of historical patterns of exclusion of under-represented groups. In ACM selection practices, another discursive tension is exposed as the inherent privilege in the process is marked, challenging the ideal of medicine as a meritocracy. Applicants' representations of self in the 'performance' of interviewing demonstrate implicit recognition of the power inherent in the act of selection and are manifested in the use of explicit strategies to 'fit in'. How can this critical discourse analysis inform improved inclusiveness in student selection? Policymakers addressing diversity and equity issues in medical school admissions should explicitly recognise the power dynamics at play between the profession and marginalised groups. For greater inclusion and to avoid one

  9. Advanced modeling of management processes in information technology

    CERN Document Server

    Kowalczuk, Zdzislaw

    2014-01-01

    This book deals with the issues of modelling management processes of information technology and IT projects while its core is the model of information technology management and its component models (contextual, local) describing initial processing and the maturity capsule as well as a decision-making system represented by a multi-level sequential model of IT technology selection, which acquires a fuzzy rule-based implementation in this work. In terms of applicability, this work may also be useful for diagnosing applicability of IT standards in evaluation of IT organizations. The results of this diagnosis might prove valid for those preparing new standards so that – apart from their own visions – they could, to an even greater extent, take into account the capabilities and needs of the leaders of project and manufacturing teams. The book is intended for IT professionals using the ITIL, COBIT and TOGAF standards in their work. Students of computer science and management who are interested in the issue of IT...

  10. A Dual-Stage Two-Phase Model of Selective Attention

    Science.gov (United States)

    Hubner, Ronald; Steinhauser, Marco; Lehle, Carola

    2010-01-01

    The dual-stage two-phase (DSTP) model is introduced as a formal and general model of selective attention that includes both an early and a late stage of stimulus selection. Whereas at the early stage information is selected by perceptual filters whose selectivity is relatively limited, at the late stage stimuli are selected more efficiently on a…

  11. River water quality model no. 1 (RWQM1): II. Biochemical process equations

    DEFF Research Database (Denmark)

    Reichert, P.; Borchardt, D.; Henze, Mogens

    2001-01-01

    In this paper, biochemical process equations are presented as a basis for water quality modelling in rivers under aerobic and anoxic conditions. These equations are not new, but they summarise parts of the development over the past 75 years. The primary goals of the presentation are to stimulate...... transformation processes. This paper is part of a series of three papers. In the first paper, the general modelling approach is described; in the present paper, the biochemical process equations of a complex model are presented; and in the third paper, recommendations are given for the selection of a reasonable...

  12. Development of multivariate NTCP models for radiation-induced hypothyroidism: a comparative analysis

    International Nuclear Information System (INIS)

    Cella, Laura; Liuzzi, Raffaele; Conson, Manuel; D’Avino, Vittoria; Salvatore, Marco; Pacelli, Roberto

    2012-01-01

    Hypothyroidism is a frequent late side effect of radiation therapy of the cervical region. The purpose of this work is to develop multivariate normal tissue complication probability (NTCP) models for radiation-induced hypothyroidism (RHT) and to compare them with already existing NTCP models for RHT. Fifty-three patients treated with sequential chemo-radiotherapy for Hodgkin’s lymphoma (HL) were retrospectively reviewed for RHT events. Clinical information along with thyroid gland dose distribution parameters were collected and their correlation to RHT was analyzed by Spearman’s rank correlation coefficient (Rs). A multivariate logistic regression method using resampling (bootstrapping) was applied to select model order and parameters for NTCP modeling. Model performance was evaluated through the area under the receiver operating characteristic curve (AUC). Models were tested against external published data on RHT and compared with other published NTCP models. If the thyroid volume exceeding X Gy is expressed as a percentage (V_X(%)), a two-variable NTCP model including V_30(%) and gender proved to be the optimal predictive model for RHT (Rs = 0.615, p < 0.001; AUC = 0.87). Conversely, if the absolute thyroid volume exceeding X Gy (V_X(cc)) was analyzed, an NTCP model based on 3 variables including V_30(cc), thyroid gland volume and gender was selected as the most predictive model (Rs = 0.630, p < 0.001; AUC = 0.85). The three-variable model performs better when tested on an external cohort characterized by large inter-individual variation in thyroid volumes (AUC = 0.914, 95% CI 0.760–0.984). A comparable performance was found between our model and that proposed in the literature based on thyroid gland mean dose and volume (p = 0.264). The absolute volume of thyroid gland exceeding 30 Gy in combination with thyroid gland volume and gender provide an NTCP model for RHT with improved prediction capability not only within our patient population but also in an
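
    The multivariate NTCP models discussed are logistic in form, e.g. NTCP = 1/(1 + e^(−(β0 + β1·V_30 + β2·gland volume + β3·gender))). The sketch below uses hypothetical coefficients chosen only to show the direction of the effects, not the fitted values from this study.

```python
import math

def ntcp_logistic(v30_cc, gland_cc, female, b=(-1.0, 0.12, -0.15, 1.2)):
    """Generic three-variable logistic NTCP sketch:
    NTCP = 1 / (1 + exp(-(b0 + b1*V30(cc) + b2*gland_volume + b3*female))).
    Coefficients are hypothetical placeholders, not fitted values."""
    b0, b1, b2, b3 = b
    z = b0 + b1 * v30_cc + b2 * gland_cc + b3 * female
    return 1.0 / (1.0 + math.exp(-z))

# More irradiated volume and a smaller gland -> higher predicted RHT risk.
print(f"V30=12 cc, 20 cc gland, female: {ntcp_logistic(12, 20, 1):.2f}")
print(f"V30= 5 cc, 30 cc gland, male:   {ntcp_logistic(5, 30, 0):.2f}")
```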

  13. Evaluating the influence of motor control on selective attention through a stochastic model: the paradigm of motor control dysfunction in cerebellar patient.

    Science.gov (United States)

    Veneri, Giacomo; Federico, Antonio; Rufa, Alessandra

    2014-01-01

    Attention allows us to selectively process the vast amount of information with which we are confronted, prioritizing some aspects of information and ignoring others by focusing on a certain location or aspect of the visual scene. Selective attention is guided by two cognitive mechanisms: saliency of the image (bottom up) and endogenous mechanisms (top down). These two mechanisms interact to direct attention and plan eye movements; then, the movement profile is sent to the motor system, which must constantly update the command needed to produce the desired eye movement. A new approach is described here to study how eye motor control could influence this selection mechanism in clinical behavior: two groups of patients (SCA2 and late-onset cerebellar ataxia, LOCA) with well-known problems of motor control were studied; the patients performed a cognitively demanding task, and the results were compared to a stochastic model based on Monte Carlo simulations and to a group of healthy subjects. The analytical procedure evaluated several energy functions for understanding the process. The implemented model suggested that patients performed an optimal visual search, reducing intrinsic noise sources. Our findings theorize a strict correlation between the "optimal motor system" and the "optimal stimulus encoders."

  14. Modelling of innovative SANEX process mal-operations

    International Nuclear Information System (INIS)

    McLachlan, F.; Taylor, R.; Whittaker, D.; Woodhead, D.; Geist, A.

    2016-01-01

    The innovative (i-) SANEX process for the separation of minor actinides from PUREX highly active raffinate is expected to employ a solvent phase comprising 0.2 M TODGA with 5 v/v% 1-octanol in an inert diluent. An initial extract / scrub section would be used to extract trivalent actinides and lanthanides from the feed whilst leaving other fission products in the aqueous phase, before the loaded solvent is contacted with a low acidity aqueous phase containing a sulphonated bis-triazinyl pyridine ligand (BTP) to effect a selective strip of the actinides, so yielding separate actinide (An) and lanthanide (Ln) product streams. This process has been demonstrated in lab scale trials at Juelich (FZJ). The SACSESS (Safety of Actinide Separation processes) project is focused on the evaluation and improvement of the safety of such future systems. A key element of this is the development of an understanding of the response of a process to upsets (mal-operations). It is only practical to study a small subset of possible mal-operations experimentally and consideration of the majority of mal-operations entails the use of a validated dynamic model of the process. Distribution algorithms for HNO_3, Am, Cm and the lanthanides have been developed and incorporated into a dynamic flowsheet model that has, so far, been configured to correspond to the extract-scrub section of the i-SANEX flowsheet trial undertaken at FZJ in 2013. Comparison is made between the steady state model results and experimental results. Results from modelling of low acidity and high temperature mal-operations are presented. (authors)

  15. Using Card Games to Simulate the Process of Natural Selection

    Science.gov (United States)

    Grilliot, Matthew E.; Harden, Siegfried

    2014-01-01

    In 1858, Darwin published "On the Origin of Species by Means of Natural Selection." His explanation of evolution by natural selection has become the unifying theme of biology. We have found that many students do not fully comprehend the process of evolution by natural selection. We discuss a few simple games that incorporate hands-on…

  16. A framework for testing and comparing binaural models.

    Science.gov (United States)

    Dietz, Mathias; Lestang, Jean-Hugues; Majdak, Piotr; Stern, Richard M; Marquardt, Torsten; Ewert, Stephan D; Hartmann, William M; Goodman, Dan F M

    2018-03-01

    Auditory research has a rich history of combining experimental evidence with computational simulations of auditory processing in order to deepen our theoretical understanding of how sound is processed in the ears and in the brain. Despite significant progress in the amount of detail and breadth covered by auditory models, for many components of the auditory pathway there are still different model approaches that are often not equivalent but rather in conflict with each other. Similarly, some experimental studies yield conflicting results which has led to controversies. This can be best resolved by a systematic comparison of multiple experimental data sets and model approaches. Binaural processing is a prominent example of how the development of quantitative theories can advance our understanding of the phenomena, but there remain several unresolved questions for which competing model approaches exist. This article discusses a number of current unresolved or disputed issues in binaural modelling, as well as some of the significant challenges in comparing binaural models with each other and with the experimental data. We introduce an auditory model framework, which we believe can become a useful infrastructure for resolving some of the current controversies. It operates models over the same paradigms that are used experimentally. The core of the proposed framework is an interface that connects three components irrespective of their underlying programming language: The experiment software, an auditory pathway model, and task-dependent decision stages called artificial observers that provide the same output format as the test subject. Copyright © 2017 Elsevier B.V. All rights reserved.

  17. Elementary Teachers' Selection and Use of Visual Models

    Science.gov (United States)

    Lee, Tammy D.; Gail Jones, M.

    2018-02-01

    As science grows in complexity, science teachers face an increasing challenge of helping students interpret models that represent complex science systems. Little is known about how teachers select and use models when planning lessons. This mixed methods study investigated the pedagogical approaches and visual models used by elementary in-service and preservice teachers in the development of a science lesson about a complex system (e.g., water cycle). Sixty-seven elementary in-service and 69 elementary preservice teachers completed a card sort task designed to document the types of visual models (e.g., images) that teachers choose when planning science instruction. Quantitative and qualitative analyses were conducted to analyze the card sort task. Semistructured interviews were conducted with a subsample of teachers to elicit the rationale for image selection. Results from this study showed that both experienced in-service teachers and novice preservice teachers tended to select similar models and use similar rationales for images to be used in lessons. Teachers tended to select models that were aesthetically pleasing and simple in design and illustrated specific elements of the water cycle. The results also showed that teachers were not likely to select images that represented the less obvious dimensions of the water cycle. Furthermore, teachers selected visual models more as a pedagogical tool to illustrate specific elements of the water cycle and less often as a tool to promote student learning related to complex systems.

  18. Procedure for the Selection and Validation of a Calibration Model I-Description and Application.

    Science.gov (United States)

    Desharnais, Brigitte; Camirand-Lemyre, Félix; Mireault, Pascal; Skinner, Cameron D

    2017-05-01

    Calibration model selection is required for all quantitative methods in toxicology and more broadly in bioanalysis. This typically involves selecting the equation order (linear or quadratic) and the weighting factor that correctly model the data. Mis-selection of the calibration model will degrade quality control (QC) accuracy, with errors of up to 154%. Unfortunately, simple tools to perform this selection and tests to validate the resulting model are lacking. We present a stepwise, analyst-independent scheme for selection and validation of calibration models. The success rate of this scheme is on average 40% higher than a traditional "fit and check the QCs accuracy" method of selecting the calibration model. Moreover, the process was completely automated through a script (available in Supplemental Data 3) running in RStudio (free, open-source software). The need for weighting was assessed through an F-test using the variances of the upper limit of quantification and lower limit of quantification replicate measurements. When weighting was required, the choice between 1/x and 1/x^2 was determined by calculating which option generated the smallest spread of weighted normalized variances. Finally, model order was selected through a partial F-test. The chosen calibration model was validated through Cramer-von Mises or Kolmogorov-Smirnov normality testing of the standardized residuals. Performance of the different tests was assessed using 50 simulated data sets per possible calibration model (e.g., linear-no weight, quadratic-no weight, linear-1/x, etc.). This first of two papers describes the tests, procedures and outcomes of the developed procedure, illustrated with real LC-MS-MS results for the quantification of cocaine and naltrexone. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
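
    As a sketch of the first decision in this scheme, the example below applies the described variance F-test to replicate responses at the two ends of the calibration range. The authors' implementation is an R script for RStudio; here Python with scipy is assumed, and the replicate data are invented.

```python
# Variance F-test for heteroscedasticity, as in the weighting decision above.
# Replicate responses at the LLOQ and ULOQ are invented for illustration.
import numpy as np
from scipy import stats

lloq = np.array([0.48, 0.52, 0.50, 0.47, 0.53])     # low-end replicates
uloq = np.array([98.0, 103.0, 101.0, 96.0, 105.0])  # high-end replicates

F = np.var(uloq, ddof=1) / np.var(lloq, ddof=1)
p = stats.f.sf(F, len(uloq) - 1, len(lloq) - 1)
# A significant F (e.g., p < 0.05) indicates heteroscedasticity, so a
# weighting factor (1/x or 1/x^2) should be considered.
print(f"F = {F:.1f}, p = {p:.4g}")
```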

  19. Comparative analysis of response to selection with three insecticides in the dengue mosquito Aedes aegypti using mRNA sequencing.

    Science.gov (United States)

    David, Jean-Philippe; Faucon, Frédéric; Chandor-Proust, Alexia; Poupardin, Rodolphe; Riaz, Muhammad Asam; Bonin, Aurélie; Navratil, Vincent; Reynaud, Stéphane

    2014-03-05

    Mosquito control programmes using chemical insecticides are increasingly threatened by the development of resistance. Such resistance can be the consequence of changes in proteins targeted by insecticides (target site mediated resistance), increased insecticide biodegradation (metabolic resistance), altered transport, sequestration or other mechanisms. As opposed to target site resistance, other mechanisms are far from being fully understood. Indeed, insecticide selection often affects a large number of genes and various biological processes can hypothetically confer resistance. In this context, the aim of the present study was to use RNA sequencing (RNA-seq) for comparing transcription level and polymorphism variations associated with adaptation to chemical insecticides in the mosquito Aedes aegypti. Biological materials consisted of a parental susceptible strain together with three child strains selected across multiple generations with three insecticides from different classes: the pyrethroid permethrin, the neonicotinoid imidacloprid and the carbamate propoxur. After ten generations, insecticide-selected strains showed elevated resistance levels to the insecticides used for selection. RNA-seq data allowed the detection of over 13,000 transcripts, of which 413 were differentially transcribed in insecticide-selected strains as compared to the susceptible strain. Among them, a significant enrichment of transcripts encoding cuticle proteins, transporters and enzymes was observed. Polymorphism analysis revealed over 2500 SNPs showing > 50% allele frequency variations in insecticide-selected strains as compared to the susceptible strain, affecting over 1000 transcripts. Comparing gene transcription and polymorphism patterns revealed marked differences among strains. While imidacloprid selection was linked to the over-transcription of many genes, permethrin selection was rather linked to polymorphism variations. Focusing on detoxification enzymes revealed that permethrin

  20. Structured spatio-temporal shot-noise Cox point process models, with a view to modelling forest fires

    DEFF Research Database (Denmark)

    Møller, Jesper; Diaz-Avalos, Carlos

    Spatio-temporal Cox point process models with a multiplicative structure for the driving random intensity, incorporating covariate information into temporal and spatial components, and with a residual term modelled by a shot-noise process, are considered. Such models are flexible and tractable fo...... dataset consisting of 2796 days and 5834 spatial locations of fires. The model is compared with a spatio-temporal log-Gaussian Cox point process model, and likelihood-based methods are discussed to some extent....

  1. Structured Spatio-temporal shot-noise Cox point process models, with a view to modelling forest fires

    DEFF Research Database (Denmark)

    Møller, Jesper; Diaz-Avalos, Carlos

    2010-01-01

    Spatio-temporal Cox point process models with a multiplicative structure for the driving random intensity, incorporating covariate information into temporal and spatial components, and with a residual term modelled by a shot-noise process, are considered. Such models are flexible and tractable fo...... data set consisting of 2796 days and 5834 spatial locations of fires. The model is compared with a spatio-temporal log-Gaussian Cox point process model, and likelihood-based methods are discussed to some extent....

  2. Comparing the Selection and Placement of Best Management Practices in Improving Water Quality Using a Multiobjective Optimization and Targeting Method

    Directory of Open Access Journals (Sweden)

    Li-Chi Chiang

    2014-03-01

    Suites of Best Management Practices (BMPs) are usually selected to be economically and environmentally efficient in reducing nonpoint source (NPS) pollutants from agricultural areas in a watershed. The objective of this research was to compare the selection and placement of BMPs in a pasture-dominated watershed using multiobjective optimization and targeting methods. Two objective functions were used in the optimization process, which minimize pollutant losses and the BMP placement areas. The optimization tool was an integration of a multi-objective genetic algorithm (GA) and a watershed model (the Soil and Water Assessment Tool, SWAT). For the targeting method, an optimum BMP option was implemented in critical areas in the watershed that contribute the greatest pollutant losses. A total of 171 BMP combinations, which consist of grazing management, vegetated filter strips (VFS), and poultry litter applications were considered. The results showed that the optimization is less effective when VFS are not considered, and it requires much longer computation times than the targeting method to search for optimum BMPs. Although the targeting method is effective in selecting and placing an optimum BMP, larger areas are needed for BMP implementation to achieve the same pollutant reductions as the optimization method.

  3. Concepts of radiation processes selection for industrial realization. Chapter 6

    International Nuclear Information System (INIS)

    1997-01-01

    When selecting radiation processes for industrial use, the processes are usually analysed in terms of their technological and social effects, power intensity, and overall efficiency. The technological effect is generally determined by the uniqueness of radiation technologies, which allow a new material to be obtained, or an existing material with new properties. The social effect primarily concerns the influence of radiation technologies on consumer psychology. Implementing equipment for a radiation technological process, whether for the production of new materials or for the radiation treatment of natural materials, involves solving three tasks: 1) choice of the radiation source; 2) creation of special equipment for the radiation stage and other non-traditional stages of the process; and 3) selection of radiation and other conditions that ensure optimal technological and economic performance.

  4. PARAMETER ESTIMATION AND MODEL SELECTION FOR INDOOR ENVIRONMENTS BASED ON SPARSE OBSERVATIONS

    Directory of Open Access Journals (Sweden)

    Y. Dehbi

    2017-09-01

    This paper presents a novel method for the parameter estimation and model selection for the reconstruction of indoor environments based on sparse observations. While most approaches for the reconstruction of indoor models rely on dense observations, we predict scenes of the interior with high accuracy in the absence of indoor measurements. We use a model-based top-down approach and incorporate strong but profound prior knowledge. The latter includes probability density functions for model parameters and sparse observations such as room areas and the building footprint. The floorplan model is characterized by linear and bi-linear relations with discrete and continuous parameters. We focus on the stochastic estimation of model parameters based on a topological model derived by combinatorial reasoning in a first step. A Gauss-Markov model is applied for estimation and simulation of the model parameters. Symmetries are represented and exploited during the estimation process. Background knowledge as well as observations are incorporated in a maximum likelihood estimation and model selection is performed with AIC/BIC. The likelihood is also used for the detection and correction of potential errors in the topological model. Estimation results are presented and discussed.

  5. Parameter Estimation and Model Selection for Indoor Environments Based on Sparse Observations

    Science.gov (United States)

    Dehbi, Y.; Loch-Dehbi, S.; Plümer, L.

    2017-09-01

    This paper presents a novel method for the parameter estimation and model selection for the reconstruction of indoor environments based on sparse observations. While most approaches for the reconstruction of indoor models rely on dense observations, we predict scenes of the interior with high accuracy in the absence of indoor measurements. We use a model-based top-down approach and incorporate strong but profound prior knowledge. The latter includes probability density functions for model parameters and sparse observations such as room areas and the building footprint. The floorplan model is characterized by linear and bi-linear relations with discrete and continuous parameters. We focus on the stochastic estimation of model parameters based on a topological model derived by combinatorial reasoning in a first step. A Gauss-Markov model is applied for estimation and simulation of the model parameters. Symmetries are represented and exploited during the estimation process. Background knowledge as well as observations are incorporated in a maximum likelihood estimation and model selection is performed with AIC/BIC. The likelihood is also used for the detection and correction of potential errors in the topological model. Estimation results are presented and discussed.
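
    The AIC/BIC step mentioned above can be illustrated with a toy comparison between two candidate parameterizations. The sketch below scores least-squares fits with the Gaussian log-likelihood (up to an additive constant) on synthetic data; it illustrates the criteria only, not the authors' estimation code.

```python
# Compare a linear and a quadratic fit by AIC and BIC on synthetic data.
import numpy as np

def aic_bic(rss, n, k):
    # Gaussian log-likelihood up to an additive constant: -n/2 * ln(rss/n)
    ll = -0.5 * n * np.log(rss / n)
    return 2 * k - 2 * ll, k * np.log(n) - 2 * ll

x = np.linspace(0, 10, 50)
y = 3.0 * x + 2.0 + np.random.default_rng(1).normal(scale=1.0, size=50)

for degree in (1, 2):
    k = degree + 1                          # number of fitted parameters
    coeffs = np.polyfit(x, y, deg=degree)
    rss = float(np.sum((np.polyval(coeffs, x) - y) ** 2))
    aic, bic = aic_bic(rss, len(x), k)
    print(f"degree {degree}: AIC = {aic:.1f}, BIC = {bic:.1f}")
```

    Since the data are generated from a straight line, both criteria should prefer the linear fit; BIC penalizes the extra quadratic parameter more strongly as n grows.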

  6. A Time-Series Water Level Forecasting Model Based on Imputation and Variable Selection Method.

    Science.gov (United States)

    Yang, Jun-He; Cheng, Ching-Hsue; Chan, Chia-Pan

    2017-01-01

    Reservoirs are important for households and impact the national economy. This paper proposes a time-series forecasting model, based on estimating missing values followed by variable selection, to forecast a reservoir's water level. This study collected data from the Taiwan Shimen Reservoir as well as daily atmospheric data from 2008 to 2015. The two datasets are concatenated into an integrated dataset, based on the ordering of the data, as the research dataset. The proposed time-series forecasting model has three main foci. First, this study uses five imputation methods to handle missing values rather than deleting them directly. Second, it identifies the key variables via factor analysis and then deletes the unimportant variables sequentially via the variable selection method. Finally, the proposed model uses a Random Forest to build the forecasting model of the reservoir's water level, which is compared with the listed methods in terms of forecasting error. These experimental results indicate that the Random Forest forecasting model, when applied to variable selection with full variables, has better forecasting performance than the listed models. In addition, this experiment shows that the proposed variable selection can help the five forecast methods used here to improve their forecasting capability.
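
    A minimal sketch of such an impute-then-forest pipeline is given below using scikit-learn on synthetic stand-in data (the study itself uses Shimen Reservoir records and its own imputation and variable selection steps).

```python
# Impute missing predictors, then fit a Random Forest regressor.
# Data are synthetic stand-ins, not the Shimen Reservoir records.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.impute import SimpleImputer
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 8))                                  # "atmospheric" variables
y = 2.0 * X[:, 0] - X[:, 1] + rng.normal(scale=0.5, size=500)  # "water level"
X[rng.random(X.shape) < 0.05] = np.nan                         # ~5% missing values

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = make_pipeline(SimpleImputer(strategy="mean"),
                      RandomForestRegressor(n_estimators=200, random_state=0))
model.fit(X_tr, y_tr)
print("held-out R^2:", round(model.score(X_te, y_te), 3))
```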

  7. Modelling Technical and Economic Parameters in Selection of Manufacturing Devices

    Directory of Open Access Journals (Sweden)

    Naqib Daneshjo

    2017-11-01

    Sustainable development of science and technology is also conditioned by the continuous development of the means of production, which play a key role in the structure of every production system. In the context of intelligent industry, the mechanical nature of the means of production is complemented by control and electronic devices. The selection of production machines for a technological process or project has, until now, been resolved in practice often only intuitively. As intelligence increases, the number of variable parameters that have to be considered when choosing a production device is also increasing. It is therefore necessary during the selection to use computational techniques and decision-making methods, ranging from heuristic methods to more precise methodological procedures. The authors present an innovative model for the optimization of technical and economic parameters in the selection of manufacturing devices for Industry 4.0.

  8. Selecting an interprofessional education model for a tertiary health care setting.

    Science.gov (United States)

    Menard, Prudy; Varpio, Lara

    2014-07-01

    The World Health Organization describes interprofessional education (IPE) and collaboration as necessary components of all health professionals' education - in curriculum and in practice. However, no standard framework exists to guide healthcare settings in developing or selecting an IPE model that meets the learning needs of licensed practitioners in practice and that suits the unique needs of their setting. Initially, a broad review of the grey literature (organizational websites, government documents and published books) and healthcare databases was undertaken for existing IPE models. Subsequently, database searches of published papers using Scopus, Scholars Portal and Medline were undertaken. Through this search process five IPE models were identified in the literature. This paper attempts to briefly outline the five different models of IPE presently offered in the literature, and to illustrate how a healthcare setting can select the IPE model suited to its context using Reeves' seven key trends in developing IPE. In presenting these results, the paper contributes to the interprofessional literature by offering an overview of possible IPE models that can be used to inform the implementation or modification of interprofessional practices in a tertiary healthcare setting.

  9. Which family physician should I choose? The analytic hierarchy process approach for ranking of criteria in the selection of a family physician.

    Science.gov (United States)

    Kuruoglu, Emel; Guldal, Dilek; Mevsim, Vildan; Gunvar, Tolga

    2015-08-05

    Choosing the most appropriate family physician (FP) for the individual plays a fundamental role in primary care. The aim of this study is to determine the criteria patients use in choosing their family doctors and the priority ranking of these criteria, using the multi-criteria decision-making method of the Analytic Hierarchy Process (AHP). The study was planned and conducted in two phases. In the first phase, factors affecting the patients' decisions were revealed through qualitative research. In the next phase, the priorities of the FP selection criteria were determined using the AHP model, with criteria compared in pairs. In the Family Health Centres, 96 patients were asked to fill in information forms containing the comparison scores. According to the analysis of the focus group discussions, the FP selection criteria fell into five groups: individual characteristics, patient-doctor relationship, professional characteristics, the setting, and ethical characteristics. For each of the 96 participants, comparison matrices were formed based on the scores of their information forms. Of these, the models of only 5 participants (5.2%) were consistent; in other words, only these participants produced consistent rankings, with consistency ratios (CR) smaller than 0.10. The comparison matrix of the new model, formed from the medians of the scores given by these 5 participants, was therefore consistent (CR = 0.06 < 0.10). According to the comparison results, the most important criterion for choosing a family physician, with a value-weight of 0.467, is his/her professional characteristics. Selection criteria for choosing an FP were put in priority order using the AHP model. These criteria can be used as measures for selecting among alternative FPs in further research.
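
    As a sketch of the AHP computation behind such rankings, the example below derives priority weights from the principal eigenvector of a pairwise comparison matrix and checks the consistency ratio against Saaty's random index, using the CR < 0.10 rule mentioned above. The judgments in the matrix are invented, not the study's data.

```python
# AHP priority weights and consistency ratio for a pairwise comparison matrix.
import numpy as np

A = np.array([[1,   3,   5  ],
              [1/3, 1,   2  ],
              [1/5, 1/2, 1  ]])            # 3 criteria, invented judgments

eigvals, eigvecs = np.linalg.eig(A)
i = np.argmax(eigvals.real)
w = np.abs(eigvecs[:, i].real)
w /= w.sum()                               # priority weights (sum to 1)

n = A.shape[0]
CI = (eigvals.real[i] - n) / (n - 1)       # consistency index
RI = {3: 0.58, 4: 0.90, 5: 1.12}[n]        # Saaty's random index
print("weights:", np.round(w, 3), " CR =", round(CI / RI, 3))  # accept if CR < 0.10
```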

  10. Investigation of the site selection examples adopted local participation. The site selection processes in Belgium, UK and Switzerland

    International Nuclear Information System (INIS)

    Kageyama, Hitoshi; Suzuki, Shinji; Hirose, Ikuro; Yoshioka, Tatsuji

    2014-06-01

    In recent years, local participation policies have been adopted in foreign countries for site selection for the disposal of radioactive waste. We performed a documentary investigation of the site selection processes of Belgium, the UK, and Switzerland to help establish the site selection policy in Japan. In Belgium, after the early failure of the site selection for the disposal of short-lived low and intermediate level radioactive waste (LILW), the idea of the local partnership (LP) was developed and three independent LPs were established between the implementing body and each municipality. About 7 years later, one site was decided as the disposal site in a cabinet meeting of the federal government. In the UK, after the failure of the site selection for the rock characterization facility, the government policy was changed and a consultation process comprising six phases was started. Though the process had been carried out for over 4 years after one combined partnership was established between the implementing body and the municipalities involved, they had to withdraw from the consultation process because a county council did not accept that the process would move forward to the 4th phase. In Switzerland, the implementing body selected one site for LILW disposal at an early stage, but the project was rejected by a referendum in the Canton having jurisdiction over the site area. After that, the Federal Parliament established the new Nuclear Energy Act and Nuclear Energy Ordinance, precluding a cantonal veto. The site selection project is now being carried out according to a process comprising three phases with a local participation policy. Reviewing the merits and demerits of each example through this investigation, we concluded that if a local participation policy is to be adopted in Japan in the future, further prudent study will be necessary, considering current and future social conditions. (author)

  11. Green Pea and Garlic Puree Model Food Development for Thermal Pasteurization Process Quality Evaluation.

    Science.gov (United States)

    Bornhorst, Ellen R; Tang, Juming; Sablani, Shyam S; Barbosa-Cánovas, Gustavo V; Liu, Fang

    2017-07-01

    Development and selection of model foods is a critical part of microwave thermal process development, simulation validation, and optimization. Previously developed model foods for pasteurization process evaluation utilized Maillard reaction products as the time-temperature integrators, which resulted in similar temperature sensitivity among the models. The aim of this research was to develop additional model foods based on different time-temperature integrators, determine their dielectric properties and color change kinetics, and validate the optimal model food in hot water and microwave-assisted pasteurization processes. Color, quantified using the a* value, was selected as the time-temperature indicator for green pea and garlic puree model foods. Results showed 915 MHz microwaves had a greater penetration depth into the green pea model food than the garlic. a* value reaction rates for the green pea model were approximately 4 times slower than in the garlic model food; slower reaction rates were preferred for the application of model food in this study, that is, quality evaluation for a target process of 90 °C for 10 min at the cold spot. Pasteurization validation used the green pea model food, and results showed quantifiable color differences between the unheated control, hot water pasteurization, and the microwave-assisted thermal pasteurization system. Both model foods developed in this research could be utilized for quality assessment and optimization of various thermal pasteurization processes. © 2017 Institute of Food Technologists®.

  12. STUDY CONCERNING THE ELABORATION OF CERTAIN ORIENTATION MODELS AND THE INITIAL SELECTION FOR SPEED SKATING

    Directory of Open Access Journals (Sweden)

    Vaida Marius

    2009-12-01

    In realizing this study I started from the premise that elaborating and applying certain orientation and initial selection models for speed skating would yield the superior results that are needed, taking into account the current evolution of high-performance sport in general and of speed skating in particular. The aim of this study was the identification of a complete orientation and initial selection model based on the aptitudes favourable to speed skating. Orientation and initial selection models were constructed on the basis of the research and tested experimentally; the study started from data on 120 subjects, with the complete experiment conducted on 32 subjects separated into two groups, one using the proposed model and the other formed from randomly selected subjects. These models can serve as common working instruments both for the orientation process and for initial selection, can be integrated into practical activity, and can easily be used both by the coaches in charge of athlete selection and by the physical education teachers or school teachers who are in contact with children of an early age.

  13. Comparing spatial diversification and meta-population models in the Indo-Australian Archipelago.

    Science.gov (United States)

    Chalmandrier, Loïc; Albouy, Camille; Descombes, Patrice; Sandel, Brody; Faurby, Soren; Svenning, Jens-Christian; Zimmermann, Niklaus E; Pellissier, Loïc

    2018-03-01

    Reconstructing the processes that have shaped the emergence of biodiversity gradients is critical to understand the dynamics of diversification of life on Earth. Islands have traditionally been used as model systems to unravel the processes shaping biological diversity. MacArthur and Wilson's island biogeographic model predicts diversity to be based on dynamic interactions between colonization and extinction rates, while treating islands themselves as geologically static entities. The current spatial configuration of islands should influence meta-population dynamics, but long-term geological changes within archipelagos are also expected to have shaped island biodiversity, in part by driving diversification. Here, we compare two mechanistic models providing inferences on species richness at a biogeographic scale: a mechanistic spatial-temporal model of species diversification and a spatial meta-population model. While the meta-population model operates over a static landscape, the diversification model is driven by changes in the size and spatial configuration of islands through time. We compare the inferences of both models to floristic diversity patterns among land patches of the Indo-Australian Archipelago. Simulation results from the diversification model better matched observed diversity than a meta-population model constrained only by the contemporary landscape. The diversification model suggests that the dynamic re-positioning of islands promoting land disconnection and reconnection induced an accumulation of particularly high species diversity on Borneo, which is central within the island network. By contrast, the meta-population model predicts a higher diversity on the mainlands, which is less compatible with empirical data. Our analyses highlight that, by comparing models with contrasting assumptions, we can pinpoint the processes that are most compatible with extant biodiversity patterns.

  14. Comparing spatial diversification and meta-population models in the Indo-Australian Archipelago

    Science.gov (United States)

    Chalmandrier, Loïc; Albouy, Camille; Descombes, Patrice; Sandel, Brody; Faurby, Soren; Svenning, Jens-Christian; Zimmermann, Niklaus E.

    2018-01-01

    Reconstructing the processes that have shaped the emergence of biodiversity gradients is critical to understand the dynamics of diversification of life on Earth. Islands have traditionally been used as model systems to unravel the processes shaping biological diversity. MacArthur and Wilson's island biogeographic model predicts diversity to be based on dynamic interactions between colonization and extinction rates, while treating islands themselves as geologically static entities. The current spatial configuration of islands should influence meta-population dynamics, but long-term geological changes within archipelagos are also expected to have shaped island biodiversity, in part by driving diversification. Here, we compare two mechanistic models providing inferences on species richness at a biogeographic scale: a mechanistic spatial-temporal model of species diversification and a spatial meta-population model. While the meta-population model operates over a static landscape, the diversification model is driven by changes in the size and spatial configuration of islands through time. We compare the inferences of both models to floristic diversity patterns among land patches of the Indo-Australian Archipelago. Simulation results from the diversification model better matched observed diversity than a meta-population model constrained only by the contemporary landscape. The diversification model suggests that the dynamic re-positioning of islands promoting land disconnection and reconnection induced an accumulation of particularly high species diversity on Borneo, which is central within the island network. By contrast, the meta-population model predicts a higher diversity on the mainlands, which is less compatible with empirical data. Our analyses highlight that, by comparing models with contrasting assumptions, we can pinpoint the processes that are most compatible with extant biodiversity patterns. PMID:29657753

  15. Robustness of movement models: can models bridge the gap between temporal scales of data sets and behavioural processes?

    Science.gov (United States)

    Schlägel, Ulrike E; Lewis, Mark A

    2016-12-01

    Discrete-time random walks and their extensions are common tools for analyzing animal movement data. In these analyses, the resolution of temporal discretization is a critical feature. Ideally, a model both mirrors the relevant temporal scale of the biological process of interest and matches the data sampling rate. Challenges arise when the resolution of the data is too coarse due to technological constraints, or when we wish to extrapolate results or compare results obtained from data with different resolutions. Drawing loosely on the concept of robustness in statistics, we propose a rigorous mathematical framework for studying movement models' robustness against changes in temporal resolution. In this framework, we define varying levels of robustness as formal model properties, focusing on random walk models with a spatially explicit component. With the new framework, we can investigate whether models can validly be applied to data across varying temporal resolutions and how we can account for these different resolutions in statistical inference results. We apply the new framework to movement-based resource selection models, demonstrating both analytical and numerical calculations, as well as a Monte Carlo simulation approach. While exact robustness is rare, the concept of approximate robustness provides a promising new direction for analyzing movement models.

  16. Material model validation for laser shock peening process simulation

    International Nuclear Information System (INIS)

    Amarchinta, H K; Grandhi, R V; Langer, K; Stargel, D S

    2009-01-01

    Advanced mechanical surface enhancement techniques have been used successfully to increase the fatigue life of metallic components. These techniques impart deep compressive residual stresses into the component to counter potentially damage-inducing tensile stresses generated under service loading. Laser shock peening (LSP) is an advanced mechanical surface enhancement technique used predominantly in the aircraft industry. To reduce costs and make the technique available on a large-scale basis for industrial applications, simulation of the LSP process is required. Accurate simulation of the LSP process is a challenging task, because the process has many parameters such as laser spot size, pressure profile and material model that must be precisely determined. This work focuses on investigating the appropriate material model that could be used in simulation and design. In the LSP process material is subjected to strain rates of 10⁶ s⁻¹, which is very high compared with conventional strain rates. The importance of an accurate material model increases because the material behaves significantly differently at such high strain rates. This work investigates the effect of multiple nonlinear material models for representing the elastic–plastic behavior of materials. Elastic perfectly plastic, Johnson–Cook and Zerilli–Armstrong models are used, and the performance of each model is compared with available experimental results
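
    As an illustration of one of the constitutive models named above, the sketch below evaluates the Johnson-Cook flow stress, sigma = (A + B*eps^n)(1 + C*ln(rate/rate0))(1 - T*^m). The material constants are generic placeholders, not the paper's calibration.

```python
# Johnson-Cook flow stress with placeholder constants (illustrative only).
import numpy as np

def johnson_cook(eps, rate, T, A=350e6, B=600e6, n=0.3, C=0.02,
                 rate0=1.0, T_room=293.0, T_melt=1800.0, m=1.0):
    """Flow stress [Pa] at plastic strain eps, strain rate [1/s], temperature [K]."""
    T_star = (T - T_room) / (T_melt - T_room)   # homologous temperature
    return (A + B * eps**n) * (1 + C * np.log(rate / rate0)) * (1 - T_star**m)

# Flow stress at 10% strain, from quasi-static rates up to LSP-like rates
for rate in (1e-3, 1.0, 1e6):
    print(f"rate {rate:.0e} 1/s: {johnson_cook(0.1, rate, 400.0) / 1e6:.0f} MPa")
```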

  17. An automated process for building reliable and optimal in vitro/in vivo correlation models based on Monte Carlo simulations.

    Science.gov (United States)

    Sutton, Steven C; Hu, Mingxiu

    2006-05-05

    Many mathematical models have been proposed for establishing an in vitro/in vivo correlation (IVIVC). The traditional IVIVC model building process consists of 5 steps: deconvolution, model fitting, convolution, prediction error evaluation, and cross-validation. This is a time-consuming process, and typically at most a few models are tested for any given data set. The objectives of this work were to (1) propose a statistical tool to screen models for further development of an IVIVC, (2) evaluate the performance of each model under different circumstances, and (3) investigate the effectiveness of common statistical model selection criteria for choosing IVIVC models. A computer program was developed to explore which model(s) would be most likely to work well with a random variation from the original formulation. The process used Monte Carlo simulation techniques to build IVIVC models. Data-based model selection criteria (Akaike Information Criterion [AIC], R2) and the probability of passing the Food and Drug Administration "prediction error" requirement were calculated. Several real data sets representing a broad range of release profiles are used to illustrate the process and to demonstrate the advantages of this automated process over the traditional approach. The Hixson-Crowell and Weibull models were often preferred over the linear model. When evaluating whether a Level A IVIVC model was possible, the AIC generally selected the best model. We believe that the approach we propose may be a rapid tool to determine which IVIVC model (if any) is the most applicable.
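
    A hedged sketch of the "model fitting plus AIC" step is shown below for one candidate in vitro release model, a Weibull profile F(t) = Fmax*(1 - exp(-(t/td)^beta)); the dissolution points are invented and scipy's curve_fit is assumed.

```python
# Fit a Weibull release profile to invented dissolution data and score it by AIC.
import numpy as np
from scipy.optimize import curve_fit

def weibull(t, fmax, td, beta):
    return fmax * (1.0 - np.exp(-(t / td) ** beta))

t = np.array([0.5, 1, 2, 4, 6, 8, 12])        # hours (illustrative)
f = np.array([12, 25, 45, 70, 82, 90, 97])    # % drug released (illustrative)

params, _ = curve_fit(weibull, t, f, p0=(100.0, 3.0, 1.0))
rss = float(np.sum((weibull(t, *params) - f) ** 2))
n, k = len(t), len(params)
aic = n * np.log(rss / n) + 2 * k             # small-sample AIC variant
print("fmax, td, beta =", np.round(params, 2), " AIC =", round(aic, 1))
```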

  18. Decision Support System for Determining Scholarship Selection using an Analytical Hierarchy Process

    Science.gov (United States)

    Puspitasari, T. D.; Sari, E. O.; Destarianto, P.; Riskiawan, H. Y.

    2018-01-01

    A Decision Support System is a computer program application that analyzes data and presents it so that users can make decisions more easily. Determining scholarship selection, in a case study of a senior high school in East Java, was not easy. An application was needed to solve the problem, to improve the accuracy of targeting prospective beneficiaries among poor students, and to speed up the screening process. This research builds a system using the Analytic Hierarchy Process (AHP), a method that decomposes a complex and unstructured problem into groups, organizes those groups into a hierarchical order, assigns numerical values to human perceptions in relative comparisons, and ultimately determines through synthesis which elements have the highest priority. The accuracy of the system in this research is 90%.

  19. Standard Model processes

    CERN Document Server

    Mangano, M.L.; Aguilar-Saavedra, Juan Antonio; Alekhin, S.; Badger, S.; Bauer, C.W.; Becher, T.; Bertone, V.; Bonvini, M.; Boselli, S.; Bothmann, E.; Boughezal, R.; Cacciari, M.; Carloni Calame, C.M.; Caola, F.; Campbell, J.M.; Carrazza, S.; Chiesa, M.; Cieri, L.; Cimaglia, F.; Febres Cordero, F.; Ferrarese, P.; D'Enterria, D.; Ferrera, G.; Garcia i Tormo, X.; Garzelli, M.V.; Germann, E.; Hirschi, V.; Han, T.; Ita, H.; Jäger, B.; Kallweit, S.; Karlberg, A.; Kuttimalai, S.; Krauss, F.; Larkoski, A.J.; Lindert, J.; Luisoni, G.; Maierhöfer, P.; Mattelaer, O.; Martinez, H.; Moch, S.; Montagna, G.; Moretti, M.; Nason, P.; Nicrosini, O.; Oleari, C.; Pagani, D.; Papaefstathiou, A.; Petriello, F.; Piccinini, F.; Pierini, M.; Pierog, T.; Pozzorini, S.; Re, E.; Robens, T.; Rojo, J.; Ruiz, R.; Sakurai, K.; Salam, G.P.; Salfelder, L.; Schönherr, M.; Schulze, M.; Schumann, S.; Selvaggi, M.; Shivaji, A.; Siodmok, A.; Skands, P.; Torrielli, P.; Tramontano, F.; Tsinikos, I.; Tweedie, B.; Vicini, A.; Westhoff, S.; Zaro, M.; Zeppenfeld, D.; CERN. Geneva. ATS Department

    2017-06-22

    This report summarises the properties of Standard Model processes at the 100 TeV pp collider. We document the production rates and typical distributions for a number of benchmark Standard Model processes, and discuss new dynamical phenomena arising at the highest energies available at this collider. We discuss the intrinsic physics interest in the measurement of these Standard Model processes, as well as their role as backgrounds for New Physics searches.

  20. A Comparative Study of Feature Selection and Classification Methods for Gene Expression Data

    KAUST Repository

    Abusamra, Heba

    2013-01-01

    Different experiments have been applied to compare the performance of the classification methods with and without performing feature selection. Results revealed the important role of feature selection in classifying gene expression data. By performing feature selection, the classification accuracy can be significantly boosted by using a small number of genes. The relationship of features selected in different feature selection methods is investigated and the most frequent features selected in each fold among all methods for both datasets are evaluated.
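
    As a hedged sketch of this pattern (select features, then classify, evaluated by cross-validation), the example below uses scikit-learn on synthetic stand-in data rather than the thesis's gene expression datasets; keeping selection inside the pipeline avoids leaking information across folds.

```python
# Feature selection followed by classification on high-dimensional data.
# The data are random stand-ins for gene expression profiles.
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2000))      # 100 samples x 2000 "genes"
y = rng.integers(0, 2, size=100)      # two classes
X[y == 1, :20] += 1.0                 # plant signal in 20 informative genes

# Selection happens inside each CV fold, so no information leaks.
clf = make_pipeline(SelectKBest(f_classif, k=50), LinearSVC(dual=False))
print("5-fold CV accuracy:", round(cross_val_score(clf, X, y, cv=5).mean(), 3))
```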

  1. Process Simulation for the Design and Scale Up of Heterogeneous Catalytic Process: Kinetic Modelling Issues

    Directory of Open Access Journals (Sweden)

    Antonio Tripodi

    2017-05-01

    Process simulation represents an important tool for plant design and optimization, whether applied to well-established or to newly developed processes. Suitable thermodynamic packages should be selected in order to properly describe the behavior of reactors and unit operations and to precisely define phase equilibria. Moreover, a detailed and representative kinetic scheme should be available to predict correctly the dependence of the process on its main variables. This review points out some models and methods for kinetic analysis specifically applied to the simulation of catalytic processes, as a basis for process design and optimization. Attention is paid also to microkinetic modelling and to methods based on first principles, to elucidate mechanisms and independently calculate thermodynamic and kinetic parameters. Different case studies support the discussion. First, we selected two basic examples from industrial chemistry practice, ammonia and methanol synthesis, which may be described through a relatively simple reaction pathway and the corresponding available kinetic schemes. Then, a more complex reaction network is discussed in depth to describe the conversion of bioethanol into syngas/hydrogen or into building blocks such as ethylene. In this case, lumped kinetic schemes completely fail to describe the process behavior, and more detailed (e.g., microkinetic) schemes should be available for implementation in the simulator. However, correctly defining all the kinetic data when complex microkinetic mechanisms are used often leads to unreliable, highly correlated parameters. In such cases, greater effort to independently estimate some relevant kinetic/thermodynamic data through Density Functional Theory (DFT)/ab initio methods may be helpful to improve the process description.

  2. Optimization of a micro-scale, high throughput process development tool and the demonstration of comparable process performance and product quality with biopharmaceutical manufacturing processes.

    Science.gov (United States)

    Evans, Steven T; Stewart, Kevin D; Afdahl, Chris; Patel, Rohan; Newell, Kelcy J

    2017-07-14

    In this paper, we discuss the optimization and implementation of a high throughput process development (HTPD) tool that utilizes commercially available micro-liter sized column technology for the purification of multiple clinically significant monoclonal antibodies. Chromatographic profiles generated using this optimized tool are shown to overlay with comparable profiles from the conventional bench scale and the clinical manufacturing scale. Further, all product quality attributes measured are comparable across scales for the mAb purifications. In addition to supporting chromatography process development efforts (e.g., optimization screening), the comparable product quality results at all scales make this tool an appropriate scale model for enabling purification and product quality comparisons of HTPD bioreactor conditions. The ability to perform up to 8 chromatography purifications in parallel with reduced material requirements per run creates opportunities for gathering more process knowledge in less time. Copyright © 2017 The Authors. Published by Elsevier B.V. All rights reserved.

  3. Modeling of sorption processes on solid-phase ion-exchangers

    Science.gov (United States)

    Dorofeeva, Ludmila; Kuan, Nguyen Anh

    2018-03-01

    Research on the separation of alkaline elements on solid-phase ion exchangers was carried out to determine the selectivity coefficients and the height of an equivalent theoretical stage for both continuous and stepwise filling of the column with ion exchanger. On inorganic selective sorbents, an increase in the isotope enrichment factor of up to 0.0127 was obtained. Parametric models adequately describing the dependence of the pressure difference and the expansion of the ion-exchange layer on flow rate and temperature were also obtained. Under optimal process conditions, and depending on the type of selective material, the concentration factor varies in the range 1.021-1.092. Calculated results show agreement with experimental data.

  4. Cognitive aging on latent constructs for visual processing capacity: a novel structural equation modeling framework with causal assumptions based on a theory of visual attention.

    Science.gov (United States)

    Nielsen, Simon; Wilms, L Inge

    2014-01-01

    We examined the effects of normal aging on visual cognition in a sample of 112 healthy adults aged 60-75. A test battery was designed to capture high-level measures of visual working memory and low-level measures of visuospatial attention and memory. To answer questions of how cognitive aging affects specific aspects of visual processing capacity, we used confirmatory factor analyses in Structural Equation Modeling (SEM; Model 2), informed by functional structures that were modeled with path analyses in SEM (Model 1). The results show that aging effects were selective to measures of visual processing speed compared to visual short-term memory (VSTM) capacity (Model 2). These results are consistent with some studies reporting selective aging effects on processing speed, and inconsistent with other studies reporting aging effects on both processing speed and VSTM capacity. In the discussion we argue that this discrepancy may be mediated by differences in age ranges and variables of demography. The study demonstrates that SEM is a sensitive method for detecting cognitive aging effects even within a narrow age range, and a useful approach for structuring the relationships between measured variables and the cognitive functional foundation they supposedly represent.

  5. Cognitive ageing on latent constructs for visual processing capacity: A novel Structural Equation Modelling framework with causal assumptions based on A Theory of Visual Attention

    Directory of Open Access Journals (Sweden)

    Simon Nielsen

    2015-01-01

    We examined the effects of normal ageing on visual cognition in a sample of 112 healthy adults aged 60-75. A test battery was designed to capture high-level measures of visual working memory and low-level measures of visuospatial attention and memory. To answer questions of how cognitive ageing affects specific aspects of visual processing capacity, we used confirmatory factor analyses in Structural Equation Modelling (SEM; Model 2), informed by functional structures that were modelled with path analyses in SEM (Model 1). The results show that ageing effects were selective to measures of visual processing speed compared to visual short-term memory (VSTM) capacity (Model 2). These results are consistent with some studies reporting selective ageing effects on processing speed, and inconsistent with other studies reporting ageing effects on both processing speed and VSTM capacity. In the discussion we argue that this discrepancy may be mediated by differences in age ranges, and variables of demography. The study demonstrates that SEM is a sensitive method to detect cognitive ageing effects even within a narrow age range, and a useful approach to structure the relationships between measured variables and the cognitive functional foundation they supposedly represent.

  6. A Dynamic Model for Limb Selection

    NARCIS (Netherlands)

    Cox, R.F.A; Smitsman, A.W.

    2008-01-01

    Two experiments and a model on limb selection are reported. In Experiment 1 left-handed and right-handed participants (N = 36) repeatedly used one hand for grasping a small cube. After a clear switch in the cube’s location, perseverative limb selection was revealed in both handedness groups. In

  7. A practical procedure for the selection of time-to-failure models based on the assessment of trends in maintenance data

    International Nuclear Information System (INIS)

    Louit, D.M.; Pascual, R.; Jardine, A.K.S.

    2009-01-01

    Reliability studies often rely on false premises, such as the assumption of independent and identically distributed times between failures (a renewal process). This can lead to erroneous model selection for the time to failure of a particular component or system, which can in turn lead to wrong conclusions and decisions. A strong statistical focus, a lack of a systematic approach and sometimes inadequate theoretical background seem to have made it difficult for maintenance analysts to adopt the necessary stage of data testing before the selection of a suitable model. In this paper, a framework for model selection to represent the failure process for a component or system is presented, based on a review of available trend tests. The paper focuses only on single-time-variable models and is primarily directed to analysts responsible for reliability analyses in an industrial maintenance environment. The model selection framework is directed towards the discrimination between the use of statistical distributions to represent the time to failure ('renewal approach') and the use of stochastic point processes ('repairable systems approach') when system ageing or reliability growth may be present. An illustrative example based on failure data from a fleet of backhoes is included.
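
    As a sketch of the data-testing stage such a framework formalizes, the example below applies the Laplace trend test, a standard trend test in this literature, to an invented failure history; under a homogeneous Poisson process (no trend) the statistic U is approximately standard normal.

```python
# Laplace trend test on cumulative failure times observed over (0, T].
import numpy as np
from scipy import stats

failures = np.array([110, 270, 520, 640, 800, 1020, 1090, 1150])  # hours (invented)
T = 1200.0                                                         # end of observation

n = len(failures)
U = (failures.mean() - T / 2) / (T * np.sqrt(1.0 / (12 * n)))
p = 2 * stats.norm.sf(abs(U))
# U significantly > 0 suggests deterioration (use a point process model);
# U near 0 is consistent with a renewal/distribution-fitting approach.
print(f"U = {U:.2f}, p = {p:.3f}")
```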

  8. Visualizing the process of process modeling with PPMCharts

    NARCIS (Netherlands)

    Claes, J.; Vanderfeesten, I.T.P.; Pinggera, J.; Reijers, H.A.; Weber, B.; Poels, G.; La Rosa, M.; Soffer, P.

    2013-01-01

    In the quest for knowledge about how to make good process models, recent research focus is shifting from studying the quality of process models to studying the process of process modeling (often abbreviated as PPM) itself. This paper reports on our efforts to visualize this specific process in such

  9. Understanding tradeoffs in the supplier selection process : The role of flexibility, delivery, and value-added services/support

    NARCIS (Netherlands)

    Rhee, van der B.; Verma, R.; Plaschka, G.

    2009-01-01

    In this study we present, based on an econometric choice modeling framework, how manufacturing managers/executives trade off between cost, delivery, flexibility, and service features in the supplier selection process for commodity raw materials, given acceptable quality. Empirical data for this study

  10. Selection of the "best" model for converting beta backscatter count readings into thickness measurements

    International Nuclear Information System (INIS)

    Smiriga, N.G.

    1976-01-01

    This report compares two models for converting beta backscatter count readings into thickness measurements. The formulas needed for unweighted and weighted regression analyses are listed. The report also resolves the question of whether one should perform the regression analysis using only the five available standard thicknesses or should, in addition to these standards, use zero as a standard thickness. A weighted regression analysis is compared with an unweighted one for each model. The "best" model is selected, and the conclusions of the analysis are presented.
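
    The weighted-versus-unweighted comparison at the heart of such a report can be sketched as follows; the count data are invented, with weights of 1/sqrt(counts) (inverse standard deviation) motivated by Poisson-like counting statistics.

```python
# Unweighted vs weighted straight-line calibration of counts vs thickness.
import numpy as np

thickness = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0])        # standards, incl. zero
counts = np.array([210.0, 1020.0, 1850.0, 2700.0, 3620.0, 4410.0])

unweighted = np.polyfit(thickness, counts, 1)
# np.polyfit weights multiply the unsquared residuals, so pass 1/sigma.
weighted = np.polyfit(thickness, counts, 1, w=1.0 / np.sqrt(counts))
print("unweighted slope, intercept:", np.round(unweighted, 1))
print("weighted   slope, intercept:", np.round(weighted, 1))
```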

  11. Part III: Comparing observed growth of selected test organisms in food irradiation studies with growth predictions calculated by ComBase softwares

    International Nuclear Information System (INIS)

    Farkas, J.; Andrassy, E.; Meszaros, L.; Beczner, J.; Polyak-Feher, K.; Gaal, O.; Lebovics, V.K.; Lugasi, A.

    2009-01-01

    As a result of intensive predictive microbiological modelling activities, several computer programs and software packages have recently become available to facilitate microbiological risk assessment. Among these tools, the most important are ComBase, an international database, and its associated predictive modelling software: the Pathogen Modeling Program (PMP) set up by the USDA Eastern Regional Research Center, Wyndmoor, PA, and the Food MicroModel/Growth Predictor by the United Kingdom's Institute of Food Research, Norwich. As a preliminary trial, the authors used the PMP 6.1 software version to compare the observed growth of selected test organisms from their recent food irradiation work within the FAO/IAEA Coordinated Food Irradiation Research Projects (D6.10.23 and D6.20.07) with the growth predicted by the models available in ComBase for the same species as the authors' test organisms. The results of challenge tests with a Listeria monocytogenes inoculum in untreated or irradiated experimental batches of semi-prepared breaded turkey meat steaks (cordon bleu), sliced tomato, sliced watermelon, sliced cantaloupe and sous vide processed mixed vegetables, as well as a Staphylococcus aureus inoculum of a pasta product, tortellini, were compared with the respective growth models under the relevant environmental conditions. This comparison showed good fits in the case of non-irradiated and high-moisture food samples, but the growth of radiation survivors lagged behind the predicted values. (author)

  12. The selective processing of emotional visual stimuli while detecting auditory targets : An ERP analysis

    OpenAIRE

    Schupp, Harald Thomas; Stockburger, Jessica; Bublatzky, Florian; Junghöfer, Markus; Weike, Almut I.; Hamm, Alfons O.

    2008-01-01

    Event-related potential studies revealed an early posterior negativity (EPN) for emotional compared to neutral pictures. Exploring the emotion-attention relationship, a previous study observed that a primary visual discrimination task interfered with the emotional modulation of the EPN component. To specify the locus of interference, the present study assessed the fate of selective visual emotion processing while attention is directed towards the auditory modality. While simply viewing a rapi...

  13. Mindfulness training alters emotional memory recall compared to active controls: support for an emotional information processing model of mindfulness

    Directory of Open Access Journals (Sweden)

    Doug Roberts-Wolfe

    2012-02-01

    Objectives: While mindfulness-based interventions have received widespread application in both clinical and non-clinical populations, the mechanism by which mindfulness meditation improves well-being remains elusive. One possibility is that mindfulness training alters the processing of emotional information, similar to prevailing cognitive models of depression and anxiety. The aim of this study was to investigate the effects of mindfulness training on emotional information processing (i.e., memory biases) in relation to both clinical symptomatology and well-being, in comparison to active control conditions. Methods: Fifty-eight university students (28 female, age = 20.1 ± 2.7 years) participated in either a 12-week course containing a "meditation laboratory" or an active control course with similar content or experiential practice laboratory format (music). Participants completed an emotional word recall task and self-report questionnaires of well-being and clinical symptoms before and after the 12-week course. Results: Meditators showed greater increases in positive word recall compared to controls, F(1, 56) = 6.6, p = .02. The meditation group increased significantly more on measures of well-being, F(1, 56) = 6.6, p = .01, with a marginal decrease in depression and anxiety, F(1, 56) = 3.0, p = .09, compared to controls. Increased positive word recall was associated with increased psychological well-being (r = 0.31, p = .02) and decreased clinical symptoms (r = -0.29, p = .03). Conclusion: Mindfulness training was associated with greater improvements in processing efficiency for positively valenced stimuli than active control conditions. This change in emotional information processing was associated with improvements in psychological well-being and less depression and anxiety. These data suggest that mindfulness training may improve well-being via changes in emotional information processing.

  14. Modeling nonhomogeneous Markov processes via time transformation.

    Science.gov (United States)

    Hubbard, R A; Inoue, L Y T; Fann, J R

    2008-09-01

    Longitudinal studies are a powerful tool for characterizing the course of chronic disease. These studies are usually carried out with subjects observed at periodic visits giving rise to panel data. Under this observation scheme the exact times of disease state transitions and sequence of disease states visited are unknown and Markov process models are often used to describe disease progression. Most applications of Markov process models rely on the assumption of time homogeneity, that is, that the transition rates are constant over time. This assumption is not satisfied when transition rates depend on time from the process origin. However, limited statistical tools are available for dealing with nonhomogeneity. We propose models in which the time scale of a nonhomogeneous Markov process is transformed to an operational time scale on which the process is homogeneous. We develop a method for jointly estimating the time transformation and the transition intensity matrix for the time transformed homogeneous process. We assess maximum likelihood estimation using the Fisher scoring algorithm via simulation studies and compare performance of our method to homogeneous and piecewise homogeneous models. We apply our methodology to a study of delirium progression in a cohort of stem cell transplantation recipients and show that our method identifies temporal trends in delirium incidence and recovery.
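
    A minimal sketch of the time-transformation idea follows: if g maps study time to an operational scale on which the process is homogeneous with intensity matrix Q, then transition probabilities follow from a matrix exponential. The power-law form g(t) = t**alpha and the 3-state matrix below are illustrative assumptions, not the delirium study's fitted model.

```python
# Transition probabilities of a time-transformed Markov process:
# P(t0, t1) = expm(Q * (g(t1) - g(t0))), with g an operational time scale.
import numpy as np
from scipy.linalg import expm

Q = np.array([[-0.4,  0.3, 0.1],   # illustrative 3-state intensity matrix
              [ 0.2, -0.5, 0.3],   # (rows sum to zero)
              [ 0.0,  0.0, 0.0]])  # state 3 absorbing

def transition_matrix(t0, t1, alpha=0.7):
    g = lambda t: t ** alpha       # assumed operational time transformation
    return expm(Q * (g(t1) - g(t0)))

print(np.round(transition_matrix(1.0, 2.0), 3))
```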

  15. An integrated knowledge-based and optimization tool for the sustainable selection of wastewater treatment process concepts

    DEFF Research Database (Denmark)

    Castillo, A.; Cheali, Peam; Gómez, V.

    2016-01-01

    The increasing demand on wastewater treatment plants (WWTPs) has involved an interest in improving the alternative treatment selection process. In this study, an integrated framework including an intelligent knowledge-based system and superstructure-based optimization has been developed and applied...... to a real case study. Hence, a multi-criteria analysis together with mathematical models is applied to generate a ranked short-list of feasible treatments for three different scenarios. Finally, the uncertainty analysis performed allows for increasing the quality and robustness of the decisions considering...... benefit and synergy is achieved when both tools are integrated because expert knowledge and expertise are considered together with mathematical models to select the most appropriate treatment alternative...

  16. Statin Selection in Qatar Based on Multi-indication Pharmacotherapeutic Multi-criteria Scoring Model, and Clinician Preference.

    Science.gov (United States)

    Al-Badriyeh, Daoud; Fahey, Michael; Alabbadi, Ibrahim; Al-Khal, Abdullatif; Zaidan, Manal

    2015-12-01

    Statin selection for the largest hospital formulary in Qatar is not systematic, not comparative, and does not consider the multi-indication nature of statins. There are no reports in the literature of multi-indication-based comparative scoring models of statins or of statin selection criteria weights that are based primarily on local clinicians' preferences and experiences. This study sought to comparatively evaluate statins for first-line therapy in Qatar, and to quantify the economic impact of this. An evidence-based, multi-indication, multi-criteria pharmacotherapeutic model was developed for the scoring of statins from the perspective of the main health care provider in Qatar. The literature and an expert panel informed the selection criteria of statins. Relative weighting of selection criteria was based on the input of the relevant local clinician population. Statins were comparatively scored based on literature evidence, with those exceeding a defined scoring threshold being recommended for use. With 95% CI and 5% margin of error, the scoring model was successfully developed. Selection criteria comprised 28 subcriteria under the following main criteria: clinical efficacy, best published evidence and experience, adverse effects, drug interactions, dosing time, and fixed-dose combination availability. Outcome measures for multiple indications were related to effects on LDL cholesterol, HDL cholesterol, triglycerides, total cholesterol, and C-reactive protein. Atorvastatin, pravastatin, and rosuvastatin exceeded the defined pharmacotherapeutic thresholds. Atorvastatin and pravastatin were recommended for first-line use and rosuvastatin as a nonformulary alternative. It was estimated that this would produce a 17.6% cost savings in statin expenditure. Sensitivity analyses confirmed the robustness of the evaluation's outcomes against input uncertainties. Incorporating a comparative evaluation of statins in Qatari practices based on a locally developed, transparent, multi
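
    A minimal sketch of the kind of weighted multi-criteria scoring such a model performs is shown below. The five criteria, their weights, the per-drug scores, and the threshold are all invented for illustration; the actual model used 28 clinician-weighted subcriteria.

    ```python
    # Minimal sketch of a multi-criteria scoring model for formulary selection.
    # Criterion weights and per-drug scores (0-10) are hypothetical.
    weights = {
        "clinical_efficacy":  0.35,
        "published_evidence": 0.20,
        "adverse_effects":    0.20,
        "drug_interactions":  0.15,
        "dosing_flexibility": 0.10,
    }

    statins = {
        "atorvastatin": {"clinical_efficacy": 9, "published_evidence": 9,
                         "adverse_effects": 7, "drug_interactions": 6,
                         "dosing_flexibility": 8},
        "pravastatin":  {"clinical_efficacy": 7, "published_evidence": 8,
                         "adverse_effects": 9, "drug_interactions": 9,
                         "dosing_flexibility": 7},
        "simvastatin":  {"clinical_efficacy": 7, "published_evidence": 8,
                         "adverse_effects": 6, "drug_interactions": 4,
                         "dosing_flexibility": 6},
    }

    THRESHOLD = 7.5  # drugs scoring above this are shortlisted (assumed cutoff)

    for drug, scores in statins.items():
        composite = sum(weights[c] * scores[c] for c in weights)
        verdict = "first-line candidate" if composite >= THRESHOLD else "not shortlisted"
        print(f"{drug}: {composite:.2f} -> {verdict}")
    ```

    The appeal of such a model is transparency: changing a weight or a score changes the ranking in a way any committee member can audit.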

  17. Functional Dual Adaptive Control with Recursive Gaussian Process Model

    International Nuclear Information System (INIS)

    Prüher, Jakub; Král, Ladislav

    2015-01-01

    The paper deals with the dual adaptive control problem, where the functional uncertainties in the system description are modelled by a non-parametric Gaussian process regression model. Current approaches to adaptive control based on Gaussian process models are severely limited in their practical applicability, because the model is re-adjusted using all the currently available data, which keep growing with every time step. We propose the use of a recursive Gaussian process regression algorithm for a significant reduction in computational requirements, thus bringing Gaussian process-based adaptive controllers closer to practical applicability. In this work, we design a bi-criterial dual controller based on a recursive Gaussian process model for discrete-time stochastic dynamic systems given in an affine-in-control form. Using Monte Carlo simulations, we show that the proposed controller achieves performance comparable with the full Gaussian process-based controller in terms of control quality while keeping the computational demands bounded. (paper)
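
    One way to realize recursive Gaussian process regression with bounded per-step cost is to approximate the GP with a finite set of random Fourier features and update the resulting Bayesian linear model with a Kalman-style recursion, as sketched below. This is an illustrative approximation under assumed kernel and noise parameters, not the authors' specific algorithm.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Finite-dimensional GP approximation via random Fourier features of an
    # RBF kernel; each new observation is absorbed in O(D^2) time, so cost
    # does not grow with the number of samples seen.
    D = 100            # number of random features
    lengthscale = 1.0  # assumed RBF kernel lengthscale
    noise_var = 0.01   # assumed observation noise variance

    W = rng.normal(0.0, 1.0 / lengthscale, size=(D, 1))  # spectral frequencies
    b = rng.uniform(0.0, 2.0 * np.pi, size=D)

    def features(x):
        """phi(x) such that phi(x).T @ phi(x') approximates the RBF kernel."""
        return np.sqrt(2.0 / D) * np.cos(W @ np.atleast_2d(x) + b[:, None])

    # Gaussian prior over feature weights: mean m, covariance P.
    m = np.zeros(D)
    P = np.eye(D)

    def update(x, y):
        """Recursive Bayesian linear-regression (Kalman-style) update."""
        global m, P
        phi = features(x)[:, 0]
        k = P @ phi / (phi @ P @ phi + noise_var)  # gain vector
        m = m + k * (y - phi @ m)                  # posterior mean
        P = P - np.outer(k, phi) @ P               # posterior covariance

    def predict(x):
        phi = features(x)[:, 0]
        return phi @ m, phi @ P @ phi + noise_var  # predictive mean, variance

    # Stream noisy observations of an unknown function and refine the model.
    for _ in range(200):
        x_t = rng.uniform(-3, 3)
        update(x_t, np.sin(x_t) + rng.normal(0, 0.1))
    print(predict(0.5))  # mean should approach sin(0.5) ~ 0.48
    ```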

  18. Resin infusion of large composite structures modeling and manufacturing process

    Energy Technology Data Exchange (ETDEWEB)

    Loos, A.C. [Michigan State Univ., Dept. of Mechanical Engineering, East Lansing, MI (United States)

    2006-07-01

    The resin infusion processes, resin transfer molding (RTM), resin film infusion (RFI) and vacuum-assisted resin transfer molding (VARTM), are cost-effective techniques for the fabrication of complex-shaped composite structures. The dry fibrous preform is placed in the mold, consolidated, resin impregnated and cured in a single-step process. The fibrous preforms are often constructed near net shape using highly automated textile processes such as knitting, weaving and braiding. In this paper, the infusion processes RTM, RFI and VARTM are discussed along with the advantages of each technique compared with traditional composite fabrication methods such as prepreg tape lay-up and autoclave cure. The large number of processing variables and the complex material behavior during infiltration and cure make experimental optimization of the infusion processes costly and inefficient. Numerical models have been developed which can be used to simulate the resin infusion processes. The model formulation and solution procedures for the VARTM process are presented. A VARTM process simulation of a carbon fiber preform is presented to demonstrate the type of information that can be generated by the model and to compare the model predictions with experimental measurements. Overall, the predicted flow front positions, resin pressures and preform thicknesses agree well with the measured values. The results of the simulation show the potential cost and performance benefits that can be realized by using a simulation model as part of the development process. (au)
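
    The physics underlying such infusion simulations is Darcy flow through the porous preform. A minimal 1D sketch under constant driving pressure is shown below; the permeability, porosity, viscosity, and pressure values are assumed for illustration, and full VARTM models additionally track preform compaction and cure kinetics.

    ```python
    import numpy as np

    # Minimal 1D Darcy-flow sketch of resin infusion under constant pressure.
    # Property values are illustrative, not taken from the paper.
    K = 2e-10   # preform permeability, m^2
    phi = 0.5   # preform porosity
    mu = 0.2    # resin viscosity, Pa.s
    dP = 1.0e5  # vacuum-driven pressure difference, Pa (~1 atm)

    def flow_front(t):
        """Flow front position x_f(t) = sqrt(2*K*dP*t / (phi*mu)).

        Follows from Darcy's law u = -(K/mu) dp/dx with a linear pressure
        profile behind the front and continuity phi * dx_f/dt = u."""
        return np.sqrt(2.0 * K * dP * t / (phi * mu))

    for minutes in (1, 5, 15, 60):
        t = 60.0 * minutes
        print(f"t = {minutes:3d} min -> flow front at {flow_front(t) * 100:.1f} cm")
    ```

    The square-root slowdown of the front is why fill time, and hence gate placement, dominates the design of large VARTM parts.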

  19. Microstructural and mechanical approaches of the selective laser melting process applied to a nickel-base superalloy

    International Nuclear Information System (INIS)

    Vilaro, T.; Colin, C.; Bartout, J.D.; Nazé, L.; Sennour, M.

    2012-01-01

    Highlights: ► We examine the as-fabricated microstructure of Nimonic 263 processed by selective laser melting. ► We optimized heat treatments to modify the microstructure and improve the mechanical properties. ► We tested the various microstructures through tensile tests in order to compare the effects of the heat treatments. - Abstract: This article presents the as-processed microstructure of Nimonic 263 produced by selective laser melting, an innovative process. Because the melt pool is small and the scanning speed of the laser beam is relatively high, the as-processed microstructure is out of equilibrium and typical of additive manufacturing processes. To meet industrial requirements, the microstructures are modified through heat treatments in order to either produce precipitation hardening or relieve the thermal stresses. Tensile tests at room temperature yield high mechanical properties, close to or above those reported by Wang et al. However, a strong anisotropy is noted as a function of the build direction of the samples, owing to the columnar grain growth.

  20. Establishment of selected acute pulmonary thromboembolism model in experimental sheep

    International Nuclear Information System (INIS)

    Fan Jihai; Gu Xiulian; Chao Shengwu; Zhang Peng; Fan Ruilin; Wang Li'na; Wang Lulu; Wang Ling; Li Bo; Chen Taotao

    2010-01-01

    Objective: To establish a selected acute pulmonary thromboembolism model in experimental sheep suitable for animal experiments. Methods: Using Seldinger's technique, catheter sheaths were placed in both the femoral vein and the femoral artery in ten sheep. Under C-arm DSA guidance the catheter was inserted through the sheath into the pulmonary artery. Via the catheter an appropriate amount of autologous blood clot was injected into the selected pulmonary arteries, thus establishing the selected acute pulmonary thromboembolism model. Pulmonary angiography was performed to check the results. The pulmonary arterial pressure, femoral artery pressure, heart rate and arterial partial pressure of oxygen (PaO2) were determined both before and after the procedure; the post-procedure values were compared with those recorded before the procedure, and the model quality was evaluated. Results: The baseline pulmonary arterial pressure was (27.30 ± 9.58) mmHg, femoral artery pressure was (126.4 ± 13.72) mmHg, heart rate was (103 ± 15) bpm and PaO2 was (87.7 ± 12.04) mmHg. Sixty minutes after the injection of (30 ± 5) ml of thrombotic agglomerates, the pulmonary arterial pressure rose to (52 ± 49) mmHg and the femoral artery pressure dropped to (100 ± 21) mmHg. The heart rate went up to (150 ± 26) bpm. The PaO2 fell to (25.3 ± 11.2) mmHg. After the procedure the above parameters were significantly different from those measured before the procedure in all ten animals (P < 0.01). Pulmonary arteriography clearly demonstrated that the selected pulmonary arteries were successfully embolized. Conclusion: The anatomy of the sheep's femoral veins, vena cava system, pulmonary artery and right heart is suitable for establishing the catheter passage; for this reason, a selected acute pulmonary thromboembolism model can be easily created in experimental sheep. The technique is feasible and the model