WorldWideScience

Sample records for model selection process

  1. Ancestral process and diffusion model with selection

    CERN Document Server

    Mano, Shuhei

    2008-01-01

    The ancestral selection graph in population genetics introduced by Krone and Neuhauser (1997) is an analogue of the coalescent genealogy. The number of ancestral particles, backward in time, of a sample of genes is an ancestral process, which is a birth and death process with quadratic death and linear birth rates. In this paper an explicit form for the number of ancestral particles is obtained by using the density of the allele frequency in the corresponding diffusion model obtained by Kimura (1955). It is shown that fixation corresponds to convergence of the ancestral process to its stationary measure. The time to fixation of an allele is studied in terms of the ancestral process.
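
    A minimal Gillespie-style simulation can make the birth-and-death structure of this ancestral process concrete. The linear birth (branching) and quadratic death (coalescence) rates below follow the standard ancestral-selection-graph form; the scaled selection parameter `sigma` and all numbers are illustrative assumptions, not values from the paper.

```python
import random

def simulate_ancestral_process(n0, sigma, t_max, seed=1):
    """Gillespie simulation of the ancestral particle count: branching
    (birth) at rate sigma*n/2, coalescence (death) at rate n*(n-1)/2."""
    rng = random.Random(seed)
    t, n, path = 0.0, n0, [(0.0, n0)]
    while t < t_max:
        birth = sigma * n / 2.0          # linear birth rate
        death = n * (n - 1) / 2.0        # quadratic death rate
        total = birth + death
        if total == 0.0:
            break
        t += rng.expovariate(total)
        n += 1 if rng.random() < birth / total else -1
        path.append((t, n))
    return path

# Long runs hover around the stationary regime the abstract refers to.
print(simulate_ancestral_process(n0=10, sigma=2.0, t_max=50.0)[-5:])
```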

  2. Model selection for Poisson processes with covariates

    CERN Document Server

    Sart, Mathieu

    2011-01-01

    We observe $n$ inhomogeneous Poisson processes with covariates and aim at estimating their intensities. To handle this problem, we assume that the intensity of each Poisson process is of the form $s(\cdot, x)$ where $x$ is the covariate and where $s$ is an unknown function. We propose a model selection approach where the models are used to approximate the multivariate function $s$. We show that our estimator satisfies an oracle-type inequality under very weak assumptions both on the intensities and the models. By using a Hellinger-type loss, we establish non-asymptotic risk bounds and specify them under various kinds of assumptions on the target function $s$, such as being smooth or composite. Moreover, we show that our estimation procedure is robust with respect to these assumptions.
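
    As a toy illustration of model selection for Poisson intensities (without covariates, and with a crude AIC-style penalty in place of the paper's refined penalties and Hellinger-loss bounds), the sketch below selects the number of bins of a piecewise-constant intensity estimate by penalized maximum likelihood.

```python
import numpy as np

def select_histogram_intensity(events, T=1.0, max_bins=20):
    """Pick a piecewise-constant intensity for events of an inhomogeneous
    Poisson process on [0, T] by penalized maximum likelihood."""
    events = np.asarray(events)
    best = None
    for d in range(1, max_bins + 1):
        counts, _ = np.histogram(events, bins=np.linspace(0.0, T, d + 1))
        lam = counts / (T / d)                 # per-bin MLE of the intensity
        mask = counts > 0
        # Poisson log-likelihood: sum N_j log(lam_j) - sum lam_j * width
        loglik = np.sum(counts[mask] * np.log(lam[mask])) - events.size
        score = loglik - d                     # AIC-style dimension penalty
        if best is None or score > best[0]:
            best = (score, d, lam)
    return best

rng = np.random.default_rng(3)
events = rng.beta(2.0, 5.0, size=400)          # synthetic, non-uniform points
score, n_bins, intensity = select_histogram_intensity(events)
print(n_bins, np.round(intensity, 1))
```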

  3. An integrated model for supplier selection process

    Institute of Scientific and Technical Information of China (English)

    2003-01-01

    In today's highly competitive manufacturing environment, the supplier selection process has become one of the crucial activities in supply chain management. In order to select the best supplier(s), it is necessary not only to continuously track and benchmark the performance of suppliers but also to make a trade-off between tangible and intangible factors, some of which may conflict. In this paper an integration of case-based reasoning (CBR), analytical network process (ANP) and linear programming (LP) is proposed to solve the supplier selection problem.
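
    The LP stage of such an integrated approach can be sketched in a few lines: once CBR has screened candidates and ANP has scored the intangibles, a linear program allocates orders among the shortlisted suppliers. All costs, capacities, and the demand below are hypothetical.

```python
from scipy.optimize import linprog

cost = [4.0, 5.5, 5.0]          # unit cost per shortlisted supplier
capacity = [60, 50, 40]         # maximum units each supplier can deliver
demand = 100                    # total units to order

# Minimize total cost subject to meeting demand within capacities.
res = linprog(c=cost, A_eq=[[1, 1, 1]], b_eq=[demand],
              bounds=[(0, cap) for cap in capacity], method="highs")
print(res.x)                    # optimal order quantities, here [60, 0, 40]
```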

  4. Selection of Temporal Lags When Modeling Economic and Financial Processes.

    Science.gov (United States)

    Matilla-Garcia, Mariano; Ojeda, Rina B; Marin, Manuel Ruiz

    2016-10-01

    This paper suggests new nonparametric statistical tools and procedures for modeling linear and nonlinear univariate economic and financial processes. In particular, the tools presented help in selecting relevant lags in the model description of a general linear or nonlinear time series; that is, nonlinear models are not a restriction. The tests seem to be robust to the selection of free parameters. We also show that the test can be used as a diagnostic tool for well-defined models.

  5. Multicriteria framework for selecting a process modelling language

    Science.gov (United States)

    Scanavachi Moreira Campos, Ana Carolina; Teixeira de Almeida, Adiel

    2016-01-01

    The choice of process modelling language can affect business process management (BPM), since each modelling language shows different features of a given process and may limit the ways in which a process can be described and analysed. However, choosing the appropriate modelling language for process modelling has become a difficult task because of the availability of a large number of modelling languages and the lack of guidelines for evaluating and comparing languages so as to assist in selecting the most appropriate one. This paper proposes a framework for selecting a modelling language in accordance with the purposes of modelling. This framework is based on the semiotic quality framework (SEQUAL) for evaluating process modelling languages and a multicriteria decision aid (MCDA) approach in order to select the most appropriate language for BPM. This study does not attempt to set out new forms of assessment and evaluation criteria, but rather demonstrates how two existing approaches can be combined so as to solve the problem of selecting a modelling language. The framework is described in this paper and then demonstrated by means of an example. Finally, the advantages and disadvantages of using SEQUAL and MCDA in an integrated manner are discussed.

  6. Numerical Model based Reliability Estimation of Selective Laser Melting Process

    DEFF Research Database (Denmark)

    Mohanty, Sankhya; Hattel, Jesper Henri

    2014-01-01

    Selective laser melting is developing into a standard manufacturing technology with applications in various sectors. However, the process is still far from being on par with conventional processes such as welding and casting, the primary reason being the unreliability of the process. While… of the selective laser melting process. A validated 3D finite-volume alternating-direction-implicit numerical technique is used to model the selective laser melting process, and is calibrated against results from single-track formation experiments. Correlation coefficients are determined for process input parameters such as laser power, speed, beam profile, etc. Subsequently, uncertainties in the processing parameters are utilized to predict a range for the various outputs, using a Monte Carlo method based uncertainty analysis methodology, and the reliability of the process is established.
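
    The Monte Carlo step can be illustrated independently of the finite-volume model. In the sketch below, a cheap placeholder response surface stands in for the calibrated thermal model, and the input means, standard deviations, and spec limits are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(42)
N = 10_000

# Hypothetical input uncertainties (the paper calibrates these against
# single-track experiments; the values here are made up).
laser_power = rng.normal(200.0, 5.0, N)      # W
scan_speed = rng.normal(800.0, 20.0, N)      # mm/s

def melt_depth(p, v):
    """Placeholder response surface standing in for the (slow) 3D model."""
    return 50.0 * p / np.sqrt(v)             # µm, illustrative only

depth = melt_depth(laser_power, scan_speed)
lo, hi = np.percentile(depth, [2.5, 97.5])
reliability = np.mean((depth > 300) & (depth < 400))   # in-spec fraction
print(f"95% interval: [{lo:.1f}, {hi:.1f}] µm, in-spec: {reliability:.3f}")
```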

  7. Process chain modeling and selection in an additive manufacturing context

    DEFF Research Database (Denmark)

    Thompson, Mary Kathryn; Stolfi, Alessandro; Mischkot, Michael

    2016-01-01

    This paper introduces a new two-dimensional approach to modeling manufacturing process chains. This approach is used to consider the role of additive manufacturing technologies in process chains for a part with micro scale features and no internal geometry. It is shown that additive manufacturing can compete with traditional process chains for small production runs. Combining both types of technology added cost but no benefit in this case. The new process chain model can be used to explain the results and support process selection, but process chain prototyping is still important for rapidly…

  8. A MODEL SELECTION PROCEDURE IN MIXTURE-PROCESS EXPERIMENTS FOR INDUSTRIAL PROCESS OPTIMIZATION

    Directory of Open Access Journals (Sweden)

    Márcio Nascimento de Souza Leão

    2015-08-01

    We present a model selection procedure for use in mixture and mixture-process experiments. Certain combinations of restrictions on the proportions of the mixture components can result in a very constrained experimental region. This results in collinearity among the covariates of the model, which can make it difficult to fit the model using the traditional method based on the significance of the coefficients. For this reason, a model selection methodology based on information criteria is proposed for process optimization. Two examples are presented to illustrate this model selection procedure.
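
    A compact illustration of information-criterion model selection in this spirit: fit competing regression models by least squares and keep the one with the lowest AIC. The candidate models and data below are made up; a real mixture experiment would add the simplex constraint on the component proportions.

```python
import numpy as np

def aic(y, X):
    """AIC for an OLS fit under a Gaussian likelihood: 2k - 2 log L.
    (The sigma^2 parameter is common to all candidates, so it is omitted
    from k without affecting the ranking.)"""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    n, k = X.shape
    sigma2 = resid @ resid / n
    loglik = -0.5 * n * (np.log(2 * np.pi * sigma2) + 1)
    return 2 * k - 2 * loglik

# Synthetic data: x1, x2 mixture proportions, z a process variable.
rng = np.random.default_rng(0)
x1 = rng.uniform(0.2, 0.5, 30)
x2 = rng.uniform(0.3, 0.6, 30)
z = rng.normal(size=30)
y = 3 * x1 + 2 * x2 + 0.5 * x1 * z + rng.normal(scale=0.1, size=30)

candidates = {
    "linear": np.column_stack([x1, x2, z]),
    "interaction": np.column_stack([x1, x2, z, x1 * z, x2 * z]),
}
print(min(candidates, key=lambda m: aic(y, candidates[m])))
```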

  9. IT vendor selection model by using structural equation model & analytical hierarchy process

    Science.gov (United States)

    Maitra, Sarit; Dominic, P. D. D.

    2012-11-01

    Selecting and evaluating the right vendors is imperative for an organization's global marketplace competitiveness. Improper selection and evaluation of potential vendors can dwarf an organization's supply chain performance. Numerous studies have demonstrated that firms consider multiple criteria when selecting key vendors. This research develops a new hybrid model for the vendor selection process with better decision making. The proposed model provides a suitable tool for assisting decision makers and managers to make the right decisions and select the most suitable vendor. This paper proposes a hybrid model based on the Structural Equation Model (SEM) and the Analytical Hierarchy Process (AHP) for long-term strategic vendor selection problems. The five-step framework of the model was designed after a thorough literature study. The proposed hybrid model is applied to a real-life case study to assess its effectiveness. In addition, what-if analysis is used for model validation purposes.

  10. Mathematical Model for the Selection of Processing Parameters in Selective Laser Sintering of Polymer Products

    Directory of Open Access Journals (Sweden)

    Ana Pilipović

    2014-03-01

    Additive manufacturing (AM) is increasingly applied in development projects, from the initial idea to the finished product. The reasons are multiple, but what should be emphasised is the possibility of relatively rapid manufacturing of products of complicated geometry based on a computer 3D model of the product. There are numerous limitations, primarily in the number of available materials and their properties, which may be quite different from the properties of the material of the finished product. Therefore, it is necessary to know the properties of the product materials. In AM procedures the mechanical properties of materials are affected by the manufacturing procedure and the production parameters. During SLS procedures it is possible to adjust various manufacturing parameters that can be used to improve various mechanical and other properties of the products. The paper establishes a new mathematical model to determine the influence of individual manufacturing parameters on a polymer product made by selective laser sintering. The old mathematical model is checked by a statistical method with a central composite plan, and it is established that it must be expanded with a new parameter, the beam overlay ratio. Verification of the new mathematical model and optimization of the processing parameters are carried out on an SLS machine.

  11. Econobiophysics - game of choosing. Model of selection or election process with diverse accessible information

    Science.gov (United States)

    2011-01-01

    We propose several models applicable to both selection and election processes when each selecting or electing subject has access to different information about the objects to choose from. We wrote special software to simulate these processes. We consider both the case where the environment is neutral (a natural process) and the case where the environment is involved (a controlled process). PMID:21892959

  12. Donald Campbell's Model of the Creative Process: Creativity as Blind Variation and Selective Retention.

    Science.gov (United States)

    Simonton, Dean Keith

    1998-01-01

    This introductory article discusses a blind-variation and selective-retention model of the creative process developed by Donald Campbell. According to Campbell, creativity contains three conditions: a mechanism for introducing variation, a consistent selection process, and a mechanism for preserving and reproducing selected variations. (Author/CR)

  13. A recruitment and selection process model: the case of the Department of Justice and Constitutional Development

    OpenAIRE

    Thebe, T. P.; Van der Waldt, Gerrit

    2014-01-01

    The purpose of this article is to report on findings of an empirical investigation conducted at the Department of Justice and Constitutional Development. The aim of the investigation was to ascertain the status of current practices and challenges regarding the processes and procedures utilised for recruitment and selection. Based on these findings the article further outlines the design of a comprehensive process model for human resource recruitment and selection for the Department. The model...

  14. Unraveling the sub-processes of selective attention: insights from dynamic modeling and continuous behavior.

    Science.gov (United States)

    Frisch, Simon; Dshemuchadse, Maja; Görner, Max; Goschke, Thomas; Scherbaum, Stefan

    2015-11-01

    Selective attention biases information processing toward stimuli that are relevant for achieving our goals. However, the nature of this bias is under debate: Does it solely rely on the amplification of goal-relevant information or is there a need for additional inhibitory processes that selectively suppress currently distracting information? Here, we explored the processes underlying selective attention with a dynamic, modeling-based approach that focuses on the continuous evolution of behavior over time. We present two dynamic neural field models incorporating the diverging theoretical assumptions. Simulations with both models showed that they make similar predictions with regard to response times but differ markedly with regard to their continuous behavior. Human data observed via mouse tracking as a continuous measure of performance revealed evidence for the model solely based on amplification but no indication of persisting selective distracter inhibition.
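
    A toy pair of leaky accumulators illustrates the contrast at stake: an amplification-only variant and an amplification-plus-inhibition variant can settle on the same response while their activation trajectories differ over time, which is the kind of signature continuous measures such as mouse tracking can pick up. All parameters are invented; the paper's dynamic neural field models are far richer.

```python
import numpy as np

def evolve(inhibition, steps=300, dt=0.01, tau=0.1):
    """Two leaky accumulators for target vs. distractor activation.
    With inhibition=0 only goal-driven amplification acts; with
    inhibition>0 the units additionally suppress each other."""
    a = np.zeros(2)                      # [target, distractor]
    drive = np.array([1.2, 1.0])         # input + goal amplification on target
    traj = []
    for _ in range(steps):
        da = (-a + drive - inhibition * a[::-1]) / tau
        a = np.clip(a + dt * da, 0, None)
        traj.append(a.copy())
    return np.array(traj)

amp_only = evolve(inhibition=0.0)
with_inh = evolve(inhibition=0.6)
# Both variants pick the target, but the distractor trace evolves
# differently over time between the two models.
print(amp_only[-1], with_inh[-1])
```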

  15. A Selective Moving Window Partial Least Squares Method and Its Application in Process Modeling

    Institute of Scientific and Technical Information of China (English)

    Ouguan Xu; Yongfeng Fu; Hongye Su; Lijuan Li

    2014-01-01

    A selective moving window partial least squares (SMW-PLS) soft sensor is proposed in this paper and applied to a hydro-isomerization process for on-line estimation of para-xylene (PX) content. To address the high frequency of model updating in previous recursive PLS methods, a selective updating strategy was developed. Model adaptation is activated once the prediction error is larger than a preset threshold; otherwise the model is kept unchanged. As a result, the frequency of model updating is reduced greatly, while the change in prediction accuracy is minor. The performance of the proposed model is better than that of other PLS-based models. A compromise between prediction accuracy and real-time performance can be obtained by regulating the threshold. Guidelines to determine the model parameters are illustrated. In summary, the proposed SMW-PLS method can deal with slow time-varying processes effectively.
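
    The selective updating logic is easy to sketch with an off-the-shelf PLS regressor: predict each new sample, and refit on the most recent window only when the absolute prediction error exceeds the threshold. Window size, threshold, component count, and the synthetic data below are placeholder choices, not the paper's settings.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

def smw_pls(X, y, window=50, threshold=0.5, n_components=3):
    """Selective moving-window PLS: refit on the most recent window only
    when the absolute prediction error exceeds the threshold."""
    model = PLSRegression(n_components=n_components).fit(X[:window], y[:window])
    preds, n_updates = [], 0
    for t in range(window, len(y)):
        y_hat = float(np.ravel(model.predict(X[t:t + 1]))[0])
        preds.append(y_hat)
        if abs(y_hat - y[t]) > threshold:      # update only on large errors
            model = PLSRegression(n_components=n_components).fit(
                X[t - window + 1:t + 1], y[t - window + 1:t + 1])
            n_updates += 1
    return np.array(preds), n_updates

# Synthetic slowly drifting process: 3 informative features out of 6.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 6))
drift = np.linspace(0.0, 2.0, 500)
y = X[:, :3].sum(axis=1) + drift + rng.normal(scale=0.1, size=500)
preds, n_updates = smw_pls(X, y)
print(n_updates, "updates over", len(preds), "predictions")
```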

  16. A structured approach for selecting carbon capture process models: A case study on monoethanolamine

    NARCIS (Netherlands)

    van der Spek, Mijndert; Ramirez, Andrea

    2014-01-01

    Carbon capture and storage is considered a promising option to mitigate CO2 emissions. This has resulted in many R&D efforts focused on developing viable carbon capture technologies. During carbon capture technology development, process modeling plays an important role. Selecting an appropriate pro…

  17. Selecting a CSR Model: Quality and Implications of the Model Adoption Process

    Science.gov (United States)

    Le Floch, Kerstin Carlson; Zhang, Yu; Kurki, Anja; Herrmann, Suzannah

    2006-01-01

    The process through which a school adopts a comprehensive school reform (CSR) model has been suggested to be a key element in the lifecycle of school reform, contributing to stakeholder buy in and subsequent implementation. We studied the model adoption process, both on a national scale with survey data and in more depth with qualitative case…

  18. Modeling the Temperature Fields of Copper Powder Melting in the Process of Selective Laser Melting

    Science.gov (United States)

    Saprykin, A. A.; Ibragimov, E. A.; Babakova, E. V.

    2016-08-01

    Various process variables influence the quality of the end product when items are synthesized from powder materials by SLM (Selective Laser Melting). The authors suggest a model of the temperature field distribution when forming single tracks and layers of PMS-1 copper powder. Based on the modeling results, it is proposed to reduce the melting of powder particles outside the scanning area.

  1. Design and manufacturing interface modelling for manufacturing processes selection and knowledge synthesis in design

    OpenAIRE

    SKANDER, Achraf; Roucoules, Lionel; KLEIN MEYER, Jean-Sébastien

    2008-01-01

    This research is part of the regional French project IFP2R, "Manufacturing constraints integration in rapid prototyped part design", with IFTS (Higher Technical Formation Institute of Charleville-Mézières, France). The research results presented in this paper concern the specification of a method and models that tackle the problem of manufacturing process selection and the integration, as early as possible, of the associated constraints in the product modelling (i.e. information synthesis)…

  2. Mathematical Model for the Selection of Processing Parameters in Selective Laser Sintering of Polymer Products

    OpenAIRE

    Ana Pilipović; Igor Drstvenšek; Mladen Šercer

    2014-01-01

    Additive manufacturing (AM) is increasingly applied in development projects, from the initial idea to the finished product. The reasons are multiple, but what should be emphasised is the possibility of relatively rapid manufacturing of products of complicated geometry based on a computer 3D model of the product. There are numerous limitations, primarily in the number of available materials and their properties, which may be quite different from the properties of the material of the finished product…

  3. Model of the best-of-N nest-site selection process in honeybees

    Science.gov (United States)

    Reina, Andreagiovanni; Marshall, James A. R.; Trianni, Vito; Bose, Thomas

    2017-05-01

    The ability of a honeybee swarm to select the best nest site plays a fundamental role in determining the future colony's fitness. To date, the nest-site selection process has mostly been modeled and theoretically analyzed for the case of binary decisions. However, when the number of alternative nests is larger than two, the decision-process dynamics qualitatively change. In this work, we extend previous analyses of a value-sensitive decision-making mechanism to a decision process among N nests. First, we present the decision-making dynamics in the symmetric case of N equal-quality nests. Then, we generalize our findings to a best-of-N decision scenario with one superior nest and N-1 inferior nests, previously studied empirically in bees and ants. Whereas previous binary models highlighted the crucial role of inhibitory stop-signaling, the key parameter in our new analysis is the relative time invested by swarm members in individual discovery and in signaling behaviors. Our new analysis reveals conflicting pressures on this ratio in symmetric and best-of-N decisions, which could be solved through a time-dependent signaling strategy. Additionally, our analysis suggests how ecological factors determining the density of suitable nest sites may have led to selective pressures for an optimal stable signaling ratio.
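
    A rough numerical sketch of value-sensitive best-of-N dynamics: the fraction committed to each nest grows by quality-dependent discovery and recruitment, shrinks by abandonment, and is suppressed by cross-inhibition (stop signals) from the other populations. The functional forms and parameter values are our illustrative simplification, not the paper's exact equations.

```python
import numpy as np

def best_of_n_dynamics(v, sigma, dt=0.01, steps=20000):
    """Euler integration of a value-sensitive best-of-N commitment model.
    v[i] is the quality of nest i; x[i] the fraction committed to it."""
    x = np.zeros(len(v))
    for _ in range(steps):
        u = 1.0 - x.sum()                      # uncommitted fraction
        dx = (v * u                            # quality-scaled discovery
              + v * u * x                      # quality-scaled recruitment
              - x / v                          # abandonment (faster if poor)
              - sigma * x * (x.sum() - x))     # cross-inhibition (stop signals)
        x += dt * dx
    return x

# One superior nest (quality 3) against three inferior nests (quality 2).
print(best_of_n_dynamics(v=np.array([3.0, 2.0, 2.0, 2.0]), sigma=4.0))
```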

  4. A model of the best-of-N nest-site selection process in honeybees

    CERN Document Server

    Reina, Andreagiovanni; Trianni, Vito; Bose, Thomas

    2016-01-01

    The ability of a honeybee swarm to select the best nest site plays a fundamental role in determining the future colony's fitness. To date, the nest-site selection process has mostly been modelled and theoretically analysed for the case of binary decisions. However, when the number of alternative nests is larger than two, the decision process dynamics qualitatively change. In this work, we extend previous analyses of a value-sensitive decision-making mechanism to a decision process among N nests. First, we present the decision-making dynamics in the symmetric case of N equal-quality nests. Then, we generalise our findings to a best-of-N decision scenario with one superior nest and N-1 inferior nests, previously studied empirically in bees and ants. Whereas previous binary models highlighted the crucial role of inhibitory stop-signalling, the key parameter in our new analysis is the relative time invested by swarm members in individual discovery and in signalling behaviours. Our new analysis reveals conflicting...

  5. Effects of belief and logic on syllogistic reasoning: Eye-movement evidence for selective processing models.

    Science.gov (United States)

    Ball, Linden J; Phillips, Peter; Wade, Caroline N; Quayle, Jeremy D

    2006-01-01

    Studies of syllogistic reasoning have demonstrated a nonlogical tendency for people to endorse more believable conclusions than unbelievable ones. This belief bias effect is more dominant on invalid syllogisms than valid ones, giving rise to a logic by belief interaction. We report an experiment in which participants' eye movements were recorded in order to provide insights into the nature and time course of the reasoning processes associated with manipulations of conclusion validity and believability. Our main dependent measure was people's inspection times for syllogistic premises, and we tested predictions deriving from three contemporary mental-models accounts of the logic by belief interaction. Results supported recent "selective processing" theories of belief bias (e.g., Evans, 2000; Klauer, Musch, & Naumer, 2000), which assume that the believability of a conclusion biases model construction processes, rather than biasing the search for falsifying models (e.g., Oakhill & Johnson-Laird, 1985) or a response stage of reasoning arising from subjective uncertainty (e.g., Quayle & Ball, 2000). We conclude by suggesting that the eye-movement analyses in reasoning research may provide a useful adjunct to other process-tracing techniques such as verbal protocol analysis.

  6. Modeling intermediate product selection under production and storage capacity limitations in food processing

    DEFF Research Database (Denmark)

    Kilic, Onur Alper; Akkerman, Renzo; Grunow, Martin

    2009-01-01

    …and processing costs are minimized. However, this product selection process is bound by production and storage capacity limitations, such as the number and size of storage tanks or silos. In this paper, we present a mathematical programming approach that combines decision making on product selection…

  7. A Natural Language Processing-based Model to Automate MRI Brain Protocol Selection and Prioritization.

    Science.gov (United States)

    Brown, Andrew D; Marotta, Thomas R

    2017-02-01

    Incorrect imaging protocol selection can contribute to increased healthcare cost and waste. To help healthcare providers improve the quality and safety of medical imaging services, we developed and evaluated three natural language processing (NLP) models to determine whether NLP techniques could be employed to aid in clinical decision support for protocoling and prioritization of magnetic resonance imaging (MRI) brain examinations. To test the feasibility of using an NLP model to support clinical decision making for MRI brain examinations, we designed three different medical imaging prediction tasks, each with a unique outcome: selecting an examination protocol, evaluating the need for contrast administration, and determining priority. We created three models for each prediction task, each using a different classification algorithm (random forest, support vector machine, or k-nearest neighbor) to predict outcomes based on the narrative clinical indications and demographic data associated with 13,982 MRI brain examinations performed from January 1, 2013 to June 30, 2015. Test datasets were used to calculate the accuracy, sensitivity and specificity, predictive values, and the area under the curve. Our optimal results show an accuracy of 82.9%, 83.0%, and 88.2% for the protocol selection, contrast administration, and prioritization tasks, respectively, demonstrating that predictive algorithms can be used to aid in clinical decision support for examination protocoling. NLP models developed from the narrative clinical information provided by referring clinicians and demographic data are feasible methods to predict the protocol and priority of MRI brain examinations.
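
    One of the three classifier types (random forest) over TF-IDF features of the narrative indication can be sketched as follows. The indications and protocol labels are toy stand-ins; the study used 13,982 real requisitions plus demographic data.

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline

# Toy stand-ins for narrative clinical indications and protocol labels.
indications = [
    "new onset seizures, rule out mass lesion",
    "chronic headache, no red flags",
    "acute stroke symptoms, left-sided weakness",
    "follow-up of known glioma post resection",
]
protocols = ["seizure", "routine", "stroke", "tumour"]

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                      RandomForestClassifier(n_estimators=200, random_state=0))
model.fit(indications, protocols)
print(model.predict(["worst headache of life, sudden onset"]))
```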

  8. HOW DO STUDENTS SELECT SOCIAL NETWORKING SITES? AN ANALYTIC HIERARCHY PROCESS (AHP) MODEL

    Directory of Open Access Journals (Sweden)

    Chun Meng Tang

    2015-12-01

    Social networking sites are popular among university students, and students today are indeed spoiled for choice. New emerging social networking sites sprout up amid popular sites, while some existing ones die out. Given the choice of so many social networking sites, how do students decide which one they will sign up for and stay on as an active user? The answer to this question is of interest to social networking site designers and marketers. The market of social networking sites is highly competitive. To maintain the current user base and continue to attract new users, how should social networking sites design their sites? Marketers spend a fairly large percentage of their marketing budget on social media marketing. To formulate an effective social media strategy, how well do marketers understand the users of social networking sites? Learning from website evaluation studies, this study intends to provide some answers to these questions by examining how university students decide between two popular social networking sites, Facebook and Twitter. We first developed an analytic hierarchy process (AHP) model of four main selection criteria and 12 sub-criteria, and then administered a questionnaire to a group of university students attending a course at a Malaysian university. AHP analyses of the responses from 12 respondents provided insight into the decision-making process involved in students' selection of social networking sites. Of the four main criteria, privacy was the top concern, followed by functionality, usability, and content. The sub-criteria of key concern to the students were apps, revenue-generating opportunities, ease of use, and information security. Between Facebook and Twitter, the students thought that Facebook was the better choice. This information is useful for social networking site designers to design sites that are more relevant to their users' needs, and for marketers to craft more effective…
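
    The core AHP computation, deriving a priority vector from a pairwise-comparison matrix via its principal eigenvector and checking consistency, fits in a short sketch. The comparison values below are hypothetical, chosen only to echo the reported ordering (privacy first, then functionality, usability, content).

```python
import numpy as np

# Hypothetical pairwise comparisons of the four main criteria on
# Saaty's 1-9 scale: privacy, functionality, usability, content.
A = np.array([[1,   3,   4,   5],
              [1/3, 1,   2,   3],
              [1/4, 1/2, 1,   2],
              [1/5, 1/3, 1/2, 1]], dtype=float)

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
w = np.abs(eigvecs[:, k].real)
w /= w.sum()                      # priority vector (criterion weights)

n = A.shape[0]
CI = (eigvals[k].real - n) / (n - 1)
CR = CI / 0.90                    # Saaty's random index RI = 0.90 for n = 4
print(np.round(w, 3), round(CR, 3))   # CR < 0.1 means acceptable consistency
```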

  9. COTS software selection process.

    Energy Technology Data Exchange (ETDEWEB)

    Watkins, William M. (Strike Wire Technologies, Louisville, CO); Lin, Han Wei; McClelland, Kelly (U.S. Security Associates, Livermore, CA); Ullrich, Rebecca Ann; Khanjenoori, Soheil; Dalton, Karen; Lai, Anh Tri; Kuca, Michal; Pacheco, Sandra; Shaffer-Gant, Jessica

    2006-05-01

    Today's need for rapid software development has generated great interest in employing Commercial-Off-The-Shelf (COTS) software products as a way of managing cost, development time, and effort. With an abundance of COTS software packages to choose from, the problem now is how to systematically evaluate, rank, and select a COTS product that best meets the software project requirements and at the same time can leverage the current corporate information technology architectural environment. This paper describes a systematic process for decision support in evaluating and ranking COTS software. Performed right after the requirements analysis, this process provides the evaluators with concise, structured, and step-by-step activities for determining the best COTS software product with manageable risk. In addition, the process is presented in phases that are flexible enough to allow for customization or tailoring to meet various projects' requirements.

  10. Numerical modelling of processes that occur in the selective waste disassembly installation

    Science.gov (United States)

    Cherecheş, T.; Lixandru, P.; Dragnea, D.; Cherecheş, D. M.

    2017-08-01

    This paper results from attempts at a quantitative treatment of some of the processes occurring in an installation for selective fragmentation with high-voltage pulses. A methodology has been formulated that adapts general methods to the problem of a transient electric field in mixed environments. The electromagnetic processes inside the fragmentation installation, the initiation and formation of the discharge channels, and the thermodynamic and mechanical effects in the process vessel are complex, transient, and very fast. One of the principles underlying the fragmentation process is the differentiated response of materials to an electric field. Generally, three types of materials are found together in the process vessel: dielectrics, metals, and electrolytes. The conductivity of dielectric materials is virtually zero. Metallic materials conduct very well through electronic conductivity. Electrolytes have a more modest conductivity, since they conduct through electrochemical processes; the electrical current in this case is the movement of ions with sizes and masses different from those of electrons. Here, the electric current includes displacements of ions and molecules, collisions, and chemical reactions. Part of the electric field's energy is absorbed by the electrolyte in the form of mechanical and chemical energy.

  11. Towards the Significance of Decision Aid in Building Information Modeling (BIM Software Selection Process

    Directory of Open Access Journals (Sweden)

    Omar Mohd Faizal

    2014-01-01

    Building Information Modeling (BIM) has been considered a solution in the construction industry to numerous problems such as delays, increased lead times and increased costs. This is due to the concept and characteristics of BIM, which will reshape the way construction project teams work together to increase productivity and improve the final project outcomes (cost, time, quality, safety, functionality, maintainability, etc.). As a result, the construction industry has witnessed numerous BIM software packages become available in the market. Each of these packages offers different functions and features. Furthermore, the adoption of BIM requires high investment in software, hardware and training expenses. Thus, there is a need for decision aid in selecting the BIM software that best fulfils the project needs. However, research indicates that few studies have attempted to guide decisions in the BIM software selection problem. Thus, this paper highlights the importance of decision making and support for BIM software selection, as it is vital for increasing productivity throughout the building lifecycle.

  12. Evaluation of the TRA ECETOC model for inhalation workplace exposure to different organic solvents for selected process categories.

    Science.gov (United States)

    Kupczewska-Dobecka, Małgorzata; Czerczak, Sławomir; Jakubowski, Marek

    2011-06-01

    The aim of this work is to describe the operating principle of the TRA ECETOC model, developed using the descriptor system, and the use of that model for the assessment of inhalation exposure to different organic solvents for selected process categories identifying a given application. Measurement results were available for toluene, ethyl acetate and acetone in workplace atmospheres in Poland. The following process categories were postulated: (1) paints and lacquers factory: use in a closed, continuous process with occasional controlled exposure; (2) shoe factory: roller or brush application of glues; (3) refinery: use in a closed process, no likelihood of exposure. The next step was to calculate the workplace concentration for the chosen process categories by applying the TRA ECETOC model. The selected categories do not precisely describe the studied applications. Very high concentrations of acetone were measured in the shoe factory (mean 443 ppm). The concentration obtained with the aid of the model is underestimated, ranging from 25.47 to 254.7 ppm for the cases with and without activation of the local exhaust ventilation (LEV), respectively. An estimated concentration at a level corresponding to the measured concentration would be possible if a process category involving spraying, e.g., PROC 7, were considered. For toluene and ethyl acetate, the measured concentrations are within the predicted ranges determined with the use of the model, taking the concentration predicted with active ventilation as the lower bound and the concentration predicted with inactive ventilation as the upper bound. The TRA ECETOC model can easily be used to assess inhalation exposure at the workplace. It has numerous advantages: its structure is clear, it requires little data, and it is available free of charge. Selection of appropriate process categories related to the identified uses is a guarantee of successful exposure assessment.

  13. The Administrator Selection Process

    Science.gov (United States)

    Griffin, Michael F.

    1974-01-01

    Proposes that education establish for administrators systematic, rigorous, albeit subjective, selection procedures that recognize the principle of organizational democracy and the public nature of the educational enterprise. (Author/DN)

  14. On selecting a prior for the precision parameter of Dirichlet process mixture models

    Science.gov (United States)

    Dorazio, R.M.

    2009-01-01

    In hierarchical mixture models the Dirichlet process is used to specify latent patterns of heterogeneity, particularly when the distribution of latent parameters is thought to be clustered (multimodal). The parameters of a Dirichlet process include a precision parameter α and a base probability measure G0. In problems where α is unknown and must be estimated, inferences about the level of clustering can be sensitive to the choice of prior assumed for α. In this paper an approach is developed for computing a prior for the precision parameter α that can be used in the presence or absence of prior information about the level of clustering. This approach is illustrated in an analysis of counts of stream fishes. The results of this fully Bayesian analysis are compared with an empirical Bayes analysis of the same data and with a Bayesian analysis based on an alternative commonly used prior.
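
    A useful fact when reasoning about priors for the precision parameter is that, a priori, the expected number of clusters among n observations under a Dirichlet process is E[K] = sum_{i=1..n} α/(α+i-1), so candidate values of α translate directly into expected levels of clustering. A small sketch:

```python
import numpy as np

def expected_clusters(alpha, n):
    """Prior expected number of clusters among n observations under a
    Dirichlet process with precision alpha: E[K] = sum_i alpha/(alpha+i-1)."""
    i = np.arange(1, n + 1)
    return float(np.sum(alpha / (alpha + i - 1)))

# How strongly alpha controls clustering for n = 100 observations:
for alpha in (0.1, 1.0, 5.0):
    print(alpha, round(expected_clusters(alpha, 100), 2))
```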

  15. Preparatory selection of sterilization regime for canned Natural Atlantic Mackerel with oil based on developed mathematical models of the process

    Directory of Open Access Journals (Sweden)

    Maslov A. A.

    2016-12-01

    The definition of preparatory parameters for the sterilization regime of canned "Natural Atlantic Mackerel with Oil" is the aim of the current study. PRSC software developed at the Department of Automation and Computer Engineering is used for the preparatory selection. To determine the parameters of the process model, a pre-trial process of sterilization and cooling in water with backpressure of canned "Natural Atlantic Mackerel with Oil" in can N 3 was performed in the laboratory autoclave AVK-30M. Information about the temperature in the autoclave sterilization chamber and in the can with product was gathered using Ellab TrackSense PRO loggers. From the obtained information, three transfer functions for the product model were identified: for the least heated, the average heated, and the most heated areas of the autoclave. In the PRSC programme, temporal temperature dependences in the sterilization chamber were built using this information. The model of the sterilization process of canned "Natural Atlantic Mackerel with Oil" was obtained after the pre-trial process. Then, in automatic mode, the sterilization regime of canned "Natural Atlantic Mackerel with Oil" was selected using a value of the actual effect close to the normative sterilizing effect (5.9 conditional minutes). Furthermore, a step-mode sterilization of canned "Natural Atlantic Mackerel with Oil" was selected in this study. Step-mode sterilization with a maximum temperature of 125 °C in the sterilization chamber allows the process duration to be reduced by 10 %. However, the application of this regime in practice requires additional research. The described approach, based on the developed mathematical models of the process, makes it possible to obtain optimal stepped and variable canned food sterilization regimes with high energy efficiency and product quality.

  16. Selected Geochemical Data for Modeling Near-Surface Processes in Mineral Systems

    Science.gov (United States)

    Giles, Stuart A.; Granitto, Matthew; Eppinger, Robert G.

    2009-01-01

    The database herein was initiated, designed, and populated to collect and integrate geochemical, geologic, and mineral deposit data in an organized manner to facilitate geoenvironmental mineral deposit modeling. The Microsoft Access database contains data on a variety of mineral deposit types that have variable environmental effects when exposed at the ground surface by mining or natural processes. The data tables describe quantitative and qualitative geochemical analyses determined by 134 analytical laboratory and field methods for over 11,000 heavy-mineral concentrate, rock, sediment, soil, vegetation, and water samples. The database also provides geographic information on geology, climate, ecoregion, and site contamination levels for over 3,000 field sites in North America.

  17. Selection of Prediction Methods for Thermophysical Properties for Process Modeling and Product Design of Biodiesel Manufacturing

    DEFF Research Database (Denmark)

    Su, Yung-Chieh; Liu, Y. A.; Díaz Tovar, Carlos Axel

    2011-01-01

    To optimize biodiesel manufacturing, many reported studies have built simulation models to quantify the relationship between operating conditions and process performance. For mass and energy balance simulations, it is essential to know the four fundamental thermophysical properties of the feed oil: liquid density (ρL), vapor pressure (Pvap), liquid heat capacity (CPL), and heat of vaporization (ΔHvap). Additionally, to characterize the fuel qualities, it is critical to develop quantitative correlations to predict three biodiesel properties, namely, viscosity, cetane number, and flash point. Also, to ensure the operability of biodiesel in cold weather, one needs to quantitatively predict three low-temperature flow properties: cloud point (CP), pour point (PP), and cold filter plugging point (CFPP). This article presents the results from a comprehensive evaluation of the methods for predicting…

  18. ARM Mentor Selection Process

    Energy Technology Data Exchange (ETDEWEB)

    Sisterson, D. L. [Argonne National Lab. (ANL), Argonne, IL (United States)

    2015-10-01

    The Atmospheric Radiation Measurement (ARM) Program was created in 1989 with funding from the U.S. Department of Energy (DOE) to develop several highly instrumented ground stations to study cloud formation processes and their influence on radiative transfer. In 2003, the ARM Program became a national scientific user facility, known as the ARM Climate Research Facility. This scientific infrastructure provides for fixed sites, mobile facilities, an aerial facility, and a data archive available for use by scientists worldwide through the ARM Climate Research Facility—a scientific user facility. The ARM Climate Research Facility currently operates more than 300 instrument systems that provide ground-based observations of the atmospheric column. To keep ARM at the forefront of climate observations, the ARM infrastructure depends heavily on instrument scientists and engineers, also known as lead mentors. Lead mentors must have an excellent understanding of in situ and remote-sensing instrumentation theory and operation and have comprehensive knowledge of critical scale-dependent atmospheric processes. They must also possess the technical and analytical skills to develop new data retrievals that provide innovative approaches for creating research-quality data sets. The ARM Climate Research Facility is seeking the best overall qualified candidate who can fulfill lead mentor requirements in a timely manner.

  19. Supplier Selection Process Using ELECTRE I Decision Model and an Application in the Retail Sector

    Directory of Open Access Journals (Sweden)

    Oğuzhan Yavuz

    2013-12-01

    The supplier selection problem is one of the main topics for today's businesses. The supplier selection problem within supply chain management activities is very important for businesses, particularly those operating in the retail sector. Thus, in this study, the supplier selection problem was addressed for the energy-drink suppliers of a food business operating in the retail sector. Cost, delivery, quality and flexibility variables were used to select suppliers, and the ELECTRE I method, one of the multicriteria decision methods, was used to rank suppliers according to these variables. Which suppliers are more important for the food company was determined by ranking the suppliers according to their computed net superior and net inferior values. The results obtained were presented in tables and certain steps…
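
    The mechanics of ELECTRE I, concordance and discordance matrices followed by thresholded outranking, fit in a short sketch. The supplier scores, criterion weights, and thresholds below are hypothetical.

```python
import numpy as np

# Hypothetical supplier scores on cost, delivery, quality, flexibility
# (all scaled so larger is better) and criterion weights summing to 1.
S = np.array([[0.8, 0.6, 0.9, 0.5],
              [0.6, 0.9, 0.7, 0.8],
              [0.9, 0.5, 0.6, 0.7]])
w = np.array([0.4, 0.2, 0.3, 0.1])

n = len(S)
C = np.zeros((n, n))                       # concordance matrix
D = np.zeros((n, n))                       # discordance matrix
span = S.max(axis=0) - S.min(axis=0)       # per-criterion score range
for a in range(n):
    for b in range(n):
        if a == b:
            continue
        C[a, b] = w[S[a] >= S[b]].sum()                        # support for a over b
        D[a, b] = np.max(np.maximum(S[b] - S[a], 0.0) / span)  # worst deficit of a

outranks = (C >= 0.6) & (D <= 0.4)   # thresholds are analyst choices
print(outranks)
```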

  20. Learning and Selection Processes

    Directory of Open Access Journals (Sweden)

    Marc Artiga

    2010-06-01

    In this paper I defend a teleological explanation of normativity, i.e., I argue that what an organism (or device) is supposed to do is determined by its etiological function. In particular, I present a teleological account of the normativity that arises in learning processes, and I defend it from some objections.

  1. Reducing uncertainty in the selection of bi-variate distributions of flood peaks and volumes using copulas and hydrological process-based model selection

    Science.gov (United States)

    Szolgay, Jan; Gaál, Ladislav; Bacigál, Tomáš; Kohnová, Silvia; Blöschl, Günter

    2016-04-01

    Bi-variate distributions of flood peaks and flood event volumes are needed for a range of practical purposes, including retention basin design and identifying the extent and duration of flooding in flood hazard zones. However, the selection of the types of bi-variate distributions and the estimation of their parameters from observed peak-volume pairs are associated with far larger uncertainties than for uni-variate distributions, since observed flood records of the required length are rarely available. This poses a serious problem for reliable flood risk estimation in bi-variate design cases. The aim of this contribution was to shed light on the possibility of reducing uncertainties in the estimation of the dependence models/parameters from a regional perspective. The peak-volume relationships were modeled in terms of copulas. Flood events were classified according to their origin. In order to reduce the uncertainty in estimating flood risk, pooling and analyzing catchments of similar behavior according to flood process types was attempted. Most of the work reported in the literature so far has not directed the multivariate analysis toward discriminating between certain types of models regionally according to specific runoff generation processes. Specifically, the contribution addresses these problems: - Are the peak-volume relationships of different flood types for a given catchment similar? - Are the peak-volume dependence structures between catchments in a larger region similar for given flood types? - Are some copula types more suitable for given flood process types, and does this have consequences for reliable risk estimation? The target region is located in the northern parts of Austria and consists of 72 small and mid-sized catchments. Instead of the traditional approach that deals with annual maximum floods, the current analysis includes all independent flood events in the region. 24 872 flood events from the period 1976-2007 were identified and classified as synoptic, flash…
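
    A rank-based step common to such copula analyses is inverting Kendall's tau to obtain copula parameters, which depends only on the dependence structure and not on the marginals. The sketch below uses synthetic peak-volume pairs and the standard inversion formulas for Gaussian and Clayton copulas; the paper's regional, flood-type-stratified analysis is of course much broader.

```python
import numpy as np
from scipy.stats import kendalltau

# Synthetic stand-in for one catchment's flood peaks and volumes.
rng = np.random.default_rng(7)
z = rng.multivariate_normal([0, 0], [[1, 0.7], [0.7, 1]], size=200)
peaks, volumes = np.exp(z[:, 0]), np.exp(0.8 * z[:, 1])

tau, _ = kendalltau(peaks, volumes)
rho = np.sin(np.pi * tau / 2)        # Gaussian (elliptical) copula inversion
theta = 2 * tau / (1 - tau)          # Clayton copula inversion
print(f"tau={tau:.2f}, Gaussian rho={rho:.2f}, Clayton theta={theta:.2f}")
```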

  2. Model Selection for Geostatistical Models

    Energy Technology Data Exchange (ETDEWEB)

    Hoeting, Jennifer A.; Davis, Richard A.; Merton, Andrew A.; Thompson, Sandra E.

    2006-02-01

    We consider the problem of model selection for geospatial data. Spatial correlation is typically ignored in the selection of explanatory variables and this can influence model selection results. For example, the inclusion or exclusion of particular explanatory variables may not be apparent when spatial correlation is ignored. To address this problem, we consider the Akaike Information Criterion (AIC) as applied to a geostatistical model. We offer a heuristic derivation of the AIC in this context and provide simulation results that show that using AIC for a geostatistical model is superior to the often used approach of ignoring spatial correlation in the selection of explanatory variables. These ideas are further demonstrated via a model for lizard abundance. We also employ the principle of minimum description length (MDL) to variable selection for the geostatistical model. The effect of sampling design on the selection of explanatory covariates is also explored.
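
    The comparison at the heart of this idea can be sketched by computing AIC for a linear model twice: once assuming independent errors and once with an exponentially decaying spatial correlation in the Gaussian likelihood. The covariance form, the fixed range parameter, and the demo data are simplifying assumptions (in practice the covariance parameters are estimated, and the paper's treatment is more careful).

```python
import numpy as np

def gls_aic(y, X, coords, range_par=None):
    """AIC of a linear model with optional exponentially decaying spatial
    correlation; range_par is the correlation range (None = iid errors)."""
    n = len(y)
    if range_par is None:
        V, extra = np.eye(n), 1                 # iid: sigma^2 only
    else:
        d = np.linalg.norm(coords[:, None] - coords[None, :], axis=-1)
        V, extra = np.exp(-d / range_par), 2    # sigma^2 and range
    Vi = np.linalg.inv(V)
    beta = np.linalg.solve(X.T @ Vi @ X, X.T @ Vi @ y)
    r = y - X @ beta
    sigma2 = (r @ Vi @ r) / n                   # profiled variance
    _, logdet = np.linalg.slogdet(V)
    loglik = -0.5 * (n * np.log(2 * np.pi * sigma2) + logdet + n)
    k = X.shape[1] + extra                      # regression + covariance params
    return 2 * k - 2 * loglik

# Hypothetical demo: 50 random sites, intercept plus one covariate.
rng = np.random.default_rng(1)
coords = rng.uniform(0, 10, size=(50, 2))
X = np.column_stack([np.ones(50), rng.normal(size=50)])
y = X @ np.array([1.0, 2.0]) + rng.normal(size=50)
print(gls_aic(y, X, coords), gls_aic(y, X, coords, range_par=2.0))
```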

  3. Process-based models of feeding and prey selection in larval fish

    DEFF Research Database (Denmark)

    Fiksen, O.; MacKenzie, Brian

    2002-01-01

    …rates and prey selection in larval cod. Observed pursuit times of larvae are long, and approach velocity is slow enough to avoid an escape response from prey, but too short to avoid loss of prey at high turbulence levels. The pause-travel search mode is predicted to promote ingestion of larger prey than… µg dry wt l(-1). The spatio-temporal fluctuation of turbulence (tidal cycle) and light (sun height) over the bank generates complex structure in the patterns of food intake of larval fish, with different patterns emerging for small and large larvae.

  4. Mathematical Modeling, Simulation and Optimization for Selected Robotic Processes related to Manufacturing of Unique Concrete Elements

    DEFF Research Database (Denmark)

    Cortsen, Jens

    This thesis presents our work in the Danish project Unique Concrete Structures (Unikabeton) and the EU project TailorMade Concrete Structures (TailorCrete) on automating selected processes for the construction of unique concrete buildings. We focus primarily on robot milling of complex… the production time. The two minor processes, hot-wire cutting of EPS blocks before milling and the spraying of release agent onto the finished formwork blocks, are also presented after the two main processes. Finally, we present a number of real-life concrete structures based on our work in this thesis…

  5. Social Influence Interpretation of Interpersonal Processes and Team Performance Over Time Using Bayesian Model Selection

    NARCIS (Netherlands)

    Johnson, Alan R.; van de Schoot, Rens; Delmar, Frédéric; Crano, William D.

    2015-01-01

    The team behavior literature is ambiguous about the relations between members' interpersonal processes (task debate and task conflict) and team performance. From a social influence perspective, we show why members' interpersonal processes determine team performance over time in small groups. Together,…

  6. The model selection in the process of teambuilding for the management of the organization

    Directory of Open Access Journals (Sweden)

    Sergey Petrov

    2010-10-01

    Full Text Available Improving competitiveness of organizations necessary for their success in a market economy is no longer possible only due to material resources. This implies need for qualitatively new approach to human capital. The author reviews approaches to team building and suggests team management model based on situations-cases in which the organized one way or another team reaches goal.

  7. A Generalized Process Model of Human Action Selection and Error and its Application to Error Prediction

    Science.gov (United States)

    2014-07-01

    …(Macmillan & Creelman, 2005). This is a quite high degree of discriminability, and it means that when the decision model predicts a probability of…

  8. Mathematical Modeling, Simulation and Optimization for Selected Robotic Processes related to Manufacturing of Unique Concrete Elements

    DEFF Research Database (Denmark)

    Cortsen, Jens

    …two publications, one of which describes the mathematical model for robot compensation when an external force acts on the robot. The other publication presents a complete off-line framework for robot compensation in high-speed milling. In the thesis we also present a complete solution for the manufacture of… doubly curved reinforcement meshes with two cooperating robots, where the sub-processes are the bending, transport and tying of reinforcement bars. The robot installation is based on an off-line simulation program with dynamic simulation support for bar deflection and simultaneous robot control in order to reduce…

  9. Sexual selection: Another Darwinian process.

    Science.gov (United States)

    Gayon, Jean

    2010-02-01

    …the Darwin-Wallace controversy was that most Darwinian biologists avoided the subject of sexual selection until at least the 1950s, Ronald Fisher being a major exception. This controversy still deserves attention from modern evolutionary biologists, because the modern approach inherits from both Darwin and Wallace. The modern approach tends to present sexual selection as a special aspect of the theory of natural selection, although it also recognizes the big difficulties resulting from the inevitable interaction between these two natural processes of selection. And contra Wallace, it considers mate choice as a major process that deserves a proper evolutionary treatment. The paper's conclusion explains why sexual selection can be taken as a test case for a proper assessment of "Darwinism" as a scientific tradition. Darwin's and Wallace's attitudes towards sexual selection reveal two different interpretations of the principle of natural selection: Wallace had an environmentalist conception of natural selection, whereas Darwin was primarily sensitive to the element of competition involved in the intimate mechanism of any natural process of selection. Sexual selection, which can lack adaptive significance, reveals this exemplarily.

  10. Library Materials: Selection and Processing.

    Science.gov (United States)

    Freeman, Michael; And Others

    This script of a slide-tape presentation, which describes the selection and processing of materials for a university library, includes commentary with indicators for specific slide placement. Distinction is made between books and serial publications and the materials are followed from the ordering decision through processing. The role of the…

  11. Bayesian Model Selection and Statistical Modeling

    CERN Document Server

    Ando, Tomohiro

    2010-01-01

    Bayesian model selection is a fundamental part of the Bayesian statistical modeling process. The quality of these solutions usually depends on the goodness of the constructed Bayesian model. Realizing how crucial this issue is, many researchers and practitioners have been extensively investigating the Bayesian model selection problem. This book provides comprehensive explanations of the concepts and derivations of the Bayesian approach for model selection and related criteria, including the Bayes factor, the Bayesian information criterion (BIC), the generalized BIC, and the pseudo marginal lik…

  12. Recruiter Selection Model

    Science.gov (United States)

    2006-05-01

    …interests include feature selection, statistical learning, multivariate statistics, market research, and classification. He may be contacted at… the current youth market, and reducing barriers to Army enlistment. Part of the Army Recruiting Initiatives was the creation of a recruiter selection… Selection Model developed by the Operations Research Center of Excellence, Systems Engineering Department, United States Military Academy, West Point…

  13. Development of Physics-Based Numerical Models for Uncertainty Quantification of Selective Laser Melting Processes - 2015 Annual Progress Report

    Energy Technology Data Exchange (ETDEWEB)

    Anderson, A. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Delplanque, Jean-Pierre [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2015-10-08

    The primary goal of the proposed research is to characterize the influence of process parameter variability inherent to Selective Laser Melting (SLM) on components manufactured with the SLM technique for space flight systems and their performance.

  14. Experimental and modeling study of the effect of CH(4) and pulverized coal on selective non-catalytic reduction process.

    Science.gov (United States)

    Zhang, Yanwen; Cai, Ningsheng; Yang, Jingbiao; Xu, Bo

    2008-10-01

    The reduction of nitric oxide using ammonia combined with methane and pulverized coal additives has been studied in a drop tube furnace reactor. Simulated flue gas with 1000 ppm NO(x) and 3.4% excess oxygen was generated from cylinder gases. Experiments were performed in the temperature range of 700-1200 degrees C to investigate the effects of the additives on DeNO(x) performance. Subsequently, a kinetic mechanism was modified and validated based on the experimental results, and computational kinetic modeling with CHEMKIN was conducted to analyze the secondary pollutants. For both methane and pulverized coal additives, the temperature window is shifted towards lower temperatures. The appropriate reaction temperature is shifted to about 900 and 800 degrees C, respectively, with 1000 ppm methane and 0.051 g min(-1) pulverized lignite coal. The addition of methane and pulverized coal widens the temperature window towards lower temperatures, suggesting a low-temperature application of the process. Furthermore, the selective non-catalytic reduction (SNCR) reaction rate is accelerated evidently with additives, and the residence time to complete the reaction is shortened distinctly. A NO(x) reduction efficiency of 80% is achieved in about 0.3 s without additive at 1000 degrees C. However, it is achieved in only about 0.2 s with 100 ppm methane as additive, and only 0.07 and 0.05 s are needed for 500 and 1000 ppm methane, respectively. The modified kinetic model agrees well with the experimental results and reveals additional information about the process. Investigation of the byproducts, where NO(2) and N(2)O were analyzed by modeling and the others were investigated experimentally, indicates that emissions would not increase with methane and pulverized coal addition in the SNCR process, and that the efficacious temperature range of the SNCR reaction is widened by approximately 100 degrees C.

  15. Selective attention and the three-process memory model for the interpretation of verbal free recall in amyotrophic lateral sclerosis.

    Science.gov (United States)

    Christidi, Foteini; Zalonis, Ioannis; Smyrnis, Nikolaos; Evdokimidis, Ioannis

    2012-09-01

    The present study investigates selective attention and verbal free recall in amyotrophic lateral sclerosis (ALS) and examines the contribution of selective attention, encoding, consolidation, and retrieval memory processes to patients' verbal free recall. We examined 22 non-demented patients with sporadic ALS and 22 demographically related controls using the Stroop Neuropsychological Screening Test (SNST; selective attention) and the Rey Auditory Verbal Learning Test (RAVLT; immediate and delayed verbal free recall). The item-specific deficit approach (ISDA) was applied to the RAVLT to evaluate encoding, consolidation, and retrieval difficulties. ALS patients performed worse than controls on the SNST (p [...]) and on free recall (p [...]), with encoding (p = .016), consolidation (p [...]), and retrieval (p [...]) contributing to patients' free recall. Concluding, selective attention and the memory processes of encoding, consolidation, and retrieval should be considered while interpreting patients' impaired free recall. (JINS, 2012, 18, 1-10).

  16. Fusion strategies for selecting multiple tuning parameters for multivariate calibration and other penalty based processes: A model updating application for pharmaceutical analysis

    Energy Technology Data Exchange (ETDEWEB)

    Tencate, Alister J. [Department of Chemistry, Idaho State University, Pocatello, ID 83209 (United States); Kalivas, John H., E-mail: kalijohn@isu.edu [Department of Chemistry, Idaho State University, Pocatello, ID 83209 (United States); White, Alexander J. [Department of Physics and Optical Engineering, Rose-Hulman Institute of Technology, Terre Haute, IN 47803 (United States)

    2016-05-19

    New multivariate calibration methods and other processes are being developed that require selection of multiple tuning parameter (penalty) values to form the final model. With one or more tuning parameters, using only one measure of model quality to select final tuning parameter values is not sufficient. Optimization of several model quality measures is challenging. Thus, three fusion ranking methods are investigated for simultaneous assessment of multiple measures of model quality for selecting tuning parameter values. One is a supervised learning fusion rule named sum of ranking differences (SRD). The other two are non-supervised learning processes based on the sum and median operations. The effect of the number of models evaluated on the three fusion rules is also assessed using three procedures. One procedure uses all models from all possible combinations of the tuning parameters. To reduce the number of models evaluated, an iterative process (only applicable to SRD) is applied, and thresholding a model quality measure before applying the fusion rules is also used. A near infrared pharmaceutical data set requiring model updating is used to evaluate the three fusion rules. In this case, calibration of the primary conditions is for the active pharmaceutical ingredient (API) of tablets produced in a laboratory. The secondary conditions for calibration updating are for tablets produced in the full batch setting. Two model updating processes requiring selection of two unique tuning parameter values are studied. One is based on Tikhonov regularization (TR) and the other is a variation of partial least squares (PLS). The three fusion methods are shown to provide equivalent and acceptable results allowing automatic selection of the tuning parameter values. Best tuning parameter values are selected when the model quality measures used with the fusion rules are for the small secondary sample set used to form the updated models. In this model updating situation, evaluation of...
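
    The two non-supervised fusion rules lend themselves to a compact sketch. The grid of candidate tuning-parameter pairs and the quality scores below are hypothetical, and SRD, which additionally requires a reference ranking, is omitted.

```python
import numpy as np

# Sum- and median-rank fusion over several model quality measures.
rng = np.random.default_rng(1)
n_models, n_measures = 20, 4                  # e.g. a (lambda1, lambda2) grid
quality = rng.random((n_models, n_measures))  # smaller = better, per measure

ranks = quality.argsort(axis=0).argsort(axis=0)  # rank models per measure
print("best by sum rule:   ", ranks.sum(axis=1).argmin())
print("best by median rule:", np.median(ranks, axis=1).argmin())
```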

  17. Bayesian Evidence and Model Selection

    CERN Document Server

    Knuth, Kevin H; Malakar, Nabin K; Mubeen, Asim M; Placek, Ben

    2014-01-01

    In this paper we review the concept of the Bayesian evidence and its application to model selection. The theory is presented along with a discussion of analytic, approximate and numerical techniques. Application to several practical examples within the context of signal processing are discussed.

  18. Selective Maintenance Model Considering Time Uncertainty

    OpenAIRE

    Le Chen; Zhengping Shu; Yuan Li; Xuezhi Lv

    2012-01-01

    This study proposes a selective maintenance model for weapon system during mission interval. First, it gives relevant definitions and operational process of material support system. Then, it introduces current research on selective maintenance modeling. Finally, it establishes numerical model for selecting corrective and preventive maintenance tasks, considering time uncertainty brought by unpredictability of maintenance procedure, indetermination of downtime for spares and difference of skil...

  20. Neural inhibition enables selection during language processing.

    Science.gov (United States)

    Snyder, Hannah R; Hutchison, Natalie; Nyhus, Erika; Curran, Tim; Banich, Marie T; O'Reilly, Randall C; Munakata, Yuko

    2010-09-21

    Whether grocery shopping or choosing words to express a thought, selecting between options can be challenging, especially for people with anxiety. We investigate the neural mechanisms supporting selection during language processing and its breakdown in anxiety. Our neural network simulations demonstrate a critical role for competitive, inhibitory dynamics supported by GABAergic interneurons. As predicted by our model, we find that anxiety (associated with reduced neural inhibition) impairs selection among options and associated prefrontal cortical activity, even in a simple, nonaffective verb-generation task, and the GABA agonist midazolam (which increases neural inhibition) improves selection, whereas retrieval from semantic memory is unaffected when selection demands are low. Neural inhibition is key to choosing our words.
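
    A toy winner-take-all network conveys the claimed mechanism: lowering the inhibition gain (the anxiety analogue) turns a clean selection into co-activation. This caricature with invented gains is not the authors' simulation.

```python
import numpy as np

# Units excite themselves and inhibit each other; 'gaba' scales inhibition.
def select(inputs, gaba, steps=200, dt=0.1):
    a = np.zeros_like(inputs)
    for _ in range(steps):
        inhibition = gaba * (a.sum() - a)   # pooled lateral inhibition
        a += dt * (-a + np.maximum(inputs + 0.5 * a - inhibition, 0.0))
    return a

inputs = np.array([1.00, 0.95, 0.90])       # three competing responses
print(select(inputs, gaba=1.0).round(2))    # strong inhibition: one winner
print(select(inputs, gaba=0.2).round(2))    # weak inhibition: co-activation
```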

  1. Business Model Process Configurations

    DEFF Research Database (Denmark)

    Taran, Yariv; Nielsen, Christian; Thomsen, Peter

    2015-01-01

    Purpose – The paper aims: 1) to develop systematically a structural list of various business model process configurations and to group (deductively) these selected configurations in a structured typological categorization list; 2) to facilitate companies in the process of BM innovation, by developing (inductively) an ontological classification framework, in view of the BM process configurations typology developed. Design/methodology/approach – Given the inconsistencies found in the business model studies (e.g. definitions, configurations, classifications) we adopted the analytical induction method of data analysis. Findings – A comprehensive literature review and analysis resulted in a list of business model process configurations systematically organized under five classification groups, namely: revenue model; value proposition; value configuration; target customers; and strategic...

  2. Process for selecting engineering tools : applied to selecting a SysML tool.

    Energy Technology Data Exchange (ETDEWEB)

    De Spain, Mark J.; Post, Debra S. (Sandia National Laboratories, Livermore, CA); Taylor, Jeffrey L.; De Jong, Kent

    2011-02-01

    Process for Selecting Engineering Tools outlines the process and tools used to select a SysML (Systems Modeling Language) tool. The process is general in nature and users could use the process to select most engineering tools and software applications.

  4. Physics-based simulation modeling and optimization of microstructural changes induced by machining and selective laser melting processes in titanium and nickel based alloys

    Science.gov (United States)

    Arisoy, Yigit Muzaffer

    Manufacturing processes may significantly affect the quality of resultant surfaces and the structural integrity of metal end products. Controlling manufacturing-process-induced changes to the product's surface integrity may improve the fatigue life and overall reliability of the end product. The goal of this study is to model the phenomena that result in microstructural alterations and to improve the surface integrity of the manufactured parts by utilizing physics-based process simulations and other computational methods. Two different manufacturing processes, one conventional and one advanced, are studied: machining of titanium and nickel-based alloys, and selective laser melting of nickel-based powder alloys. 3D Finite Element (FE) process simulations are developed, and experimental data that validate these process simulation models are generated for comparison against predictions. Computational process modeling and optimization have been performed for machining-induced microstructure, including: i) predicting recrystallization and grain size using FE simulations and the Johnson-Mehl-Avrami-Kolmogorov (JMAK) model, ii) predicting microhardness using non-linear regression models and the Random Forests method, and iii) multi-objective machining optimization for minimizing microstructural changes. Experimental analysis and computational process modeling of selective laser melting have also been conducted, including: i) microstructural analysis of grain sizes and growth directions using SEM imaging and machine learning algorithms, ii) analysis of thermal imaging for spattering, heating/cooling rates and meltpool size, iii) predicting thermal field, meltpool size, and growth directions via thermal gradients using 3D FE simulations, and iv) predicting localized solidification using the Phase Field method. These computational process models and predictive models, once utilized by industry to optimize process parameters, have the ultimate potential to improve performance of...

  5. Selected Topics on Systems Modeling and Natural Language Processing: Editorial Introduction to the Issue 7 of CSIMQ

    Directory of Open Access Journals (Sweden)

    Witold Andrzejewski

    2016-07-01

    The seventh issue of Complex Systems Informatics and Modeling Quarterly presents five papers devoted to two distinct research topics: systems modeling and natural language processing (NLP). Both of these subjects are very important in computer science. Through modeling we can simplify the studied problem by concentrating on only one aspect at a time. Moreover, a properly constructed model allows the modeler to work on higher levels of abstraction without having to concentrate on details. Since the size and complexity of information systems grow rapidly, creating good models of such systems is crucial. The analysis of natural language is slowly becoming a widely used tool in commerce and day-to-day life. Opinion mining allows recommender systems to provide accurate recommendations based on user-generated reviews. Speech recognition and NLP are the basis for such widely used personal assistants as Apple's Siri, Microsoft's Cortana, and Google Now. While a lot of work has already been done on natural language processing, the research usually concerns widely used languages, such as English. Consequently, natural language processing in languages other than English is a very relevant subject and is addressed in this issue.

  6. Investigation for improving Global Positioning System (GPS) orbits using a discrete sequential estimator and stochastic models of selected physical processes

    Science.gov (United States)

    Goad, Clyde C.; Chadwell, C. David

    1993-01-01

    GEODYNII is a conventional batch least-squares differential corrector computer program with deterministic models of the physical environment. Conventional algorithms were used to process differenced phase and pseudorange data to determine eight-day Global Positioning System (GPS) orbits with several-meter accuracy. However, random physical processes drive the errors whose magnitudes prevent improving the GPS orbit accuracy. To improve the orbit accuracy, these random processes should be modeled stochastically. The conventional batch least-squares algorithm cannot accommodate stochastic models; only a stochastic estimation algorithm, such as a sequential filter/smoother, is suitable. Also, GEODYNII cannot currently model the correlation among data values. Differenced pseudorange, and especially differenced phase, are precise data types that can be used to improve the GPS orbit precision. To overcome these limitations and improve the accuracy of GPS orbits computed using GEODYNII, we proposed to develop a sequential stochastic filter/smoother processor by using GEODYNII as a type of trajectory preprocessor. Our proposed processor is now complete. It contains a correlated double-difference range processing capability, first-order Gauss-Markov models for the solar radiation pressure scale coefficient and y-bias acceleration, and a random walk model for the tropospheric refraction correction. The development approach was to interface the standard GEODYNII output files (measurement partials and variationals) with software modules containing the stochastic estimator, the stochastic models, and a double-differenced phase range processing routine. Thus, no modifications to the original GEODYNII software were required. A schematic of the development is shown. The observational data are edited in the preprocessor and the data are passed to GEODYNII as one of its standard data types. A reference orbit is determined using GEODYNII as a batch least-squares processor and the...
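
    The stochastic models named here have standard discrete-time recursions; the sketch below simulates a first-order Gauss-Markov process and a random walk with illustrative parameters, not the values used in the GEODYNII-based processor.

```python
import numpy as np

# First-order Gauss-Markov (FOGM) and random walk recursions.
rng = np.random.default_rng(2)
dt, tau, sigma = 30.0, 1800.0, 0.01      # step [s], corr. time [s], std
phi = np.exp(-dt / tau)                  # FOGM transition factor
q = sigma**2 * (1.0 - phi**2)            # keeps stationary variance sigma^2

fogm, walk = [0.0], [0.0]
for _ in range(1000):
    fogm.append(phi * fogm[-1] + rng.normal(scale=np.sqrt(q)))
    walk.append(walk[-1] + rng.normal(scale=0.001))   # random walk step

print(np.std(fogm), walk[-1])   # FOGM std stays near sigma; the walk drifts
```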

  7. 15 CFR 2301.18 - Selection process.

    Science.gov (United States)

    2010-01-01

    Title 15 — Commerce and Foreign Trade; Regulations Relating to Telecommunications and Information; NATIONAL... PROGRAM; Evaluation and Selection Process. § 2301.18 Selection process. (a) The PTFP Director will...

  8. Attentional spreading to task-irrelevant object features: experimental support and a 3-step model of attention for object-based selection and feature-based processing modulation.

    Science.gov (United States)

    Wegener, Detlef; Galashan, Fingal Orlando; Aurich, Maike Kathrin; Kreiter, Andreas Kurt

    2014-01-01

    Directing attention to a specific feature of an object has been linked to different forms of attentional modulation. Object-based attention theory is founded on the finding that even task-irrelevant features of the selected object are subject to attentional modulation, while feature-based attention theory proposes a global processing benefit for the selected feature even at other objects. Most studies investigated either one or the other form of attention, leaving open the possibility that both object- and feature-specific attentional effects occur at the same time and may just represent two sides of a single attention system. We here investigate this issue by testing attentional spreading within and across objects, using reaction time (RT) measurements to changes of attended and unattended features on both attended and unattended objects. We asked subjects to report color and speed changes occurring on one of two overlapping random dot patterns (RDPs), presented at the center of gaze. The key property of the stimulation was that only one of the features (e.g., motion direction) was unique for each object, whereas the other feature (e.g., color) was shared by both. The results of two experiments show that co-selection of unattended features occurs even when those features have no means for selecting the object. At the same time, they demonstrate that this processing benefit is not restricted to the selected object but spreads to the task-irrelevant one. We conceptualize these findings by a 3-step model of attention that assumes a task-dependent top-down gain, object-specific feature selection based on task and binding characteristics, and a global feature-specific processing enhancement. The model allows for the unification of a vast amount of experimental results into a single model, and makes various experimentally testable predictions for the interaction of object- and feature-specific processes.

  10. Selected soil thermal conductivity models

    Directory of Open Access Journals (Sweden)

    Rerak Monika

    2017-01-01

    The paper presents models of soil thermal conductivity collected from the literature. Thermal conductivity is a very important parameter, which allows one to assess how much heat can be transferred from underground power cables through the soil. The models are presented in table form, so when the properties of the soil are given, it is possible to select the most accurate method of calculating its thermal conductivity. Precise determination of this parameter allows the cable line to be designed in such a way that cable overheating does not occur.
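
    As an example of the kind of entry such a table contains, the sketch below implements one widely cited model, Johansen's Kersten-number interpolation for unfrozen fine soil (usually taken as valid for saturation Sr > 0.1). Coefficients should be checked against the original source before any design use.

```python
import numpy as np

def johansen(lam_dry, lam_sat, sr):
    """Interpolate dry/saturated conductivity with the Kersten number Ke."""
    ke = np.log10(sr) + 1.0 if sr > 0.1 else 0.0   # unfrozen fine-soil form
    return lam_dry + ke * (lam_sat - lam_dry)

# Illustrative values in W/(m K) at 50% saturation.
print(johansen(lam_dry=0.25, lam_sat=1.8, sr=0.5))
```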

  11. A comparison of the psychological refractory period and prioritized processing paradigms: Can the response-selection bottleneck model explain them both?

    Science.gov (United States)

    Miller, Jeff; Durst, Moritz

    2015-10-01

    Four experiments examined whether well-established phenomena from the psychological refractory period (PRP) paradigm are also observed in the prioritized processing paradigm, as would be expected from a common description of the two paradigms with the response selection bottleneck (RSB) model. Consistent with a generalization of the RSB model to the prioritized processing paradigm, Experiments 1 and 2 showed that this paradigm yields effects of stimulus onset asynchrony (SOA) and stimulus discriminability analogous to those observed in the PRP paradigm. In Experiments 3 and 4, however, overall RTs and effect sizes differed between the PRP and prioritized processing paradigms in ways that are difficult to explain within the RSB model. Understanding the differences between these two paradigms offers considerable promise as a way to extend the RSB model beyond the domain of the PRP paradigm and to generalize our understanding of multitasking interference.
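
    The RSB model's central prediction fits in a few lines: Task 2 response selection must wait for Task 1's, so Task 2 reaction time falls with slope -1 as SOA grows until the slack is absorbed. The stage durations below are arbitrary illustrative values, not fits to these experiments.

```python
# Textbook RSB prediction for Task 2 reaction time in the PRP paradigm.
def rt2(soa, a1=100, b1=150, a2=100, b2=150, c2=100):
    wait = max(0, a1 + b1 - (soa + a2))  # slack before the bottleneck frees
    return a2 + wait + b2 + c2           # perception + wait + selection + motor

for soa in (0, 50, 150, 300, 600):
    print(soa, rt2(soa))   # RT2 declines with slope -1, then flattens
```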

  12. Innovation During the Supplier Selection Process

    DEFF Research Database (Denmark)

    Pilkington, Alan; Pedraza, Isabel

    2014-01-01

    Established ideas on supplier selection have not moved much from the original premise of how to choose between bidders. Whilst we have added many different tools and refinements to choose between alternative suppliers, its nature has not evolved. We move beyond the original selection process approach... observed through an ethnographic embedded-researcher study: the refined selection process has two selection stages, one for first supply, covering tool/process development, and another, later, for resupply of mature parts. We report the details of the process, those involved, the criteria employed... and identify benefits and weaknesses of this enhanced selection process.

  13. Complexity regularized hydrological model selection

    NARCIS (Netherlands)

    Pande, S.; Arkesteijn, L.; Bastidas, L.A.

    2014-01-01

    This paper uses a recently proposed measure of hydrological model complexity in a model selection exercise. It demonstrates that a robust hydrological model is selected by penalizing model complexity while maximizing a model performance measure. This especially holds when limited data is available.
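
    The idea reduces to a one-line decision rule; the performance scores, complexity values, and penalty weight below are invented for illustration and are not the paper's complexity measure.

```python
# Choose the model maximizing performance minus a complexity penalty.
models = {"simple": (0.70, 1.0), "medium": (0.78, 2.5), "complex": (0.80, 6.0)}
lam = 0.02   # penalty weight on complexity
best = max(models, key=lambda m: models[m][0] - lam * models[m][1])
print(best)  # 'medium': marginal gains stop paying for extra complexity
```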

  15. Individual Influence on Model Selection

    Science.gov (United States)

    Sterba, Sonya K.; Pek, Jolynn

    2012-01-01

    Researchers in psychology are increasingly using model selection strategies to decide among competing models, rather than evaluating the fit of a given model in isolation. However, such interest in model selection outpaces an awareness that one or a few cases can have disproportionate impact on the model ranking. Though case influence on the fit…

  16. Information Selection in Intelligence Processing

    Science.gov (United States)

    2011-12-01

    ...problem of overload.” As another example, Whaley (1974) argues that one of the causes for the Pearl Harbor and Barbarossa strategic surprises is... which becomes more and more important as the Internet evolves. The IR problem and the information selection problem share some similar... all the algorithms tend more towards exploration: the temperature parameter in Softmax is higher (0.12 instead of 0.08), the delta for the VDBE...
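
    The Softmax rule quoted in these fragments is standard; a minimal sketch with the quoted temperatures (0.12 versus 0.08) shows how a higher temperature flattens the selection distribution. The action values are hypothetical.

```python
import numpy as np

# Boltzmann (softmax) action selection with a temperature parameter.
def softmax_select(values, temperature, rng):
    z = np.asarray(values) / temperature
    p = np.exp(z - z.max())
    p /= p.sum()
    return rng.choice(len(values), p=p)

rng = np.random.default_rng(3)
values = [0.5, 0.45, 0.2]
for t in (0.08, 0.12):
    picks = [softmax_select(values, t, rng) for _ in range(1000)]
    print(t, np.bincount(picks, minlength=3) / 1000)  # higher t -> flatter
```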

  17. Selecting a plutonium vitrification process

    Energy Technology Data Exchange (ETDEWEB)

    Jouan, A. [Centre d'Etudes de la Vallee du Rhone, Bagnols sur Ceze (France)

    1996-05-01

    Vitrification of plutonium is one means of mitigating its potential danger. This option is technically feasible, even if it is not the solution advocated in France. Two situations are possible, depending on whether or not the glass matrix also contains fission products; concentrations of up to 15% should be achievable for plutonium alone, whereas the upper limit is 3% in the presence of fission products. The French continuous vitrification process appears to be particularly suitable for plutonium vitrification: its capacity is compatible with the required throughput, and the compact dimensions of the process equipment prevent a criticality hazard. Preprocessing of plutonium metal, to convert it to PuO{sub 2} or to a nitric acid solution, may prove advantageous or even necessary depending on whether a dry or wet process is adopted. The process may involve a single step (vitrification of Pu or PuO{sub 2} mixed with glass frit) or may include a prior calcination step - notably if the plutonium is to be incorporated into a fission product glass. It is important to weigh the advantages and drawbacks of all the possible options in terms of feasibility, safety and cost-effectiveness.

  18. ARM Lead Mentor Selection Process

    Energy Technology Data Exchange (ETDEWEB)

    Sisterson, DL

    2013-03-13

    The ARM Climate Research Facility currently operates more than 300 instrument systems that provide ground-based observations of the atmospheric column. To keep ARM at the forefront of climate observations, the ARM infrastructure depends heavily on instrument scientists and engineers, also known as Instrument Mentors. Instrument Mentors must have an excellent understanding of in situ and remote-sensing instrumentation theory and operation and have comprehensive knowledge of critical scale-dependent atmospheric processes. They also possess the technical and analytical skills to develop new data retrievals that provide innovative approaches for creating research-quality data sets.

  19. Modeling of biopharmaceutical processes. Part 2: Process chromatography unit operation

    DEFF Research Database (Denmark)

    Kaltenbrunner, Oliver; McCue, Justin; Engel, Philip;

    2008-01-01

    Process modeling can be a useful tool to aid in process development, process optimization, and process scale-up. When modeling a chromatography process, one must first select the appropriate models that describe the mass transfer and adsorption that occurs within the porous adsorbent...

  20. UML in business process modeling

    Directory of Open Access Journals (Sweden)

    Bartosz Marcinkowski

    2013-03-01

    Selection and proper application of business process modeling methods and techniques have a significant impact on organizational improvement capabilities as well as on proper understanding of the functionality of information systems that shall support the activity of the organization. A number of business process modeling notations were popularized in practice in recent decades. The most significant of these notations include the Business Process Modeling Notation (OMG BPMN) and several Unified Modeling Language (OMG UML) extensions. In this paper, we assess whether one of the most flexible and strictly standardized contemporary business process modeling notations, i.e. the Rational UML Profile for Business Modeling, enables business analysts to prepare business models that are all-embracing and understandable by all the stakeholders. After the introduction, the methodology of research is discussed. Section 2 presents selected case study results. The paper is concluded with a summary.

  1. 44 CFR 150.7 - Selection process.

    Science.gov (United States)

    2010-10-01

    Title 44 — Emergency Management and Assistance; FEDERAL EMERGENCY MANAGEMENT AGENCY, DEPARTMENT OF... § 150.7 Selection process. (a) President's Award. Nominations for the President's Award shall be reviewed,...

  2. 7 CFR 3570.68 - Selection process.

    Science.gov (United States)

    2010-01-01

    Title 7 — Agriculture; Regulations of the Department of Agriculture (Continued); RURAL HOUSING SERVICE, DEPARTMENT OF AGRICULTURE; COMMUNITY PROGRAMS; Community Facilities Grant Program. § 3570.68 Selection process. Each...

  3. Selected System Models

    Science.gov (United States)

    Schmidt-Eisenlohr, F.; Puñal, O.; Klagges, K.; Kirsche, M.

    Apart from the general issues of modeling the channel, the PHY, and the MAC of wireless networks, there are specific modeling assumptions that are considered for different systems. In this chapter we consider three specific wireless standards and highlight modeling options for them. These are IEEE 802.11 (as an example of wireless local area networks), IEEE 802.16 (as an example of wireless metropolitan networks) and IEEE 802.15 (as an example of body area networks). Each section on these three systems also discusses, at the end, a set of model implementations that are available today.

  4. Bubble point pressures of the selected model system for CatLiq® bio-oil process

    DEFF Research Database (Denmark)

    Toor, Saqib Sohail; Rosendahl, Lasse; Baig, Muhammad Noman;

    2010-01-01

    The CatLiq® process is a second generation catalytic liquefaction process for the production of bio-oil from WDGS (Wet Distillers Grains with Solubles) at subcritical conditions (280-350 °C and 225-250 bar) in the presence of a homogeneous alkaline and a heterogeneous Zirconia catalyst. In this work...

  6. Roughness parameter selection for novel manufacturing processes.

    Science.gov (United States)

    Ham, M; Powers, B M

    2014-01-01

    This work proposes a method of roughness parameter (RP) selection for novel manufacturing processes or processes where little knowledge exists about which RPs are important. The method selects a single parameter to represent a group of highly correlated parameters. Single point incremental forming (SPIF) is used as the case study for the manufacturing process. This methodology was successful in reducing the number of RPs investigated from 18 to 8 in the case study. © Wiley Periodicals, Inc.
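
    A minimal version of the grouping step might look as follows: compute pairwise correlations among the roughness parameters and keep one representative per highly correlated group. The simulated data, the 0.9 threshold, and the greedy scan are assumptions, not the paper's exact procedure.

```python
import numpy as np

rng = np.random.default_rng(4)
base = rng.normal(size=(50, 3))
X = np.column_stack([base[:, 0],
                     1.1 * base[:, 0] + 0.01 * rng.normal(size=50),
                     base[:, 1], base[:, 2]])   # 4 RPs, two nearly identical
corr = np.abs(np.corrcoef(X, rowvar=False))

kept = []
for j in range(corr.shape[0]):
    if all(corr[j, k] < 0.9 for k in kept):     # not redundant with kept RPs
        kept.append(j)
print(kept)   # -> [0, 2, 3]: parameter 1 is represented by parameter 0
```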

  7. Launch vehicle selection model

    Science.gov (United States)

    Montoya, Alex J.

    1990-01-01

    Over the next 50 years, humans will be heading for the Moon and Mars to build scientific bases to gain further knowledge about the universe and to develop rewarding space activities. These large scale projects will last many years and will require large amounts of mass to be delivered to Low Earth Orbit (LEO). It will take a great deal of planning to complete these missions in an efficient manner. The planning of a future Heavy Lift Launch Vehicle (HLLV) will significantly impact the overall multi-year launching cost for the vehicle fleet depending upon when the HLLV will be ready for use. It is desirable to develop a model in which many trade studies can be performed. In one sample multi-year space program analysis, the total launch vehicle cost of implementing the program was reduced from 50 percent to 25 percent. This indicates how critical it is to reduce space logistics costs. A linear programming model has been developed to answer such questions. The model is now in its second phase of development, and this paper will address the capabilities of the model and its intended uses. The main emphasis over the past year was to make the model user friendly and to incorporate additional realistic constraints that are difficult to represent mathematically. We have developed a methodology in which the user has to be knowledgeable about the mission model and the requirements of the payloads. We have found a representation that will cut down the solution space of the problem by inserting some preliminary tests to eliminate some infeasible vehicle solutions. The paper will address the handling of these additional constraints and the methodology for incorporating new costing information utilizing learning curve theory. The paper will review several test cases that will explore the preferred vehicle characteristics and the preferred period of construction, i.e., within the next decade, or in the first decade of the next century. Finally, the paper will explore the interaction...
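
    A stripped-down version of such a model is a small linear program: choose flight counts to meet a mass-to-LEO demand at minimum cost. The costs, capacities, and the relaxation to non-integer flight counts below are invented; the real model carries many more constraints.

```python
from scipy.optimize import linprog

cost = [90, 400]      # $M per flight: medium vehicle, HLLV (hypothetical)
payload = [20, 120]   # t to LEO per flight (hypothetical)
demand = 600          # t to LEO required by the program

res = linprog(c=cost,
              A_ub=[[-payload[0], -payload[1]]], b_ub=[-demand],
              bounds=[(0, None), (0, None)])
print(res.x, res.fun)  # relaxed flight counts and minimum fleet cost
```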

  8. 3-D analysis of grain selection process

    Science.gov (United States)

    Arao, Tomoka; Esaka, Hisao; Shinozuka, Kei

    2012-07-01

    It is known that grain selection plays an important role in the manufacturing process for turbine blades. There are some analytical and numerical models that treat grain selection. However, the detailed mechanism of grain selection in 3-D is still uncertain. Therefore, an experimental research work using an Al-Cu alloy has been carried out in order to understand grain selection in 3-D. A mold made of Al2O3 was heated to 600 °C (= liquidus temperature of the alloy) and was set on a water-cooled copper chill plate. Molten Al-20 wt%Cu alloy was cast into the mold and a unidirectionally solidified ingot was prepared. The size of the ingot was approximately 25 mm in diameter by 65 mm in height. To obtain the thermal history, 4 thermocouples were placed in the mold. It is confirmed that the alloy solidified unidirectionally from bottom to top. The solidified structure on a longitudinal cross section was observed and unidirectional solidification up to 40 mm was ensured. EBSD analysis has been performed on horizontal cross sections at intervals of ca. 200 μm. These observations were carried out 5-7 mm from the bottom surface. The crystallographic orientation of the primary Al phase and the size of solidified grains were characterized. A large solidified grain, whose crystallographic orientation is approximately along the heat flow direction, is observed near the lowest cross section. The area of some grains decreased as solidification proceeded; on the other hand, it is found that the area of other grains increased.

  9. Predictive modeling, simulation, and optimization of laser processing techniques: UV nanosecond-pulsed laser micromachining of polymers and selective laser melting of powder metals

    Science.gov (United States)

    Criales Escobar, Luis Ernesto

    One of the most frequently evolving areas of research is the utilization of lasers for micro-manufacturing and additive manufacturing purposes. The use of laser beam as a tool for manufacturing arises from the need for flexible and rapid manufacturing at a low-to-mid cost. Laser micro-machining provides an advantage over mechanical micro-machining due to the faster production times of large batch sizes and the high costs associated with specific tools. Laser based additive manufacturing enables processing of powder metals for direct and rapid fabrication of products. Therefore, laser processing can be viewed as a fast, flexible, and cost-effective approach compared to traditional manufacturing processes. Two types of laser processing techniques are studied: laser ablation of polymers for micro-channel fabrication and selective laser melting of metal powders. Initially, a feasibility study for laser-based micro-channel fabrication of poly(dimethylsiloxane) (PDMS) via experimentation is presented. In particular, the effectiveness of utilizing a nanosecond-pulsed laser as the energy source for laser ablation is studied. The results are analyzed statistically and a relationship between process parameters and micro-channel dimensions is established. Additionally, a process model is introduced for predicting channel depth. Model outputs are compared and analyzed to experimental results. The second part of this research focuses on a physics-based FEM approach for predicting the temperature profile and melt pool geometry in selective laser melting (SLM) of metal powders. Temperature profiles are calculated for a moving laser heat source to understand the temperature rise due to heating during SLM. Based on the predicted temperature distributions, melt pool geometry, i.e. the locations at which melting of the powder material occurs, is determined. Simulation results are compared against data obtained from experimental Inconel 625 test coupons fabricated at the National
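
    Before running a full finite element model, the classical Rosenthal moving-point-source solution is often used as a sanity check on predicted temperature fields. The sketch below implements that textbook solution with placeholder property values, not calibrated Inconel 625 data.

```python
import numpy as np

def rosenthal(x, y, z, q=200.0, v=0.8, k=26.0, alpha=5.8e-6, t0=293.0):
    """Quasi-steady temperature [K]; x is along the scan direction [m]."""
    r = np.sqrt(x**2 + y**2 + z**2)
    return t0 + q / (2.0 * np.pi * k * r) * np.exp(-v * (x + r) / (2.0 * alpha))

# Temperatures on a line 20 um to the side of the track: hot tail behind
# the source (negative x), rapid decay ahead of it.
for x in (-200e-6, -100e-6, -50e-6, 50e-6):
    print(x, round(rosenthal(x, 20e-6, 0.0)))
```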

  10. Model Selection Principles in Misspecified Models

    CERN Document Server

    Lv, Jinchi

    2010-01-01

    Model selection is of fundamental importance to high dimensional modeling featured in many contemporary applications. Classical principles of model selection include the Kullback-Leibler divergence principle and the Bayesian principle, which lead to the Akaike information criterion and Bayesian information criterion when models are correctly specified. Yet model misspecification is unavoidable when we have no knowledge of the true model or when we have the correct family of distributions but miss some true predictor. In this paper, we propose a family of semi-Bayesian principles for model selection in misspecified models, which combine the strengths of the two well-known principles. We derive asymptotic expansions of the semi-Bayesian principles in misspecified generalized linear models, which give the new semi-Bayesian information criteria (SIC). A specific form of SIC admits a natural decomposition into the negative maximum quasi-log-likelihood, a penalty on model dimensionality, and a penalty on model misspecification.

  11. A Heckman Selection- t Model

    KAUST Repository

    Marchenko, Yulia V.

    2012-03-01

    Sample selection arises often in practice as a result of the partial observability of the outcome of interest in a study. In the presence of sample selection, the observed data do not represent a random sample from the population, even after controlling for explanatory variables. That is, data are missing not at random. Thus, standard analysis using only complete cases will lead to biased results. Heckman introduced a sample selection model to analyze such data and proposed a full maximum likelihood estimation method under the assumption of normality. The method was criticized in the literature because of its sensitivity to the normality assumption. In practice, data, such as income or expenditure data, often violate the normality assumption because of heavier tails. We first establish a new link between sample selection models and recently studied families of extended skew-elliptical distributions. Then, this allows us to introduce a selection-t (SLt) model, which models the error distribution using a Student's t distribution. We study its properties and investigate the finite-sample performance of the maximum likelihood estimators for this model. We compare the performance of the SLt model to the conventional Heckman selection-normal (SLN) model and apply it to analyze ambulatory expenditures. Unlike the SLN model, our analysis using the SLt model provides statistical evidence for the existence of sample selection bias in these data. We also investigate the performance of the test for sample selection bias based on the SLt model and compare it with the performances of several tests used with the SLN model. Our findings indicate that the latter tests can be misleading in the presence of heavy-tailed data. © 2012 American Statistical Association.
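
    For contrast with the selection-t model, the classic normal-errors Heckman two-step estimator (a probit selection equation, then OLS augmented with the inverse Mills ratio) fits in a short script. The simulated data below are illustrative.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(5)
n = 5000
z = rng.normal(size=n)                        # selection covariate
x = rng.normal(size=n)                        # outcome covariate
u = rng.multivariate_normal([0, 0], [[1, 0.6], [0.6, 1]], size=n)
observed = (0.5 + 1.0 * z + u[:, 0]) > 0      # probit selection rule
y = 1.0 + 2.0 * x + u[:, 1]                   # outcome, seen if selected

# Step 1: probit of selection on z by Fisher scoring (statsmodels' Probit
# would normally be used instead).
Z = np.column_stack([np.ones(n), z])
g = np.zeros(2)
for _ in range(25):
    eta = Z @ g
    w = norm.pdf(eta) / np.clip(norm.cdf(eta) * norm.cdf(-eta), 1e-12, None)
    score = Z.T @ (w * (observed - norm.cdf(eta)))
    info = (Z * (w * norm.pdf(eta))[:, None]).T @ Z
    g += np.linalg.solve(info, score)

# Step 2: OLS of y on x plus the inverse Mills ratio, selected cases only.
eta = Z @ g
imr = norm.pdf(eta) / norm.cdf(eta)
Xs = np.column_stack([np.ones(n), x, imr])[observed]
beta = np.linalg.lstsq(Xs, y[observed], rcond=None)[0]
print(beta)   # ~[1.0, 2.0, 0.6]; a nonzero IMR term flags selection bias
```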

  12. Expert System Model for Educational Personnel Selection

    Directory of Open Access Journals (Sweden)

    Héctor A. Tabares-Ospina

    2013-06-01

    Staff selection is a difficult task due to the subjectivity that evaluation involves. This process can be complemented using a decision support system. This paper presents the implementation of an expert system to systematize the selection process for professors. The management of the software development is divided into 4 parts: requirements, design, implementation and commissioning. The proposed system models specific knowledge through relationships between evidence and objective variables.

  13. Selective detachment process in column flotation froth

    Energy Technology Data Exchange (ETDEWEB)

    Honaker, R.Q.; Ozsever, A.V.; Parekh, B.K. [University of Kentucky, Lexington, KY (United States). Dept. of Mining Engineering

    2006-05-15

    The selectivity in flotation columns involving the separation of particles of varying degrees of floatability is based on differential flotation rates in the collection zone, reflux action between the froth and collection zones, and differential detachment rates in the froth zone. Using well-known theoretical models describing the separation process and experimental data, froth zone and overall flotation recovery values were quantified for particles in an anthracite coal that have a wide range of floatability potential. For highly floatable particles, froth recovery had a very minimal impact on overall recovery while the recovery of weakly floatable material was decreased substantially by reductions in froth recovery values. In addition, under carrying-capacity limiting conditions, selectivity was enhanced by the preferential detachment of the weakly floatable material. Based on this concept, highly floatable material was added directly into the froth zone when treating the anthracite coal. The enriched froth phase reduced the product ash content of the anthracite product by five absolute percentage points while maintaining a constant recovery value.
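
    The two-zone description used here is commonly summarized by the recovery relation R = Rc·Rf/(Rc·Rf + 1 - Rc), with dropback from the froth recycled to the collection zone. The sketch below evaluates it for strongly and weakly floatable particles; the Rc values are invented, but the asymmetry matches the behaviour the abstract describes.

```python
# Overall column recovery from collection-zone recovery Rc and froth
# recovery Rf, with froth dropback returned to the collection zone.
def overall_recovery(rc, rf):
    return rc * rf / (rc * rf + 1.0 - rc)

for rc in (0.95, 0.60):   # strongly vs weakly floatable particles
    print(rc, [round(overall_recovery(rc, rf), 2) for rf in (1.0, 0.7, 0.4)])
# Strong floaters barely feel reduced Rf; weak floaters lose most recovery.
```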

  14. Modeling multiphase materials processes

    CERN Document Server

    Iguchi, Manabu

    2010-01-01

    ""Modeling Multiphase Materials Processes: Gas-Liquid Systems"" describes the methodology and application of physical and mathematical modeling to multi-phase flow phenomena in materials processing. The book focuses on systems involving gas-liquid interaction, the most prevalent in current metallurgical processes. The performance characteristics of these processes are largely dependent on transport phenomena. This volume covers the inherent characteristics that complicate the modeling of transport phenomena in such systems, including complex multiphase structure, intense turbulence, opacity of

  15. Material and process selection using product examples

    DEFF Research Database (Denmark)

    Lenau, Torben Anker

    2002-01-01

    The objective of the paper is to suggest a different procedure for selecting materials and processes within the product development work. The procedure includes using product examples in order to increase the number of alternative materials and processes that are considered. Previous studies suggest that designers often limit their selection of materials and processes to a few well-known ones. Designers need to expand the solution space by considering more materials and processes, but they have to be convinced that the materials and processes are likely candidates that are worth investing time in exploring... a search engine, and through hyperlinks relevant materials and processes can be explored. Realising that designers are very sensitive to user interfaces, all descriptions of materials, processes and products include graphical descriptions, i.e. pictures or computer graphics.

  17. Product Development Process Modeling

    Institute of Scientific and Technical Information of China (English)

    1999-01-01

    The use of Concurrent Engineering and other modern methods of product development and maintenance requires that a large number of time-overlapped "processes" be performed by many people. However, successfully describing and optimizing these processes is becoming ever more difficult to achieve. The perspective of industrial process theory (the definition of process) and the perspective of process implementation (process transition, accumulation, and inter-operations between processes) are used to survey the method used to build a base (multi-view) process model.

  18. Socio-cultural models as an important element of the site selection process in rural waste management

    Directory of Open Access Journals (Sweden)

    Nenković-Riznić Marina

    2011-01-01

    The problem of waste management in rural areas has not been the subject of detailed specific research, since most research has been directed towards the study of means, mechanisms and procedures of waste elimination in urban settlements. The reason for the reduced scope of research in this field lies in the fact that rural settlements cannot be considered "grateful" subjects due to the usual deficiency of specific data (population number, fluctuations, amount of waste, waste composition, methods of waste elimination, etc.). In addition, for several decades the villages have primarily eliminated waste spontaneously. This has proven difficult to research because of the variations of methods applied in each specific locale, as well as different environmental variables. These criteria are based on patterns of behavior, customs and habits of the local population, but they also insist on absolute participation of local stakeholders in waste management. On the other hand, although Serbia has a legislative frame which is fully harmonized with European laws, there is a problem with an unclearly defined waste management system oriented mainly towards rural areas. The reason for this is the fact that waste management in rural areas is part of regional waste management, and does not operate independently from the system in "urban" areas. However, since rural areas require the construction of recycling yards, this paper will present a new methodology which equally values techno-economic criteria and social criteria in determining waste elimination locations. This paper will also point out the variety of actors in the process of waste elimination in rural areas, as well as the possibility of their participation.
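
    The premise that socio-cultural criteria deserve equal weight with techno-economic ones can be sketched as a weighted-sum score over candidate sites. All sites, criteria, weights, and scores below are invented for illustration; they are not the paper's model.

```python
import numpy as np

criteria = ["transport cost", "terrain", "local acceptance", "habits fit"]
weights = np.array([0.25, 0.25, 0.25, 0.25])   # equal techno/social weight
scores = np.array([[0.8, 0.6, 0.3, 0.4],       # site A: cheap but resisted
                   [0.5, 0.7, 0.8, 0.7],       # site B: balanced
                   [0.4, 0.5, 0.9, 0.8]])      # site C: socially favoured
totals = scores @ weights
print(dict(zip("ABC", totals.round(2))))       # choose the highest total
```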

  19. Introduction. Modelling natural action selection.

    Science.gov (United States)

    Prescott, Tony J; Bryson, Joanna J; Seth, Anil K

    2007-09-29

    Action selection is the task of resolving conflicts between competing behavioural alternatives. This theme issue is dedicated to advancing our understanding of the behavioural patterns and neural substrates supporting action selection in animals, including humans. The scope of problems investigated includes: (i) whether biological action selection is optimal (and, if so, what is optimized), (ii) the neural substrates for action selection in the vertebrate brain, (iii) the role of perceptual selection in decision-making, and (iv) the interaction of group and individual action selection. A second aim of this issue is to advance methodological practice with respect to modelling natural action section. A wide variety of computational modelling techniques are therefore employed ranging from formal mathematical approaches through to computational neuroscience, connectionism and agent-based modelling. The research described has broad implications for both natural and artificial sciences. One example, highlighted here, is its application to medical science where models of the neural substrates for action selection are contributing to the understanding of brain disorders such as Parkinson's disease, schizophrenia and attention deficit/hyperactivity disorder.

  20. Selective hydrogenation processes in steam cracking

    Energy Technology Data Exchange (ETDEWEB)

    Bender, M.; Schroeter, M.K.; Hinrichs, M.; Makarczyk, P. [BASF SE, Ludwigshafen (Germany)

    2010-12-30

    Hydrogen is the key elixir used to trim the quality of olefinic and aromatic product slates from steam crackers. Being co-produced in excess amounts in the thermal cracking process, a small part of the hydrogen is consumed in the "cold part" of a steam cracker to selectively hydrogenate unwanted, unsaturated hydrocarbons. The compositions of the various steam cracker product streams are adjusted by these processes to the outlet specifications. This presentation gives an overview of state-of-the-art selective hydrogenation technologies available from BASF for these processes. (Published in summary form only) (orig.)

  1. Process modeling style

    CERN Document Server

    Long, John

    2014-01-01

    Process Modeling Style focuses on other aspects of process modeling beyond notation that are very important to practitioners. Many people who model processes focus on the specific notation used to create their drawings. While that is important, there are many other aspects to modeling, such as naming, creating identifiers, descriptions, interfaces, patterns, and creating useful process documentation. Experienced author John Long focuses on those non-notational aspects of modeling, which practitioners will find invaluable. Gives solid advice for creating roles, work produ...

  2. PRIME – PRocess modelling in ImpleMEntation research: selecting a theoretical basis for interventions to change clinical practice

    Directory of Open Access Journals (Sweden)

    Pitts Nigel

    2003-12-01

    ...modelling. In the final phase of the project, the findings from all surveys will be analysed simultaneously, adopting a random effects approach to investigate whether the relationships between predictor variables and outcome measures are modified by behaviour, professional group or geographical location.

  4. Material and process selection using product examples

    DEFF Research Database (Denmark)

    Lenau, Torben Anker

    2001-01-01

    The objective of the paper is to suggest a different procedure for selecting materials and processes within the product development work. The procedure includes using product examples in order to increase the number of alternative materials and processes that are considered. Product examples can communicate information about materials and processes in a very concentrated and effective way. The product examples represent desired material properties but also include information that cannot be associated directly with the material, e.g. functional or perceived attributes. Previous studies suggest that designers often limit their selection of materials and processes to a few well-known ones. Designers need to expand the solution space by considering more materials and processes. But they have to be convinced that the materials and processes are likely candidates that are worth investing time in exploring...

  5. Multi-enzyme Process Modeling

    DEFF Research Database (Denmark)

    Andrade Santacoloma, Paloma de Gracia

    The subject of this thesis is to develop a methodological framework that can systematically guide mathematical model building for better understanding of multi-enzyme processes. In this way, opportunities for process improvements can be identified by analyzing simulations of either existing... in the scientific literature. Reliable mathematical models of such multi-catalytic schemes can exploit the potential benefit of these processes. In this way, the best outcome of the process can be obtained by understanding the types of modification that are required for process optimization. An effective evaluation... In this way the model parameters that drive the main dynamic behavior can be identified, and thus a better understanding of this type of process gained. In order to develop, test and verify the methodology, three case studies were selected, specifically the bi-enzyme process for the production of lactobionic acid...

  6. Product and Process Modelling

    DEFF Research Database (Denmark)

    Cameron, Ian T.; Gani, Rafiqul

    This book covers the area of product and process modelling via a case study approach. It addresses a wide range of modelling applications with emphasis on modelling methodology and the subsequent in-depth analysis of mathematical models to gain insight via structural aspects of the models... These approaches are put into the context of life cycle modelling, where multiscale and multiform modelling is increasingly prevalent in the 21st century. The book commences with a discussion of modern product and process modelling theory and practice, followed by a series of case studies drawn from a variety... to biotechnology applications, food, polymer and human health application areas. The book highlights the important nature of modern product and process modelling in the decision making processes across the life cycle. As such it provides an important resource for students, researchers and industrial practitioners.

  7. Selective visual attention in object detection processes

    Science.gov (United States)

    Paletta, Lucas; Goyal, Anurag; Greindl, Christian

    2003-03-01

    Object detection is an enabling technology that plays a key role in many application areas, such as content based media retrieval. Attentive cognitive vision systems are here proposed where the focus of attention is directed towards the most relevant target. The most promising information is interpreted in a sequential process that dynamically makes use of knowledge and that enables spatial reasoning on the local object information. The presented work proposes an innovative application of attention mechanisms for object detection which is most general in its understanding of information and action selection. The attentive detection system uses a cascade of increasingly complex classifiers for the stepwise identification of regions of interest (ROIs) and recursively refined object hypotheses. While the most coarse classifiers are used to determine first approximations on a region of interest in the input image, more complex classifiers are used for more refined ROIs to give more confident estimates. Objects are modelled by local appearance based representations and in terms of posterior distributions of the object samples in eigenspace. The discrimination function to discern between objects is modelled by a radial basis function (RBF) network that has been compared with alternative networks and proved consistent and superior to other artificial neural networks for appearance based object recognition. The experiments were conducted for the automatic detection of brand objects in Formula One broadcasts within the European Commission's cognitive vision project DETECT.

  8. Standard Model processes

    CERN Document Server

    Mangano, M.L.; Aguilar Saavedra, J.A.; Alekhin, S.; Badger, S.; Bauer, C.W.; Becher, T.; Bertone, V.; Bonvini, M.; Boselli, S.; Bothmann, E.; Boughezal, R.; Cacciari, M.; Carloni Calame, C.M.; Caola, F.; Campbell, J.M.; Carrazza, S.; Chiesa, M.; Cieri, L.; Cimaglia, F.; Febres Cordero, F.; Ferrarese, P.; D'Enterria, D.; Ferrera, G.; Garcia i Tormo, X.; Garzelli, M.V.; Germann, E.; Hirschi, V.; Han, T.; Ita, H.; Jäger, B.; Kallweit, S.; Karlberg, A.; Kuttimalai, S.; Krauss, F.; Larkoski, A.J.; Lindert, J.; Luisoni, G.; Maierhöfer, P.; Mattelaer, O.; Martinez, H.; Moch, S.; Montagna, G.; Moretti, M.; Nason, P.; Nicrosini, O.; Oleari, C.; Pagani, D.; Papaefstathiou, A.; Petriello, F.; Piccinini, F.; Pierini, M.; Pierog, T.; Pozzorini, S.; Re, E.; Robens, T.; Rojo, J.; Ruiz, R.; Sakurai, K.; Salam, G.P.; Salfelder, L.; Schönherr, M.; Schulze, M.; Schumann, S.; Selvaggi, M.; Shivaji, A.; Siodmok, A.; Skands, P.; Torrielli, P.; Tramontano, F.; Tsinikos, I.; Tweedie, B.; Vicini, A.; Westhoff, S.; Zaro, M.; Zeppenfeld, D.; CERN. Geneva. ATS Department

    2017-06-22

    This report summarises the properties of Standard Model processes at the 100 TeV pp collider. We document the production rates and typical distributions for a number of benchmark Standard Model processes, and discuss new dynamical phenomena arising at the highest energies available at this collider. We discuss the intrinsic physics interest in the measurement of these Standard Model processes, as well as their role as backgrounds for New Physics searches.

  9. Business Process Modeling: Blueprinting

    OpenAIRE

    Al-Fedaghi, Sabah

    2017-01-01

    This paper presents a flow-based methodology for capturing processes specified in business process modeling. The proposed methodology is demonstrated through re-modeling of an IBM Blueworks case study. While the Blueworks approach offers a well-proven tool in the field, this should not discourage workers from exploring other ways of thinking about effectively capturing processes. The diagrammatic representation presented here demonstrates a viable methodology in this context. It is hoped this...

  10. The Process of Marketing Segmentation Strategy Selection

    OpenAIRE

    Ionel Dumitru

    2007-01-01

    The process of marketing segmentation strategy selection represents the essence of strategic marketing. We present hereinafter the main forms of marketing segmentation strategy: undifferentiated marketing, differentiated marketing, concentrated marketing and personalized marketing. In practice, companies use a mix of these marketing segmentation methods in order to maximize profit and to satisfy the consumers' needs.

  12. Model Selection for Pion Photoproduction

    CERN Document Server

    Landay, J; Fernández-Ramírez, C; Hu, B; Molina, R

    2016-01-01

    Partial-wave analysis of meson and photon-induced reactions is needed to enable the comparison of many theoretical approaches to data. In both energy-dependent and independent parametrizations of partial waves, the selection of the model amplitude is crucial. Principles of the $S$-matrix are implemented to different degrees in different approaches, but an often overlooked aspect concerns the selection of undetermined coefficients and functional forms for fitting, leading to a minimal yet sufficient parametrization. We present an analysis of low-energy neutral pion photoproduction using the Least Absolute Shrinkage and Selection Operator (LASSO) in combination with criteria from information theory and $K$-fold cross validation. These methods are not yet widely known in the analysis of excited hadrons but will become relevant in the era of precision spectroscopy. The principle is first illustrated with synthetic data, then, its feasibility for real data is demonstrated by analyzing the latest available measu...

  13. WWTP Process Tank Modelling

    DEFF Research Database (Denmark)

    Laursen, Jesper

    hydrofoil shaped propellers. These two sub-processes deliver the main part of the supplied energy to the activated sludge tank, and for this reason they are important for the mixing conditions in the tank. For other important processes occurring in the activated sludge tank, existing models and measurements...

  14. Adaptive Covariance Estimation with model selection

    CERN Document Server

    Biscay, Rolando; Loubes, Jean-Michel

    2012-01-01

    We provide in this paper a fully adaptive penalized procedure to select a covariance among a collection of models, observing i.i.d. replications of the process at fixed observation points. For this we generalize previous results of Bigot et al. and propose to use a data-driven penalty to obtain an oracle inequality for the estimator. We prove that this method is an extension of the work by Baraud to the matricial regression model.

  15. Selective effects of nicotine on attentional processes.

    Science.gov (United States)

    Mancuso, G; Warburton, D M; Mélen, M; Sherwood, N; Tirelli, E

    1999-09-01

    It is now well established from electrophysiological and behavioural evidence that nicotine has effects on information processing. The results are usually explained either by a primary effect of nicotine or by a reversal effect of a nicotine-induced, abstinence deficit. In addition, there is dispute about the cognitive processes underlying the changes in performance. This study has approached the first question by using the nicotine patch, in order to administer nicotine chronically. In addition, we examined the effects of nicotine on attention with a selection of tests which assessed the intensity and selectivity features of attention, using the Random Letter Generation test, the Flexibility of Attention test and the Stroop test. Nicotine enhanced the speed of number generation and the speed of processing in both the control and interference conditions of the Stroop test. There were no effects on attentional switching of the Flexibility of Attention test. The results are consistent with the hypothesis that nicotine mainly improves the intensity feature of attention, rather than the selectivity feature.

  16. Biosphere Process Model Report

    Energy Technology Data Exchange (ETDEWEB)

    J. Schmitt

    2000-05-25

    To evaluate the postclosure performance of a potential monitored geologic repository at Yucca Mountain, a Total System Performance Assessment (TSPA) will be conducted. Nine Process Model Reports (PMRs), including this document, are being developed to summarize the technical basis for each of the process models supporting the TSPA model. These reports cover the following areas: (1) Integrated Site Model; (2) Unsaturated Zone Flow and Transport; (3) Near Field Environment; (4) Engineered Barrier System Degradation, Flow, and Transport; (5) Waste Package Degradation; (6) Waste Form Degradation; (7) Saturated Zone Flow and Transport; (8) Biosphere; and (9) Disruptive Events. Analysis/Model Reports (AMRs) contain the more detailed technical information used to support TSPA and the PMRs. The AMRs consist of data, analyses, models, software, and supporting documentation that will be used to defend the applicability of each process model for evaluating the postclosure performance of the potential Yucca Mountain repository system. This documentation will ensure the traceability of information from its source through its ultimate use in the TSPA-Site Recommendation (SR) and in the National Environmental Policy Act (NEPA) analysis processes. The objective of the Biosphere PMR is to summarize (1) the development of the biosphere model, and (2) the Biosphere Dose Conversion Factors (BDCFs) developed for use in TSPA. The Biosphere PMR does not present or summarize estimates of potential radiation doses to human receptors. Dose calculations are performed as part of TSPA and will be presented in the TSPA documentation. The biosphere model is a component of the process to evaluate postclosure repository performance and regulatory compliance for a potential monitored geologic repository at Yucca Mountain, Nevada. The biosphere model describes those exposure pathways in the biosphere by which radionuclides released from a potential repository could reach a human receptor.

  17. Entropic criterion for model selection

    Science.gov (United States)

    Tseng, Chih-Yuan

    2006-10-01

    Model or variable selection is usually achieved by ranking models in increasing order of preference. One such method applies the Kullback-Leibler distance, or relative entropy, as a selection criterion. Yet this raises two questions: why use this criterion, and are there any other criteria? Moreover, conventional approaches require a reference prior, which is usually difficult to obtain. Following the logic of inductive inference proposed by Caticha [Relative entropy and inductive inference, in: G. Erickson, Y. Zhai (Eds.), Bayesian Inference and Maximum Entropy Methods in Science and Engineering, AIP Conference Proceedings, vol. 707, 2004 (available from arXiv.org/abs/physics/0311093)], we show relative entropy to be a unique criterion, which requires no prior information and can be applied to different fields. We examine this criterion by considering a physical problem, simple fluids, and the results are promising.
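
    To make the relative-entropy criterion concrete, the following is a minimal sketch of ranking candidate models by their Kullback-Leibler divergence from an empirical distribution. The data and the candidate distributions (toy Gaussians and a Laplace) are illustrative assumptions, not taken from the article.

```python
# Rank candidate models by KL divergence from an empirical histogram density.
import numpy as np
from scipy import stats

rng = np.random.default_rng(8)
data = rng.normal(loc=0.0, scale=1.0, size=5000)

grid = np.linspace(-6, 6, 1201)
p_hat = np.histogram(data, bins=grid, density=True)[0]  # empirical density
mid = 0.5 * (grid[:-1] + grid[1:])
dx = mid[1] - mid[0]

models = {"N(0,1)": stats.norm(0, 1), "N(0.5,1)": stats.norm(0.5, 1),
          "Laplace(0,1)": stats.laplace(0, 1)}
for name, m in models.items():
    q = m.pdf(mid)
    mask = p_hat > 0                                    # avoid log(0) terms
    kl = np.sum(p_hat[mask] * np.log(p_hat[mask] / q[mask])) * dx
    print(name, "KL divergence:", round(kl, 4))         # smallest divergence wins
```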

  18. A Selective Review of Group Selection in High Dimensional Models

    CERN Document Server

    Huang, Jian; Ma, Shuangge

    2012-01-01

    Grouping structures arise naturally in many statistical modeling problems. Several methods have been proposed for variable selection that respect grouping structure in variables. Examples include the group LASSO and several concave group selection methods. In this article, we give a selective review of group selection concerning methodological developments, theoretical properties, and computational algorithms. We pay particular attention to group selection methods involving concave penalties. We address both group selection and bi-level selection methods. We describe several applications of these methods in nonparametric additive models, semiparametric regression, seemingly unrelated regressions, genomic data analysis and genome wide association studies. We also highlight some issues that require further study.
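
    To make the group-penalty idea concrete, here is a minimal proximal-gradient implementation of the group LASSO on synthetic data. The grouping, penalty level and data are illustrative assumptions, not taken from the review.

```python
# Group LASSO via proximal gradient descent (group soft-thresholding).
import numpy as np

def group_lasso(X, y, groups, lam, n_iter=500):
    """Minimise 0.5/n * ||y - Xb||^2 + lam * sum_g sqrt(|g|) * ||b_g||_2."""
    n, p = X.shape
    b = np.zeros(p)
    step = 1.0 / (np.linalg.norm(X, 2) ** 2 / n)   # 1 / Lipschitz constant
    for _ in range(n_iter):
        grad = X.T @ (X @ b - y) / n
        z = b - step * grad
        for g in groups:                            # group soft-thresholding
            norm_g = np.linalg.norm(z[g])
            shrink = max(0.0, 1.0 - step * lam * np.sqrt(len(g)) / max(norm_g, 1e-12))
            b[g] = shrink * z[g]
    return b

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 9))
groups = [[0, 1, 2], [3, 4, 5], [6, 7, 8]]          # three groups of three
y = X[:, :3] @ np.array([2.0, -1.0, 1.0]) + 0.1 * rng.normal(size=100)
b = group_lasso(X, y, groups, lam=0.3)
print([np.linalg.norm(b[g]) for g in groups])       # only the first group survives
```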

  19. Supplier Selection in Dynamic Environment using Analytic Hierarchy Process

    Directory of Open Access Journals (Sweden)

    Prince Agarwal

    2014-08-01

    Full Text Available In today’s highly competitive business environment, with rapidly changing customer demands and the advent of enterprise-wide information systems, managers are bound to think beyond conventional business processes and devise new ways to squeeze out costs and improve performance without compromising on quality. Supplier evaluation and selection is one such area that determines the success of any manufacturing firm. Supplier selection is the problem wherein the company decides which vendor to select in order to gain the strategic and operational advantage of meeting the customers’ varying demands and fighting fierce competition. This paper presents a simple model based on the Analytic Hierarchy Process (AHP) to help decision makers in supplier evaluation and selection, taking into account the firm’s requirements. The article is intended to help new scholars and researchers understand the AHP model and see its different facets at first sight.
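
    As a rough illustration of the AHP machinery this record (and the two fuel- and subcontractor-selection records nearby) describes — pairwise comparison on Saaty's 1-9 scale, principal-eigenvector priorities, and a consistency check — the following Python sketch uses an invented three-criterion supplier example. The matrix entries and criteria are assumptions, not data from the articles.

```python
# AHP priority vector and consistency ratio for a pairwise comparison matrix.
import numpy as np

# A[i, j] = importance of criterion i relative to j (cost, quality, delivery).
A = np.array([
    [1.0, 3.0, 5.0],
    [1/3, 1.0, 3.0],
    [1/5, 1/3, 1.0],
])

n = A.shape[0]
eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)                  # principal eigenvalue
weights = np.abs(eigvecs[:, k].real)
weights /= weights.sum()                     # normalised priority vector

# Consistency check: CI = (lambda_max - n) / (n - 1), CR = CI / RI.
lambda_max = eigvals[k].real
ci = (lambda_max - n) / (n - 1)
ri = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12}[n]  # Saaty's random indices
cr = ci / ri

print("criteria weights:", weights)          # e.g. roughly [0.63, 0.26, 0.11]
print("consistency ratio:", cr)              # CR < 0.1 is conventionally acceptable
```

    The same weight calculation is applied at each level of the hierarchy; alternative suppliers are then scored by weighting their per-criterion priorities with the criteria weights.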

  20. Selection of Fuel by Using Analytical Hierarchy Process

    Directory of Open Access Journals (Sweden)

    Asilata M. Damle,

    2015-04-01

    Full Text Available Selection of fuel is a very important and critical decision, and various criteria have to be considered while making it. Some of the important criteria are fuel economy, availability of fuel, pollution from the vehicle and maintenance of the vehicle. Selection of the best fuel is therefore a complex problem that needs multi-criteria analysis. Earlier, solutions to such problems were found by applying classical numerical methods, which took into account only the technical and economic merits of the various alternatives. By applying multi-criteria tools, it is possible to obtain more realistic results. This paper gives a systematic analysis for the selection of fuel by using the Analytical Hierarchy Process (AHP), a multi-criteria decision making process. By using AHP we can select the fuel by comparing various factors, via pairwise comparisons, in a mathematical model. This is a scientific method to find the best fuel.

  1. Subcontractor Selection Using Analytic Hierarchy Process

    Directory of Open Access Journals (Sweden)

    Vesile Sinem Arikan Kargi

    2012-07-01

    Full Text Available Turkish textile firms operate in a heavily price-competitive atmosphere due to globalization. Firms have to take into consideration several criteria such as cost, quality and delivery-on-time in order to survive the global market conditions and to maintain profitability. To meet these criteria, contractor companies have to select the best subcontractor, so subcontractor selection is a key problem for the contractor company. The aim of this study is to solve the problem of Yeşim Textile, a contractor company, concerning the selection of the best subcontractor for its customer Nike. To solve the problem, firstly, the main criteria and relevant sub-criteria, which are of importance for Yeşim and Nike, were defined. Then, authorities from the firms were interviewed in order to formulate pairwise comparison matrices using Saaty’s importance scale. In a sense, these matrices are the model of this study. The model, built with AHP, was analyzed using the Expert Choice software. The best subcontractors for Yeşim were determined based on the model results. In addition, these results were analyzed for the firm’s decision makers.

  2. Model Checking of Boolean Process Models

    CERN Document Server

    Schneider, Christoph

    2011-01-01

    In the field of Business Process Management, formal models for the control flow of business processes have been designed for more than 15 years. Which methods are best suited to verify the bulk of these models? The first step is to select a formal language which fixes the semantics of the models. We adopt the language of Boolean systems as reference language for Boolean process models. Boolean systems form a simple subclass of coloured Petri nets. Their characteristics are low tokens, to model states explicitly with a subsequent skipping of activations, and arbitrary logical rules of type AND, XOR, OR etc., to model the split and join of the control flow. We apply model checking as a verification method for the safeness and liveness of Boolean systems. Model checking of Boolean systems uses the elementary theory of propositional logic; no modal operators are needed. Our verification builds on a finite complete prefix of a certain T-system attached to the Boolean system. It splits the processes of the Boolean sy...

  3. Foam process models.

    Energy Technology Data Exchange (ETDEWEB)

    Moffat, Harry K.; Noble, David R.; Baer, Thomas A. (Procter & Gamble Co., West Chester, OH); Adolf, Douglas Brian; Rao, Rekha Ranjana; Mondy, Lisa Ann

    2008-09-01

    In this report, we summarize our work on developing a production level foam processing computational model suitable for predicting the self-expansion of foam in complex geometries. The model is based on a finite element representation of the equations of motion, with the movement of the free surface represented using the level set method, and has been implemented in SIERRA/ARIA. An empirically based time- and temperature-dependent density model is used to encapsulate the complex physics of foam nucleation and growth in a numerically tractable model. The change in density with time is at the heart of the foam self-expansion as it creates the motion of the foam. This continuum-level model uses a homogenized description of foam, which does not include the gas explicitly. Results from the model are compared to temperature-instrumented flow visualization experiments giving the location of the foam front as a function of time for our EFAR model system.

  4. Refactoring Process Models in Large Process Repositories.

    NARCIS (Netherlands)

    Weber, B.; Reichert, M.U.

    2008-01-01

    With the increasing adoption of process-aware information systems (PAIS), large process model repositories have emerged. Over time respective models have to be re-aligned to the real-world business processes through customization or adaptation. This bears the risk that model redundancies are introdu

  6. Processing, testing and selecting blood components.

    Science.gov (United States)

    Jones, Alister; Heyes, Jennifer

    Transfusion of blood components can be an essential and lifesaving treatment for many patients. However, components must comply with a number of national requirements to ensure they are safe and fit for use. Transfusion of incorrect blood components can lead to mortality and morbidity in patients, which is why patient testing and blood selection are important. This second article in our five-part series on blood transfusion outlines the requirements for different blood components, the importance of the ABO and RhD blood group systems and the processes that ensure the correct blood component is issued to each patient.

  7. Selected papers on noise and stochastic processes

    CERN Document Server

    Wax, Nelson

    1954-01-01

    Six classic papers on stochastic processes, selected to meet the needs of physicists, applied mathematicians, and engineers. Contents: 1. Chandrasekhar, S.: Stochastic Problems in Physics and Astronomy. 2. Uhlenbeck, G. E. and Ornstein, L. S.: On the Theory of the Brownian Motion. 3. Ming Chen Wang and Uhlenbeck, G. E.: On the Theory of the Brownian Motion II. 4. Rice, S. O.: Mathematical Analysis of Random Noise. 5. Kac, Mark: Random Walk and the Theory of Brownian Motion. 6. Doob, J. L.: The Brownian Movement and Stochastic Equations. Unabridged republication of the Dover reprint (1954). Pre

  8. Evidence accumulation as a model for lexical selection

    NARCIS (Netherlands)

    Anders, R.; Riès, S.; van Maanen, L.; Alario, F.-X.

    2015-01-01

    We propose and demonstrate evidence accumulation as a plausible theoretical and/or empirical model for the lexical selection process of lexical retrieval. A number of current psycholinguistic theories consider lexical selection as a process related to selecting a lexical target from a number of
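
    A minimal simulation of the evidence-accumulation idea sketched above: hypothetical lexical candidates accumulate noisy evidence in parallel, and the first to reach a threshold is selected. The candidates, drift rates and noise level are illustrative assumptions, not the authors' fitted model.

```python
# Racing-accumulator sketch of lexical selection.
import numpy as np

rng = np.random.default_rng(2)
candidates = ["cat", "cap", "car"]           # competing lexical entries
drifts = np.array([0.9, 0.4, 0.3])           # the target has the highest mean evidence
threshold, dt, noise = 1.0, 0.01, 0.35

def one_trial():
    x = np.zeros(len(candidates))            # accumulated evidence per candidate
    t = 0.0
    while x.max() < threshold:
        x += drifts * dt + noise * np.sqrt(dt) * rng.normal(size=x.size)
        x = np.maximum(x, 0.0)               # keep evidence non-negative
        t += dt
    return candidates[int(np.argmax(x))], t

choices, rts = zip(*(one_trial() for _ in range(2000)))
print("P(correct) =", choices.count("cat") / len(choices))
print("mean RT (s) =", sum(rts) / len(rts))
```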

  9. Model selection for pion photoproduction

    Science.gov (United States)

    Landay, J.; Döring, M.; Fernández-Ramírez, C.; Hu, B.; Molina, R.

    2017-01-01

    Partial-wave analysis of meson and photon-induced reactions is needed to enable the comparison of many theoretical approaches to data. In both energy-dependent and independent parametrizations of partial waves, the selection of the model amplitude is crucial. Principles of the S matrix are implemented to different degrees in different approaches, but an often overlooked aspect concerns the selection of undetermined coefficients and functional forms for fitting, leading to a minimal yet sufficient parametrization. We present an analysis of low-energy neutral pion photoproduction using the least absolute shrinkage and selection operator (LASSO) in combination with criteria from information theory and K-fold cross validation. These methods are not yet widely known in the analysis of excited hadrons but will become relevant in the era of precision spectroscopy. The principle is first illustrated with synthetic data; then, its feasibility for real data is demonstrated by analyzing the latest available measurements of differential cross sections (dσ/dΩ), photon-beam asymmetries (Σ), and target asymmetry differential cross sections (dσ_T ≡ T dσ/dΩ) in the low-energy regime.
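
    A generic stand-in for the selection strategy both pion-photoproduction records describe — LASSO regularization with the penalty strength chosen by information criteria and by K-fold cross validation — applied here to synthetic linear-regression data rather than partial-wave amplitudes. The data and dimensions are illustrative assumptions.

```python
# LASSO term selection via AIC/BIC and via K-fold cross validation.
import numpy as np
from sklearn.linear_model import LassoLarsIC, LassoCV

rng = np.random.default_rng(0)
n, p, k_true = 200, 30, 4                    # samples, candidate terms, true terms
X = rng.normal(size=(n, p))
beta = np.zeros(p)
beta[:k_true] = [3.0, -2.0, 1.5, 1.0]
y = X @ beta + rng.normal(scale=0.5, size=n)

# Penalty strength chosen by information criteria...
for criterion in ("aic", "bic"):
    model = LassoLarsIC(criterion=criterion).fit(X, y)
    print(criterion, "selects", np.sum(model.coef_ != 0), "terms")

# ...and, alternatively, by 5-fold cross validation.
cv_model = LassoCV(cv=5).fit(X, y)
print("5-fold CV selects", np.sum(cv_model.coef_ != 0), "terms")
```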

  10. Intermediate product selection and blending in the food processing industry

    DEFF Research Database (Denmark)

    Kilic, Onur A.; Akkerman, Renzo; van Donk, Dirk Pieter

    2013-01-01

    This study addresses a capacitated intermediate product selection and blending problem typical for two-stage production systems in the food processing industry. The problem involves the selection of a set of intermediates and end-product recipes characterising how those selected intermediates...... are blended into end products to minimise the total operational costs under production and storage capacity limitations. A comprehensive mixed-integer linear model is developed for the problem. The model is applied on a data set collected from a real-life case. The trade-offs between capacity limitations...... and operational costs are analysed, and the effects of different types of cost parameters and capacity limitations on the selection of intermediates and end-product recipes are investigated....

  11. The Ouroboros Model, selected facets.

    Science.gov (United States)

    Thomsen, Knud

    2011-01-01

    The Ouroboros Model features a biologically inspired cognitive architecture. At its core lies a self-referential recursive process with alternating phases of data acquisition and evaluation. Memory entries are organized in schemata. The activation at a time of part of a schema biases the whole structure and, in particular, missing features, thus triggering expectations. An iterative recursive monitor process termed 'consumption analysis' then checks how well such expectations fit with successive activations. Mismatches between anticipations based on previous experience and actual current data are highlighted and used for controlling the allocation of attention. A measure for the goodness of fit provides feedback as a (self-)monitoring signal. The basic algorithm works for goal directed movements and memory search as well as during abstract reasoning. It is sketched how the Ouroboros Model can shed light on characteristics of human behavior including attention, emotions, priming, masking, learning, sleep and consciousness.

  12. Model selection for amplitude analysis

    CERN Document Server

    Guegan, Baptiste; Stevens, Justin; Williams, Mike

    2015-01-01

    Model complexity in amplitude analyses is often a priori under-constrained since the underlying theory permits a large number of amplitudes to contribute to most physical processes. The use of an overly complex model results in reduced predictive power and worse resolution on unknown parameters of interest. Therefore, it is common to reduce the complexity by removing from consideration some subset of the allowed amplitudes. This paper studies a data-driven method for limiting model complexity through regularization during regression in the context of a multivariate (Dalitz-plot) analysis. The regularization technique applied greatly improves the performance. A method is also proposed for obtaining the significance of a resonance in a multivariate amplitude analysis.

  13. Simultaneous data pre-processing and SVM classification model selection based on a parallel genetic algorithm applied to spectroscopic data of olive oils.

    Science.gov (United States)

    Devos, Olivier; Downey, Gerard; Duponchel, Ludovic

    2014-04-01

    Classification is an important task in chemometrics. For several years now, support vector machines (SVMs) have proven to be powerful for infrared spectral data classification. However, such methods require the optimisation of parameters in order to control the risk of overfitting and the complexity of the boundary. Furthermore, it is established that the prediction ability of classification models can be improved using pre-processing in order to remove unwanted variance in the spectra. In this paper we propose a new methodology based on a genetic algorithm (GA) for the simultaneous optimisation of SVM parameters and pre-processing (GENOPT-SVM). The method has been tested for the discrimination of the geographical origin of Italian olive oil (Ligurian and non-Ligurian) on the basis of near infrared (NIR) or mid infrared (FTIR) spectra. Different classification models (PLS-DA, SVM with mean-centred data, GENOPT-SVM) have been tested and statistically compared using McNemar's statistical test. For the two datasets, SVM with optimised pre-processing gives models with higher accuracy than the one obtained with PLS-DA on pre-processed data. In the case of the NIR dataset, most of this accuracy improvement (86.3% compared with 82.8% for PLS-DA) was achieved using only a single pre-processing step. For the FTIR dataset, three optimised pre-processing steps are required to obtain an SVM model with a significant accuracy improvement (82.2%) compared to the one obtained with PLS-DA (78.6%). Furthermore, this study demonstrates that even SVM models have to be developed on the basis of well-corrected spectral data in order to obtain higher classification rates.
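
    A compact sketch of GA-driven joint selection of pre-processing and SVM hyperparameters in the spirit of the abstract; the gene encoding, search ranges and synthetic data are simplified assumptions, not the authors' GENOPT-SVM implementation.

```python
# Genetic search over (pre-processing choice, C, gamma) with CV accuracy as fitness.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import FunctionTransformer, StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(3)
X, y = make_classification(n_samples=300, n_features=50, random_state=0)

PREPROC = [FunctionTransformer(), StandardScaler()]   # "none" vs mean-centre/scale

def fitness(gene):
    prep, log_c, log_g = gene
    pipe = make_pipeline(PREPROC[int(prep)], SVC(C=10**log_c, gamma=10**log_g))
    return cross_val_score(pipe, X, y, cv=5).mean()

def random_gene():
    return np.array([rng.integers(2), rng.uniform(-2, 3), rng.uniform(-5, 0)])

pop = [random_gene() for _ in range(12)]
for _ in range(10):                                   # generations
    parents = sorted(pop, key=fitness, reverse=True)[:4]   # truncation selection
    children = []
    for _ in range(len(pop) - len(parents)):
        a, b = rng.choice(len(parents), 2, replace=False)
        child = (parents[a] + parents[b]) / 2         # blend crossover
        child += np.array([0, 0.3, 0.3]) * rng.normal(size=3)  # mutate C, gamma
        child[0] = rng.integers(2)                    # re-draw pre-processing bit
        children.append(child)
    pop = parents + children

best = max(pop, key=fitness)
print("best CV accuracy:", fitness(best))
```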

  14. Processing metallic glasses by selective laser melting

    Directory of Open Access Journals (Sweden)

    Simon Pauly

    2013-01-01

    Full Text Available Metallic glasses and their descendants, the so-called bulk metallic glasses (BMGs, can be regarded as frozen liquids with a high resistance to crystallization. The lack of a conventional structure turns them into a material exhibiting near-theoretical strength, low Young's modulus and large elasticity. These unique mechanical properties can only be obtained when the metallic melts are rapidly cooled to bypass the nucleation and growth of crystals. Most of the commonly known and used processing routes, such as casting, melt spinning or gas atomization, have intrinsic limitations regarding the complexity and dimensions of the geometries. Here, it is shown that selective laser melting (SLM, which is usually used to process conventional metallic alloys and polymers, can be applied to implement complex geometries and components from an Fe-base metallic glass. This approach is in principle viable for a large variety of metallic alloys and paves the way for the novel synthesis of materials and the development of parts with advanced functional and structural properties without limitations in size and intricacy.

  15. Behavioral optimization models for multicriteria portfolio selection

    Directory of Open Access Journals (Sweden)

    Mehlawat Mukesh Kumar

    2013-01-01

    Full Text Available In this paper, the behavioral construct of suitability is used to develop a multicriteria decision making framework for portfolio selection. To achieve this purpose, we rely on multiple methodologies. The analytic hierarchy process technique is used to model the suitability considerations with a view to obtaining the suitability performance score in respect of each asset. A fuzzy multiple criteria decision making method is used to obtain the financial quality score of each asset based upon the investor's rating on the financial criteria. Two optimization models are developed for optimal asset allocation, considering financial and suitability criteria simultaneously. An empirical study is conducted on randomly selected assets from the National Stock Exchange, Mumbai, India to demonstrate the effectiveness of the proposed methodology.

  16. Model selection and comparison for independents sinusoids

    DEFF Research Database (Denmark)

    Nielsen, Jesper Kjær; Christensen, Mads Græsbøll; Jensen, Søren Holdt

    2014-01-01

    In the signal processing literature, many methods have been proposed for estimating the number of sinusoidal basis functions from a noisy data set. The most popular method is the asymptotic MAP criterion, which is sometimes also referred to as the BIC. In this paper, we extend and improve this method by considering the problem in a full Bayesian framework instead of the approximate formulation on which the asymptotic MAP criterion is based. This leads to a new model selection and comparison method, the lp-BIC, whose computational complexity is of the same order as the asymptotic MAP criterion. Through simulations, we demonstrate that the lp-BIC outperforms the asymptotic MAP criterion and other state-of-the-art methods in terms of model selection, de-noising and prediction performance. The simulation code is available online.
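
    The flavour of such order-selection rules can be illustrated with a plain BIC-type criterion applied to a least-squares sinusoid fit — a deliberate simplification of the asymptotic MAP / lp-BIC machinery discussed above. The signal settings and the candidate-frequency heuristic (periodogram peaks) are assumptions.

```python
# BIC-type selection of the number of sinusoids in noise.
import numpy as np

rng = np.random.default_rng(4)
n = 256
t = np.arange(n)
f1, f2 = 28 / n, 58 / n                      # on the FFT grid, so peaks are single bins
y = (2.0 * np.cos(2 * np.pi * f1 * t + 0.3)
     + 1.0 * np.cos(2 * np.pi * f2 * t)
     + rng.normal(scale=1.0, size=n))

# Candidate frequencies: the strongest periodogram peaks (DC excluded).
spec = np.abs(np.fft.rfft(y)) ** 2
freqs = np.fft.rfftfreq(n)
order = np.argsort(spec[1:])[::-1] + 1
cand = freqs[order[:6]]

def bic(k):
    """BIC for a model with k sinusoids fitted by linear least squares."""
    cols = [np.ones(n)]
    for f in cand[:k]:
        cols += [np.cos(2 * np.pi * f * t), np.sin(2 * np.pi * f * t)]
    X = np.column_stack(cols)
    rss = np.sum((y - X @ np.linalg.lstsq(X, y, rcond=None)[0]) ** 2)
    n_par = 1 + 3 * k                         # intercept + amplitude/phase/frequency each
    return n * np.log(rss / n) + n_par * np.log(n)

scores = {k: bic(k) for k in range(5)}
print("selected number of sinusoids:", min(scores, key=scores.get))  # expect 2
```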

  17. Predictive Active Set Selection Methods for Gaussian Processes

    DEFF Research Database (Denmark)

    Henao, Ricardo; Winther, Ole

    2012-01-01

    We propose an active set selection framework for Gaussian process classification for cases when the dataset is large enough to render its inference prohibitive. Our scheme consists of a two step alternating procedure of active set update rules and hyperparameter optimization based upon marginal...... likelihood maximization. The active set update rules rely on the ability of the predictive distributions of a Gaussian process classifier to estimate the relative contribution of a datapoint when being either included or removed from the model. This means that we can use it to include points with potentially...... high impact to the classifier decision process while removing those that are less relevant. We introduce two active set rules based on different criteria, the first one prefers a model with interpretable active set parameters whereas the second puts computational complexity first, thus a model...

  18. Fundamental Aspects of Selective Melting Additive Manufacturing Processes

    Energy Technology Data Exchange (ETDEWEB)

    van Swol, Frank B. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Miller, James E. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2014-12-01

    Certain details of the additive manufacturing process known as selective laser melting (SLM) affect the performance of the final metal part. To unleash the full potential of SLM it is crucial that the process engineer in the field receives guidance about how to select values for a multitude of process variables employed in the building process. These include, for example, the type of powder (e.g., size distribution, shape, type of alloy), orientation of the build axis, the beam scan rate, the beam power density, the scan pattern and scan rate. The science-based selection of these settings constitutes an intrinsically challenging multi-physics problem involving heating and melting a metal alloy, and reactive, dynamic wetting followed by re-solidification. In addition, inherent to the process is its considerable variability, which stems from the powder packing. Each time a limited number of powder particles are placed, the stacking is intrinsically different from the previous one, possessing a different geometry and having a different set of contact areas with the surrounding particles. As a result, even if all other process parameters (scan rate, etc.) are exactly the same, the shape and contact geometry and area of the final melt pool will be unique to that particular configuration. This report identifies the most important issues facing SLM, discusses the fundamental physics associated with it and points out how modeling can support the additive manufacturing efforts.

  19. Modular process modeling for OPC

    Science.gov (United States)

    Keck, M. C.; Bodendorf, C.; Schmidtling, T.; Schlief, R.; Wildfeuer, R.; Zumpe, S.; Niehoff, M.

    2007-03-01

    Modular OPC modeling, describing mask, optics, resist and etch processes separately is an approach to keep efforts for OPC manageable. By exchanging single modules of a modular OPC model, a fast response to process changes during process development is possible. At the same time efforts can be reduced, since only single modular process steps have to be re-characterized as input for OPC modeling as the process is adjusted and optimized. Commercially available OPC tools for full chip processing typically make use of semi-empirical models. The goal of our work is to investigate to what extent these OPC tools can be applied for modeling of single process steps as separate modules. For an advanced gate level process we analyze the modeling accuracy over different process conditions (focus and dose) when combining models for each process step - optics, resist and etch - for differing single processes to a model describing the total process.

  20. Appropriate model selection methods for nonstationary generalized extreme value models

    Science.gov (United States)

    Kim, Hanbeen; Kim, Sooyoung; Shin, Hongjoon; Heo, Jun-Haeng

    2017-04-01

    Considerable evidence that hydrologic data series are nonstationary in nature has been found to date. This has resulted in many studies in the area of nonstationary frequency analysis. Nonstationary probability distribution models involve parameters that vary over time. Therefore, it is not a straightforward process to apply conventional goodness-of-fit tests to the selection of an appropriate nonstationary probability distribution model. Tests that are generally recommended for such a selection include the Akaike information criterion (AIC), corrected Akaike information criterion (AICc), Bayesian information criterion (BIC), and likelihood ratio test (LRT). In this study, a Monte Carlo simulation was performed to compare the performances of these four tests, with regard to nonstationary as well as stationary generalized extreme value (GEV) distributions. Proper model selection ratios and sample sizes were taken into account to evaluate the performances of all the four tests. The BIC demonstrated the best performance with regard to stationary GEV models. In the case of nonstationary GEV models, the AIC proved to be better than the other three methods when relatively small sample sizes were considered. With larger sample sizes, the AIC, BIC, and LRT presented the best performances for GEV models which have nonstationary location and/or scale parameters, respectively. Simulation results were then evaluated by applying all four tests to annual maximum rainfall data of selected sites, as observed by the Korea Meteorological Administration.
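
    A minimal sketch of the kind of comparison the study describes: a stationary GEV against a GEV whose location parameter drifts linearly in time, scored by AIC, BIC and a likelihood-ratio test on synthetic annual maxima. The trend size and parameter values are assumptions.

```python
# Stationary vs. trend-in-location GEV, compared by AIC, BIC and LRT.
import numpy as np
from scipy import stats, optimize

rng = np.random.default_rng(5)
years = np.arange(60)
x = stats.genextreme.rvs(c=-0.1, loc=100 + 0.5 * years, scale=15,
                         size=years.size, random_state=rng)

def nll_stat(p):
    c, loc, scale = p
    return -np.sum(stats.genextreme.logpdf(x, c, loc, scale))

def nll_trend(p):
    c, loc0, loc1, scale = p
    return -np.sum(stats.genextreme.logpdf(x, c, loc0 + loc1 * years, scale))

fit0 = optimize.minimize(nll_stat, [0.0, x.mean(), x.std()], method="Nelder-Mead")
fit1 = optimize.minimize(nll_trend, [0.0, x.mean(), 0.0, x.std()], method="Nelder-Mead")

n = x.size
for name, fit, k in [("stationary", fit0, 3), ("trend", fit1, 4)]:
    print(name, "AIC:", 2 * fit.fun + 2 * k, "BIC:", 2 * fit.fun + k * np.log(n))

lrt = 2 * (fit0.fun - fit1.fun)                 # both are minimised *negative* log-likelihoods
print("LRT p-value:", stats.chi2.sf(lrt, df=1))  # a small p favours the trend model
```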

  1. A Neurodynamical Model for Selective Visual Attention

    Institute of Scientific and Technical Information of China (English)

    QU Jing-Yi; WANG Ru-Bin; ZHANG Yuan; DU Ying

    2011-01-01

    Selective visual attention is a traditional problem in computer vision and robotics. A number of investigations have been made to clarify how the problem of object selection and segmentation is solved by the brain [1]. Niebur and Koch have presented a model for the experimental data recorded from the striate and extrastriate areas of the neocortex [2]. Corchs and Deco developed a model of visual conjunction-feature search where attention bias was modulated by top-down signals, from memory that coded target feature values to feature processing structures in the primary areas of the visual cortex [3]...

  2. Mathematical modeling of biological processes

    CERN Document Server

    Friedman, Avner

    2014-01-01

    This book on mathematical modeling of biological processes includes a wide selection of biological topics that demonstrate the power of mathematics and computational codes in setting up biological processes with a rigorous and predictive framework. Topics include: enzyme dynamics, spread of disease, harvesting bacteria, competition among live species, neuronal oscillations, transport of neurofilaments in axon, cancer and cancer therapy, and granulomas. Complete with a description of the biological background and biological question that requires the use of mathematics, this book is developed for graduate students and advanced undergraduate students with only basic knowledge of ordinary differential equations and partial differential equations; background in biology is not required. Students will gain knowledge on how to program with MATLAB without previous programming experience and how to use codes in order to test biological hypotheses.

  3. Selection of Technical Routes for Resid Processing

    Institute of Scientific and Technical Information of China (English)

    Hu Weiqing

    2006-01-01

    With the increasing supply of heavy crudes of deteriorating quality and the growing demand for clean fuels, deep processing of residuum, in particular the processing of low-grade resid, has become the main source of enhanced economic benefits for oil refiners. This article discusses the technology for processing of different resids and the advantages and disadvantages of the combination processes for resid processing, while pinpointing the directions for development and application of technologies for resid processing in China.

  4. Bayesian Constrained-Model Selection for Factor Analytic Modeling

    OpenAIRE

    Peeters, Carel F.W.

    2016-01-01

    My dissertation revolves around Bayesian approaches towards constrained statistical inference in the factor analysis (FA) model. Two interconnected types of restricted-model selection are considered. These types have a natural connection to selection problems in the exploratory FA (EFA) and confirmatory FA (CFA) model and are termed Type I and Type II model selection. Type I constrained-model selection is taken to mean the determination of the appropriate dimensionality of a model. This type ...

  5. Theory of Selection Operators on Hyperspaces and Multivalued Stochastic Processes

    Institute of Scientific and Technical Information of China (English)

    高勇; 张文修

    1994-01-01

    In this paper, a new concept of selection operators on hyperspaces (subsets spaces) is introduced, and the existence theorems for several kinds of selection operators are proved. Using the methods of selection operators, we give a selection characterization of identically distributed multivalued random variables and completely solve the vector-valued selection problem for sequences of multivalued random variables converging in distribution. The regular selections and Markov selections for multivalued stochastic processes are studied, and a discretization theorem for multivalued Markov processes is established. A theorem on the asymptotic martingale selections for compact and convex multivalued asymptotic martingale is proved.

  6. Quality Quandaries- Time Series Model Selection and Parsimony

    DEFF Research Database (Denmark)

    Bisgaard, Søren; Kulahci, Murat

    2009-01-01

    Some of the issues involved in selecting adequate models for time series data are discussed using an example concerning the number of users of an Internet server. The process of selecting an appropriate model is subjective and requires experience and judgment. The authors believe an important...... consideration in model selection should be parameter parsimony. They favor the use of parsimonious mixed ARMA models, noting that research has shown that a model building strategy that considers only autoregressive representations will lead to non-parsimonious models and to loss of forecasting accuracy....

  8. Animal models and conserved processes

    Directory of Open Access Journals (Sweden)

    Greek Ray

    2012-09-01

    Full Text Available Abstract Background The concept of conserved processes presents unique opportunities for using nonhuman animal models in biomedical research. However, the concept must be examined in the context that humans and nonhuman animals are evolved, complex, adaptive systems. Given that nonhuman animals are examples of living systems that are differently complex from humans, what does the existence of a conserved gene or process imply for inter-species extrapolation? Methods We surveyed the literature including philosophy of science, biological complexity, conserved processes, evolutionary biology, comparative medicine, anti-neoplastic agents, inhalational anesthetics, and drug development journals in order to determine the value of nonhuman animal models when studying conserved processes. Results Evolution through natural selection has employed components and processes both to produce the same outcomes among species and to generate different functions and traits. Many genes and processes are conserved, but new combinations of these processes or different regulation of the genes involved in these processes have resulted in unique organisms. Further, there is a hierarchy of organization in complex living systems. At some levels, the components are simple systems that can be analyzed by mathematics or the physical sciences, while at other levels the system cannot be fully analyzed by reducing it to a physical system. The study of complex living systems must alternate between focusing on the parts and examining the intact whole organism while taking into account the connections between the two. Systems biology aims for this holism. We examined the actions of inhalational anesthetic agents and anti-neoplastic agents in order to address what the characteristics of complex living systems imply for inter-species extrapolation of traits and responses related to conserved processes. Conclusion We conclude that even the presence of conserved processes is

  9. Model selection bias and Freedman's paradox

    Science.gov (United States)

    Lukacs, P.M.; Burnham, K.P.; Anderson, D.R.

    2010-01-01

    In situations where limited knowledge of a system exists and the ratio of data points to variables is small, variable selection methods can often be misleading. Freedman (Am Stat 37:152-155, 1983) demonstrated how common it is to select completely unrelated variables as highly "significant" when the number of data points is similar in magnitude to the number of variables. A new type of model averaging estimator based on model selection with Akaike's AIC is used with linear regression to investigate the problems of likely inclusion of spurious effects and model selection bias, the bias introduced while using the data to select a single seemingly "best" model from a (often large) set of models employing many predictor variables. The new model averaging estimator helps reduce these problems and provides confidence interval coverage at the nominal level while traditional stepwise selection has poor inferential properties. ?? The Institute of Statistical Mathematics, Tokyo 2009.
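
    A compact simulation in the spirit of Freedman (1983), as described above: regressing pure noise on many unrelated predictors, screening on p-values, and refitting still yields variables that look "significant". The dimensions and screening threshold are illustrative.

```python
# Freedman's paradox: spurious significance after data-driven variable screening.
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)
n, p = 100, 50                               # data points similar in scale to variables
X = rng.normal(size=(n, p))
y = rng.normal(size=n)                       # y is unrelated to every column of X

def t_pvalues(X, y):
    Xd = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(Xd, y, rcond=None)
    resid = y - Xd @ beta
    dof = len(y) - Xd.shape[1]
    sigma2 = resid @ resid / dof
    se = np.sqrt(sigma2 * np.diag(np.linalg.inv(Xd.T @ Xd)))
    return 2 * stats.t.sf(np.abs(beta / se), dof)[1:]   # drop the intercept

# Screening step: keep predictors with p < 0.25, then refit on the survivors.
keep = np.where(t_pvalues(X, y) < 0.25)[0]
p2 = t_pvalues(X[:, keep], y)
print(len(keep), "of", p, "noise variables survive screening")
print(np.sum(p2 < 0.05), "look 'significant' at the 5% level after refitting")
```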

  10. Selected Logistics Models and Techniques.

    Science.gov (United States)

    1984-09-01

    ACCESS PROCEDURE: On-Line System (OLS), UNINET. RCA maintains proprietary control of this model, and the model is available only through a lease arrangement. SPONSOR: ASD/ACCC

  11. On Activity modelling in process modeling

    Directory of Open Access Journals (Sweden)

    Dorel Aiordachioaie

    2001-12-01

    Full Text Available The paper looks at the dynamic feature of the meta-models of the process modelling process: time. Some principles are considered and discussed as main dimensions of any modelling activity: the compatibility of the substances, the equipresence of phenomena and the solvability of the model. The activity models are considered and represented at the meta-level.

  12. Evaluation of the selective detachment process in flotation froth

    Energy Technology Data Exchange (ETDEWEB)

    Honaker, R.Q.; Ozsever, A.V. [University of Kentucky, Lexington, KY (United States). Dept. for Mining Engineering

    2003-10-01

    The improved selectivity between particles of varying degrees of hydrophobicity in flotation froths has been well documented in the literature, especially for the deep froths utilized in flotation columns. The phenomenon is believed to be due to the selective detachment process whereby the least hydrophobic particles are released from the bubble surface upon bubble coalescence. To quantify the selective detachment process, column flotation experiments were performed under various operating conditions that provided varying amounts of reflux between the froth and collection zones. The flotation column incorporated the ability to provide instantaneous stoppage of the process streams and separation between the collection and froth zones after ensuring steady-state operation of the column. The samples collected from the two zones and process streams were evaluated to quantify the flotation rate distribution of the particles comprising each sample. The flotation rate was used as an indicator of the degree of hydrophobicity and thus a relative measure of the binding force between the particle and bubble in the froth zone. The flotation rate data were used as input into well-known flotation models to obtain the froth zone recovery rate and the quantity of material that refluxes between the collection and froth zones.

  13. [The model of adaptive primary image processing].

    Science.gov (United States)

    Dudkin, K N; Mironov, S V; Dudkin, A K; Chikhman, V N

    1998-07-01

    A computer model of adaptive segmentation of the 2D visual objects was developed. Primary image descriptions are realised via spatial frequency filters and feature detectors performing as self-organised mechanisms. Simulation of the control processes related to attention, lateral, frequency-selective and cross-orientation inhibition, determines the adaptive image processing.

  14. 7 CFR 1469.6 - Enrollment criteria and selection process.

    Science.gov (United States)

    2010-01-01

    7 CFR (Agriculture), 2010-01-01. General Provisions, § 1469.6 Enrollment criteria and selection process. (a) Selection and funding of... land to degradation; (iv) State or national conservation and environmental issues, e.g., location of...

  15. Creativity: Intuitive processing outperforms deliberative processing in creative idea selection

    NARCIS (Netherlands)

    Zhu, Y.; Ritter, S.M.; Müller, B.C.N.; Dijksterhuis, A.J.

    2017-01-01

    Creative ideas are highly valued, and various techniques have been designed to maximize the generation of creative ideas. However, for actual implementation of creative ideas, the most creative ideas must be recognized and selected from a pool of ideas. Although idea generation and idea selection ar

  17. Auditory processing models

    DEFF Research Database (Denmark)

    Dau, Torsten

    2008-01-01

    The Handbook of Signal Processing in Acoustics will compile the techniques and applications of signal processing as they are used in the many varied areas of Acoustics. The Handbook will emphasize the interdisciplinary nature of signal processing in acoustics. Each Section of the Handbook will pr...

  18. MODEL SELECTION FOR SPECTROPOLARIMETRIC INVERSIONS

    Energy Technology Data Exchange (ETDEWEB)

    Asensio Ramos, A.; Manso Sainz, R.; Martinez Gonzalez, M. J.; Socas-Navarro, H. [Instituto de Astrofisica de Canarias, E-38205, La Laguna, Tenerife (Spain); Viticchie, B. [ESA/ESTEC RSSD, Keplerlaan 1, 2200 AG Noordwijk (Netherlands); Orozco Suarez, D., E-mail: aasensio@iac.es [National Astronomical Observatory of Japan, Mitaka, Tokyo 181-8588 (Japan)

    2012-04-01

    Inferring magnetic and thermodynamic information from spectropolarimetric observations relies on the assumption of a parameterized model atmosphere whose parameters are tuned by comparison with observations. Often, the choice of the underlying atmospheric model is based on subjective reasons. In other cases, complex models are chosen based on objective reasons (for instance, the necessity to explain asymmetries in the Stokes profiles) but it is not clear what degree of complexity is needed. The lack of an objective way of comparing models has, sometimes, led to opposing views of the solar magnetism because the inferred physical scenarios are essentially different. We present the first quantitative model comparison based on the computation of the Bayesian evidence ratios for spectropolarimetric observations. Our results show that there is not a single model appropriate for all profiles simultaneously. Data with moderate signal-to-noise ratios (S/Ns) favor models without gradients along the line of sight. If the observations show clear circular and linear polarization signals above the noise level, models with gradients along the line are preferred. As a general rule, observations with large S/Ns favor more complex models. We demonstrate that the evidence ratios correlate well with simple proxies. Therefore, we propose to calculate these proxies when carrying out standard least-squares inversions to allow for model comparison in the future.

  19. Genetic search feature selection for affective modeling

    DEFF Research Database (Denmark)

    Martínez, Héctor P.; Yannakakis, Georgios N.

    2010-01-01

    Automatic feature selection is a critical step towards the generation of successful computational models of affect. This paper presents a genetic search-based feature selection method which is developed as a global-search algorithm for improving the accuracy of the affective models built...

  20. The Optimal Selection for Restricted Linear Models with Average Estimator

    Directory of Open Access Journals (Sweden)

    Qichang Xie

    2014-01-01

    Full Text Available The essential task of risk investment is to select an optimal tracking portfolio among various portfolios. Statistically, this process can be achieved by choosing an optimal restricted linear model. This paper develops a statistical procedure to do this, based on selecting appropriate weights for averaging approximately restricted models. The method of weighted average least squares is adopted to estimate the approximately restricted models under a dependent error setting. The optimal weights are selected by minimizing a k-class generalized information criterion (k-GIC), which is an estimate of the average squared error from the model average fit. This model selection procedure is shown to be asymptotically optimal in the sense of obtaining the lowest possible average squared error. Monte Carlo simulations illustrate that the suggested method has comparable efficiency to some alternative model selection techniques.
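
    A schematic version of averaging restricted linear models with weights chosen by an information-type criterion. A Mallows-style Cp score is used here as a stand-in for the k-GIC, and the candidate models, data and penalty are illustrative assumptions.

```python
# Model averaging over nested restricted OLS fits, weights chosen by a Cp-style criterion.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(7)
n = 150
X = rng.normal(size=(n, 4))
y = X @ np.array([1.0, 0.5, 0.0, 0.0]) + rng.normal(size=n)

# Candidate restricted models: use the first k regressors, k = 1..4.
def fit(k):
    Xk = X[:, :k]
    H = Xk @ np.linalg.inv(Xk.T @ Xk) @ Xk.T          # hat matrix
    return H @ y, k                                    # fitted values, effective df

fits, dfs = zip(*(fit(k) for k in range(1, 5)))
F = np.column_stack(fits)                              # n x 4 matrix of fitted values
sigma2 = np.sum((y - F[:, -1]) ** 2) / (n - 4)         # error variance from the largest model

def criterion(w):                                      # Cp-style averaging criterion
    resid = y - F @ w
    return resid @ resid + 2 * sigma2 * np.dot(w, dfs)

cons = ({"type": "eq", "fun": lambda w: w.sum() - 1},)
res = minimize(criterion, np.full(4, 0.25), bounds=[(0, 1)] * 4, constraints=cons)
print("model-averaging weights:", np.round(res.x, 3))
```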

  1. GREAT Process Modeller user manual

    OpenAIRE

    Rueda, Urko; España, Sergio; Ruiz, Marcela

    2015-01-01

    This report contains instructions to install, uninstall and use GREAT Process Modeller, a tool that supports Communication Analysis, a communication-oriented business process modelling method. GREAT allows creating communicative event diagrams (i.e. business process models), specifying message structures (which describe the messages associated to each communicative event), and automatically generating a class diagram (representing the data model of an information system that would support suc...

  2. INNOVATION PROCESS MODELLING

    Directory of Open Access Journals (Sweden)

    JANUSZ K. GRABARA

    2011-01-01

    Full Text Available Modelling phenomena in accordance with the structural approach enables one to simplify the observed relations and to present the grounds for classification. An example may be a model of organisational structure identifying the logical relations between particular units and presenting the division of authority and work.

  3. Folklore and the College Selection Process Revisited

    Science.gov (United States)

    Caruso, Pete

    2012-01-01

    This paper is a response to Clinton F. Conrad's article, "Beyond the Folklore." Conrad's strategy for assessing undergraduate quality echoes the sentiments espoused by many admission and college counseling professionals over the years at various workshops for students and families that focus on navigating the process. As transcendent as the…

  4. Job Aiding/Training Decision Process Model

    Science.gov (United States)

    1992-09-01

    AL-CR-1992-0004; AD-A256 947. Job Aiding/Training Decision Process Model. John P. Zenyuh, Phillip C... Report period: March 1990 - April 1990. Funding numbers: contract F33615-86-C-0545, PE 62205F, PR 1121. Contents include: Components to Process Model Decision and Selection Points; Summary of Subject Recommendations for Aiding Approaches.

  5. Titanium processing using selective laser sintering

    Science.gov (United States)

    Harlan, Nicole Renee

    1999-11-01

    A materials development workstation specifically designed to test high temperature metal and metal-matrix composites for direct selective laser sintering (SLS) was constructed. Using the workstation, a titanium-aluminum alloy was sintered into single layer coupons to demonstrate the feasibility of producing titanium components using direct SLS. A combination of low temperature indirect SLS and colloidal infiltration was used to create "partially-stabilized" zirconia molds for titanium casting. The base material, stabilized zirconia mixed with a copolymer, was laser sintered into the desired mold geometry. The copolymer was pyrolyzed and replaced by a zirconia precursor. The flexural strength and surface roughness of the SLS-produced casting molds were sufficient for titanium casting trials. A laser-scanned human femur was used as the basis for a mold design and technology demonstration. Titanium castings produced from SLS molds exhibited typical as-cast microstructures and an average surface roughness (Ra) of 8 μm.

  6. A Decision Model for Selecting Participants in Supply Chain

    Institute of Scientific and Technical Information of China (English)

    2001-01-01

    In order to satisfy the rapidly changing requirements of customers, enterprises must cooperate with each other to form a supply chain. The first and most important stage in forming a supply chain is the selection of participants. The article proposes a two-stage decision model to select partners. The first stage is an inter-company comparison in each business process to select high-efficiency candidates based on internal variables. The second stage is to analyse combinations of different candidates in order to select the best partners according to a goal-programming model.

  7. Using Card Games to Simulate the Process of Natural Selection

    Science.gov (United States)

    Grilliot, Matthew E.; Harden, Siegfried

    2014-01-01

    In 1858, Darwin published "On the Origin of Species by Means of Natural Selection." His explanation of evolution by natural selection has become the unifying theme of biology. We have found that many students do not fully comprehend the process of evolution by natural selection. We discuss a few simple games that incorporate hands-on…

  8. Solvent selection methodology for pharmaceutical processes: Solvent swap

    DEFF Research Database (Denmark)

    Papadakis, Emmanouil; Kumar Tula, Anjan; Gani, Rafiqul

    2016-01-01

    A method for the selection of appropriate solvents for the solvent swap task in pharmaceutical processes has been developed. This solvent swap method is based on the solvent selection method of Gani et al. (2006) and considers additional selection criteria such as boiling point difference, volati...

  9. Intermediate product selection and blending in the food processing industry

    DEFF Research Database (Denmark)

    Kilic, Onur A.; Akkerman, Renzo; van Donk, Dirk Pieter

    2013-01-01

    This study addresses a capacitated intermediate product selection and blending problem typical for two-stage production systems in the food processing industry. The problem involves the selection of a set of intermediates and end-product recipes characterising how those selected intermediates...

  10. Process for selective grinding of coal

    Science.gov (United States)

    Venkatachari, Mukund K.; Benz, August D.; Huettenhain, Horst

    1991-01-01

    A process for preparing coal for use as a fuel: forming a coal-water slurry having solid coal particles with a particle size not exceeding about 80 microns, transferring the coal-water slurry to a solid bowl centrifuge, and operating the same to classify the ground coal-water slurry to provide a centrate containing solid particles with a particle size distribution of from about 5 microns to about 20 microns and a centrifuge cake of solids having a particle size distribution of from about 10 microns to about 80 microns. The classifier cake is reground and mixed with fresh feed to the solid bowl centrifuge for additional classification.

  11. BPMN Impact on Process Modeling

    OpenAIRE

    Polak, Przemyslaw

    2013-01-01

    Recent years have seen a huge rise in the popularity of BPMN in the area of business process modeling, especially among business analysts. This notation has characteristics that distinguish it significantly from previously popular process modeling notations, such as EPC. The article contains an analysis of some important characteristics of BPMN and provides the author's conclusions on the impact that the popularity and specificity of BPMN can have on the practice of process modeling. Author's obse...

  12. Processing plant persistent strains of Listeria monocytogenes appear to have a lower virulence potential than clinical strains in selected virulence models

    DEFF Research Database (Denmark)

    Jensen, Anne; Thomsen, L.E.; Jørgensen, R.L.

    2008-01-01

    cell line, Caco-2; time to death in a nematode model, Caenorhabditis elegans, and in a fruit fly model, Drosophila melanogaster; and fecal shedding in a guinea pig model. All strains adhered to and grew in Caco-2 cells at similar levels. When exposed to 10^6 CFU/ml, two strains representing...

  13. Random Effect and Latent Variable Model Selection

    CERN Document Server

    Dunson, David B

    2008-01-01

    Presents various methods for accommodating model uncertainty in random effects and latent variable models. This book covers frequentist likelihood ratio and score tests for zero variance components, as well as Bayesian methods for random effects selection in linear mixed effects and generalized linear mixed models.

  14. Radiolysis Process Model

    Energy Technology Data Exchange (ETDEWEB)

    Buck, Edgar C.; Wittman, Richard S.; Skomurski, Frances N.; Cantrell, Kirk J.; McNamara, Bruce K.; Soderquist, Chuck Z.

    2012-07-17

    Assessing the performance of spent (used) nuclear fuel in a geological repository requires quantification of time-dependent phenomena that may influence its behavior on a time-scale up to millions of years. A high-level waste repository environment will be a dynamic redox system because of the time-dependent generation of radiolytic oxidants and reductants and the corrosion of Fe-bearing canister materials. One major difference between used fuel and natural analogues, including unirradiated UO2, is the intense radiolytic field. The radiation emitted by used fuel can produce radiolysis products in the presence of water vapor or a thin film of water (including the OH• and H• radicals, O2−, eaq−, H2O2, H2, and O2) that may increase the waste form degradation rate and change radionuclide behavior. Since H2O2 is the dominant oxidant for spent nuclear fuel in an O2-depleted water environment, the most sensitive parameters have been identified with respect to predictions of a radiolysis model under typical conditions. Compared with the full model of about 100 reactions, it was found that only 30-40 of the reactions are required to determine [H2O2] to one part in 10^5 and to preserve most of the predictions for major species. This allows a systematic approach for model simplification and offers guidance in designing experiments for validation.
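
    The reaction-screening idea, dropping reactions whose removal changes the predicted target concentration by less than one part in 10^5, can be illustrated on a toy kinetic scheme. This is an editorial sketch, not the authors' code: the three-reaction system, rate constants, and tolerance below are placeholders for the ~100-reaction radiolysis set.

        import numpy as np
        from scipy.integrate import solve_ivp

        # Toy 3-reaction scheme standing in for the ~100-reaction radiolysis set:
        # r0: A -> B,  r1: B -> C,  r2: B -> A (back reaction)
        k = np.array([1.0, 0.5, 0.05])

        def rhs(t, y, keep):
            A, B, C = y
            r = keep * k * np.array([A, B, B])        # reaction rates, masked
            return [-r[0] + r[2], r[0] - r[1] - r[2], r[1]]

        def species_B(keep):
            sol = solve_ivp(rhs, (0.0, 10.0), [1.0, 0.0, 0.0], args=(keep,), rtol=1e-10)
            return sol.y[1, -1]

        full = species_B(np.ones(3))
        # Keep only reactions whose removal shifts the target concentration by
        # more than one part in 1e5, mirroring the screening described above.
        essential = [i for i in range(3)
                     if abs(species_B(np.where(np.arange(3) == i, 0.0, 1.0)) - full)
                        / abs(full) > 1e-5]
        print(essential)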

  15. Modeling of column apparatus processes

    CERN Document Server

    Boyadjiev, Christo; Boyadjiev, Boyan; Popova-Krumova, Petya

    2016-01-01

    This book presents a new approach for the modeling of chemical and interphase mass transfer processes in industrial column apparatuses, using convection-diffusion and average-concentration models. The convection-diffusion type models are used for a qualitative analysis of the processes and to assess the main, small and slight physical effects, and then reject the slight effects. As a result, the process mechanism can be identified. It also introduces average concentration models for quantitative analysis, which use the average values of the velocity and concentration over the cross-sectional area of the column. The new models are used to analyze different processes (simple and complex chemical reactions, absorption, adsorption and catalytic reactions), and make it possible to model the processes of gas purification with sulfur dioxide, which form the basis of several patents.

  16. The Added Value of the Project Selection Process

    Directory of Open Access Journals (Sweden)

    Adel Oueslati

    2016-06-01

    Full Text Available The project selection process comes at the first stage of the overall project management life cycle. It has a very important impact on organizational success. The present paper provides definitions of the basic concepts and tools related to the project selection process. It aims to stress the added value of this process for the success of the entire organization. Mastery of the project selection process is the right way for any organization to ensure that it will do the right project with the right resources at the right time and within the right priorities.

  17. Review and selection of unsaturated flow models

    Energy Technology Data Exchange (ETDEWEB)

    Reeves, M.; Baker, N.A.; Duguid, J.O. [INTERA, Inc., Las Vegas, NV (United States)

    1994-04-04

    Since the 1960s, ground-water flow models have been used for analysis of water resources problems. In the 1970s, emphasis began to shift to analysis of waste management problems. This shift in emphasis was largely brought about by site selection activities for geologic repositories for disposal of high-level radioactive wastes. Model development during the 1970s and well into the 1980s focused primarily on saturated ground-water flow because geologic repositories in salt, basalt, granite, shale, and tuff were envisioned to be below the water table. Selection of the unsaturated zone at Yucca Mountain, Nevada, for potential disposal of waste began to shift model development toward unsaturated flow models. Under the US Department of Energy (DOE), the Civilian Radioactive Waste Management System Management and Operating Contractor (CRWMS M&O) has the responsibility to review, evaluate, and document existing computer models; to conduct performance assessments; and to develop performance assessment models, where necessary. This document describes the CRWMS M&O approach to model review and evaluation (Chapter 2), and the requirements for unsaturated flow models which are the bases for selection from among the current models (Chapter 3). Chapter 4 identifies existing models, and their characteristics. Through a detailed examination of characteristics, Chapter 5 presents the selection of models for testing. Chapter 6 discusses the testing and verification of selected models. Chapters 7 and 8 give conclusions and make recommendations, respectively. Chapter 9 records the major references for each of the models reviewed. Appendix A, a collection of technical reviews for each model, contains a more complete list of references. Finally, Appendix B characterizes the problems used for model testing.

  18. Application of simulation models for the optimization of business processes

    Science.gov (United States)

    Jašek, Roman; Sedláček, Michal; Chramcov, Bronislav; Dvořák, Jiří

    2016-06-01

    The paper deals with applications of modeling and simulation tools to the optimization of business processes, especially to optimizing signal flow in a security company. Simul8 was selected as the modeling tool; it supports process modeling based on discrete event simulation and enables the creation of a visual model of production and distribution processes.

  19. Model Identification of Integrated ARMA Processes

    Science.gov (United States)

    Stadnytska, Tetiana; Braun, Simone; Werner, Joachim

    2008-01-01

    This article evaluates the Smallest Canonical Correlation Method (SCAN) and the Extended Sample Autocorrelation Function (ESACF), automated methods for the Autoregressive Integrated Moving-Average (ARIMA) model selection commonly available in current versions of SAS for Windows, as identification tools for integrated processes. SCAN and ESACF can…
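
    SCAN and ESACF themselves are not reproduced here, but the simpler information-criterion route to ARIMA identification that they complement can be sketched as a brute-force order search. The simulated data, grid bounds, and use of statsmodels are assumptions; d is held at 1 since the series is integrated.

        import numpy as np
        from statsmodels.tsa.arima.model import ARIMA

        rng = np.random.default_rng(0)
        e = rng.standard_normal(301)
        y = np.cumsum(e[1:] + 0.6 * e[:-1])          # integrated MA(1): ARIMA(0,1,1)

        # Grid-search (p, q) with d fixed at 1 and keep the lowest-AIC model.
        orders = [(p, 1, q) for p in range(3) for q in range(3)]
        best = min(orders, key=lambda o: ARIMA(y, order=o).fit().aic)
        print(best)                                   # typically (0, 1, 1)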

  1. Modeling Software Processes and Artifacts

    NARCIS (Netherlands)

    van den Berg, Klaas; Bosch, Jan; Mitchell, Stuart

    1997-01-01

    The workshop on Modeling Software Processes and Artifacts explored the application of object technology in process modeling. After the introduction and the invited lecture, a number of participants presented their position papers. First, an overview is given of some background work, and the aims, as

  2. Genetic search feature selection for affective modeling

    DEFF Research Database (Denmark)

    Martínez, Héctor P.; Yannakakis, Georgios N.

    2010-01-01

    Automatic feature selection is a critical step towards the generation of successful computational models of affect. This paper presents a genetic search-based feature selection method which is developed as a global-search algorithm for improving the accuracy of the affective models built. The method is tested and compared against sequential forward feature selection and random search in a dataset derived from a game survey experiment which contains bimodal input features (physiological and gameplay) and expressed pairwise preferences of affect. Results suggest that the proposed method…

  3. Selecting public relations personnel of hospitals by analytic network process.

    Science.gov (United States)

    Liao, Sen-Kuei; Chang, Kuei-Lun

    2009-01-01

    This study describes the use of the analytic network process (ANP) in the Taiwanese hospital public relations personnel selection process. Starting with interviews of 48 practitioners and executives in north Taiwan, we collected selection criteria. We then retained the 12 critical criteria that were mentioned more than 40 times by these respondents: interpersonal skill, experience, negotiation, language, ability to follow orders, cognitive ability, adaptation to environment, adaptation to company, emotion, loyalty, attitude, and response. Finally, we discussed with the 20 executives how to group these important criteria into three perspectives to structure the hierarchy for hospital public relations personnel selection. After discussing with practitioners and executives, we found that the selection criteria are interrelated. The ANP, which incorporates interdependence relationships, is a new approach for multi-criteria decision-making. Thus, we apply ANP to select the optimal public relations personnel for hospitals. An empirical study of public relations personnel selection problems in Taiwan hospitals is conducted to illustrate how the selection procedure works.
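
    The core numerical step of ANP, raising the weighted supermatrix to powers until it converges to stable global priorities, can be sketched as follows. The 3x3 matrix is a made-up example, not data from the study, and cyclic supermatrices would need a Cesàro average instead of plain squaring.

        import numpy as np

        def limit_priorities(W, tol=1e-9, max_iter=60):
            """Limit matrix of a column-stochastic ANP supermatrix via
            repeated squaring."""
            M = W.copy()
            for _ in range(max_iter):
                M2 = M @ M
                if np.max(np.abs(M2 - M)) < tol:
                    return M2
                M = M2
            raise RuntimeError("supermatrix did not converge")

        # Hypothetical 3-element weighted supermatrix (columns sum to 1).
        W = np.array([[0.2, 0.5, 0.3],
                      [0.5, 0.1, 0.4],
                      [0.3, 0.4, 0.3]])
        print(limit_priorities(W)[:, 0])              # stable global priorities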

  4. A SUPPLIER SELECTION MODEL FOR SOFTWARE DEVELOPMENT OUTSOURCING

    Directory of Open Access Journals (Sweden)

    Hancu Lucian-Viorel

    2010-12-01

    Full Text Available This paper presents a multi-criteria decision-making model used for supplier selection for software development outsourcing on e-marketplaces. This model can be used in auctions. The supplier selection process has become complex and difficult over the last twenty years, as the Internet plays an important role in business management. Companies have to concentrate their efforts on their core activities, and the other activities should be outsourced. They can achieve significant cost reductions by using e-marketplaces in their purchase process and by using decision support systems for supplier selection. Many approaches for the supplier evaluation and selection process have been proposed in the literature. The performance of potential suppliers is evaluated using multi-criteria decision-making methods rather than considering cost as a single factor.

  5. Business process modeling in healthcare.

    Science.gov (United States)

    Ruiz, Francisco; Garcia, Felix; Calahorra, Luis; Llorente, César; Gonçalves, Luis; Daniel, Christel; Blobel, Bernd

    2012-01-01

    The importance of the process point of view is not restricted to a specific enterprise sector. In the field of health, as a result of the nature of the service offered, health institutions' processes are also the basis for decision making, which is focused on achieving their objective of providing quality medical assistance. In this chapter the application of business process modelling, using the Business Process Modelling Notation (BPMN) standard, is described. The main challenges of business process modelling in healthcare are the definition of healthcare processes, the multi-disciplinary nature of healthcare, the flexibility and variability of the activities involved in healthcare processes, the need for interoperability between multiple information systems, and the continuous updating of scientific knowledge in healthcare.

  6. Selection of the NIR region for a regression model of the ethanol concentration in fermentation process by an online NIR and mid-IR dual-region spectrometer and 2D heterospectral correlation spectroscopy.

    Science.gov (United States)

    Nishii, Takashi; Genkawa, Takuma; Watari, Masahiro; Ozaki, Yukihiro

    2012-01-01

    A new selection procedure of an informative near-infrared (NIR) region for regression model building is proposed that uses an online NIR/mid-infrared (mid-IR) dual-region spectrometer in conjunction with two-dimensional (2D) NIR/mid-IR heterospectral correlation spectroscopy. In this procedure, both NIR and mid-IR spectra of a liquid sample are acquired sequentially during a reaction process using the NIR/mid-IR dual-region spectrometer; the 2D NIR/mid-IR heterospectral correlation spectrum is subsequently calculated from the obtained spectral data set. From the calculated 2D spectrum, a NIR region is selected that includes bands of high positive correlation intensity with mid-IR bands assigned to the analyte, and used for the construction of a regression model. To evaluate the performance of this procedure, a partial least-squares (PLS) regression model of the ethanol concentration in a fermentation process was constructed. During fermentation, NIR/mid-IR spectra in the 10000-1200 cm^-1 region were acquired every 3 min, and a 2D NIR/mid-IR heterospectral correlation spectrum was calculated to investigate the correlation intensity between the NIR and mid-IR bands. NIR regions that include bands at 4343, 4416, 5778, 5904, and 5955 cm^-1, which result from the combinations and overtones of the C-H group of ethanol, were selected for use in the PLS regression models, by taking the correlation intensity of a mid-IR band at 2985 cm^-1 arising from the CH3 asymmetric stretching vibration mode of ethanol as a reference. The predicted results indicate that the ethanol concentrations calculated from the PLS regression models fit well to those obtained by high-performance liquid chromatography. Thus, it can be concluded that the selection procedure using the NIR/mid-IR dual-region spectrometer combined with 2D NIR/mid-IR heterospectral correlation spectroscopy is a powerful method for the construction of a reliable regression model.
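
    A rough sketch of the band-selection-plus-regression pipeline follows. It replaces the full 2D heterospectral correlation spectrum with its 1D backbone, the correlation of each NIR variable with the mid-IR reference band near 2985 cm^-1; all array names, the top_k cutoff, and the component count are assumptions, not values from the record.

        import numpy as np
        from sklearn.cross_decomposition import PLSRegression

        def select_nir_region(X_nir, x_mir, top_k=50):
            """Rank NIR variables by correlation with the analyte's mid-IR
            reference band. X_nir: (samples, wavenumbers); x_mir: (samples,)."""
            Xc = X_nir - X_nir.mean(axis=0)
            mc = x_mir - x_mir.mean()
            corr = Xc.T @ mc / (np.linalg.norm(Xc, axis=0) * np.linalg.norm(mc))
            return np.argsort(corr)[-top_k:]          # most positively correlated

        def fit_ethanol_model(X_nir, x_mir, y, n_components=3):
            """PLS regression on the selected NIR region; y is the reference
            ethanol concentration (e.g., from HPLC)."""
            idx = select_nir_region(X_nir, x_mir)
            return PLSRegression(n_components=n_components).fit(X_nir[:, idx], y), idx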

  7. Modeling nuclear processes by Simulink

    Energy Technology Data Exchange (ETDEWEB)

    Rashid, Nahrul Khair Alang Md, E-mail: nahrul@iium.edu.my [Faculty of Engineering, International Islamic University Malaysia, Jalan Gombak, Selangor (Malaysia)

    2015-04-29

    Modelling and simulation are essential parts of the study of dynamic system behaviour. In nuclear engineering, modelling and simulation are important for assessing the expected results of an experiment before the actual experiment is conducted, and in the design of nuclear facilities. In education, modelling can give insight into the dynamics of systems and processes. Most nuclear processes can be described by ordinary or partial differential equations. Efforts expended to solve the equations using analytical or numerical solutions consume time and distract attention from the objectives of modelling itself. This paper presents the use of Simulink, a MATLAB toolbox that is widely used in control engineering, as a modelling platform for the study of nuclear processes, including nuclear reactor behaviour. Starting from the describing equations, Simulink models for heat transfer, the radionuclide decay process, the delayed neutron effect, reactor point kinetics equations with delayed neutron groups, and the effect of temperature feedback are used as examples.

  8. Modeling nuclear processes by Simulink

    Science.gov (United States)

    Rashid, Nahrul Khair Alang Md

    2015-04-01

    Modelling and simulation are essential parts of the study of dynamic system behaviour. In nuclear engineering, modelling and simulation are important for assessing the expected results of an experiment before the actual experiment is conducted, and in the design of nuclear facilities. In education, modelling can give insight into the dynamics of systems and processes. Most nuclear processes can be described by ordinary or partial differential equations. Efforts expended to solve the equations using analytical or numerical solutions consume time and distract attention from the objectives of modelling itself. This paper presents the use of Simulink, a MATLAB toolbox that is widely used in control engineering, as a modelling platform for the study of nuclear processes, including nuclear reactor behaviour. Starting from the describing equations, Simulink models for heat transfer, the radionuclide decay process, the delayed neutron effect, reactor point kinetics equations with delayed neutron groups, and the effect of temperature feedback are used as examples.

  9. The genealogy of samples in models with selection.

    Science.gov (United States)

    Neuhauser, C; Krone, S M

    1997-02-01

    We introduce the genealogy of a random sample of genes taken from a large haploid population that evolves according to random reproduction with selection and mutation. Without selection, the genealogy is described by Kingman's well-known coalescent process. In the selective case, the genealogy of the sample is embedded in a graph with a coalescing and branching structure. We describe this graph, called the ancestral selection graph, and point out differences and similarities with Kingman's coalescent. We present simulations for a two-allele model with symmetric mutation in which one of the alleles has a selective advantage over the other. We find that when the allele frequencies in the population are already in equilibrium, then the genealogy does not differ much from the neutral case. This is supported by rigorous results. Furthermore, we describe the ancestral selection graph for other selective models with finitely many selection classes, such as the K-allele models, infinitely-many-alleles models, DNA sequence models, and infinitely-many-sites models, and briefly discuss the diploid case.
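
    The forward-time companion of this setting, a two-allele haploid model with symmetric mutation and one selectively favored allele, can be simulated in a few lines. This sketch is editorial (the parameter values are arbitrary) and does not reproduce the ancestral selection graph machinery itself.

        import numpy as np

        def wright_fisher(N=1000, s=0.01, mu=1e-3, gens=2000, p0=0.5, seed=None):
            """Two-allele haploid model: selection (advantage s for allele A),
            symmetric mutation at rate mu, and binomial drift."""
            rng = np.random.default_rng(seed)
            p, traj = p0, []
            for _ in range(gens):
                w = p * (1 + s) / (p * (1 + s) + (1 - p))   # selection
                w = (1 - mu) * w + mu * (1 - w)             # symmetric mutation
                p = rng.binomial(N, w) / N                  # genetic drift
                traj.append(p)
            return np.array(traj)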

  10. Layerwise Monitoring of the Selective Laser Melting Process by Thermography

    Science.gov (United States)

    Krauss, Harald; Zeugner, Thomas; Zaeh, Michael F.

    Selective Laser Melting is utilized to build parts directly from CAD data. In this study layerwise monitoring of the temperature distribution is used to gather information about the process stability and the resulting part quality. The heat distribution varies with different kinds of parameters including scan vector length, laser power, layer thickness and inter-part distance in the job layout. By integration of an off-axis mounted uncooled thermal detector, the solidification as well as the layer deposition are monitored and evaluated. This enables the identification of hot spots in an early stage during the solidification process and helps to avoid process interrupts. Potential quality indicators are derived from spatially resolved measurement data and are correlated to the resulting part properties. A model of heat dissipation is presented based on the measurement of the material response for varying heat input. Current results show the feasibility of process surveillance by thermography for a limited section of the building platform in a commercial system.

  11. Bayesian site selection for fast Gaussian process regression

    KAUST Repository

    Pourhabib, Arash

    2014-02-05

    Gaussian Process (GP) regression is a popular method in the field of machine learning and computer experiment designs; however, its ability to handle large data sets is hindered by the computational difficulty in inverting a large covariance matrix. Likelihood approximation methods were developed as a fast GP approximation, thereby reducing the computation cost of GP regression by utilizing a much smaller set of unobserved latent variables called pseudo points. This article reports a further improvement to the likelihood approximation methods by simultaneously deciding both the number and locations of the pseudo points. The proposed approach is a Bayesian site selection method where both the number and locations of the pseudo inputs are parameters in the model, and the Bayesian model is solved using a reversible jump Markov chain Monte Carlo technique. Through a number of simulated and real data sets, it is demonstrated that with appropriate priors chosen, the Bayesian site selection method can produce a good balance between computation time and prediction accuracy: it is fast enough to handle large data sets that a full GP is unable to handle, and it improves, quite often remarkably, the prediction accuracy, compared with the existing likelihood approximations. © 2014 Taylor and Francis Group, LLC.

  12. Homology modeling, docking studies and molecular dynamic simulations using graphical processing unit architecture to probe the type-11 phosphodiesterase catalytic site: a computational approach for the rational design of selective inhibitors.

    Science.gov (United States)

    Cichero, Elena; D'Ursi, Pasqualina; Moscatelli, Marco; Bruno, Olga; Orro, Alessandro; Rotolo, Chiara; Milanesi, Luciano; Fossa, Paola

    2013-12-01

    Phosphodiesterase 11 (PDE11) is the latest isoform of the PDEs family to be identified, acting on both cyclic adenosine monophosphate and cyclic guanosine monophosphate. The initial reports of PDE11 found evidence for PDE11 expression in skeletal muscle, prostate, testis, and salivary glands; however, the tissue distribution of PDE11 still remains a topic of active study and some controversy. Given the sequence similarity between PDE11 and PDE5, several PDE5 inhibitors have been shown to cross-react with PDE11. Accordingly, many non-selective inhibitors, such as IBMX, zaprinast, sildenafil, and dipyridamole, have been documented to inhibit PDE11. Only recently, a series of dihydrothieno[3,2-d]pyrimidin-4(3H)-one derivatives proved to be selective toward the PDE11 isoform. In the absence of experimental data on PDE11 X-ray structures, we found it interesting to gain a better understanding of the enzyme-inhibitor interactions using in silico simulations. In this work, we describe a computational approach based on homology modeling, docking, and molecular dynamics simulation to derive a predictive 3D model of PDE11. Using a Graphical Processing Unit architecture, it is possible to perform long simulations, find stable interactions involved in the complex, and finally suggest guidelines for the identification and synthesis of potent and selective inhibitors.

  13. EIS and adjunct electrical modeling for material selection by evaluating two mild steels for use in super-alkaline mineral processing

    DEFF Research Database (Denmark)

    Bakhtiyari, Leila; Moghimi, Fereshteh; Mansouri, Seyed Soheil

    2012-01-01

    The production of metal concentrates during mineral processing of ferrous and non-ferrous metals involves a variety of highly corrosive chemicals which deteriorate common mild steel as the material of choice in the construction of such lines, through rapid propagation of localized pitting...

  14. Selection Criteria in Regime Switching Conditional Volatility Models

    Directory of Open Access Journals (Sweden)

    Thomas Chuffart

    2015-05-01

    Full Text Available A large number of nonlinear conditional heteroskedastic models have been proposed in the literature. Model selection is crucial to any statistical data analysis. In this article, we investigate whether the most commonly used selection criteria lead to choice of the right specification in a regime switching framework. We focus on two types of models: the Logistic Smooth Transition GARCH and the Markov-Switching GARCH models. Simulation experiments reveal that information criteria and loss functions can lead to misspecification; BIC sometimes indicates the wrong regime switching framework. Depending on the Data Generating Process used in the experiments, great care is needed when choosing a criterion.

  15. Model selection for Gaussian kernel PCA denoising

    DEFF Research Database (Denmark)

    Jørgensen, Kasper Winther; Hansen, Lars Kai

    2012-01-01

    We propose kernel Parallel Analysis (kPA) for automatic kernel scale and model order selection in Gaussian kernel PCA. Parallel Analysis [1] is based on a permutation test for covariance and has previously been applied for model order selection in linear PCA; we here augment the procedure to also tune the Gaussian kernel scale of radial basis function based kernel PCA. We evaluate kPA for denoising of simulated data and the US Postal data set of handwritten digits. We find that kPA outperforms other heuristics to choose the model order and kernel scale in terms of signal-to-noise ratio (SNR…
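
    The permutation logic of Parallel Analysis carried over to kernel PCA can be sketched as follows: retain the components whose centered-kernel eigenvalues exceed the 95th percentile of the eigenvalues obtained after independently permuting each input column. This is an editorial reconstruction of the general idea, not the authors' implementation; names and defaults are assumptions.

        import numpy as np

        def rbf_kernel(X, scale):
            d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
            return np.exp(-d2 / (2.0 * scale ** 2))

        def center(K):
            n = K.shape[0]
            J = np.eye(n) - np.ones((n, n)) / n
            return J @ K @ J

        def kpa_model_order(X, scale, n_perm=20, seed=None):
            """Number of kernel PCA components retained by a PA-style test."""
            rng = np.random.default_rng(seed)
            ev = np.sort(np.linalg.eigvalsh(center(rbf_kernel(X, scale))))[::-1]
            null = []
            for _ in range(n_perm):
                Xp = np.column_stack([rng.permutation(col) for col in X.T])
                null.append(np.sort(np.linalg.eigvalsh(center(rbf_kernel(Xp, scale))))[::-1])
            thresh = np.percentile(null, 95, axis=0)
            return int(np.sum(ev > thresh))

    Sweeping `scale` over a grid and keeping the setting that maximizes denoising SNR would mirror the joint kernel-scale tuning described above.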

  16. Melody Track Selection Using Discriminative Language Model

    Science.gov (United States)

    Wu, Xiao; Li, Ming; Suo, Hongbin; Yan, Yonghong

    In this letter we focus on the task of selecting the melody track from a polyphonic MIDI file. Based on the intuition that music and language are similar in many aspects, we solve the selection problem by introducing an n-gram language model to learn the melody co-occurrence patterns in a statistical manner and determine the melodic degree of a given MIDI track. Furthermore, we propose the idea of using background model and posterior probability criteria to make modeling more discriminative. In the evaluation, the achieved 81.6% correct rate indicates the feasibility of our approach.
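
    The track-scoring idea, score each MIDI track's pitch sequence under an n-gram melody model and pick the best, can be sketched with a length-normalized, add-one-smoothed bigram model. The background-model and posterior-probability refinements from the letter are omitted, and all function names are assumptions.

        import math
        from collections import Counter

        def train(melodies):
            """Count bigrams over pitch sequences of known melody tracks."""
            counts, unigrams = Counter(), Counter()
            for m in melodies:
                counts.update(zip(m, m[1:]))
                unigrams.update(m[:-1])
            vocab = len({p for m in melodies for p in m})
            return counts, unigrams, vocab

        def melodic_score(track, counts, unigrams, vocab):
            """Length-normalized, add-one-smoothed bigram log-probability."""
            lp = sum(math.log((counts[(a, b)] + 1) / (unigrams[a] + vocab))
                     for a, b in zip(track, track[1:]))
            return lp / max(len(track) - 1, 1)

        def select_melody_track(tracks, model):
            return max(tracks, key=lambda t: melodic_score(t, *model))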

  17. Sato Processes in Default Modeling

    DEFF Research Database (Denmark)

    Kokholm, Thomas; Nicolato, Elisa

    In reduced form default models, the instantaneous default intensity is classically the modeling object. Survival probabilities are then given by the Laplace transform of the cumulative hazard defined as the integrated intensity process. Instead, recent literature has shown a tendency towards specifying the cumulative hazard process directly. Within this framework we present a new model class where cumulative hazards are described by self-similar additive processes, also known as Sato processes. Furthermore we also analyze specifications obtained via a simple deterministic time-change of a homogeneous Levy process. While the processes in these two classes share the same average behavior over time, the associated intensities exhibit very different properties. Concrete specifications are calibrated to data on the single names included in the iTraxx Europe index. The performances are compared

  18. Sato Processes in Default Modeling

    DEFF Research Database (Denmark)

    Kokholm, Thomas; Nicolato, Elisa

    2010-01-01

    In reduced form default models, the instantaneous default intensity is the classical modeling object. Survival probabilities are then given by the Laplace transform of the cumulative hazard defined as the integrated intensity process. Instead, recent literature tends to specify the cumulative hazard process directly. Within this framework we present a new model class where cumulative hazards are described by self-similar additive processes, also known as Sato processes. Furthermore, we analyze specifications obtained via a simple deterministic time-change of a homogeneous Lévy process. While the processes in these two classes share the same average behavior over time, the associated intensities exhibit very different properties. Concrete specifications are calibrated to data on all the single names included in the iTraxx Europe index. The performances are compared with those of the classical CIR

  19. Methods for model selection in applied science and engineering.

    Energy Technology Data Exchange (ETDEWEB)

    Field, Richard V., Jr.

    2004-10-01

    Mathematical models are developed and used to study the properties of complex systems and/or modify these systems to satisfy some performance requirements in just about every area of applied science and engineering. A particular reason for developing a model, e.g., performance assessment or design, is referred to as the model use. Our objective is the development of a methodology for selecting a model that is sufficiently accurate for an intended use. Information on the system being modeled is, in general, incomplete, so that there may be two or more models consistent with the available information. The collection of these models is called the class of candidate models. Methods are developed for selecting the optimal member from a class of candidate models for the system. The optimal model depends on the available information, the selected class of candidate models, and the model use. Classical methods for model selection, including the method of maximum likelihood and Bayesian methods, as well as a method employing a decision-theoretic approach, are formulated to select the optimal model for numerous applications. There is no requirement that the candidate models be random. Classical methods for model selection ignore model use and require data to be available. Examples are used to show that these methods can be unreliable when data is limited. The decision-theoretic approach to model selection does not have these limitations, and model use is included through an appropriate utility function. This is especially important when modeling high risk systems, where the consequences of using an inappropriate model for the system can be disastrous. The decision-theoretic method for model selection is developed and applied for a series of complex and diverse applications. These include the selection of: (1) the optimal order of the polynomial chaos approximation for non-Gaussian random variables and stationary stochastic processes, and (2) the optimal pressure load model to be

  20. Modeling HIV-1 drug resistance as episodic directional selection.

    Directory of Open Access Journals (Sweden)

    Ben Murrell

    Full Text Available The evolution of substitutions conferring drug resistance to HIV-1 is both episodic, occurring when patients are on antiretroviral therapy, and strongly directional, with site-specific resistant residues increasing in frequency over time. While methods exist to detect episodic diversifying selection and continuous directional selection, no evolutionary model combining these two properties has been proposed. We present two models of episodic directional selection (MEDS and EDEPS), which allow the a priori specification of lineages expected to have undergone directional selection. The models infer the sites and target residues that were likely subject to directional selection, using either codon or protein sequences. Compared to its null model of episodic diversifying selection, MEDS provides a superior fit to most sites known to be involved in drug resistance, and neither a test for episodic diversifying selection nor one for constant directional selection is able to detect as many true positives as MEDS and EDEPS while maintaining acceptable levels of false positives. This suggests that episodic directional selection is a better description of the process driving the evolution of drug resistance.

  1. Risk calculations in the manufacturing technology selection process

    DEFF Research Database (Denmark)

    Farooq, S.; O'Brien, C.

    2010-01-01

    Purpose - The purpose of this paper is to present results obtained from a developed technology selection framework and to provide a detailed insight into the risk calculations and their implications in the manufacturing technology selection process. Design/methodology/approach - The results illustrated in the paper are the outcome of an action research study that was conducted in an aerospace company. Findings - The paper highlights the role of risk calculations in the manufacturing technology selection process by elaborating the contribution of risk associated with manufacturing technology alternatives... Originality/value - The paper explains the process of risk calculation in manufacturing technology selection by dividing the decision-making environment into manufacturing... and supports an industrial manager in achieving objective and comprehensive decisions regarding the selection of a manufacturing technology.

  2. Modelling of CWS combustion process

    Science.gov (United States)

    Rybenko, I. A.; Ermakova, L. A.

    2016-10-01

    The paper considers the combustion process of coal water slurry (CWS) drops. A physico-chemical process scheme consisting of several independent parallel-sequential stages is proposed. This scheme of the drop combustion process is supported by particle size distribution tests and stereomicroscopic analysis of the combustion products. The results of mathematical modelling and optimization of stationary regimes of CWS combustion are provided. During modelling, the problem of defining the possible equilibrium composition of products obtainable from CWS combustion processes at different temperatures is solved.

  3. Natural Selection Is a Sorting Process: What Does that Mean?

    Science.gov (United States)

    Price, Rebecca M.

    2013-01-01

    To learn why natural selection acts only on existing variation, students categorize processes as either creative or sorting. This activity helps students confront the misconception that adaptations evolve because species need them.

  4. Process for selected gas oxide removal by radiofrequency catalysts

    Science.gov (United States)

    Cha, Chang Y.

    1993-01-01

    This process removes gas oxides from flue gas by adsorption on a char bed, followed by radiofrequency catalysis that enhances removal through selected reactions. Common gas oxides include SO2 and NOx.

  5. A process for selection and training of super-users for ERP implementation projects

    DEFF Research Database (Denmark)

    Danielsen, Peter; Sandfeld Hansen, Kenneth; Helt, Mads

    2017-01-01

    -users in practice. To address this research gap, we analyze the case of an ERP implementation program at a large manufacturing company. We combine Katz's widely accepted skill measurement model with the process observed in practice to describe and test a model of super-user selection and training. The resulting model contains a systematic process of super-user development and highlights the specific skillsets required in different phases of the selection and training process. Our results from a comparative assessment of management expectations and super-user skills in the ERP program show that the model can

  6. Social Models: Blueprints or Processes?

    Science.gov (United States)

    Little, Graham R.

    1981-01-01

    Discusses the nature and implications of two different models for societal planning: (1) the problem-solving process approach based on Karl Popper; and (2) the goal-setting "blueprint" approach based on Karl Marx. (DC)

  7. Fuzzy MCDM Model for Risk Factor Selection in Construction Projects

    Directory of Open Access Journals (Sweden)

    Pejman Rezakhani

    2012-11-01

    Full Text Available Risk factor selection is an important step in a successful risk management plan. There are many risk factors in a construction project, and by an effective and systematic risk selection process the most critical risks can be singled out for closer attention. In this paper, through a comprehensive literature survey, the most significant risk factors in a construction project are classified in a hierarchical structure. For effective risk factor selection, a modified rational multi-criteria decision-making (MCDM) model is developed. This model is a consensus rule based model and has the optimization property of rational models. By applying fuzzy logic to this model, uncertainty factors in group decision making, such as experts' influence weights and their preferences and judgments for the risk selection criteria, can be assessed. Also, an intelligent checking process to check the logical consistency of experts' preferences is implemented during the decision-making process. The solution inferred from this method enjoys the highest degree of acceptance among group members, and the consistency of individual preferences is checked by inference rules. This is an efficient and effective approach to prioritizing and selecting risks based on decisions made by a group of experts in construction projects. The applicability of the presented method is assessed through a case study.

  8. Bayesian variable selection for latent class models.

    Science.gov (United States)

    Ghosh, Joyee; Herring, Amy H; Siega-Riz, Anna Maria

    2011-09-01

    In this article, we develop a latent class model with class probabilities that depend on subject-specific covariates. One of our major goals is to identify important predictors of latent classes. We consider methodology that allows estimation of latent classes while allowing for variable selection uncertainty. We propose a Bayesian variable selection approach and implement a stochastic search Gibbs sampler for posterior computation to obtain model-averaged estimates of quantities of interest such as marginal inclusion probabilities of predictors. Our methods are illustrated through simulation studies and application to data on weight gain during pregnancy, where it is of interest to identify important predictors of latent weight gain classes.

  9. Model feedstock supply processing plants

    Directory of Open Access Journals (Sweden)

    V. M. Bautin

    2013-01-01

    Full Text Available A model of raw-material supply is developed for processing enterprises that belong to a vertically integrated structure for the production and processing of dairy raw materials. The model is distinguished by its orientation toward achieving a cumulative effect for the integrated structure, which acts as the criterion function; it is maximized by optimizing capacities, the volumes of raw material deliveries and their qualitative characteristics, the costs of industrial processing of the raw materials, and the demand for dairy products.

  10. Pedagogic process modeling: Humanistic-integrative approach

    Directory of Open Access Journals (Sweden)

    Boritko Nikolaj M.

    2007-01-01

    Full Text Available The paper deals with some current problems of modeling the dynamics of the subject-features development of the individual. The term "process" is considered in the context of the humanistic-integrative approach, in which the principles of self education are regarded as criteria for efficient pedagogic activity. Four basic characteristics of the pedagogic process are pointed out: intentionality reflects the logicality and regularity of the development of the process; discreteness (stageability) indicates the qualitative stages through which the pedagogic phenomenon passes; nonlinearity explains the crisis character of pedagogic processes and reveals inner factors of self-development; situationality requires a selection of pedagogic conditions in accordance with the inner factors, which would enable steering the pedagogic process. Two steps are offered for singling out a particular stage, together with an algorithm for developing an integrative model for it. The suggested conclusions might be of use for further theoretical research, analyses of educational practices, and for realistic prediction of pedagogical phenomena.

  11. A Hybrid Multiple Criteria Decision Making Model for Supplier Selection

    Directory of Open Access Journals (Sweden)

    Chung-Min Wu

    2013-01-01

    Full Text Available Sustainable supplier selection is a vital part of the management of a sustainable supply chain. In this study, a hybrid multiple criteria decision making (MCDM) model is applied to select the optimal supplier. The fuzzy Delphi method, which can lead to better criteria selection, is used to modify the criteria. Considering the interdependence among the selection criteria, the analytic network process (ANP) is then used to obtain their weights. To avoid the calculations and additional pairwise comparisons of ANP, a technique for order preference by similarity to ideal solution (TOPSIS) is used to rank the alternatives. The use of a combination of the fuzzy Delphi method, ANP, and TOPSIS, proposing an MCDM model for supplier selection, and applying it to a real case are the unique features of this study.
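
    The final ranking stage can be sketched directly: TOPSIS orders alternatives by their relative closeness to the ideal solution under the criteria weights (here standing in for the ANP-derived weights). The decision matrix and weights below are made-up illustrations, not data from the study.

        import numpy as np

        def topsis(D, w, benefit):
            """Rank alternatives (rows of D) by closeness to the ideal solution.
            w: criteria weights; benefit: True where larger values are better."""
            R = D / np.linalg.norm(D, axis=0)          # vector-normalized matrix
            V = R * w                                  # weighted normalized matrix
            ideal = np.where(benefit, V.max(0), V.min(0))
            worst = np.where(benefit, V.min(0), V.max(0))
            d_pos = np.linalg.norm(V - ideal, axis=1)
            d_neg = np.linalg.norm(V - worst, axis=1)
            return d_neg / (d_pos + d_neg)             # higher = better

        scores = topsis(np.array([[7, 9, 9], [8, 7, 8], [9, 6, 8.5]], float),
                        w=np.array([0.5, 0.3, 0.2]),
                        benefit=np.array([True, True, True]))
        print(np.argsort(scores)[::-1])                # supplier ranking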

  12. A review of channel selection algorithms for EEG signal processing

    Science.gov (United States)

    Alotaiby, Turky; El-Samie, Fathi E. Abd; Alshebeili, Saleh A.; Ahmad, Ishtiaq

    2015-12-01

    Digital processing of electroencephalography (EEG) signals has now been popularly used in a wide variety of applications such as seizure detection/prediction, motor imagery classification, mental task classification, emotion classification, sleep state classification, and drug effects diagnosis. With the large number of EEG channels acquired, it has become apparent that efficient channel selection algorithms are needed with varying importance from one application to another. The main purpose of the channel selection process is threefold: (i) to reduce the computational complexity of any processing task performed on EEG signals by selecting the relevant channels and hence extracting the features of major importance, (ii) to reduce the amount of overfitting that may arise due to the utilization of unnecessary channels, for the purpose of improving the performance, and (iii) to reduce the setup time in some applications. Signal processing tools such as time-domain analysis, power spectral estimation, and wavelet transform have been used for feature extraction and hence for channel selection in most of channel selection algorithms. In addition, different evaluation approaches such as filtering, wrapper, embedded, hybrid, and human-based techniques have been widely used for the evaluation of the selected subset of channels. In this paper, we survey the recent developments in the field of EEG channel selection methods along with their applications and classify these methods according to the evaluation approach.
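
    A minimal example of the filtering approach, ranking channels by the Fisher score of a per-channel feature, is sketched below. It is one representative of the filter family surveyed above; the data shapes and the log-variance band-power proxy are assumptions.

        import numpy as np

        def rank_channels(X, y):
            """Filter-style EEG channel ranking by Fisher score.
            X: (trials, channels, samples); y: (trials,) class labels."""
            feat = np.log(X.var(axis=2))              # crude band-power proxy
            scores = []
            for ch in range(feat.shape[1]):
                f = feat[:, ch]
                groups = [f[y == c] for c in np.unique(y)]
                between = sum(len(g) * (g.mean() - f.mean()) ** 2 for g in groups)
                within = sum(((g - g.mean()) ** 2).sum() for g in groups)
                scores.append(between / within)
            return np.argsort(scores)[::-1]           # most discriminative first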

  13. Evidence accumulation as a model for lexical selection.

    Science.gov (United States)

    Anders, R; Riès, S; van Maanen, L; Alario, F X

    2015-11-01

    We propose and demonstrate evidence accumulation as a plausible theoretical and/or empirical model for the lexical selection process of lexical retrieval. A number of current psycholinguistic theories consider lexical selection as a process of selecting a lexical target from a number of alternatives, each of which has a varying activation (or signal support) largely resulting from initial stimulus recognition. We thoroughly present a case for how such a process may be theoretically explained by the evidence accumulation paradigm, and we demonstrate how this paradigm can be directly related to, or combined with, conventional psycholinguistic theory and its simulatory instantiations (generally, neural network models). Then, with a demonstrative application on a large new real data set, we establish how the empirical evidence accumulation approach is able to provide parameter results that are informative to leading psycholinguistic theory and that motivate future theoretical development. Copyright © 2015 Elsevier Inc. All rights reserved.
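
    A toy instance of the paradigm, independent noisy accumulators racing to a common threshold, one per lexical candidate, can be sketched as follows. The parameters are illustrative only, and the authors' specific accumulator variant is not implied.

        import numpy as np

        def race_trial(drifts, threshold=1.0, noise=0.3, dt=0.001, seed=None):
            """One lexical-selection trial: independent noisy accumulators,
            one per candidate word, race to a common threshold."""
            rng = np.random.default_rng(seed)
            d = np.asarray(drifts, dtype=float)       # activation / signal support
            x = np.zeros(len(d))
            t = 0.0
            while x.max() < threshold:
                x += d * dt + noise * np.sqrt(dt) * rng.standard_normal(len(d))
                x = np.maximum(x, 0.0)                # activations stay non-negative
                t += dt
            return int(np.argmax(x)), t               # winning word, response time

        choice, rt = race_trial([1.2, 0.8, 0.5], seed=1)   # target vs two competitors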

  14. Selection process for trade study: Graphite Composite Primary Structure (GCPS)

    Science.gov (United States)

    Greenberg, H. S.

    1994-01-01

    This TA 2 document describes the selection process that will be used to identify the most suitable structural configuration for an SSTO winged vehicle capable of delivering 25,000 lbs to a 220 nm circular orbit at 51.6 degree inclination. The most suitable unpressurized graphite composite structures and material selections within this configuration will form the prototype design for subsequent design and analysis, and the basis for the design and fabrication of payload bay, wing, and thrust structure full-scale test articles representing segments of the prototype structures. The selection process for this TA 2 trade study is the same as that for the TA 1 trade study. As the trade study progresses, additional insight may result in modifications to the selection criteria within this process. Such modifications will result in an update of this document as appropriate.

  15. AN EXPERT SYSTEM MODEL FOR THE SELECTION OF TECHNICAL PERSONNEL

    Directory of Open Access Journals (Sweden)

    Emine COŞGUN

    2005-03-01

    Full Text Available In this study, a model has been developed for the selection of technical personnel. In the model, Visual Basic is used for the user interface, Microsoft Access as the database system, and the CLIPS program as the expert system component. The proposed model has been developed by utilizing expert system technology. In the personnel selection process, only the pre-evaluation of the applicants has been taken into consideration. Instead of replacing the expert himself, a decision support program has been developed to analyze the data gathered from the job application forms. The study will assist the expert in making faster and more accurate decisions.

  16. MODEL SELECTION FOR LOG-LINEAR MODELS OF CONTINGENCY TABLES

    Institute of Scientific and Technical Information of China (English)

    ZHAO Lincheng; ZHANG Hong

    2003-01-01

    In this paper, we propose an information-theoretic-criterion-based model selection procedure for log-linear model of contingency tables under multinomial sampling, and establish the strong consistency of the method under some mild conditions. An exponential bound of miss detection probability is also obtained. The selection procedure is modified so that it can be used in practice. Simulation shows that the modified method is valid. To avoid selecting the penalty coefficient in the information criteria, an alternative selection procedure is given.

  17. Computer Modelling of Dynamic Processes

    Directory of Open Access Journals (Sweden)

    B. Rybakin

    2000-10-01

    Full Text Available Results of numerical modeling of dynamic problems are summarized in the article. These problems are characteristic of various areas of human activity, in particular of problem solving in ecology. The following problems are considered in the present work: computer modeling of dynamic effects on elastic-plastic bodies, calculation and determination of the performance of gas streams in gas cleaning equipment, and modeling of biogas formation processes.

  18. Command Process Modeling & Risk Analysis

    Science.gov (United States)

    Meshkat, Leila

    2011-01-01

    Commanding errors may be caused by a variety of root causes. It is important to understand the relative significance of each of these causes when making institutional investment decisions. One of these causes is the lack of standardized processes and procedures for command and control. We mitigate this problem by building periodic tables and models corresponding to key functions within it. These models include simulation analysis and probabilistic risk assessment models.

  19. Path modeling and process control

    DEFF Research Database (Denmark)

    Høskuldsson, Agnar; Rodionova, O.; Pomerantsev, A.

    2007-01-01

    Many production processes are carried out in stages. At the end of each stage, the production engineer can analyze the intermediate results and correct process parameters (variables) of the next stage. Both analysis of the process and correction to process parameters at the next stage should... and having three or more stages. The methods are applied to a process control of a multi-stage production process having 25 variables and one output variable. When moving along the process, variables change their roles. It is shown how the methods of path modeling can be applied to estimate variables of the next stage with the purpose of obtaining optimal or almost optimal quality of the output variable. An important aspect of the methods presented is the possibility of extensive graphic analysis of data that can provide the engineer with a detailed view of the multi-variate variation in data.

  20. Modelling Hospital Materials Management Processes

    Directory of Open Access Journals (Sweden)

    Raffaele Iannone

    2013-06-01

    integrated and detailed analysis and description model for hospital materials management data and tasks, which is able to tackle information from patient requirements to usage, from replenishment requests to supplying and handling activities. The model takes account of medical risk reduction, traceability, and streamlined process perspectives. Second, the paper translates this information into a business process model and mathematical formalization. The study provides a useful guide to the various relevant technology-related, management and business issues, laying the foundations of an efficient reengineering of the supply chain to reduce healthcare costs and improve the quality of care.

  1. Testing exclusion restrictions and additive separability in sample selection models

    DEFF Research Database (Denmark)

    Huber, Martin; Mellace, Giovanni

    2014-01-01

    Standard sample selection models with non-randomly censored outcomes assume (i) an exclusion restriction (i.e., a variable affecting selection, but not the outcome) and (ii) additive separability of the errors in the selection process. This paper proposes tests for the joint satisfaction of these assumptions by applying the approach of Huber and Mellace (Testing instrument validity for LATE identification based on inequality moment constraints, 2011), developed for testing instrument validity under treatment endogeneity, to the sample selection framework. We show that the exclusion restriction and additive separability imply two testable inequality constraints that come from both point identifying and bounding the outcome distribution of the subpopulation that is always selected/observed. We apply the tests to two variables for which the exclusion restriction is frequently invoked in female wage regressions: non

  2. Adverse selection model regarding tobacco consumption

    Directory of Open Access Journals (Sweden)

    Dumitru MARIN

    2006-01-01

    Full Text Available The impact of introducing a tax on tobacco consumption can be studied through an adverse selection model. The objective of the model presented in the following is to characterize the optimal contractual relationship between the governmental authorities and the two types of employees, smokers and non-smokers, taking into account that the consumers' decision to smoke or not represents an element of risk and uncertainty. Two scenarios are run using the General Algebraic Modeling System (GAMS) software: one without taxes on tobacco consumption and another with taxes on tobacco consumption, based on the adverse selection model described previously. The results of the two scenarios are compared at the end of the paper: the wage earnings levels and the social welfare in the case of a smoking agent and in the case of a non-smoking agent.

  3. Concurrent materials and process selection in conceptual design

    Energy Technology Data Exchange (ETDEWEB)

    Kleban, Stephen D.; Knorovsky, Gerald A.

    2000-08-16

    A method for concurrent selection of materials and a joining process based on product requirements using a knowledge-based, constraint satisfaction approach facilitates the product design and manufacturing process. Using a Windows-based computer video display and a data base of materials and their properties, the designer can ascertain the preferred composition of two parts based on various operating/environmental constraints such as load, temperature, lifetime, etc. Optimum joinder of the two parts may simultaneously be determined using a joining process data base based upon the selected composition of the components as well as the operating/environmental constraints.

  4. Comparative analysis of business rules and business process modeling languages

    Directory of Open Access Journals (Sweden)

    Audrius Rima

    2013-03-01

    When developing an information system, it is important to create clear models and choose suitable modeling languages. The article analyzes the SRML, SBVR, PRR, SWRL and OCL rule specification languages and the UML, DFD, CPN, EPC, IDEF3 and BPMN business process modeling languages. It presents a theoretical comparison of business rules and business process modeling languages, and compares the different business process modeling languages and business rules representation languages according to selected modeling aspects. Finally, the best-fitting set of languages is selected for a three-layer framework for business-rule-based software modeling.

  5. Cellular scanning strategy for selective laser melting: Generating reliable, optimized scanning paths and processing parameters

    DEFF Research Database (Denmark)

    Mohanty, Sankhya; Hattel, Jesper Henri

    2015-01-01

    Selective laser melting is yet to become a standardized industrial manufacturing technique. The process continues to suffer from defects such as distortions, residual stresses, localized deformations and warpage, caused primarily by the localized heating, rapid cooling and high temperature gradients that occur during the process. While process monitoring and control of selective laser melting is an active area of research, establishing the reliability and robustness of the process still remains a challenge. In this paper, a methodology for generating reliable, optimized scanning paths and process parameters for selective laser melting of a standard sample is introduced. The processing of the sample is simulated by sequentially coupling a calibrated 3D pseudo-analytical thermal model with a 3D finite element mechanical model. The optimized processing parameters are subjected to a Monte Carlo...

  6. ASYMMETRIC PRICE TRANSMISSION MODELING: THE IMPORTANCE OF MODEL COMPLEXITY AND THE PERFORMANCE OF THE SELECTION CRITERIA

    Directory of Open Access Journals (Sweden)

    Henry de-Graft Acquah

    2013-01-01

    Information criteria provide an attractive basis for selecting the best model from a set of competing asymmetric price transmission models or theories. However, little is understood about the sensitivity of the model selection methods to model complexity. This study therefore fits competing asymmetric price transmission models that differ in complexity to simulated data and evaluates the ability of the model selection methods to recover the true model. The results of Monte Carlo experimentation suggest that in general BIC, CAIC and DIC were superior to AIC when the true data generating process was the standard error correction model, whereas AIC was more successful when the true model was the complex error correction model. It is also shown that the model selection methods performed better in large samples for a complex asymmetric data generating process than with a standard asymmetric data generating process. Except for complex models, AIC's performance did not make substantial gains in recovery rates as sample size increased. The research findings demonstrate the influence of model complexity in asymmetric price transmission model comparison and selection.
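
    The recovery experiment described here is easy to reproduce in miniature. The following sketch (with invented linear models standing in for the price transmission specifications) shows how AIC/BIC recovery rates are typically estimated by Monte Carlo:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def fit_ols(X, y):
        """OLS fit; returns residual sum of squares and parameter count."""
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        return np.sum((y - X @ beta) ** 2), X.shape[1]

    def aic(rss, k, n):
        return n * np.log(rss / n) + 2 * k

    def bic(rss, k, n):
        return n * np.log(rss / n) + k * np.log(n)

    n, trials = 100, 500
    recovered_aic = recovered_bic = 0
    for _ in range(trials):
        x = rng.normal(size=n)
        y = 1.0 + 0.5 * x + rng.normal(size=n)            # true model: simple
        X_simple = np.column_stack([np.ones(n), x])
        X_complex = np.column_stack([np.ones(n), x, x**2, x**3])  # over-parameterised
        scores = {}
        for name, X in [("simple", X_simple), ("complex", X_complex)]:
            rss, k = fit_ols(X, y)
            scores[name] = (aic(rss, k, n), bic(rss, k, n))
        recovered_aic += min(scores, key=lambda m: scores[m][0]) == "simple"
        recovered_bic += min(scores, key=lambda m: scores[m][1]) == "simple"

    print(f"AIC recovery rate: {recovered_aic / trials:.2f}")
    print(f"BIC recovery rate: {recovered_bic / trials:.2f}")
    ```

    BIC's heavier complexity penalty typically recovers the simple true model more often, mirroring the pattern the record reports.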

  7. A Theoretical Model for Selective Exposure Research.

    Science.gov (United States)

    Roloff, Michael E.; Noland, Mark

    This study tests the basic assumptions underlying Fishbein's Model of Attitudes by correlating an individual's selective exposure to types of television programs (situation comedies, family drama, and action/adventure) with the attitudinal similarity between individual attitudes and attitudes characterized on the programs. Twenty-three college…

  8. Robust model selection and the statistical classification of languages

    Science.gov (United States)

    García, J. E.; González-López, V. A.; Viola, M. L. L.

    2012-10-01

    In this paper we address the problem of model selection for the set of finite memory stochastic processes with finite alphabet, when the data is contaminated. We consider m independent samples, with more than half of them being realizations of the same stochastic process with law Q, which is the one we want to retrieve. We devise a model selection procedure such that for a sample size large enough, the selected process is the one with law Q. Our model selection strategy is based on estimating relative entropies to select a subset of samples that are realizations of the same law. Although the procedure is valid for any family of finite order Markov models, we will focus on the family of variable length Markov chain models, which include the fixed order Markov chain model family. We define the asymptotic breakdown point (ABDP) for a model selection procedure, and we show the ABDP for our procedure. This means that if the proportion of contaminated samples is smaller than the ABDP, then, as the sample size grows, our procedure selects a model for the process with law Q. We also use our procedure in a setting where we have one sample formed by the concatenation of sub-samples of two or more stochastic processes, with most of the sub-samples having law Q. We conducted a simulation study. In the application section we address the question of the statistical classification of languages according to their rhythmic features using speech samples. This is an important open problem in phonology. A persistent difficulty with this problem is that the speech samples consist of several sentences produced by diverse speakers, and therefore correspond to a mixture of distributions. The usual procedure for dealing with this has been to choose a subset of the original sample which seems to best represent each language, the selection being made by listening to the samples. In our application we use the full dataset without any preselection of samples. We apply our robust methodology estimating...
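
    The core idea — estimate pairwise relative entropies and keep the majority subset of mutually close samples — can be sketched as follows (an i.i.d. categorical toy in place of variable length Markov chains, so this conveys only the flavour of the procedure, not the authors' estimator):

    ```python
    import numpy as np

    def empirical_dist(sample, alphabet_size):
        counts = np.bincount(sample, minlength=alphabet_size) + 1e-9  # smoothed
        return counts / counts.sum()

    def kl(p, q):
        return float(np.sum(p * np.log(p / q)))

    rng = np.random.default_rng(1)
    A = 3
    q_law = np.array([0.6, 0.3, 0.1])         # law Q we want to retrieve
    contaminant = np.array([0.1, 0.2, 0.7])   # contaminating law
    samples = [rng.choice(A, 400, p=q_law) for _ in range(4)] + \
              [rng.choice(A, 400, p=contaminant) for _ in range(2)]

    dists = [empirical_dist(s, A) for s in samples]
    m = len(dists)
    # symmetrised pairwise relative entropies
    D = np.array([[kl(dists[i], dists[j]) + kl(dists[j], dists[i])
                   for j in range(m)] for i in range(m)])

    # keep the majority subset: samples closest (in median distance) to the rest
    medians = np.array([np.median(np.delete(D[i], i)) for i in range(m)])
    k = m // 2 + 1            # more than half are assumed to share law Q
    selected = np.sort(np.argsort(medians)[:k])
    print("selected samples:", selected)      # expected: the four Q-samples
    pooled = empirical_dist(np.concatenate([samples[i] for i in selected]), A)
    print("estimate of Q:", np.round(pooled, 3))
    ```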

  9. Hencky's model for elastomer forming process

    Science.gov (United States)

    Oleinikov, A. A.; Oleinikov, A. I.

    2016-08-01

    In the numerical simulation of elastomer forming processes, Hencky's isotropic hyperelastic material model can guarantee relatively accurate prediction of the strain range in terms of large deformations. It is shown that this material model extends Hooke's law from the range of infinitesimal strains to moderate strains. A new representation of the fourth-order elasticity tensor for Hencky's hyperelastic isotropic material is obtained; it possesses both minor symmetries and the major symmetry. The constitutive relations of the considered model are implemented into the MSC.Marc code. By calculating and fitting curves, the polyurethane elastomer material constants are selected. Simulations of equipment for elastomer sheet forming are considered.

  10. Direct Slicing Based on Material Performance and Process Parameters for Selective Laser Sintering

    Institute of Scientific and Technical Information of China (English)

    2006-01-01

    Direct slicing from CAD models to generate sectional contours of the part to be sintered for Selective Laser Sintering (SLS) may overcome inherent disadvantages of using a Stereo Lithography (STL) format. In this paper, a direct slicing procedure is proposed for Selective Laser Sintering based on material performance and process parameters. Slicing thickness depends on the 3D geometric model, material performance and process parameters. The relationship among material performance, process parameters and the largest slicing thickness is established using analysis of a sintering temperature field. A dynamic linked library is developed to realize direct slicing from a CAD model.

  11. Introduction to gas lasers with emphasis on selective excitation processes

    CERN Document Server

    Willett, Colin S

    1974-01-01

    Introduction to Gas Lasers: Population Inversion Mechanisms focuses on important processes in gas discharge lasers and basic atomic collision processes that operate in a gas laser. Organized into six chapters, this book first discusses the historical development and basic principles of gas lasers. Subsequent chapters describe the selective excitation processes in gas discharges and the specific neutral, ionized and molecular laser systems. This book will be a valuable reference on the behavior of gas-discharge lasers to anyone already in the field.

  12. Additive Manufacturing Processes: Selective Laser Melting, Electron Beam Melting and Binder Jetting-Selection Guidelines.

    Science.gov (United States)

    Gokuldoss, Prashanth Konda; Kolla, Sri; Eckert, Jürgen

    2017-06-19

    Additive manufacturing (AM), also known as 3D printing or rapid prototyping, is gaining increasing attention due to its ability to produce parts with added functionality and increased complexities in geometrical design, on top of the fact that it is theoretically possible to produce any shape without limitations. However, most of the research on additive manufacturing techniques is focused on the development of materials/process parameters/products design with different additive manufacturing processes such as selective laser melting, electron beam melting, or binder jetting. However, we do not have any guidelines that discuss the selection of the most suitable additive manufacturing process, depending on the material to be processed, the complexity of the parts to be produced, or the design considerations. Considering the very fact that no reports deal with this process selection, the present manuscript aims to discuss the different selection criteria that are to be considered, in order to select the best AM process (binder jetting/selective laser melting/electron beam melting) for fabricating a specific component with a defined set of material properties.

  13. Additive Manufacturing Processes: Selective Laser Melting, Electron Beam Melting and Binder Jetting—Selection Guidelines

    Directory of Open Access Journals (Sweden)

    Prashanth Konda Gokuldoss

    2017-06-01

    Additive manufacturing (AM), also known as 3D printing or rapid prototyping, is gaining increasing attention due to its ability to produce parts with added functionality and increased complexities in geometrical design, on top of the fact that it is theoretically possible to produce any shape without limitations. However, most of the research on additive manufacturing techniques is focused on the development of materials/process parameters/products design with different additive manufacturing processes such as selective laser melting, electron beam melting, or binder jetting. However, we do not have any guidelines that discuss the selection of the most suitable additive manufacturing process, depending on the material to be processed, the complexity of the parts to be produced, or the design considerations. Considering the very fact that no reports deal with this process selection, the present manuscript aims to discuss the different selection criteria that are to be considered, in order to select the best AM process (binder jetting/selective laser melting/electron beam melting) for fabricating a specific component with a defined set of material properties.

  14. Additive Manufacturing Processes: Selective Laser Melting, Electron Beam Melting and Binder Jetting—Selection Guidelines

    Science.gov (United States)

    Konda Gokuldoss, Prashanth; Kolla, Sri; Eckert, Jürgen

    2017-01-01

    Additive manufacturing (AM), also known as 3D printing or rapid prototyping, is gaining increasing attention due to its ability to produce parts with added functionality and increased complexities in geometrical design, on top of the fact that it is theoretically possible to produce any shape without limitations. However, most of the research on additive manufacturing techniques is focused on the development of materials/process parameters/products design with different additive manufacturing processes such as selective laser melting, electron beam melting, or binder jetting. However, we do not have any guidelines that discuss the selection of the most suitable additive manufacturing process, depending on the material to be processed, the complexity of the parts to be produced, or the design considerations. Considering the very fact that no reports deal with this process selection, the present manuscript aims to discuss the different selection criteria that are to be considered, in order to select the best AM process (binder jetting/selective laser melting/electron beam melting) for fabricating a specific component with a defined set of material properties. PMID:28773031

  15. PROPOSAL OF AN EMPIRICAL MODEL FOR SUPPLIERS SELECTION

    Directory of Open Access Journals (Sweden)

    Paulo Ávila

    2015-03-01

    The problem of selecting suppliers/partners is a crucial and important part of the decision-making process for companies that intend to perform competitively in their area of activity. The selection of a supplier/partner is a time- and resource-consuming task that involves data collection and a careful analysis of the factors that can positively or negatively influence the choice. Nevertheless it is a critical process that significantly affects the operational performance of each company. In this work, through the literature review, five broad supplier selection criteria were identified: Quality, Financial, Synergies, Cost, and Production System. Five sub-criteria were also included within these criteria. Thereafter, a survey was developed and companies were contacted in order to determine which factors carry more weight in their decisions when choosing suppliers. After interpreting the results and processing the data, a linear weighting model was adopted to reflect the importance of each factor. The model has a hierarchical structure and can be applied with the Analytic Hierarchy Process (AHP) method or the Simple Multi-Attribute Rating Technique (SMART). The result of the research undertaken by the authors is a reference model that provides decision-making support for the supplier/partner selection process.
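
    The linear weighting model the authors adopt is straightforward to sketch as a SMART-style additive score (the weights and ratings below are invented for illustration, not those obtained from the survey):

    ```python
    import numpy as np

    # Hypothetical weights for the five broad criteria identified in the record
    # (Quality, Financial, Synergies, Cost, Production System).
    criteria = ["Quality", "Financial", "Synergies", "Cost", "Production System"]
    weights = np.array([0.30, 0.15, 0.10, 0.25, 0.20])

    # Ratings of three candidate suppliers on each criterion, scaled to [0, 1].
    ratings = np.array([
        [0.9, 0.6, 0.5, 0.7, 0.8],   # supplier A
        [0.7, 0.8, 0.6, 0.9, 0.6],   # supplier B
        [0.8, 0.7, 0.9, 0.6, 0.7],   # supplier C
    ])

    scores = ratings @ weights       # additive value for each supplier
    for name, score in zip("ABC", scores):
        print(f"supplier {name}: {score:.3f}")
    print("best supplier:", "ABC"[int(np.argmax(scores))])
    ```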

  16. Model selection for radiochromic film dosimetry

    CERN Document Server

    Méndez, Ignasi

    2015-01-01

    The purpose of this study was to find the most accurate model for radiochromic film dosimetry by comparing different channel independent perturbation models. A model selection approach based on (algorithmic) information theory was followed, and the results were validated using gamma-index analysis on a set of benchmark test cases. Several questions were addressed: (a) whether incorporating the information of the non-irradiated film, by scanning prior to irradiation, improves the results; (b) whether lateral corrections are necessary when using multichannel models; (c) whether multichannel dosimetry produces better results than single-channel dosimetry; (d) which multichannel perturbation model provides more accurate film doses. It was found that scanning prior to irradiation and applying lateral corrections improved the accuracy of the results. For some perturbation models, increasing the number of color channels did not result in more accurate film doses. Employing Truncated Normal perturbations was found to...

  17. An International Perspective on Pharmacy Student Selection Policies and Processes.

    Science.gov (United States)

    Shaw, John; Kennedy, Julia; Jensen, Maree; Sheridan, Janie

    2015-10-25

    Objective. To reflect on selection policies and procedures for programs at pharmacy schools that are members of an international alliance of universities (Universitas 21). Methods. A questionnaire on selection policies and procedures was distributed to admissions directors at participating schools. Results. Completed questionnaires were received from 7 schools in 6 countries. Although marked differences were noted in the programs in different countries, there were commonalities in the selection processes. There was an emphasis on previous academic performance, especially in science subjects. With one exception, all schools had some form of interview, with several having moved to multiple mini-interviews in recent years. Conclusion. The majority of pharmacy schools in this survey relied on traditional selection processes. While there was increasing use of multiple mini-interviews, the authors suggest that additional new approaches may be required in light of the changing nature of the profession.

  18. Selection of Leading Industry in Anshun Experimental District Based on Analytic Hierarchy Process

    Institute of Scientific and Technical Information of China (English)

    2011-01-01

    The Analytic Hierarchy Process is selected in line with the leading-industry selection methods used by both domestic and foreign scholars. Leading industries which can accelerate the overall economic development of Anshun Experimental District are taken as the target layer, and market demand, efficiency standards and local conditions are taken as the criterion layers, so as to construct the selection model and choose the leading industry for Anshun Experimental District. The result shows that the priority order of the leading industry selection in Anshun Experimental District is as follows: tourism > pharmacy > transportation > energy > food processing > characteristic agriculture > package and printing > automobile industry > mining > electric engineering.
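
    The AHP machinery behind such a ranking can be sketched compactly. Below, a hypothetical pairwise comparison matrix over the three criterion layers named in the record is reduced to a priority vector via the principal eigenvector, with the usual consistency check (the judgments are illustrative, not the study's):

    ```python
    import numpy as np

    # Hypothetical pairwise comparisons: market demand vs. efficiency standards
    # vs. local conditions (Saaty's 1-9 scale).
    A = np.array([
        [1.0, 3.0, 5.0],
        [1/3, 1.0, 2.0],
        [1/5, 1/2, 1.0],
    ])

    eigvals, eigvecs = np.linalg.eig(A)
    i = np.argmax(eigvals.real)
    w = np.abs(eigvecs[:, i].real)
    w /= w.sum()                      # AHP priority vector
    lam_max = eigvals[i].real
    n = A.shape[0]
    ci = (lam_max - n) / (n - 1)      # consistency index
    cr = ci / 0.58                    # random index RI = 0.58 for n = 3
    print("priorities:", np.round(w, 3))
    print(f"consistency ratio: {cr:.3f} (acceptable if < 0.1)")
    ```

    In the study, alternatives (the candidate industries) would be scored against each criterion in the same way and aggregated with these weights.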

  19. Portfolio Selection Model with Derivative Securities

    Institute of Scientific and Technical Information of China (English)

    王春峰; 杨建林; 蒋祥林

    2003-01-01

    Traditional portfolio theory assumes that the return rate of a portfolio follows a normal distribution. However, this assumption is not true when derivative assets are incorporated. In this paper a portfolio selection model is developed based on a utility function which can capture asymmetries in random variable distributions. Other realistic conditions are also considered, such as liabilities and integer decision variables. Since the resulting model is a complex mixed-integer nonlinear programming problem, a simulated annealing algorithm is applied for its solution. A numerical example is given and sensitivity analysis is conducted for the model.
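
    A toy version of the simulated annealing step can be sketched as follows (an invented four-asset problem with integer lots and a plain mean-variance utility; the paper's utility function and constraints are richer):

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    # Hypothetical data: expected returns, covariance, lot prices, budget.
    mu = np.array([0.08, 0.12, 0.10, 0.05])
    cov = np.diag([0.02, 0.08, 0.05, 0.01])
    prices = np.array([10.0, 25.0, 15.0, 5.0])
    budget, risk_aversion = 1000.0, 3.0

    def utility(lots):
        if (lots < 0).any():
            return -np.inf
        w = lots * prices / budget          # value weights
        if w.sum() > 1.0:
            return -np.inf                  # budget violated
        return float(w @ mu - risk_aversion * w @ cov @ w)

    lots = np.zeros(4, dtype=int)
    best, best_u = lots.copy(), utility(lots)
    T = 1.0
    for step in range(20000):
        cand = lots.copy()
        j = rng.integers(4)
        cand[j] += rng.choice([-1, 1])      # integer move: buy or sell one lot
        du = utility(cand) - utility(lots)
        if du > 0 or rng.random() < np.exp(du / T):
            lots = cand
            if utility(lots) > best_u:
                best, best_u = lots.copy(), utility(lots)
        T *= 0.9997                         # geometric cooling
    print("best integer lots:", best, "utility:", round(best_u, 5))
    ```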

  20. A process for selection and training of super-users for ERP implementation projects

    DEFF Research Database (Denmark)

    Danielsen, Peter; Sandfeld Hansen, Kenneth; Helt, Mads

    2017-01-01

    The concept of super-users as a means to facilitate ERP implementation projects has recently taken a foothold in practice, but is still largely overlooked in research. In particular, little is known about the selection and training processes required to successfully develop skilled super-users in practice. To address this research gap, we analyze the case of an ERP implementation program at a large manufacturing company. We combine Katz's widely accepted skill measurement model with the process observed in practice to describe and test a model of super-user selection and training. The resulting model contains a systematic process of super-user development and highlights the specific skillsets required in different phases of the selection and training process. Our results from a comparative assessment of management expectations and super-user skills in the ERP program show that the model can...

  1. ECONOMIC MODELING PROCESSES USING MATLAB

    Directory of Open Access Journals (Sweden)

    Anamaria G. MACOVEI

    2008-06-01

    To study economic phenomena and processes using mathematical modeling, and to determine the approximate solution to a problem, we need to choose a method of calculation and a numerical computer program, namely the MatLab package. Any economic process or phenomenon has a mathematical description of its behavior, from which an economic-mathematical model is drawn up with the following stages: formulation of the problem, analysis of the process being modeled, production of the model and design verification, and validation and implementation of the model. This article presents an economic model and its modeling using mathematical equations and the MatLab software package, which helps us approximate an effective solution. The net cost, the direct and total costs, and the link between them are considered as input data, and the basic formula for determining the total cost is presented. The economic model calculations were made in the MatLab software package, with graphic representation and interpretation of the results achieved in terms of our specific problem.

  2. Attribute based selection of thermoplastic resin for vacuum infusion process

    DEFF Research Database (Denmark)

    Prabhakaran, R.T. Durai; Lystrup, Aage; Løgstrup Andersen, Tom

    2011-01-01

    The composite industry looks toward new material systems (resins) based on thermoplastic polymers for the vacuum infusion process, similar to the infusion process using thermosetting polymers. A large number of thermoplastics are available in the market with a variety of properties suitable for different engineering applications, and few of those are available in a not-yet-polymerised form suitable for resin infusion. The proper selection of a new resin system among these thermoplastic polymers is a concern for manufacturers in the current scenario, and a special mathematical tool would be beneficial. In this paper, the authors introduce a new decision-making tool for resin selection based on significant attributes. This article provides a broad overview of suitable thermoplastic material systems for the vacuum infusion process available in today's market. An illustrative example—resin selection...

  3. Modeling of the Hydroentanglement Process

    Directory of Open Access Journals (Sweden)

    Ping Xiang

    2006-11-01

    The mechanical performance of hydroentangled nonwovens is determined by the degree of fiber entanglement, which depends on parameters of the fibers, fiberweb, forming surface, water jet and the process speed. This paper develops a computational fluid dynamics model of the hydroentanglement process. Extensive comparison with experimental data showed that the degree of fiber entanglement is linearly related to flow vorticity in the fiberweb, which is induced by the impinging water jets. The fiberweb is modeled as a porous material of uniform porosity and the actual geometry of the forming wires is accounted for in the model. Simulation results are compared with experimental data for a Perfojet® sleeve and four woven forming surfaces. Additionally, the model is used to predict the effect of fiberweb thickness on the degree of fiber entanglement for different forming surfaces.

  4. Aerosol model selection and uncertainty modelling by adaptive MCMC technique

    Directory of Open Access Journals (Sweden)

    M. Laine

    2008-12-01

    We present a new technique for the model selection problem in atmospheric remote sensing. The technique is based on Monte Carlo sampling and it allows model selection, calculation of model posterior probabilities and model averaging in a Bayesian way.

    The algorithm developed here is called the Adaptive Automatic Reversible Jump Markov chain Monte Carlo method (AARJ). It uses the Markov chain Monte Carlo (MCMC) technique and its extension called Reversible Jump MCMC. Both of these techniques have been used extensively in statistical parameter estimation problems in a wide range of applications since the late 1990s. The novel feature in our algorithm is the fact that it is fully automatic and easy to use.

    We show how the AARJ algorithm can be implemented and used for model selection and averaging, and to directly incorporate the model uncertainty. We demonstrate the technique by applying it to the statistical inversion problem of gas profile retrieval of GOMOS instrument on board the ENVISAT satellite. Four simple models are used simultaneously to describe the dependence of the aerosol cross-sections on wavelength. During the AARJ estimation all the models are used and we obtain a probability distribution characterizing how probable each model is. By using model averaging, the uncertainty related to selecting the aerosol model can be taken into account in assessing the uncertainty of the estimates.
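
    The full reversible-jump sampler is beyond a short example, but its end products — posterior model probabilities and a model-averaged estimate — can be approximated with the familiar BIC weighting (a synthetic stand-in; the four polynomial "aerosol models" below are invented, not those of GOMOS):

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    x = np.linspace(0.0, 1.0, 60)
    y = 1.0 + 0.8 * x + rng.normal(scale=0.1, size=x.size)  # synthetic observations

    def fit_poly(degree):
        """Least-squares polynomial fit; returns (BIC, fitted values)."""
        X = np.vander(x, degree + 1)
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        resid = y - X @ beta
        n, k = y.size, degree + 1
        return n * np.log(np.mean(resid**2)) + k * np.log(n), X @ beta

    degrees = [0, 1, 2, 3]                  # four simple candidate models
    bics, fits = zip(*(fit_poly(d) for d in degrees))
    w = np.exp(-0.5 * (np.array(bics) - min(bics)))
    w /= w.sum()                            # approximate posterior model probabilities
    y_bma = sum(wi * fi for wi, fi in zip(w, fits))  # model-averaged prediction
    print("posterior model probabilities:", np.round(w, 3))
    print("BMA fit, first three points:", np.round(y_bma[:3], 3))
    ```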

  5. On Model Selection Criteria in Multimodel Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Ye, Ming; Meyer, Philip D.; Neuman, Shlomo P.

    2008-03-21

    Hydrologic systems are open and complex, rendering them prone to multiple conceptualizations and mathematical descriptions. There has been a growing tendency to postulate several alternative hydrologic models for a site and use model selection criteria to (a) rank these models, (b) eliminate some of them and/or (c) weigh and average predictions and statistics generated by multiple models. This has led to some debate among hydrogeologists about the merits and demerits of common model selection (also known as model discrimination or information) criteria such as AIC [Akaike, 1974], AICc [Hurvich and Tsai, 1989], BIC [Schwarz, 1978] and KIC [Kashyap, 1982] and some lack of clarity about the proper interpretation and mathematical representation of each criterion. In particular, whereas we [Neuman, 2003; Ye et al., 2004, 2005; Meyer et al., 2007] have based our approach to multimodel hydrologic ranking and inference on the Bayesian criterion KIC (which reduces asymptotically to BIC), Poeter and Anderson [2005] and Poeter and Hill [2007] have voiced a preference for the information-theoretic criterion AICc (which reduces asymptotically to AIC). Their preference stems in part from a perception that KIC and BIC require a "true" or "quasi-true" model to be in the set of alternatives while AIC and AICc are free of such an unreasonable requirement. We examine the model selection literature to find that (a) all published rigorous derivations of AIC and AICc require that the (true) model having generated the observational data be in the set of candidate models; (b) though BIC and KIC were originally derived by assuming that such a model is in the set, BIC has been rederived by Cavanaugh and Neath [1999] without the need for such an assumption; (c) KIC reduces to BIC as the number of observations becomes large relative to the number of adjustable model parameters, implying that it likewise does not require the existence of a true model in the set of alternatives; (d) if a true...
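
    For reference, the four criteria debated in this record share the form "lack of fit plus complexity penalty". With L̂ the maximised likelihood, k the number of adjustable parameters, n the number of observations and F the estimated Fisher information matrix, they are commonly written as follows (the KIC form follows the hydrologic literature cited in the record; sign and constant conventions vary slightly between authors):

    ```latex
    \begin{align}
    \mathrm{AIC}  &= -2\ln\hat{L} + 2k \\
    \mathrm{AICc} &= \mathrm{AIC} + \frac{2k(k+1)}{n-k-1} \\
    \mathrm{BIC}  &= -2\ln\hat{L} + k\ln n \\
    \mathrm{KIC}  &= -2\ln\hat{L} + k\ln\frac{n}{2\pi} + \ln\left|\mathbf{F}\right|
    \end{align}
    ```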

  6. Modified Claus process probabilistic model

    Energy Technology Data Exchange (ETDEWEB)

    Larraz Mora, R. [Chemical Engineering Dept., Univ. of La Laguna (Spain)

    2006-03-15

    A model is proposed for the simulation of an industrial Claus unit with a straight-through configuration and two catalytic reactors. Process plant design evaluations based on deterministic calculations do not take into account the uncertainties that are associated with the different input variables. A probabilistic simulation method was applied in the Claus model to obtain an impression of how some of these inaccuracies influence plant performance. (orig.)

  7. Process Models for Security Architectures

    Directory of Open Access Journals (Sweden)

    Floarea NASTASE

    2006-01-01

    This paper presents a model for an integrated security system, which can be implemented in any organization. It is based on security-specific standards and taxonomies such as ISO 7498-2 and the Common Criteria. The functionalities are derived from the classes proposed in the Common Criteria document. In the paper we present the process model for each functionality and we also focus on the specific components.

  8. Multi-enzyme Process Modeling

    DEFF Research Database (Denmark)

    Andrade Santacoloma, Paloma de Gracia

    The subject of this thesis is to develop a methodological framework that can systematically guide mathematical model building for better understanding of multi-enzyme processes. In this way, opportunities for process improvements can be identified by analyzing simulations of either existing...... are affected (in a positive or negative way) by the presence of the other enzymes and compounds in the media. In this thesis the concept of multi-enzyme in-pot term is adopted for processes that are carried out by the combination of enzymes in a single reactor and implemented at pilot or industrial scale...

  9. Hydrological scenarios for two selected Alpine catchments for the 21st century using a stochastic weather generator and enhanced process understanding for modelling of seasonal snow and glacier melt for improved water resources management

    Science.gov (United States)

    Strasser, Ulrich; Schneeberger, Klaus; Dabhi, Hetal; Dubrovsky, Martin; Hanzer, Florian; Marke, Thomas; Oberguggenberger, Michael; Rössler, Ole; Schmieder, Jan; Rotach, Mathias; Stötter, Johann; Weingartner, Rolf

    2016-04-01

    The overall objective of HydroGeM³ is to quantify and assess both water demand and water supply in two coupled human-environment mountain systems, i.e. Lütschine in Switzerland and Ötztaler Ache in Austria. Special emphasis is laid on the analysis of possible future seasonal water scarcity. The hydrological response of high Alpine catchments is characterised by a strong seasonal variability, with low runoff in winter and high runoff in spring and summer. Climate change is expected to cause a seasonal shift of the runoff regime, and thus has a significant impact on both the amount and the timing of the release of the available water resources, and thereby on possible future water conflicts. In order to identify and quantify the contribution of snow and ice melt as well as rain to runoff, streamflow composition will be analysed with natural tracers. The results of the field investigations will help to improve the snow and ice melt and runoff modules of two selected hydrological models (i.e. AMUNDSEN and WaSiM) which are used to investigate the seasonal water availability under current and future climate conditions. Together, they comprise improved descriptions of boundary layer and surface melt processes (AMUNDSEN) and of streamflow runoff generation (WaSiM). Future meteorological forcing for the modelling until the end of the century will be provided by both a stochastic multi-site weather generator and downscaled climate model output. Both approaches will use EURO-CORDEX data as input. The water demand in the selected study areas is quantified for the relevant societal sectors, e.g. agriculture, hydropower generation and (winter) tourism. The comparison of water availability and water demand under current and future climate conditions will allow the identification of possible seasonal bottlenecks of future water supply and resulting conflicts. Thus these investigations can provide a quantitative basis for the development of strategies for sustainable water management in...

  10. Post-model selection inference and model averaging

    Directory of Open Access Journals (Sweden)

    Georges Nguefack-Tsague

    2011-07-01

    Although model selection is routinely used in practice nowadays, little is known about its precise effects on any subsequent inference that is carried out. The same goes for the effects induced by the closely related technique of model averaging. This paper is concerned with the use of the same data first to select a model and then to carry out inference, in particular point estimation and point prediction. The properties of the resulting estimator, called a post-model-selection estimator (PMSE), are hard to derive. Using selection criteria such as hypothesis testing, AIC, BIC, HQ and Cp, we illustrate that, in terms of risk function, no single PMSE dominates the others. The same conclusion holds more generally for any penalised likelihood information criterion. We also compare various model averaging schemes and show that no single one dominates the others in terms of risk function. Since PMSEs can be regarded as a special case of model averaging, with 0-1 random weights, we propose a connection between the two theories, in the frequentist approach, by taking account of the selection procedure when performing model averaging. We illustrate the point by simulating a simple linear regression model.
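
    The non-dominance of post-model-selection estimators can be illustrated by a small simulation in the spirit of the paper's closing example (a hypothetical pre-test estimator on a one-parameter regression; not the authors' code):

    ```python
    import numpy as np

    rng = np.random.default_rng(4)
    n, trials, beta = 40, 4000, 0.2   # a "weak" slope makes selection unstable
    x = rng.normal(size=n)

    mse_full = mse_pmse = 0.0
    for _ in range(trials):
        y = beta * x + rng.normal(size=n)
        b_full = (x @ y) / (x @ x)                  # always-fit estimator
        # pre-test: keep the slope only if |t| > 1.96, otherwise report 0
        resid = y - b_full * x
        se = np.sqrt(resid @ resid / (n - 1) / (x @ x))
        b_pmse = b_full if abs(b_full / se) > 1.96 else 0.0
        mse_full += (b_full - beta) ** 2
        mse_pmse += (b_pmse - beta) ** 2

    print(f"risk, always-full model: {mse_full / trials:.5f}")
    print(f"risk, post-selection   : {mse_pmse / trials:.5f}")
    ```

    Near such "weak" parameter values the pre-test estimator typically has higher risk than the always-fit estimator, while for parameters far from zero the ordering reverses — no single estimator dominates.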

  11. A Hybrid Program Projects Selection Model for Nonprofit TV Stations

    Directory of Open Access Journals (Sweden)

    Kuei-Lun Chang

    2015-01-01

    This study develops a hybrid multiple criteria decision making (MCDM) model to select program projects for nonprofit TV stations on the basis of managers' perceptions. Using the concepts of the balanced scorecard (BSC) and corporate social responsibility (CSR), we collect criteria for selecting the best program project. The fuzzy Delphi method, which can lead to better criteria selection, is used to refine the criteria. Next, considering the interdependence among the selection criteria, the analytic network process (ANP) is used to obtain their weights. To avoid the calculations and additional pairwise comparisons of ANP, the technique for order preference by similarity to ideal solution (TOPSIS) is used to rank the alternatives. A case study is presented to demonstrate the applicability of the proposed model.
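
    The TOPSIS step at the end of the proposed hybrid model is mechanical and easy to sketch (illustrative numbers; in the study the weights would come from ANP and the criteria from BSC/CSR via fuzzy Delphi):

    ```python
    import numpy as np

    # Hypothetical decision matrix: rows are candidate program projects,
    # columns are benefit criteria; numbers are illustrative.
    X = np.array([
        [7.0, 5.0, 8.0],
        [6.0, 8.0, 6.0],
        [8.0, 6.0, 5.0],
    ])
    w = np.array([0.5, 0.3, 0.2])              # criteria weights, e.g. from ANP

    V = X / np.linalg.norm(X, axis=0) * w      # weighted normalised matrix
    ideal, anti = V.max(axis=0), V.min(axis=0) # all criteria treated as benefits
    d_plus = np.linalg.norm(V - ideal, axis=1)
    d_minus = np.linalg.norm(V - anti, axis=1)
    closeness = d_minus / (d_plus + d_minus)   # relative closeness to the ideal
    print("closeness:", np.round(closeness, 3))
    print("ranking (best first):", np.argsort(-closeness))
    ```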

  12. Model structure selection in convolutive mixtures

    DEFF Research Database (Denmark)

    Dyrholm, Mads; Makeig, Scott; Hansen, Lars Kai

    2006-01-01

    The CICAAR algorithm (convolutive independent component analysis with an auto-regressive inverse model) allows separation of white (i.i.d.) source signals from convolutive mixtures. We introduce a source color model as a simple extension to the CICAAR which allows for a more parsimonious representation in many practical mixtures. The new filter-CICAAR allows Bayesian model selection and can help answer questions like: 'Are we actually dealing with a convolutive mixture?'. We try to answer this question for EEG data.

  13. Model structure selection in convolutive mixtures

    DEFF Research Database (Denmark)

    Dyrholm, Mads; Makeig, S.; Hansen, Lars Kai

    2006-01-01

    The CICAAR algorithm (convolutive independent component analysis with an auto-regressive inverse model) allows separation of white (i.i.d.) source signals from convolutive mixtures. We introduce a source color model as a simple extension to the CICAAR which allows for a more parsimonious representation in many practical mixtures. The new filter-CICAAR allows Bayesian model selection and can help answer questions like: ’Are we actually dealing with a convolutive mixture?’. We try to answer this question for EEG data.

  14. Skewed factor models using selection mechanisms

    KAUST Repository

    Kim, Hyoung-Moon

    2015-12-21

    Traditional factor models explicitly or implicitly assume that the factors follow a multivariate normal distribution; that is, only moments up to order two are involved. However, it may happen in real data problems that the first two moments cannot explain the factors. Based on this motivation, here we devise three new skewed factor models, the skew-normal, the skew-t, and the generalized skew-normal factor models, depending on a selection mechanism on the factors. ECME algorithms are adopted to estimate the related parameters for statistical inference. Monte Carlo simulations validate our new models and we demonstrate the need for skewed factor models using the classic open/closed book exam scores dataset.

  15. Application of concept selection methodology in IC process design

    Science.gov (United States)

    Kim, Myung-Kul

    1993-01-01

    The search for an effective methodology practical for IC manufacturing process development led to a trial of the quantitative 'concept selection' methodology in selecting the 'best' alternative for interlevel dielectric (ILD) processes. A cross-functional team selected multiple criteria with scoring guidelines to be used in the definition of the 'best'. The project targeted the 3-level-metal backend process for a sub-micron gate array product. The outcome of the project showed that the maturity of the alternatives has a strong influence on the scores, because scores on the adopted criteria such as yield, reliability and maturity depend on the maturity of a particular process. At the same time, the project took longer than expected since it required data for the multiple criteria. These observations suggest that adopting a simpler procedure that can analyze the total inherent controllability of a process would be more effective. The methodology of the DFS (design for simplicity) tools used in analyzing the manufacturability of electronics products such as computers, phones and other consumer electronics could be used as an 'analogy' in constructing an evaluation method for IC processes that produce the devices used in those electronics products. This could be done by focusing on the basic process operation elements rather than the layers that are being built.

  16. Network Model Building (Process Mapping)

    OpenAIRE

    Blau, Gary; Yih, Yuehwern

    2004-01-01

    12 slides. Provider notes: see the Project Planning video (Windows Media). Posted at the bottom are Gary Blau's slides. Before watching, please note that "process mapping" and "modeling" are mentioned in the video and notes; here they are meant to refer to the NSCORT "project plan".

  17. Language selection in bilingual speech: evidence for inhibitory processes.

    Science.gov (United States)

    Kroll, Judith F; Bobb, Susan C; Misra, Maya; Guo, Taomei

    2008-07-01

    Although bilinguals rarely make random errors of language when they speak, research on spoken production provides compelling evidence to suggest that both languages are active when only one language is spoken (e.g., [Poulisse, N. (1999). Slips of the tongue: Speech errors in first and second language production. Amsterdam/Philadelphia: John Benjamins]). Moreover, the parallel activation of the two languages appears to characterize the planning of speech for highly proficient bilinguals as well as second language learners. In this paper, we first review the evidence for cross-language activity during single word production and then consider the two major alternative models of how the intended language is eventually selected. According to language-specific selection models, both languages may be active but bilinguals develop the ability to selectively attend to candidates in the intended language. The alternative model, that candidates from both languages compete for selection, requires that cross-language activity be modulated to allow selection to occur. On the latter view, the selection mechanism may require that candidates in the nontarget language be inhibited. We consider the evidence for such an inhibitory mechanism in a series of recent behavioral and neuroimaging studies.

  18. Multi-dimensional model order selection

    Directory of Open Access Journals (Sweden)

    Roemer Florian

    2011-01-01

    Multi-dimensional model order selection (MOS) techniques achieve improved accuracy, reliability, and robustness, since they consider all dimensions jointly during the estimation of parameters. Additionally, from fundamental identifiability results of multi-dimensional decompositions, it is known that the number of main components can be larger when compared to matrix-based decompositions. In this article, we show how to use tensor calculus to extend matrix-based MOS schemes and we also present our proposed multi-dimensional model order selection scheme based on the closed-form PARAFAC algorithm, which is only applicable to multi-dimensional data. In general, as shown by means of simulations, the Probability of correct Detection (PoD) of our proposed multi-dimensional MOS schemes is much better than the PoD of matrix-based schemes.

  19. Tracking Models for Optioned Portfolio Selection

    Science.gov (United States)

    Liang, Jianfeng

    In this paper we study a target tracking problem for the portfolio selection involving options. In particular, the portfolio in question contains a stock index and some European style options on the index. A refined tracking-error-variance methodology is adopted to formulate this problem as a multi-stage optimization model. We derive the optimal solutions based on stochastic programming and optimality conditions. Attention is paid to the structure of the optimal payoff function, which is shown to possess rich properties.

  20. New insights in portfolio selection modeling

    OpenAIRE

    Zareei, Abalfazl

    2016-01-01

    Recent advancements in the field of network theory commence a new line of developments in portfolio selection techniques that stands on the ground of perceiving financial market as a network with assets as nodes and links accounting for various types of relationships among financial assets. In the first chapter, we model the shock propagation mechanism among assets via network theory and provide an approach to construct well-diversified portfolios that are resilient to shock propagation and c...

  1. Modeling of the reburning process

    Energy Technology Data Exchange (ETDEWEB)

    Rota, R.; Bonini, F.; Servida, A.; Morbidelli, M.; Carra, S. [Politecnico di Milano, Milano (Italy). Dip. di Chimica Fisica Applicata

    1997-07-01

    Reburning has become a popular method of abating NOx emissions in power plants. Its effectiveness is strongly affected by the interaction between gas phase chemistry and combustion chamber fluid dynamics. Both the mixing of the reactant streams and the elementary reactions in the gas phase control the overall kinetics of the process. This work developed a model coupling a detailed kinetic mechanism to a simplified description of the fluid dynamics of the reburning chamber. The model was checked with reference to experimental data from the literature. Detailed kinetic modeling was found to be essential to describe the reburning process, since the fluid dynamics of the reactor have a strong influence on reactions within. 20 refs., 9 figs., 3 tabs.

  2. Robust inference in sample selection models

    KAUST Repository

    Zhelonkin, Mikhail

    2015-11-20

    The problem of non-random sample selectivity often occurs in practice in many fields. The classical estimators introduced by Heckman are the backbone of the standard statistical analysis of these models. However, these estimators are very sensitive to small deviations from the distributional assumptions which are often not satisfied in practice. We develop a general framework to study the robustness properties of estimators and tests in sample selection models. We derive the influence function and the change-of-variance function of Heckman's two-stage estimator, and we demonstrate the non-robustness of this estimator and its estimated variance to small deviations from the model assumed. We propose a procedure for robustifying the estimator, prove its asymptotic normality and give its asymptotic variance. Both cases with and without an exclusion restriction are covered. This allows us to construct a simple robust alternative to the sample selection bias test. We illustrate the use of our new methodology in an analysis of ambulatory expenditures and we compare the performance of the classical and robust methods in a Monte Carlo simulation study.
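
    Heckman's classical two-stage estimator, whose robustness the record analyses, can be sketched in a few lines on synthetic data (the variable names and the statsmodels-based probit step are illustrative; this is the textbook estimator, not the authors' robustified procedure):

    ```python
    import numpy as np
    from scipy.stats import norm
    import statsmodels.api as sm

    rng = np.random.default_rng(5)
    n = 2000
    z = rng.normal(size=n)          # exclusion restriction: affects selection only
    x = rng.normal(size=n)
    u, e = rng.multivariate_normal([0, 0], [[1.0, 0.5], [0.5, 1.0]], size=n).T
    selected = (0.5 + 1.0 * z + 0.5 * x + u) > 0
    y = 1.0 + 2.0 * x + e           # outcome, observed only if selected

    # Step 1: probit for selection, then the inverse Mills ratio
    W = sm.add_constant(np.column_stack([z, x]))
    probit = sm.Probit(selected.astype(float), W).fit(disp=0)
    idx = W @ probit.params
    mills = norm.pdf(idx) / norm.cdf(idx)

    # Step 2: OLS on the selected sample, augmented with the Mills ratio
    Xs = sm.add_constant(np.column_stack([x[selected], mills[selected]]))
    ols = sm.OLS(y[selected], Xs).fit()
    print(ols.params)  # intercept, slope near 2, and the selection-correction term
    ```

    The sensitivity the paper studies arises in step 1: a small distributional deviation propagates through the Mills ratio into the second-stage estimates.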

  3. Bayesian model selection in Gaussian regression

    CERN Document Server

    Abramovich, Felix

    2009-01-01

    We consider a Bayesian approach to model selection in Gaussian linear regression, where the number of predictors might be much larger than the number of observations. From a frequentist view, the proposed procedure results in the penalized least squares estimation with a complexity penalty associated with a prior on the model size. We investigate the optimality properties of the resulting estimator. We establish the oracle inequality and specify conditions on the prior that imply its asymptotic minimaxity within a wide range of sparse and dense settings for "nearly-orthogonal" and "multicollinear" designs.

  4. Model Selection in Data Analysis Competitions

    DEFF Research Database (Denmark)

    Wind, David Kofoed; Winther, Ole

    2014-01-01

    The use of data analysis competitions for selecting the most appropriate model for a problem is a recent innovation in the field of predictive machine learning. Two of the most well-known examples of this trend were the Netflix Competition and, more recently, the competitions hosted on the online platform Kaggle. In this paper, we will state and try to verify a set of qualitative hypotheses about predictive modelling, both in general and in the scope of data analysis competitions. To verify our hypotheses we will look at previous competitions and their outcomes, use qualitative interviews with top...

  5. Selecting Optimal Subset of Features for Student Performance Model

    Directory of Open Access Journals (Sweden)

    Hany M. Harb

    2012-09-01

    Educational data mining (EDM) is a new and growing research area in which data mining concepts are used in the educational field for the purpose of extracting useful information on student behavior in the learning process. Classification methods like decision trees, rule mining, and Bayesian networks can be applied to educational data for predicting student behavior, such as performance in an examination. This prediction may help in student evaluation. As feature selection influences the predictive accuracy of any performance model, it is essential to study the effectiveness of a student performance model in connection with feature selection techniques. The main objective of this work is to achieve high predictive performance by adopting various feature selection techniques to increase the predictive accuracy with the least number of features. The outcomes show a reduction in computational time and construction cost in both the training and classification phases of the student performance model.
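
    A minimal sketch of the kind of experiment the record describes, using scikit-learn on synthetic data standing in for a student dataset (the feature counts and the mutual-information filter are illustrative choices, not necessarily the authors' exact techniques):

    ```python
    from sklearn.datasets import make_classification
    from sklearn.feature_selection import SelectKBest, mutual_info_classif
    from sklearn.model_selection import cross_val_score
    from sklearn.tree import DecisionTreeClassifier

    # Synthetic stand-in for a student dataset: 20 features, few informative.
    X, y = make_classification(n_samples=500, n_features=20, n_informative=4,
                               random_state=0)

    tree = DecisionTreeClassifier(random_state=0)
    full = cross_val_score(tree, X, y, cv=5).mean()

    # Keep only the 4 features with the highest mutual information with y.
    X_small = SelectKBest(mutual_info_classif, k=4).fit_transform(X, y)
    small = cross_val_score(tree, X_small, y, cv=5).mean()

    print(f"accuracy, all 20 features : {full:.3f}")
    print(f"accuracy, best 4 features : {small:.3f}")
    ```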

  6. Modeling selective attention using a neuromorphic analog VLSI device.

    Science.gov (United States)

    Indiveri, G

    2000-12-01

    Attentional mechanisms are required to overcome the problem of flooding a limited processing capacity system with information. They are present in biological sensory systems and can be a useful engineering tool for artificial visual systems. In this article we present a hardware model of a selective attention mechanism implemented on a very large-scale integration (VLSI) chip, using analog neuromorphic circuits. The chip exploits a spike-based representation to receive, process, and transmit signals. It can be used as a transceiver module for building multichip neuromorphic vision systems. We describe the circuits that carry out the main processing stages of the selective attention mechanism and provide experimental data for each circuit. We demonstrate the expected behavior of the model at the system level by stimulating the chip with both artificially generated control signals and signals obtained from a saliency map, computed from an image containing several salient features.

  7. Process for selecting polyhydroxyalkanoate (PHA) producing micro-organisms

    NARCIS (Netherlands)

    Van Loosdrecht, M.C.M.; Kleerebezem, R.; Jian, Y.; Johnson, K.

    2009-01-01

    The invention relates to a process for selecting a polyhydroxyalkanoate (PHA) producing micro-organism from a natural source comprising a variety of micro-organisms, comprising steps of preparing a fermentation broth comprising the natural source and nutrients in water; creating and maintaining

  8. Process for selecting polyhydroxyalkanoate (PHA) producing micro-organisms

    NARCIS (Netherlands)

    Van Loosdrecht, M.C.M.; Kleerebezem, R.; Jian, Y.; Johnson, K.

    2009-01-01

    The invention relates to a process for selecting a polyhydroxyalkanoate (PHA) producing micro-organism from a natural source comprising a variety of micro-organisms, comprising steps of preparing a fermentation broth comprising the natural source and nutrients in water; creating and maintaining aero

  9. Efficiency and Effectiveness of a Resident Assistant Selection Process.

    Science.gov (United States)

    Broitman, Thomas

    The American phenomenon of "more is better" extends a value-loaded concept implicit in budget preparation. At any university, the scope, magnitude and cost of a resident assistant selection process are a metaphor illustrating the efficiency and effectiveness of human resources. In order to discover a more efficient and…

  10. Bayesian selection of nucleotide substitution models and their site assignments.

    Science.gov (United States)

    Wu, Chieh-Hsi; Suchard, Marc A; Drummond, Alexei J

    2013-03-01

    Probabilistic inference of a phylogenetic tree from molecular sequence data is predicated on a substitution model describing the relative rates of change between character states along the tree for each site in the multiple sequence alignment. Commonly, one assumes that the substitution model is homogeneous across sites within large partitions of the alignment, assigns these partitions a priori, and then fixes their underlying substitution model to the best-fitting model from a hierarchy of named models. Here, we introduce an automatic model selection and model averaging approach within a Bayesian framework that simultaneously estimates the number of partitions, the assignment of sites to partitions, the substitution model for each partition, and the uncertainty in these selections. This new approach is implemented as an add-on to the BEAST 2 software platform. We find that this approach dramatically improves the fit of the nucleotide substitution model compared with existing approaches, and we show, using a number of example data sets, that as many as nine partitions are required to explain the heterogeneity in nucleotide substitution process across sites in a single gene analysis. In some instances, this improved modeling of the substitution process can have a measurable effect on downstream inference, including the estimated phylogeny, relative divergence times, and effective population size histories.

  11. Models of microbiome evolution incorporating host and microbial selection.

    Science.gov (United States)

    Zeng, Qinglong; Wu, Steven; Sukumaran, Jeet; Rodrigo, Allen

    2017-09-25

    Numerous empirical studies suggest that hosts and microbes exert reciprocal selective effects on their ecological partners. Nonetheless, we still lack an explicit framework to model the dynamics of both hosts and microbes under selection. In a previous study, we developed an agent-based forward-time computational framework to simulate the neutral evolution of host-associated microbial communities in a constant-sized, unstructured population of hosts. These neutral models allowed offspring to sample microbes randomly from parents and/or from the environment. Additionally, the environmental pool of available microbes was constituted by fixed and persistent microbial OTUs and by contributions from host individuals in the preceding generation. In this paper, we extend our neutral models to allow selection to operate on both hosts and microbes. We do this by constructing a phenome for each microbial OTU consisting of a sample of traits that influence host and microbial fitnesses independently. Microbial traits can influence the fitness of hosts ("host selection") and the fitness of microbes ("trait-mediated microbial selection"). Additionally, the fitness effects of traits on microbes can be modified by their hosts ("host-mediated microbial selection"). We simulate the effects of these three types of selection, individually or in combination, on microbiome diversities and the fitnesses of hosts and microbes over several thousand generations of hosts. We show that microbiome diversity is strongly influenced by selection acting on microbes. Selection acting on hosts only influences microbiome diversity when there is near-complete direct or indirect parental contribution to the microbiomes of offspring. Unsurprisingly, microbial fitness increases under microbial selection. Interestingly, when host selection operates, host fitness only increases under two conditions: (1) when there is a strong parental contribution to microbial communities or (2) in the absence of a strong

  12. Inflation model selection meets dark radiation

    Science.gov (United States)

    Tram, Thomas; Vallance, Robert; Vennin, Vincent

    2017-01-01

    We investigate how inflation model selection is affected by the presence of additional free-streaming relativistic degrees of freedom, i.e. dark radiation. We perform a full Bayesian analysis of both inflation parameters and cosmological parameters taking reheating into account self-consistently. We compute the Bayesian evidence for a few representative inflation scenarios in both the standard ΛCDM model and an extension including dark radiation parametrised by its effective number of relativistic species Neff. Using a minimal dataset (Planck low-l polarisation, temperature power spectrum and lensing reconstruction), we find that the observational status of most inflationary models is unchanged. The exceptions are potentials such as power-law inflation that predict large values for the scalar spectral index that can only be realised when Neff is allowed to vary. Adding baryon acoustic oscillations data and the B-mode data from BICEP2/Keck makes power-law inflation disfavoured, while adding local measurements of the Hubble constant H0 makes power-law inflation slightly favoured compared to the best single-field plateau potentials. This illustrates how the dark radiation solution to the H0 tension would have deep consequences for inflation model selection.

  13. The Formalization of the Business Process Modeling Goals

    Directory of Open Access Journals (Sweden)

    Ligita Bušinska

    2016-10-01

    In business process modeling, BPMN has emerged as the de facto standard. However, applications of this notation use many subsets of its elements and various extensions. BPMN also overlaps with many other modeling languages, forming a large set of available options for business process modeling languages and dialects. While, in general, the goal of the modeler is a central notion in the choice of modeling languages and notations, most research that proposes guidelines, techniques, and methods for business process modeling language evaluation and/or selection does not formalize the business process modeling goal or take it into account transparently. To overcome this gap, and to explicate and help handle the complexity of business process modeling, an approach to formalizing the business process modeling goal, together with a supporting three-dimensional business process modeling framework, is proposed.

  14. Advanced oxidation processes: overall models

    Energy Technology Data Exchange (ETDEWEB)

    Rodriguez, M. [Univ. de los Andes, Escuela Basica de Ingenieria, La Hechicera, Merida (Venezuela); Curco, D.; Addardak, A.; Gimenez, J.; Esplugas, S. [Dept. de Ingenieria Quimica. Univ. de Barcelona, Barcelona (Spain)

    2003-07-01

    Modelling AOPs implies considering all the steps included in the process, that is, mass transfer, kinetic (reaction) and radiation steps. Along these lines, recent works develop models which relate the global reaction rate to catalyst concentration and radiation absorption. However, the application of such models requires knowing which is the controlling step for the overall process. In this paper, a simple method is explained which allows the controlling step to be determined. It is assumed that the reactor is divided into two hypothetical zones (dark and illuminated), and according to the experimental results, obtained by varying only the reaction volume, it can be decided whether reaction occurs only in the illuminated zone or in the whole reactor, including the dark zone. The photocatalytic degradation of phenol, using Degussa P-25 titania as catalyst, is studied as a model reaction. The preliminary results obtained are presented here, showing that, in this case, reaction seems to occur only in the illuminated zone of the photoreactor. A model is developed to explain this behaviour. (orig.)

  15. Model for amorphous aggregation processes

    Science.gov (United States)

    Stranks, Samuel D.; Ecroyd, Heath; van Sluyter, Steven; Waters, Elizabeth J.; Carver, John A.; von Smekal, Lorenz

    2009-11-01

    The amorphous aggregation of proteins is associated with many phenomena, ranging from the formation of protein wine haze to the development of cataract in the eye lens and the precipitation of recombinant proteins during their expression and purification. While much literature exists describing models for linear protein aggregation, such as amyloid fibril formation, there are few reports of models which address amorphous aggregation. Here, we propose a model to describe the amorphous aggregation of proteins which is also more widely applicable to other situations where a similar process occurs, such as in the formation of colloids and nanoclusters. As first applications of the model, we have tested it against experimental turbidimetry data of three proteins relevant to the wine industry and biochemistry, namely, thaumatin, a thaumatin-like protein, and α-lactalbumin. The model is very robust and describes amorphous experimental data to a high degree of accuracy. Details about the aggregation process, such as shape parameters of the aggregates and rate constants, can also be extracted.

  16. Face Processing: Models For Recognition

    Science.gov (United States)

    Turk, Matthew A.; Pentland, Alexander P.

    1990-03-01

    The human ability to process faces is remarkable. We can identify perhaps thousands of faces learned throughout our lifetime and read facial expression to understand such subtle qualities as emotion. These skills are quite robust, despite sometimes large changes in the visual stimulus due to expression, aging, and distractions such as glasses or changes in hairstyle or facial hair. Computers which model and recognize faces will be useful in a variety of applications, including criminal identification, human-computer interface, and animation. We discuss models for representing faces and their applicability to the task of recognition, and present techniques for identifying faces and detecting eye blinks.

  17. Steady-State Process Modelling

    DEFF Research Database (Denmark)

    2011-01-01

    This chapter covers the basic principles of steady state modelling and simulation using a number of case studies. Two principal approaches are illustrated that develop the unit operation models from first principles as well as through application of standard flowsheet simulators. The approaches illustrate the “equation oriented” approach as well as the “sequential modular” approach to solving complex flowsheets for steady state applications. The applications include the Williams-Otto plant, the hydrodealkylation (HDA) of toluene, conversion of ethylene to ethanol and a bio-ethanol process.

  18. Switching Processes in Queueing Models

    CERN Document Server

    Anisimov, Vladimir V

    2008-01-01

    Switching processes, introduced by the author in 1977, are the main tool used in the investigation of traffic problems from automotive to telecommunications. The book provides a new approach to low-traffic problems based on the analysis of flows of rare events and queueing models. In the case of fast switching, averaging principle and diffusion approximation results are proved and applied to the investigation of transient phenomena for wide classes of overloaded queueing networks. The book is devoted to developing the asymptotic theory for the class of switching queueing models which covers mode…

  19. Efficiently adapting graphical models for selectivity estimation

    DEFF Research Database (Denmark)

    Tzoumas, Kostas; Deshpande, Amol; Jensen, Christian S.

    2013-01-01

    Query optimizers rely on statistical models that succinctly describe the underlying data. Models are used to derive cardinality estimates for intermediate relations, which in turn guide the optimizer to choose the best query execution plan. The quality of the resulting plan is highly dependent on the accuracy of these estimates, which are typically computed as products of the selectivities of the constituent predicates. However, this independence assumption is more often than not wrong, and is considered to be the most common cause of sub-optimal query execution plans chosen by modern query optimizers. We take a step towards a principled and practical approach to performing cardinality estimation without making the independence assumption. By carefully using concepts from the field of graphical models, we are able to factor the joint probability distribution over all the attributes in the database into small, usually two-dimensional distributions, without a significant loss of accuracy.

  20. The Markowitz model for portfolio selection

    Directory of Open Access Journals (Sweden)

    MARIAN ZUBIA ZUBIAURRE

    2002-06-01

    Full Text Available Since its first appearance, the Markowitz model for portfolio selection has been a basic theoretical reference that opened several new lines of development. In practice, however, it has hardly been used by portfolio managers and investment analysts, despite its success in the theoretical field. With our paper we would like to show how the Markowitz model may be of great help in real stock markets. Through an empirical study we verify the capability of Markowitz's model to produce portfolios with higher profitability and lower risk than the portfolios represented by the IBEX-35 and IGBM indexes, and we test the suggested efficiency of these indexes as representatives of the market theoretical portfolio.
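
    As a concrete illustration of the mean-variance machinery this record refers to, the following minimal sketch computes the minimum-variance portfolio for a target return from sample estimates. The asset statistics and target return are invented for illustration; short sales are allowed, so the solution reduces to a closed-form KKT linear solve rather than a constrained optimiser.

        import numpy as np

        # Hypothetical statistics for three assets (illustrative values only).
        mu = np.array([0.08, 0.12, 0.10])        # expected annual returns
        Sigma = np.array([[0.10, 0.02, 0.04],
                          [0.02, 0.12, 0.01],
                          [0.04, 0.01, 0.09]])   # covariance of returns
        target = 0.10                            # required portfolio return

        # Markowitz: minimise w' Sigma w  subject to  w'mu = target, w'1 = 1.
        # With short sales allowed, the KKT conditions form a linear system.
        ones = np.ones_like(mu)
        A = np.block([[2 * Sigma, np.column_stack([mu, ones])],
                      [np.vstack([mu, ones]), np.zeros((2, 2))]])
        b = np.concatenate([np.zeros_like(mu), [target, 1.0]])
        w = np.linalg.solve(A, b)[:3]            # drop the Lagrange multipliers

        print("weights:", w.round(3), "risk (std):", float(np.sqrt(w @ Sigma @ w)))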

  1. Information criteria for astrophysical model selection

    CERN Document Server

    Liddle, A R

    2007-01-01

    Model selection is the problem of distinguishing competing models, perhaps featuring different numbers of parameters. The statistics literature contains two distinct sets of tools: those based on information theory, such as the Akaike Information Criterion (AIC), and those based on Bayesian inference, such as the Bayesian evidence and Bayesian Information Criterion (BIC). The Deviance Information Criterion (DIC) combines ideas from both heritages; it is readily computed from Monte Carlo posterior samples and, unlike the AIC and BIC, allows for parameter degeneracy. I describe the properties of the information criteria, and as an example compute them from WMAP3 data for several cosmological models. I find that at present the information theory and Bayesian approaches give significantly different conclusions from that data.
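
    The criteria discussed above have simple closed forms; the generic sketch below (an illustration, not the paper's own code) computes AIC and BIC from a maximised log-likelihood, and approximates the DIC from posterior samples using the common variance-based estimate of the effective number of parameters. Lower values indicate preferred models for all three criteria.

        import numpy as np

        def aic(loglike_max, k):
            """Akaike Information Criterion: AIC = 2k - 2 ln L_max."""
            return 2 * k - 2 * loglike_max

        def bic(loglike_max, k, n):
            """Bayesian Information Criterion: BIC = k ln n - 2 ln L_max."""
            return k * np.log(n) - 2 * loglike_max

        def dic(loglike_samples):
            """Deviance Information Criterion from posterior log-likelihood
            samples: DIC = mean deviance + p_D, using the common
            approximation p_D ~ var(deviance) / 2."""
            D = -2.0 * np.asarray(loglike_samples)
            return D.mean() + 0.5 * D.var()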

  2. Entropic Priors and Bayesian Model Selection

    CERN Document Server

    Brewer, Brendon J

    2009-01-01

    We demonstrate that the principle of maximum relative entropy (ME), used judiciously, can ease the specification of priors in model selection problems. The resulting effect is that models that make sharp predictions are disfavoured, weakening the usual Bayesian "Occam's Razor". This is illustrated with a simple example involving what Jaynes called a "sure thing" hypothesis. Jaynes' resolution of the situation involved introducing a large number of alternative "sure thing" hypotheses that were possible before we observed the data. However, in more complex situations, it may not be possible to explicitly enumerate large numbers of alternatives. The entropic priors formalism produces the desired result without modifying the hypothesis space or requiring explicit enumeration of alternatives; all that is required is a good model for the prior predictive distribution for the data. This idea is illustrated with a simple rigged-lottery example, and we outline how this idea may help to resolve a recent debate amongst ...

  3. Combustion Process Modelling and Control

    Directory of Open Access Journals (Sweden)

    Vladimír Maduda

    2007-10-01

    Full Text Available This paper deals with the realization of a combustion control system on programmable logic controllers. The control system design is based on an analysis of the current state of combustion control systems in the technological devices of the raw material processing area. The design comprises two subsystems. The first subsystem is a software system for processing measured data and data from simulations of the combustion mathematical model; its outputs are parameters for setting the controller algorithms. The second subsystem consists of programme modules. Each programme module implements a specific control algorithm, for example proportional regulation, programmed proportional regulation, or proportional regulation with correction for the oxygen in the waste gas. According to the specific combustion control requirements, a concrete control system can be built up from these programme modules. The programme modules were developed in Automation Studio, which is used for developing, debugging and testing software for B&R controllers.

  4. Selective CO Methanation Catalysts for Fuel Processing Applications

    Energy Technology Data Exchange (ETDEWEB)

    Dagle, Robert A.; Wang, Yong; Xia, Guanguang G.; Strohm, James J.; Holladay, Jamie D.; Palo, Daniel R.

    2007-07-15

    Selective CO methanation as a strategy for CO removal in micro fuel processing applications was investigated over Ru-based catalysts. Ru loading, pretreatment and reduction conditions, and choice of support were shown to affect catalyst activity, selectivity, and stability. Even operating at a gas-hourly-space-velocity as high as 13,500 h⁻¹, a 3% Ru/Al2O3 catalyst was able to lower CO in a reformate to less than 100 ppm over a wide temperature range from 240 °C to 285 °C, while keeping hydrogen consumption below 10%.

  5. The perfect photo book: hints for the image selection process

    Science.gov (United States)

    Fageth, Reiner; Schmidt-Sacht, Wulf

    2007-01-01

    An ever increasing amount of digital images is being captured. This increase has several reasons: people are afraid of not "capturing the moment", and pressing the shutter is not directly linked to costs, as was the case with silver halide photography. This behaviour seems convenient but can result in a dilemma for the consumer. This paper presents tools designed to help the consumer overcome the time-consuming image selection process, turning the chore of selecting images for prints, or placing them automatically into a photo book, into a fun experience.

  6. Amorphous solid dispersions: Rational selection of a manufacturing process.

    Science.gov (United States)

    Vasconcelos, Teófilo; Marques, Sara; das Neves, José; Sarmento, Bruno

    2016-05-01

    Amorphous products, and particularly amorphous solid dispersions, are currently one of the most exciting areas in the pharmaceutical field. This approach presents huge potential and advantageous features concerning the overall improvement of drug bioavailability. Currently, different manufacturing processes are being developed to produce amorphous solid dispersions with suitable robustness and reproducibility, ranging from solvent evaporation to melting processes. In the present paper, laboratory- and industrial-scale processes are reviewed, and guidelines for a rational selection of manufacturing processes are proposed. This would ensure adequate development (laboratory scale) and production according to good manufacturing practices (GMP) (industrial scale) of amorphous solid dispersions, with further implications for process validation and the drug development pipeline.

  7. Spatial Fleming-Viot models with selection and mutation

    CERN Document Server

    Dawson, Donald A

    2014-01-01

    This book constructs a rigorous framework for analysing selected phenomena in the evolutionary theory of populations that arise from the combined effects of migration, selection and mutation in a spatial stochastic population model, namely the evolution towards fitter and fitter types through punctuated equilibria. The discussion is based on a number of new methods, in particular multiple scale analysis, nonlinear Markov processes and their entrance laws, atomic measure-valued evolutions, and new forms of duality (for state-dependent mutation and multitype selection), which are used to prove ergodic theorems in this context and are applicable to many other questions. It also develops a renormalization analysis for a variety of phenomena (stasis, punctuated equilibrium, failure of naive branching approximations, biodiversity) which occur due to the combination of rare mutation, mutation, resampling, migration and selection, and which make it necessary to mathematically bridge the gap (in the limit) between time and space scales.

  8. Selecting an optimal mixed products using grey relationship model

    Directory of Open Access Journals (Sweden)

    Farshad Faezy Razi

    2013-06-01

    Full Text Available This paper presents an integrated supplier selection and inventory management approach using the grey relationship model (GRM) together with a multi-objective decision-making process. The proposed model first ranks the different suppliers based on the GRM technique and then determines the optimum inventory level by considering different objectives. To show the implementation of the proposed model, we use benchmark data presented by Talluri and Baker [Talluri, S., & Baker, R. C. (2002). A multi-phase mathematical programming approach for effective supply chain design. European Journal of Operational Research, 141(3), 544-558]. The preliminary results indicate that the proposed model is capable of handling different criteria for supplier selection.
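
    A minimal grey relational analysis pass of the kind the ranking step relies on might look as follows; the decision matrix is invented, and every criterion is treated as larger-is-better, which is an assumption of this sketch rather than part of the cited study.

        import numpy as np

        # Rows: suppliers, columns: criteria (all treated as "larger is better").
        X = np.array([[0.80, 0.70, 0.90],
                      [0.60, 0.95, 0.75],
                      [0.90, 0.60, 0.85]], dtype=float)

        # 1. Normalise each criterion to [0, 1].
        norm = (X - X.min(0)) / (X.max(0) - X.min(0))

        # 2. Deviation from the ideal (reference) sequence of all ones.
        delta = np.abs(1.0 - norm)

        # 3. Grey relational coefficients, distinguishing coefficient rho = 0.5.
        rho = 0.5
        coef = (delta.min() + rho * delta.max()) / (delta + rho * delta.max())

        # 4. Grey relational grade = mean coefficient per supplier; rank descending.
        grade = coef.mean(axis=1)
        print("grades:", grade.round(3), "ranking:", np.argsort(-grade) + 1)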

  9. Computational Process Modeling for Additive Manufacturing (OSU)

    Science.gov (United States)

    Bagg, Stacey; Zhang, Wei

    2015-01-01

    Powder-bed Additive Manufacturing (AM) through Direct Metal Laser Sintering (DMLS) or Selective Laser Melting (SLM) is being used by NASA and the aerospace industry to "print" parts that traditionally are very complex, high-cost, or long-lead items. The process spreads a thin layer of metal powder over a build platform, then melts the powder in a series of welds in a desired shape. The next layer of powder is applied, and the process is repeated until, layer by layer, a very complex part can be built. This reduces cost and schedule by eliminating the very complex tooling and processes traditionally used in aerospace component manufacturing. To use the process to print end-use items, NASA seeks to understand SLM material well enough to develop a method of qualifying parts for spaceflight operation. Traditionally, a new material process takes many years and high investment to generate statistical databases and experiential knowledge, but computational modeling can truncate the schedule and cost: many experiments can be run quickly in a model that would take years and a high material cost to run empirically. This project seeks to optimize material build parameters with reduced time and cost through modeling.

  10. Selective CO methanation catalysts for fuel processing applications

    Energy Technology Data Exchange (ETDEWEB)

    Dagle, Robert A.; Wang, Yong; Xia, Guan-Guang; Strohm, James J.; Holladay, Jamelyn [Pacific Northwest National Laboratory, 902 Battle Boulevard, P.O. Box 999, Richland, WA 99352 (United States); Palo, Daniel R. [Pacific Northwest National Laboratory, 902 Battle Boulevard, P.O. Box 999, Richland, WA 99352 (United States); Microproducts Breakthrough Institute, P.O. Box 2330, Corvallis, OR 97339 (United States)

    2007-07-15

    Selective CO methanation as a strategy for CO removal in fuel processing applications was investigated over Ru-based catalysts. Ru metal loading and crystallite size were shown to affect catalyst activity and selectivity. Even operating at a gas-hourly-space-velocity as high as 13,500 h⁻¹, a 3% Ru/Al2O3 catalyst with a 34.2 nm crystallite was shown to be capable of reducing CO in a reformate to less than 100 ppm over a wide temperature range from 240 to 280 °C, while keeping hydrogen consumption below 10%. We present the effects of metal loading, preparation method, and crystallite size on performance for Ru-based catalysts in the selective methanation of CO in the presence of H2 and CO2. (author)

  11. Analysis of Using Resources in Business Process Modeling and Simulation

    Directory of Open Access Journals (Sweden)

    Vasilecas Olegas

    2014-12-01

    Full Text Available One of the key purposes of Business Process Model and Notation (BPMN) is to support the graphical representation of process models. However, such models lack support for the graphical representation of the resources that processes use during simulation or execution of process instances. The paper analyzes different methods, and their extensions, for resource modeling. Further, this article presents a selected set of resource properties that are relevant for resource modeling. The paper proposes an approach that explains how to use this selected set of resource properties to extend process modeling with BPMN and simulation tools, based on BPMN models in which business process instances use resources concurrently.

  12. Selection processes in a citrus hybrid population using RAPD markers

    Directory of Open Access Journals (Sweden)

    Oliveira Roberto Pedroso de

    2003-01-01

    Full Text Available The objective of this work was to evaluate selection processes in a citrus hybrid population using segregation analysis of RAPD markers. The segregation of 123 RAPD markers between 'Cravo' mandarin (Citrus reticulata Blanco) and 'Pêra' sweet orange (C. sinensis (L.) Osbeck) was analysed in an F1 progeny of 94 hybrids. Genetic composition, diversity, heterozygosity, differences in chromosomal structure, and the presence of deleterious recessive genes are discussed based on the segregation ratios obtained. A high percentage of markers deviated from the expected 1:1 segregation ratio in the F1 population. Many markers showed a 3:1 segregation ratio in both varieties and 1:3 in 'Pêra' sweet orange, probably due to directional selection processes. The distribution analysis of the frequencies of the segregating markers in a hybrid population is a simple method that allows a better understanding of the genetics of the citrus group.

  13. Laser Process for Selective Emitter Silicon Solar Cells

    Directory of Open Access Journals (Sweden)

    G. Poulain

    2012-01-01

    Full Text Available Selective emitter solar cells can provide a significant increase in conversion efficiency. However, current approaches need many technological steps and alignment procedures. This paper reports on a preliminary attempt to reduce the number of processing steps and therefore the cost of selective emitter cells. In the developed procedure, a phosphorous glass covered with silicon nitride acts as the doping source. A laser is used to locally open the antireflection coating and, at the same time, achieve local phosphorus diffusion. In this process the standard chemical etching of the phosphorous glass is avoided. A sheet resistance variation from 100 Ω/sq to 40 Ω/sq is demonstrated with a nanosecond UV laser. Numerical simulation of the laser-matter interaction is discussed to understand the dopant diffusion efficiency. Preliminary solar cell results show a 0.5% improvement compared with a homogeneous emitter structure.

  14. Improving randomness characterization through Bayesian model selection

    CERN Document Server

    Díaz-Hernández Rojas, Rafael; Angulo Martínez, Alí M; U'Ren, Alfred B; Hirsch, Jorge G; Marsili, Matteo; Pérez Castillo, Isaac

    2016-01-01

    Nowadays random number generation plays an essential role in technology, with important applications in areas ranging from cryptography, which lies at the core of current communication protocols, to Monte Carlo methods and other probabilistic algorithms. In this context, a crucial scientific endeavour is to develop effective methods that allow the characterization of random number generators. However, commonly employed methods either lack formality (e.g. the NIST test suite) or are inapplicable in principle (e.g. the characterization derived from the Algorithmic Theory of Information, ATI). In this letter we present a novel method based on Bayesian model selection, which is both rigorous and effective, for characterizing randomness in a bit sequence. We derive analytic expressions for a model's likelihood, which is then used to compute its posterior probability distribution. Our method proves to be more rigorous than NIST's suite and the Borel-Normality criterion, and its implementation is straightforward. We…
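
    The central quantity in this kind of analysis is a marginal likelihood. As a toy version of the idea (not the authors' construction), the sketch below compares a fair-coin model against an unknown-bias model for a bit sequence using analytic Beta-Bernoulli marginal likelihoods; a large positive log Bayes factor would flag the sequence as biased.

        from math import lgamma, log

        def log_marglik_biased(n_heads, n_tails, a=1.0, b=1.0):
            """Beta-Bernoulli marginal likelihood, integrating the bias out."""
            return (lgamma(a + b) - lgamma(a) - lgamma(b)
                    + lgamma(a + n_heads) + lgamma(b + n_tails)
                    - lgamma(a + b + n_heads + n_tails))

        def log_bayes_factor(bits):
            """ln BF of 'unknown-bias coin' over 'fair coin' for a bit sequence."""
            n1, n0 = sum(bits), len(bits) - sum(bits)
            log_fair = len(bits) * log(0.5)   # every sequence equally likely
            return log_marglik_biased(n1, n0) - log_fair

        print(log_bayes_factor([1, 0, 1, 1, 0, 1, 0, 0, 1, 1]))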

  15. Ancilla-less selective and efficient quantum process tomography

    CERN Document Server

    Schmiegelow, Christian Tomás; Larotonda, Miguel Antonio; Paz, Juan Pablo

    2011-01-01

    Several methods, known collectively as Quantum Process Tomography, are available to characterize the evolution of quantum systems, a task of crucial importance. However, their complexity dramatically increases with the size of the system. Here we present the theory describing a new type of method for quantum process tomography. We describe an algorithm that can be used to selectively estimate any parameter characterizing a quantum process. Unlike any of its predecessors, this new quantum tomographer combines two main virtues: it requires investing a number of physical resources that scales polynomially with the number of qubits, and at the same time it does not require any ancillary resources. We present the results of the first photonic implementation of this quantum device, characterizing quantum processes affecting two qubits encoded in heralded single photons. Even for this small system our method displays clear advantages over the other existing ones.

  16. SELECTION AND PROMOTION PROCESS TO SUPERVISORY POSITIONS IN MEXICO, 2015

    Directory of Open Access Journals (Sweden)

    José Guadalupe Hernández López

    2015-12-01

    Full Text Available Mexico is starting a process of selecting and promoting teachers to supervisory positions through what has been called competitive examination. This competition, derived from the 2013 Education Reform, is justified by the claim that it identifies the best teachers to fill these positions. As a "new" process in the Mexican education system, it has led to a series of disputes, since the examination was confined to the application and resolution of a standardized multiple-choice test, applied in a single eight-hour session, that determines whether a teacher is qualified or not qualified for the job.

  17. Selective blockade of microRNA processing by Lin-28

    OpenAIRE

    Viswanathan, Srinivas R.; Daley, George Q.; Gregory, Richard I.

    2008-01-01

    MicroRNAs (miRNAs) play critical roles in development, and dysregulation of miRNA expression has been observed in human malignancies. Recent evidence suggests that the processing of several primary miRNA transcripts (pri-miRNAs) is blocked post-transcriptionally in embryonic stem (ES) cells, embryonal carcinoma (EC) cells, and primary tumors. Here we show that Lin-28, a developmentally regulated RNA-binding protein, selectively blocks the processing of pri-let-7 miRNAs in embryonic cells. Usi...

  18. Temperature fields in machining processes and heat transfer models

    Energy Technology Data Exchange (ETDEWEB)

    Palazzo, G.; Pasquino, R. [University of Salerno Via Ponte Donmelillo, Fisciano (Italy). Department of Mechanical Engineering; Bellomo, N. [Politecnico Torino Corso Duca degli Abruzzi, Torino (Italy). Department of Mathematics

    2002-07-01

    This paper deals with the modelling of the heat transfer process, with special attention to the characterization of the thermal field during turning processes. Specifically, the measurement of the thermal field and the selection of proper heat transfer models are addressed. The analysis is developed in view of the solution of direct and inverse problems. (author)

  19. ModelOMatic: fast and automated model selection between RY, nucleotide, amino acid, and codon substitution models.

    Science.gov (United States)

    Whelan, Simon; Allen, James E; Blackburne, Benjamin P; Talavera, David

    2015-01-01

    Molecular phylogenetics is a powerful tool for inferring both the process and pattern of evolution from genomic sequence data. Statistical approaches, such as maximum likelihood and Bayesian inference, are now established as the preferred methods of inference. The choice of models that a researcher uses for inference is of critical importance, and there are established methods for model selection conditioned on a particular type of data, such as nucleotides, amino acids, or codons. A major limitation of existing model selection approaches is that they can only compare models acting upon a single type of data. Here, we extend model selection to allow comparisons between models describing different types of data by introducing the idea of adapter functions, which project aggregated models onto the originally observed sequence data. These projections are implemented in the program ModelOMatic and used to perform model selection on 3722 families from the PANDIT database, 68 genes from an arthropod phylogenomic data set, and 248 genes from a vertebrate phylogenomic data set. For the PANDIT and arthropod data, we find that amino acid models are selected for the overwhelming majority of alignments, with progressively smaller numbers of alignments selecting codon and nucleotide models, and no families selecting RY-based models. In contrast, nearly all alignments from the vertebrate data set select codon-based models. The sequence divergence, the number of sequences, and the degree of selection acting upon the protein sequences may contribute to explaining this variation in model selection. Our ModelOMatic program is fast, with most families from PANDIT taking fewer than 150 s to complete, and should therefore be easily incorporated into existing phylogenetic pipelines. ModelOMatic is available at https://code.google.com/p/modelomatic/.

  20. TIME SERIES FORECASTING WITH MULTIPLE CANDIDATE MODELS: SELECTING OR COMBINING?

    Institute of Scientific and Technical Information of China (English)

    YU Lean; WANG Shouyang; K. K. Lai; Y.Nakamori

    2005-01-01

    Various mathematical models have been commonly used in time series analysis and forecasting. In these processes, academic researchers and business practitioners often come up against two important problems. The first is whether, for dissimilar modeling approaches, to select a single appropriate approach for prediction purposes or to combine the different individual approaches into a single forecast. The second is whether, for similar modeling approaches, to select the best candidate model for forecasting or to mix the various candidate models with different parameters into a new forecast. In this study, we propose a set of computational procedures to solve these two issues via two judgmental criteria. Meanwhile, in view of the problems reported in the literature, a novel modeling technique is also proposed to overcome the drawbacks of existing combined forecasting methods. To verify the efficiency and reliability of the proposed procedures and modeling technique, simulations and real-data examples are conducted in this study. The results reveal that the proposed procedures and modeling technique can be used as a feasible solution for time series forecasting with multiple candidate models.
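
    One standard way to mix candidate models, in the spirit of the combining option discussed above (though not necessarily the authors' own technique), is to weight each candidate's forecast by its relative AIC-based likelihood; the forecasts and AIC scores below are invented for illustration.

        import numpy as np

        def akaike_weights(aic_values):
            """Convert candidate-model AIC scores into combination weights."""
            aic = np.asarray(aic_values, dtype=float)
            rel = np.exp(-0.5 * (aic - aic.min()))   # relative likelihoods
            return rel / rel.sum()

        # Hypothetical one-step-ahead forecasts and AICs of three candidates.
        forecasts = np.array([102.3, 98.7, 100.9])
        weights = akaike_weights([214.2, 212.9, 215.8])
        combined = weights @ forecasts
        print("weights:", weights.round(3), "combined forecast:", combined.round(2))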

  1. Estimation of time-varying selectivity in stock assessments using state-space models

    DEFF Research Database (Denmark)

    Nielsen, Anders; Berg, Casper Willestofte

    2014-01-01

    The model allows for a time-varying selectivity pattern. The fishing mortality rates are considered (possibly correlated) stochastic processes, and the corresponding process variances are estimated within the model. The model is applied to North Sea cod, and it is verified from simulations that time-varying selectivity can be estimated…

  2. Principles of polymer processing modelling

    Directory of Open Access Journals (Sweden)

    Agassant Jean-François

    2016-01-01

    Full Text Available Polymer processing involves three thermo-mechanical stages: plastication of solid polymer granules or powder into a homogeneous fluid, shaping of that fluid under pressure in moulds or dies, and finally cooling, and possibly drawing, to obtain the final plastic part. The physical properties of polymers (high viscosity, non-linear rheology, low thermal diffusivity) as well as the complex shape of most plastic parts make modelling a challenge. Several examples (film blowing, extrusion dies, injection moulding, blow moulding) are presented and discussed.

  3. Statistical Inference for Point Process Models of Rainfall

    Science.gov (United States)

    Smith, James A.; Karr, Alan F.

    1985-01-01

    In this paper we develop maximum likelihood procedures for parameter estimation and model selection that apply to a large class of point process models that have been used to model rainfall occurrences, including Cox processes, Neyman-Scott processes, and renewal processes. The statistical inference procedures are based on the stochastic intensity λ(t) = lim_{s→0, s>0} (1/s) E[N(t+s) − N(t) | N(u), u ≤ t]. The likelihood function of a point process is shown to have a simple expression in terms of the stochastic intensity. The main result of this paper is a recursive procedure for computing stochastic intensities; the procedure is applicable to a broad class of point process models, including renewal Cox processes with Markovian intensity processes and an important class of Neyman-Scott processes. The model selection procedure we propose, which is based on likelihood ratios, allows direct comparison of two classes of point processes to determine which provides a better model for a given data set. The estimation and model selection procedures are applied to two data sets of simulated Cox process arrivals and a data set of daily rainfall occurrences in the Potomac River basin.
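
    For reference, the simple likelihood expression referred to above takes the standard point-process form for observations on [0, T] (a textbook result, stated here for orientation rather than quoted from the paper):

        \log L \;=\; \sum_{i \,:\, t_i \le T} \log \lambda(t_i) \;-\; \int_0^T \lambda(t)\,\mathrm{d}t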

  4. Inflation Model Selection meets Dark Radiation

    CERN Document Server

    Tram, Thomas; Vennin, Vincent

    2016-01-01

    We investigate how inflation model selection is affected by the presence of additional free-streaming relativistic degrees of freedom, i.e. dark radiation. We perform a full Bayesian analysis of both inflation parameters and cosmological parameters taking reheating into account self-consistently. We compute the Bayesian evidence for a few representative inflation scenarios in both the standard $\\Lambda\\mathrm{CDM}$ model and an extension including dark radiation parametrised by its effective number of relativistic species $N_\\mathrm{eff}$. We find that the observational status of most inflationary models is unchanged, with the exception of potentials such as power-law inflation that predict a value for the scalar spectral index that is too large in $\\Lambda\\mathrm{CDM}$ but which can be accommodated when $N_\\mathrm{eff}$ is allowed to vary. In this case, cosmic microwave background data indicate that power-law inflation is one of the best models together with plateau potentials. However, contrary to plateau p...

  5. Integrated modelling in materials and process technology

    DEFF Research Database (Denmark)

    Hattel, Jesper Henri

    2008-01-01

    Integrated modelling of entire process sequences and the subsequent in-service conditions, and multiphysics modelling of the single process steps, are areas that increasingly support optimisation of manufactured parts. In the present paper, three different examples of modelling manufacturing processes from the viewpoint of combined materials and process modelling are presented: solidification of thin-walled ductile cast iron, integrated modelling of spray forming, and multiphysics modelling of friction stir welding. The fourth example describes integrated modelling applied to a failure analysis…

  6. A Review of Process Modeling Language Paradigms

    Institute of Scientific and Technical Information of China (English)

    MA Qin-hai; GUAN Zhi-min; LI Ying; ZHAO Xi-nan

    2002-01-01

    Process representation or modeling plays an important role in business process engineering. Process modeling languages can be evaluated by the extent to which they provide constructs useful for representing and reasoning about the aspects of a process, and can subsequently be chosen for a certain purpose. This paper reviews process modeling language paradigms and points out their advantages and disadvantages.

  7. Models of cultural niche construction with selection and assortative mating.

    Science.gov (United States)

    Creanza, Nicole; Fogarty, Laurel; Feldman, Marcus W

    2012-01-01

    Niche construction is a process through which organisms modify their environment and, as a result, alter the selection pressures on themselves and other species. In cultural niche construction, one or more cultural traits can influence the evolution of other cultural or biological traits by affecting the social environment in which the latter traits may evolve. Cultural niche construction may include either gene-culture or culture-culture interactions. Here we develop a model of this process and suggest some applications of this model. We examine the interactions between cultural transmission, selection, and assorting, paying particular attention to the complexities that arise when selection and assorting are both present, in which case stable polymorphisms of all cultural phenotypes are possible. We compare our model to a recent model for the joint evolution of religion and fertility and discuss other potential applications of cultural niche construction theory, including the evolution and maintenance of large-scale human conflict and the relationship between sex ratio bias and marriage customs. The evolutionary framework we introduce begins to address complexities that arise in the quantitative analysis of multiple interacting cultural traits.

  8. Models of cultural niche construction with selection and assortative mating.

    Directory of Open Access Journals (Sweden)

    Nicole Creanza

    Full Text Available Niche construction is a process through which organisms modify their environment and, as a result, alter the selection pressures on themselves and other species. In cultural niche construction, one or more cultural traits can influence the evolution of other cultural or biological traits by affecting the social environment in which the latter traits may evolve. Cultural niche construction may include either gene-culture or culture-culture interactions. Here we develop a model of this process and suggest some applications of this model. We examine the interactions between cultural transmission, selection, and assorting, paying particular attention to the complexities that arise when selection and assorting are both present, in which case stable polymorphisms of all cultural phenotypes are possible. We compare our model to a recent model for the joint evolution of religion and fertility and discuss other potential applications of cultural niche construction theory, including the evolution and maintenance of large-scale human conflict and the relationship between sex ratio bias and marriage customs. The evolutionary framework we introduce begins to address complexities that arise in the quantitative analysis of multiple interacting cultural traits.

  9. Python Program to Select HII Region Models

    Science.gov (United States)

    Miller, Clare; Lamarche, Cody; Vishwas, Amit; Stacey, Gordon J.

    2016-01-01

    HII regions are areas of singly ionized Hydrogen formed by the ionizing radiation of upper main sequence stars. The infrared fine-structure line emissions, particularly of Oxygen, Nitrogen, and Neon, can give important information about HII regions, including gas temperature and density, elemental abundances, and the effective temperature of the stars that form them. The processes involved in calculating this information from observational data are complex. Models, such as those provided in Rubin 1984 and those produced by Cloudy (Ferland et al., 2013), enable one to extract physical parameters from observational data. However, the multitude of search parameters can make sifting through models tedious. I digitized Rubin's models and wrote a Python program that takes observed line ratios and their uncertainties and finds the Rubin or Cloudy model that best matches the observational data. By creating a Python script that is user friendly and able to quickly sort through models with a high level of accuracy, this work increases efficiency and reduces human error in matching HII region models to observational data.
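
    The matching step described above amounts to an uncertainty-weighted grid search. A minimal sketch of that kind of matcher follows; the model grid, parameter labels, and line ratios are invented stand-ins for the digitized Rubin/Cloudy tables, not the program's actual data.

        import numpy as np

        # Hypothetical grid: each row is one model's predicted line ratios
        # (columns could be, e.g., [OIII]52/88 and [NIII]57/[NII]122).
        model_grid = np.array([[1.2, 0.8],
                               [2.5, 1.1],
                               [0.9, 0.5]])
        model_params = ["n_e=100, T_eff=35kK",
                        "n_e=300, T_eff=38kK",
                        "n_e=50,  T_eff=33kK"]

        observed = np.array([1.1, 0.7])
        sigma = np.array([0.2, 0.1])      # observational uncertainties

        # Chi-square of every model against the observations; pick the minimum.
        chi2 = (((model_grid - observed) / sigma) ** 2).sum(axis=1)
        best = int(np.argmin(chi2))
        print(f"best model: {model_params[best]} (chi2 = {chi2[best]:.2f})")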

  10. Continuous-Time Mean-Variance Portfolio Selection under the CEV Process

    OpenAIRE

    Hui-qiang Ma

    2014-01-01

    We consider a continuous-time mean-variance portfolio selection model when stock price follows the constant elasticity of variance (CEV) process. The aim of this paper is to derive an optimal portfolio strategy and the efficient frontier. The mean-variance portfolio selection problem is formulated as a linearly constrained convex program problem. By employing the Lagrange multiplier method and stochastic optimal control theory, we obtain the optimal portfolio strategy and mean-variance effici...

  11. Scientific basis for process and catalyst design in the selective oxidation of methane to formaldehyde.

    Science.gov (United States)

    Arena, Francesco; Parmaliana, Adolfo

    2003-12-01

    The mechanism and kinetics of the gas-phase selective oxidation of methane to formaldehyde (MPO) are revised in the general context of catalytic oxidations. In agreement with ab initio calculations of the energy barrier for the activation of methane on transition metal oxide complexes, a formal Langmuir-Hinshelwood kinetic model is proposed which accounts for the "steady-state" conditions and activity-selectivity pattern of MPO catalysts, providing an original support to process design and catalyst development.

  12. Business Process Modelling for Measuring Quality

    NARCIS (Netherlands)

    Heidari, F.; Loucopoulos, P.; Brazier, F.M.

    2013-01-01

    Business process modelling languages facilitate presentation, communication and analysis of business processes with different stakeholders. This paper proposes an approach that drives specification and measurement of quality requirements and in doing so relies on business process models as

  13. Business Process Modelling for Measuring Quality

    NARCIS (Netherlands)

    Heidari, F.; Loucopoulos, P.; Brazier, F.M.

    2013-01-01

    Business process modelling languages facilitate presentation, communication and analysis of business processes with different stakeholders. This paper proposes an approach that drives specification and measurement of quality requirements and in doing so relies on business process models as representations.

  14. The signal selection and processing method for polarization measurement radar

    Institute of Scientific and Technical Information of China (English)

    CHANG YuLiang; WANG XueSong; LI YongZhen; XIAO ShunPing

    2009-01-01

    Based on the ambiguity function, a novel signal processing method for polarization measurement radar is developed. One advantage of this method is that the two orthogonally polarized signals do not have to be perpendicular to each other, as required by traditional methods. The error due to the correlation of the two transmitted signals in the traditional method can be reduced by this new approach. A concept called the ambiguity function matrix (AFM) is introduced based on this method. The AFM is a promising tool for signal selection and design in polarization scattering matrix measurement. The waveforms of polarimetric radar are categorized and analyzed based on the AFM in this paper. The signal processing flow of this method is explained, and the polarization scattering matrix measurement performance is verified by simulation. Furthermore, this signal processing method can be used in the inter-pulse interval measurement technique as well as in the instantaneous measurement technique.

  15. Optimization of post combustion carbon capture process-solvent selection

    Directory of Open Access Journals (Sweden)

    Udara S. P. R. Arachchige, Muhammad Mohsin, Morten C. Melaaen

    2012-01-01

    Full Text Available Reducing the main energy requirement of the CO2 capture process, the re-boiler duty in the stripper section, is important. The present study focused on selecting a better solvent concentration and CO2 lean loading for the CO2 capture process. Flue gases from both coal- and gas-fired power plants were considered in developing capture plants with different efficiencies. The solvent concentration was varied from 25 to 40 (w/w)% and the CO2 lean loading from 0.15 to 0.30 (mol CO2/mol MEA) for 70-95 (mol)% CO2 removal efficiencies. The optimum specifications for the coal and gas processes, such as MEA concentration, CO2 lean loading, and solvent inlet flow rate, were obtained.

  16. High-dimensional model estimation and model selection

    CERN Document Server

    CERN. Geneva

    2015-01-01

    I will review concepts and algorithms from high-dimensional statistics for linear model estimation and model selection. I will particularly focus on the so-called p >> n setting, where the number of variables p is much larger than the number of samples n. I will focus mostly on regularized statistical estimators that produce sparse models. Important examples include the LASSO and its matrix extension, the Graphical LASSO, and more recent non-convex methods such as the TREX. I will show the applicability of these estimators in a diverse range of scientific applications, such as sparse interaction graph recovery and high-dimensional classification and regression problems in genomics.
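
    A minimal p >> n example with the LASSO, using scikit-learn and synthetic data; the regularisation strength is an arbitrary illustrative choice, and in practice it would be tuned, e.g. by cross-validation.

        import numpy as np
        from sklearn.linear_model import Lasso

        rng = np.random.default_rng(0)
        n, p = 50, 500                       # p >> n setting
        X = rng.standard_normal((n, p))
        beta = np.zeros(p); beta[:5] = 3.0   # only 5 truly active variables
        y = X @ beta + rng.standard_normal(n)

        model = Lasso(alpha=0.3).fit(X, y)   # l1 penalty induces sparsity
        selected = np.flatnonzero(model.coef_)
        print("selected variables:", selected)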

  17. Fuzzy modelling for selecting headgear types.

    Science.gov (United States)

    Akçam, M Okan; Takada, Kenji

    2002-02-01

    The purpose of this study was to develop a computer-assisted inference model for selecting appropriate types of headgear appliance for orthodontic patients and to investigate its clinical versatility as a decision-making aid for inexperienced clinicians. Fuzzy rule bases were created for the overjet, overbite, and mandibular plane angle variables according to subjective criteria based on the clinical experience and knowledge of the authors. The rules were then transformed into membership functions, and geometric mean aggregation was performed to develop the inference model. The resultant fuzzy logic was then tested on 85 cases in which the patients had been diagnosed as requiring headgear appliances. Eight experienced orthodontists judged each of the cases and decided whether they 'agreed', 'accepted', or 'disagreed' with the recommendations of the computer system. Intra-examiner agreement was investigated using repeated judgements of a set of 30 orthodontic cases and the kappa statistic. All of the examiners exceeded a kappa score of 0.7, allowing them to participate in the test run of the validity of the proposed inference model. The examiners' agreement with the system's recommendations was evaluated statistically. The average satisfaction rate of the examiners was 95.6 per cent, and for 83 out of the 85 cases (97.6 per cent) the majority of the examiners (i.e. six or more out of the eight) were satisfied with the recommendations of the system. Thus, the usefulness of the proposed inference logic was confirmed.
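
    To make the aggregation step concrete, the sketch below evaluates invented triangular membership functions for the three variables named above and combines them with a geometric mean; the breakpoints and the patient values are hypothetical, not those of the study.

        def tri(x, a, b, c):
            """Triangular membership function with support (a, c) and peak b."""
            return max(0.0, min((x - a) / (b - a), (c - x) / (c - b)))

        # Hypothetical memberships of one patient for a rule such as
        # "cervical headgear suitable", from overjet, overbite, MP angle.
        m_overjet = tri(6.0, 2.0, 7.0, 12.0)      # 'large overjet'
        m_overbite = tri(4.0, 1.0, 4.0, 8.0)      # 'deep overbite'
        m_mp_angle = tri(24.0, 15.0, 25.0, 35.0)  # 'low-average MP angle'

        # Geometric-mean aggregation of the three membership degrees.
        suitability = (m_overjet * m_overbite * m_mp_angle) ** (1 / 3)
        print(round(suitability, 3))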

  18. Decision making software for effective selection of treatment train alternative for wastewater using analytical hierarchy process.

    Science.gov (United States)

    Prasad, A D; Tembhurkar, A R

    2013-10-01

    Proper selection of a treatment process and synthesis of a treatment train is a complex engineering activity that requires crucial decision making during the planning and design of any Wastewater Treatment Plant (WWTP). Earlier studies on process selection mainly considered cost as the most important selection criterion, and a number of studies focused on cost optimization models using dynamic programming, geometric programming, and nonlinear programming. However, it has been noticed that traditional cost analysis alone cannot be applied to evaluate Treatment Train (TT) alternatives, as a number of important non-tangible factors cannot be easily expressed in monetary units. Recent research therefore focuses on multi-criteria techniques for treatment process selection. AHP provides a powerful tool for multi-hierarchy and multi-variable systems, overcoming the limitations of traditional techniques. The AHP model designed in this study facilitates proper decision making and reduces the margin of error during optimization arising from the number of parameters in the hierarchy levels. About 14 important factors and 13 sub-factors were identified for the selection of treatment alternatives for the wastewater and sludge streams, although cost remains one of the most important selection criteria. The present paper provides details of the development of a software tool called "ProSelArt", based on this AHP model, to aid proper decision making.
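
    The core AHP computation behind such a tool is the principal-eigenvector weighting of a pairwise comparison matrix, checked with a consistency ratio. A minimal sketch with an invented 3x3 criteria matrix follows; this is generic AHP, not the "ProSelArt" code itself.

        import numpy as np

        # Hypothetical pairwise comparisons of three criteria
        # (e.g. cost, footprint, sludge production) on Saaty's 1-9 scale.
        A = np.array([[1.0, 3.0, 5.0],
                      [1/3, 1.0, 2.0],
                      [1/5, 1/2, 1.0]])

        # Priority weights: principal eigenvector, normalised to sum to one.
        vals, vecs = np.linalg.eig(A)
        k = np.argmax(vals.real)
        w = np.abs(vecs[:, k].real)
        w /= w.sum()

        # Consistency ratio (random index RI = 0.58 for a 3x3 matrix).
        ci = (vals.real[k] - len(A)) / (len(A) - 1)
        print("weights:", w.round(3), "CR:", round(ci / 0.58, 3))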

  19. SLAM: A Connectionist Model for Attention in Visual Selection Tasks.

    Science.gov (United States)

    Phaf, R. Hans; And Others

    1990-01-01

    The SeLective Attention Model (SLAM) performs visual selective attention tasks and demonstrates that object selection and attribute selection are both necessary and sufficient for visual selection. The SLAM is described, particularly with regard to its ability to represent an individual subject performing filtering tasks. (TJH)

  20. Race, Self-Selection, and the Job Search Process

    Science.gov (United States)

    Pager, Devah; Pedulla, David S.

    2015-01-01

    While existing research has documented persistent barriers facing African American job seekers, far less research has questioned how job seekers respond to this reality. Do minorities self-select into particular segments of the labor market to avoid discrimination? Such questions have remained unanswered due to the lack of data available on the positions to which job seekers apply. Drawing on two original datasets with application-specific information, we find little evidence that blacks target or avoid particular job types. Rather, blacks cast a wider net in their search than similarly situated whites, including a greater range of occupational categories and characteristics in their pool of job applications. Finally, we show that perceptions of discrimination are associated with increased search breadth, suggesting that broad search among African Americans represents an adaptation to labor market discrimination. Together these findings provide novel evidence on the role of race and self-selection in the job search process. PMID:26046224

  1. Race, self-selection, and the job search process.

    Science.gov (United States)

    Pager, Devah; Pedulla, David S

    2015-01-01

    While existing research has documented persistent barriers facing African-American job seekers, far less research has questioned how job seekers respond to this reality. Do minorities self-select into particular segments of the labor market to avoid discrimination? Such questions have remained unanswered due to the lack of data available on the positions to which job seekers apply. Drawing on two original data sets with application-specific information, we find little evidence that blacks target or avoid particular job types. Rather, blacks cast a wider net in their search than similarly situated whites, including a greater range of occupational categories and characteristics in their pool of job applications. Additionally, we show that perceptions of discrimination are associated with increased search breadth, suggesting that broad search among African-Americans represents an adaptation to labor market discrimination. Together these findings provide novel evidence on the role of race and self-selection in the job search process.

  2. PASS-GP: Predictive active set selection for Gaussian processes

    DEFF Research Database (Denmark)

    Henao, Ricardo; Winther, Ole

    2010-01-01

    We propose a new approximation method for Gaussian process (GP) learning for large data sets that combines inline active set selection with hyperparameter optimization. The predictive probability of the label is used for ranking the data points. We use the leave-one-out predictive probability available in GPs to make a common ranking for both active and inactive points, allowing points to be removed again from the active set. This is important for keeping the complexity down and at the same time focusing on points close to the decision boundary. We lend both theoretical and empirical support to the active set selection strategy and marginal likelihood optimization on the active set. We make extensive tests on the USPS and MNIST digit classification databases with and without incorporating invariances, demonstrating that we can get state-of-the-art results (e.g. 0.86% error on MNIST) with reasonable…

  3. Selective target processing: perceptual load or distractor salience?

    Science.gov (United States)

    Eltiti, Stacy; Wallace, Denise; Fox, Elaine

    2005-07-01

    Perceptual load theory (Lavie, 1995) states that participants cannot engage in focused attention when shown displays containing a low perceptual load, because attentional resources are not exhausted, whereas in high-load displays attention is always focused, because attentional resources are exhausted. An alternative "salience" hypothesis holds that the salience of distractors, and not perceptual load per se, determines selective attention. Three experiments investigated the influence that target and distractor onsets and offsets have on selective processing in a standard interference task. Perceptual load theory predicts that, regardless of target or distractor presentation (onset or offset), interference from ignored distractors should occur in low-load displays only. In contrast, the salience hypothesis predicts that interference should occur when the distractor appears as an onset, and would occur for distractor offsets only when the target was also an offset. Interference may even occur in high-load displays if the distractor is more salient. The results supported the salience hypothesis.

  4. Manufacturing process and material selection in concurrent collaborative design of MEMS devices

    Science.gov (United States)

    Zha, Xuan F.; Du, H.

    2003-09-01

    In this paper we present a knowledge-intensive approach and system for selecting suitable manufacturing processes and materials for microelectromechanical systems (MEMS) devices in a concurrent collaborative design environment. Fundamental issues in MEMS manufacturing process and material selection, such as the concurrent design framework, manufacturing process and material hierarchies, and the selection strategy, are first addressed. Then, a fuzzy decision support scheme for this multi-criteria decision-making problem is proposed for estimating, ranking and selecting possible manufacturing processes, materials and their combinations. A Web-based prototype advisory system for MEMS manufacturing process and material selection, WebMEMS-MASS, is developed based on the client-knowledge-server architecture and framework to help the designer find good processes and materials for MEMS devices. The system, one of the important parts of an advanced simulation and modeling tool for MEMS design, is a concept-level process and material selection tool, which can be used as a standalone application or a Java applet via the Web. The running sessions of the system are inter-linked with webpages of tutorials and reference pages that explain the facets, fabrication processes and material choices; the calculations and reasoning in selection are performed using process capability and material property data from a remote Web-based database and interactive knowledge base that can be maintained and updated via the Internet. The use of the developed system, including an operation scenario, user support, and integration with a MEMS collaborative design system, is presented. Finally, an illustrative example is provided.

  5. Structure and selection in an autocatalytic binary polymer model

    DEFF Research Database (Denmark)

    Tanaka, Shinpei; Fellermann, Harold; Rasmussen, Steen

    2014-01-01

    An autocatalytic binary polymer system is studied as an abstract model for a chemical reaction network capable of evolving. Due to autocatalysis, long polymers appear spontaneously and their concentration is shown to be maintained at the same level as that of monomers. When the reaction starts from… Stability, fluctuations, and dynamic selection mechanisms are investigated for the involved self-organizing processes. Copyright (C) EPLA, 2014

  6. Estimation of a multivariate mean under model selection uncertainty

    Directory of Open Access Journals (Sweden)

    Georges Nguefack-Tsague

    2014-05-01

    Full Text Available Model selection uncertainty would occur if we selected a model based on one data set and subsequently applied it for statistical inferences, because the "correct" model would not be selected with certainty.  When the selection and inference are based on the same dataset, some additional problems arise due to the correlation of the two stages (selection and inference. In this paper model selection uncertainty is considered and model averaging is proposed. The proposal is related to the theory of James and Stein of estimating more than three parameters from independent normal observations. We suggest that a model averaging scheme taking into account the selection procedure could be more appropriate than model selection alone. Some properties of this model averaging estimator are investigated; in particular we show using Stein's results that it is a minimax estimator and can outperform Stein-type estimators.
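
    For reference, the Stein-type shrinkage that the proposed averaging estimator is compared against can be sketched in a few lines; the positive-part variant, shrinkage toward zero, and unit noise variance are assumptions of this illustration rather than details taken from the paper.

        import numpy as np

        def james_stein(x, sigma2=1.0):
            """Positive-part James-Stein estimate of a normal mean vector.
            Shrinks the raw observation toward zero; dominates the MLE in
            total squared-error risk when estimating 3+ independent means."""
            x = np.asarray(x, dtype=float)
            p = x.size
            assert p >= 3, "Stein's phenomenon needs at least three dimensions"
            shrink = max(0.0, 1.0 - (p - 2) * sigma2 / np.dot(x, x))
            return shrink * x

        print(james_stein([1.5, -0.8, 2.1, 0.3]))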

  7. QOS Aware Formalized Model for Semantic Web Service Selection

    Directory of Open Access Journals (Sweden)

    Divya Sachan

    2014-10-01

    Full Text Available Selecting the most relevant Web Service according to a client's requirement is an onerous task, as innumerable functionally identical Web Services (WS) are listed in the UDDI registry. These WS are functionally the same, but their quality and performance vary across service providers. A Web Service selection process involves two major points: recommending the pertinent Web Service and avoiding unjustifiable ones. The deficiency of keyword-based searching is that it does not handle the client request accurately, as a keyword may have ambiguous meanings in different scenarios. UDDI and search engines are all based on keyword search and therefore lag behind in pertinent Web Service selection. The search mechanism must consequently incorporate the semantic behavior of Web Services. To strengthen this approach, the proposed model incorporates Quality of Service (QoS)-based ranking of semantic web services.

  8. Exploratory Bayesian model selection for serial genetics data.

    Science.gov (United States)

    Zhao, Jing X; Foulkes, Andrea S; George, Edward I

    2005-06-01

    Characterizing the process by which molecular and cellular level changes occur over time will have broad implications for clinical decision making and help further our knowledge of disease etiology across many complex diseases. However, this presents an analytic challenge due to the large number of potentially relevant biomarkers and the complex, uncharacterized relationships among them. We propose an exploratory Bayesian model selection procedure that searches for model simplicity through independence testing of multiple discrete biomarkers measured over time. Bayes factor calculations are used to identify and compare models that are best supported by the data. For large model spaces, i.e., a large number of multi-leveled biomarkers, we propose a Markov chain Monte Carlo (MCMC) stochastic search algorithm for finding promising models. We apply our procedure to explore the extent to which HIV-1 genetic changes occur independently over time.

  9. SELECTION AND PRELIMINARY EVALUATION OF ALTERNATIVE REDUCTANTS FOR SRAT PROCESSING

    Energy Technology Data Exchange (ETDEWEB)

    Stone, M.; Pickenheim, B.; Peeler, D.

    2009-06-30

    Defense Waste Processing Facility - Engineering (DWPF-E) has requested that the Savannah River National Laboratory (SRNL) perform scoping evaluations of alternative flowsheets, with the primary focus on alternatives to formic acid during Chemical Process Cell (CPC) processing. A set of reductants was selected for testing during the evaluation of alternative reductants for Sludge Receipt and Adjustment Tank (SRAT) processing. The reductants fall into two general categories: reducing acids and non-acidic reducing agents. Reducing acids were selected as direct replacements for formic acid to reduce mercury in the SRAT, to acidify the sludge, and to balance the melter REDuction/OXidation potential (REDOX). Non-acidic reductants were selected as melter reductants and would not be able to reduce mercury in the SRAT. Sugar was not tested during this scoping evaluation, as previous work has already been conducted on the use of sugar with DWPF feeds. Based on the testing performed, the only viable short-term path to mitigating hydrogen generation in the CPC is replacement of formic acid with a mixture of glycolic and formic acids. An experiment using glycolic acid blended with formic acid on an 80:20 molar basis was able to reduce mercury while also targeting a predicted REDOX of 0.2, expressed as Fe²⁺/ΣFe. Based on this result, SRNL recommends performing a complete CPC demonstration of the glycolic/formic acid flowsheet, followed by design basis development and documentation. Of the options tested recently and in the past, the nitric/glycolic/formic blended-acid flowsheet has the potential for near-term implementation in the existing CPC equipment, providing a rapid throughput improvement. Use of a non-acidic reductant is recommended only if the processing constraints to remove mercury and acidify the sludge are eliminated. The non-acidic reductants (e.g. sugar) will not reduce mercury during CPC processing, and sludge acidification would have to be accomplished by other means.

  10. Temporally selective processing of communication signals by auditory midbrain neurons

    DEFF Research Database (Denmark)

    Elliott, Taffeta M; Christensen-Dalsgaard, Jakob; Kelley, Darcy B

    2011-01-01

    … of auditory neurons in the laminar nucleus of the torus semicircularis (TS) of X. laevis specializes in encoding vocalization click rates. We recorded single TS units while pure tones, natural calls, and synthetic clicks were presented directly to the tympanum via a vibration-stimulation probe. Synthesized click rates ranged from 4 to 50 Hz, the rate at which the clicks begin to overlap. Frequency selectivity and temporal processing were characterized using response-intensity curves, temporal-discharge patterns, and autocorrelations of reduplicated responses to click trains. Characteristic frequencies …

  11. Pyrochemical and Dry Processing Methods Program. A selected bibliography

    Energy Technology Data Exchange (ETDEWEB)

    McDuffie, H.F.; Smith, D.H.; Owen, P.T.

    1979-03-01

    This selected bibliography with abstracts was compiled to provide information support to the Pyrochemical and Dry Processing Methods (PDPM) Program sponsored by DOE and administered by the Argonne National Laboratory. Objectives of the PDPM Program are to evaluate nonaqueous methods of reprocessing spent fuel as a route to the development of proliferation-resistant and diversion-resistant methods for widespread use in the nuclear industry. Emphasis was placed on the literature indexed in the ERDA--DOE Energy Data Base (EDB). The bibliography includes indexes to authors, subject descriptors, EDB subject categories, and titles.

  12. Board Directors' Selection Process Following a Gender Quota

    DEFF Research Database (Denmark)

    Sigurjonsson, Olaf; Arnardottir, Audur Arna

    The 2008 financial crisis in Iceland was triggered by poor governance of three of the country's major banks and resulted in new corporate governance regulations, including a 40% gender quota for the boards of all state-owned enterprises, publicly traded enterprises, and public and private limited companies … the post-quota selection of new board directors, as well as the attitudes of board members towards the quota and perceptions of the effect of the quota on board processes. We incorporate a dual qualitative and quantitative methodology, with in-depth interviews of 20 board directors and chairs and a survey of 260 directors who …

  13. Hidden Markov Model for Stock Selection

    Directory of Open Access Journals (Sweden)

    Nguyet Nguyen

    2015-10-01

    Full Text Available The hidden Markov model (HMM) is typically used to predict the hidden regimes of observation data. Therefore, this model finds applications in many different areas, such as speech recognition systems, computational molecular biology and financial market predictions. In this paper, we use an HMM for stock selection. We first use the HMM to make monthly regime predictions for four macroeconomic variables: inflation (consumer price index, CPI), the industrial production index (INDPRO), a stock market index (S&P 500) and market volatility (VIX). At the end of each month, we calibrate the HMM's parameters for each of these economic variables and predict its regimes for the next month. We then look back into historical data to find the time periods for which the four variables had regimes similar to the forecasted ones. Within those similar periods, we analyze all of the S&P 500 stocks to identify which stock characteristics were well rewarded, and assign scores and corresponding weights to each of the stock characteristics. A composite score for each stock is calculated from the scores and weights of its features. Based on this algorithm, we choose the 50 top-ranking stocks to buy. We compare the performance of the portfolio with the benchmark index, the S&P 500. With an initial investment of $100 in December 1999, over 15 years our portfolio achieved an average gain of 14.9% per annum by December 2014, versus 2.3% for the S&P 500.
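
    A minimal sketch of the regime-prediction step, using the third-party hmmlearn package (pip install hmmlearn) on a synthetic series rather than actual CPI or VIX data, might look as follows.

        import numpy as np
        from hmmlearn.hmm import GaussianHMM  # third-party: pip install hmmlearn

        # Fit a two-regime HMM to one macro series and predict regimes.
        # The series below is synthetic, not real economic data.
        rng = np.random.default_rng(1)
        calm = rng.normal(0.2, 0.5, size=120)
        stressed = rng.normal(-0.5, 1.5, size=60)
        series = np.concatenate([calm, stressed]).reshape(-1, 1)

        model = GaussianHMM(n_components=2, covariance_type="full",
                            n_iter=200, random_state=1)
        model.fit(series)
        regimes = model.predict(series)                    # most likely regime per month
        next_regime_probs = model.transmat_[regimes[-1]]   # forecast for next month
        print("current regime:", regimes[-1])
        print("next-month regime probabilities:", np.round(next_regime_probs, 3))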

  14. Developing engineering processes through integrated modelling of product and process

    DEFF Research Database (Denmark)

    Nielsen, Jeppe Bjerrum; Hvam, Lars

    2012-01-01

    This article aims at developing an operational tool for integrated modelling of product assortments and engineering processes in companies making customer-specific products. Integrating a product model in the design of engineering processes will provide a deeper understanding of the engineering … activities as well as insight into how product features affect the engineering processes. The article suggests possible ways of integrating models of products with models of engineering processes. The models have been tested and further developed in an action research study carried out in collaboration …

  15. Model Order Selection Rules for Covariance Structure Classification in Radar

    Science.gov (United States)

    Carotenuto, Vincenzo; De Maio, Antonio; Orlando, Danilo; Stoica, Petre

    2017-10-01

    The adaptive classification of the interference covariance matrix structure for radar signal processing applications is addressed in this paper. This represents a key issue because many detection architectures are synthesized assuming a specific covariance structure which may not necessarily coincide with the actual one, due to the joint action of system and environment uncertainties. The considered classification problem is cast in terms of a multiple hypotheses test with some nested alternatives, and the theory of Model Order Selection (MOS) is exploited to devise suitable decision rules. Several MOS techniques, such as the Akaike, Takeuchi, and Bayesian information criteria, are adopted, and the corresponding merits and drawbacks are discussed. At the analysis stage, illustrative examples for the probability of correct model selection are presented, showing the effectiveness of the proposed rules.
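
    The decision rules themselves are generic information criteria; the sketch below illustrates AIC and BIC on a simple nested model-order problem (polynomial order), not the radar covariance statistics of the paper.

        import numpy as np

        # Generic model order selection with AIC and BIC on a nested family:
        # choose a polynomial order from noisy data (true order is 2).
        rng = np.random.default_rng(2)
        x = np.linspace(-1, 1, 80)
        y = 1.0 - 2.0 * x + 0.5 * x**2 + rng.normal(0, 0.2, x.size)

        n = x.size
        for order in range(5):
            coef = np.polyfit(x, y, order)
            resid = y - np.polyval(coef, x)
            sigma2 = np.mean(resid**2)
            # Gaussian log-likelihood at the MLE; k counts coefficients plus sigma^2.
            loglik = -0.5 * n * (np.log(2 * np.pi * sigma2) + 1)
            k = order + 2
            aic = -2 * loglik + 2 * k
            bic = -2 * loglik + k * np.log(n)
            print(f"order {order}: AIC={aic:7.1f}  BIC={bic:7.1f}")
        # The order minimizing AIC/BIC is selected; BIC penalizes complexity harder.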

  16. Autoregressive model selection with simultaneous sparse coefficient estimation

    CERN Document Server

    Sang, Hailin

    2011-01-01

    In this paper we propose a sparse coefficient estimation procedure for autoregressive (AR) models based on penalized conditional maximum likelihood. The penalized conditional maximum likelihood estimator (PCMLE) thus developed has the advantage of performing simultaneous coefficient estimation and model selection. Mild conditions are given on the penalty function and the innovation process, under which the PCMLE satisfies strong consistency, local $N^{-1/2}$ consistency, and the oracle property, respectively, where $N$ is the sample size. Two penalty functions, the least absolute shrinkage and selection operator (LASSO) and the smoothly clipped absolute deviation (SCAD), are considered as examples, and SCAD is shown to perform better than LASSO. A simulation study confirms our theoretical results. At the end, we provide an application of our method to historical data of the US Industrial Production Index for consumer goods, and the result is very promising.
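
    A simplified stand-in for the procedure, using an L1 (LASSO) penalty on the conditional least-squares criterion (which coincides with the conditional Gaussian likelihood up to scaling), is sketched below on synthetic data; it is not the paper's exact estimator.

        import numpy as np
        from sklearn.linear_model import Lasso

        # Sparse AR estimation sketch: LASSO on the lagged design matrix.
        rng = np.random.default_rng(3)
        T, p = 500, 8                                  # series length, max AR order
        phi = np.zeros(p); phi[0], phi[3] = 0.5, -0.3  # true sparse AR(8) coefficients
        y = np.zeros(T + 100)                          # 100 burn-in samples
        for t in range(p, y.size):
            y[t] = y[t - p:t][::-1] @ phi + rng.normal()
        y = y[100:]

        # Rows: y_t regressed on its p lags; column j-1 holds y_{t-j}.
        X = np.column_stack([y[p - j:T - j] for j in range(1, p + 1)])
        fit = Lasso(alpha=0.05, fit_intercept=False).fit(X, y[p:T])
        print("estimated AR coefficients:", np.round(fit.coef_, 2))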

  17. GREENSCOPE: A Method for Modeling Chemical Process ...

    Science.gov (United States)

    Current work within the U.S. Environmental Protection Agency's National Risk Management Research Laboratory is focused on the development of a method for modeling chemical process sustainability. The GREENSCOPE methodology, defined for the four bases of Environment, Economics, Efficiency, and Energy, can evaluate processes with over a hundred different indicators. These indicators provide a means for realizing the principles of green chemistry and green engineering in the context of sustainability. Development of the methodology has centered around three focal points. One is a taxonomy of impacts that describes the indicators and provides absolute scales for their evaluation. The setting of best and worst limits for the indicators allows the user to know the status of the process under study in relation to understood values. Thus, existing or imagined processes can be evaluated according to their relative indicator scores, and process modifications can strive towards realizable targets. A second area of focus is in advancing definitions of data needs for the many indicators of the taxonomy. Each of the indicators has specific data that are necessary for its calculation. Values needed and data sources have been identified. These needs can be mapped according to the information source (e.g., input stream, output stream, external data, etc.) for each of the bases. The user can visualize data-indicator relationships on the way to choosing selected ones for evaluation.
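
    The best/worst-limit scoring described here can be sketched generically as a percent score between the two limits; the indicator names, limits, and values below are invented, not GREENSCOPE's actual taxonomy.

        # Generic best/worst-limit scoring of sustainability indicators.
        # Limits and actual values are hypothetical.
        indicators = {
            # name: (actual value, worst-case limit, best-case limit)
            "water_use_m3_per_kg": (1.8, 10.0, 0.0),
            "energy_MJ_per_kg":    (45.0, 200.0, 20.0),
            "yield_fraction":      (0.86, 0.0, 1.0),
        }

        def percent_score(actual, worst, best):
            """0% at the worst-case limit, 100% at the best-case limit."""
            return 100.0 * (actual - worst) / (best - worst)

        for name, (actual, worst, best) in indicators.items():
            print(f"{name}: {percent_score(actual, worst, best):5.1f}%")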

  18. Parameter estimation and model selection in computational biology.

    Directory of Open Access Journals (Sweden)

    Gabriele Lillacci

    2010-03-01

    Full Text Available A central challenge in computational modeling of biological systems is the determination of the model parameters. Typically, only a fraction of the parameters (such as kinetic rate constants) are experimentally measured, while the rest are often fitted. The fitting process is usually based on experimental time course measurements of observables, which are used to assign parameter values that minimize some measure of the error between these measurements and the corresponding model prediction. The measurements, which can come from immunoblotting assays, fluorescent markers, etc., tend to be very noisy and taken at a limited number of time points. In this work we present a new approach to the problem of parameter selection of biological models. We show how one can use a dynamic recursive estimator, known as the extended Kalman filter, to arrive at estimates of the model parameters. The proposed method proceeds as follows. First, we use a variation of the Kalman filter that is particularly well suited to biological applications to obtain a first guess for the unknown parameters. Second, we employ an a posteriori identifiability test to check the reliability of the estimates. Finally, we solve an optimization problem to refine the first guess in case it is not accurate enough. The final estimates are guaranteed to be statistically consistent with the measurements. Furthermore, we show how the same tools can be used to discriminate among alternate models of the same biological process. We demonstrate these ideas by applying our methods to two examples, namely a model of the heat shock response in E. coli, and a model of a synthetic gene regulation system. The methods presented are quite general and may be applied to a wide class of biological systems where noisy measurements are used for parameter estimation or model selection.
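
    A minimal sketch of the state-augmentation idea behind such filters, recovering the unknown rate k of a toy decay model dx/dt = -k*x from noisy observations, is given below; the dynamics, noise levels, and tuning are illustrative, not the paper's models.

        import numpy as np

        # EKF for joint state/parameter estimation: augment the state with k.
        dt, steps, k_true = 0.1, 100, 0.8
        rng = np.random.default_rng(4)
        x_true, obs = 5.0, []
        for _ in range(steps):
            x_true += -k_true * x_true * dt
            obs.append(x_true + rng.normal(0, 0.05))

        z = np.array([5.0, 0.2])        # augmented state [x, k], rough initial guess
        P = np.diag([1.0, 1.0])         # state covariance
        Q = np.diag([1e-4, 1e-5])       # process noise (parameter ~ constant)
        R = 0.05**2                     # observation noise variance
        H = np.array([[1.0, 0.0]])      # we observe x only

        for y in obs:
            # Predict: x <- x - k*x*dt, k <- k; F is the Jacobian of this map.
            x, k = z
            z = np.array([x - k * x * dt, k])
            F = np.array([[1 - k * dt, -x * dt],
                          [0.0,        1.0]])
            P = F @ P @ F.T + Q
            # Update with the scalar observation y.
            S = (H @ P @ H.T)[0, 0] + R
            K = (P @ H.T)[:, 0] / S
            z = z + K * (y - z[0])
            P = P - np.outer(K, H @ P)
        print("estimated k:", round(z[1], 3), "(true:", k_true, ")")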

  19. Modeling grinding processes as micro processes ...

    African Journals Online (AJOL)


    … workpiece material dynamics, thus allowing for process planning, optimization, and control. In spite of the … arrangement of the grain vertices at the wheel active surface. … "on Workpiece Roughness and Process Vibration", J. of the Braz. …

  20. Novel roles for selected genes in meiotic DNA processing.

    Directory of Open Access Journals (Sweden)

    Philip W Jordan

    2007-12-01

    Full Text Available High-throughput studies of the 6,200 genes of Saccharomyces cerevisiae have provided valuable data resources. However, these resources require a return to experimental analysis to test predictions. An in-silico screen, mining existing interaction, expression, localization, and phenotype datasets, was developed with the aim of selecting minimally characterized genes involved in meiotic DNA processing. Based on our selection procedure, 81 deletion mutants were constructed and tested for phenotypic abnormalities. Eleven (13.6%) of the genes were identified as having novel roles in meiotic DNA processes, including DNA replication, recombination, and chromosome segregation. In particular, this analysis showed that Def1, a protein that facilitates ubiquitination of RNA polymerase II as a response to DNA damage, is required for efficient synapsis between homologues and normal levels of crossover recombination during meiosis. These characteristics are shared by a group of proteins required for Zip1 loading (ZMM proteins). Additionally, Soh1/Med31, a subunit of the RNA pol II mediator complex, Bre5, a ubiquitin protease cofactor, and an uncharacterized protein, Rmr1/Ygl250w, are required for normal levels of gene conversion events during meiosis. We show how existing datasets may be used to define gene sets enriched for specific roles and how these can be evaluated by experimental analysis.

  1. Bayesian nonparametric centered random effects models with variable selection.

    Science.gov (United States)

    Yang, Mingan

    2013-03-01

    In a linear mixed effects model, it is common practice to assume that the random effects follow a parametric distribution such as a normal distribution with mean zero. However, in the case of variable selection, substantial violation of the normality assumption can potentially impact the subset selection and result in poor interpretation and even incorrect results. In nonparametric random effects models, the random effects generally have a nonzero mean, which causes an identifiability problem for the fixed effects that are paired with the random effects. In this article, we focus on a Bayesian method for variable selection. We characterize the subject-specific random effects nonparametrically with a Dirichlet process and resolve the bias simultaneously. In particular, we propose flexible modeling of the conditional distribution of the random effects with changes across the predictor space. The approach is implemented using a stochastic search Gibbs sampler to identify subsets of fixed effects and random effects to be included in the model. Simulations are provided to evaluate and compare the performance of our approach to the existing ones. We then apply the new approach to a real data example, cross-country and interlaboratory rodent uterotrophic bioassay.

  2. Managing Analysis Models in the Design Process

    Science.gov (United States)

    Briggs, Clark

    2006-01-01

    Design of large, complex space systems depends on significant model-based support for exploration of the design space. Integrated models predict system performance in mission-relevant terms given design descriptions and multiple physics-based numerical models. Both the design activities and the modeling activities warrant explicit process definitions and active process management to protect the project from excessive risk. Software and systems engineering processes have been formalized and similar formal process activities are under development for design engineering and integrated modeling. JPL is establishing a modeling process to define development and application of such system-level models.

  3. Processing of Feature Selectivity in Cortical Networks with Specific Connectivity.

    Directory of Open Access Journals (Sweden)

    Sadra Sadeh

    Full Text Available Although non-specific at the onset of eye opening, networks in rodent visual cortex attain a non-random structure after eye opening, with a specific bias for connections between neurons of similar preferred orientations. As orientation selectivity is already present at eye opening, it remains unclear how this specificity in network wiring contributes to feature selectivity. Using large-scale inhibition-dominated spiking networks as a model, we show that feature-specific connectivity leads to a linear amplification of feedforward tuning, consistent with recent electrophysiological single-neuron recordings in rodent neocortex. Our results show that optimal amplification is achieved at an intermediate regime of specific connectivity. In this configuration a moderate increase of pairwise correlations is observed, consistent with recent experimental findings. Furthermore, we observed that feature-specific connectivity leads to the emergence of orientation-selective reverberating activity, and entails pattern completion in network responses. Our theoretical analysis provides a mechanistic understanding of subnetworks' responses to visual stimuli, and casts light on the regime of operation of sensory cortices in the presence of specific connectivity.

  4. Cupola Furnace Computer Process Model

    Energy Technology Data Exchange (ETDEWEB)

    Seymour Katz

    2004-12-31

    The cupola furnace generates more than 50% of the liquid iron used to produce the 9+ million tons of castings annually. The cupola converts iron and steel into cast iron. The main advantages of the cupola furnace are lower energy costs than those of competing furnaces (electric) and the ability to melt less expensive metallic scrap than the competing furnaces. However, the chemical and physical processes that take place in the cupola furnace are highly complex, making it difficult to operate the furnace in optimal fashion. The results are low energy efficiency and poor recovery of important and expensive alloy elements due to oxidation. Between 1990 and 2004, under the auspices of the Department of Energy, the American Foundry Society and General Motors Corp., a computer simulation of the cupola furnace was developed that accurately describes the complex behavior of the furnace. When provided with the furnace input conditions, the model provides accurate values of the output conditions in a matter of seconds. It also provides key diagnostics. Using clues from the diagnostics, a trained specialist can infer changes in the operation that will move the system toward higher efficiency. Repeating the process in an iterative fashion leads to near-optimum operating conditions within just a few iterations. More advanced uses of the program have been examined. The program is currently being combined with an "Expert System" to permit optimization in real time. The program has been combined with "neural network" programs to effect very easy scanning of a wide range of furnace operation. Rudimentary efforts were successfully made to operate the furnace using a computer. References to these more advanced systems will be found in the "Cupola Handbook", Chapter 27, American Foundry Society, Des Plaines, IL (1999).

  5. Algorithms of control parameters selection for automation of FDM 3D printing process

    Directory of Open Access Journals (Sweden)

    Kogut Paweł

    2017-01-01

    Full Text Available The paper presents algorithms for the selection of control parameters of the Fused Deposition Modelling (FDM) technology in the case of an open printing solutions environment and the 3DGence ONE printer. The following parameters were distinguished: model mesh density, material flow speed, cooling performance, retraction and printing speeds. These parameters are in principle independent of the printing system, but in practice they depend to a certain degree on the features of the selected printing equipment. This is the first step towards automation of the 3D printing process in FDM technology.

  6. Computer Aided Continuous Time Stochastic Process Modelling

    DEFF Research Database (Denmark)

    Kristensen, N.R.; Madsen, Henrik; Jørgensen, Sten Bay

    2001-01-01

    A grey-box approach to process modelling that combines deterministic and stochastic modelling is advocated for identification of models for model-based control of batch and semi-batch processes. A computer-aided tool designed for supporting decision-making within the corresponding modelling cycle...

  7. Process Correlation Analysis Model for Process Improvement Identification

    Directory of Open Access Journals (Sweden)

    Su-jin Choi

    2014-01-01

    … software development process. However, in the current practice, correlations of process elements are often overlooked in the development of an improvement plan, which diminishes the efficiency of the plan. This is mainly attributed to significant efforts and the lack of required expertise. In this paper, we present a process correlation analysis model that helps identify correlations of process elements from the results of process assessment. This model is defined based on CMMI and empirical data of improvement practices. We evaluate the model using industrial data.

  8. A Process Model for Establishing Business Process Crowdsourcing

    Directory of Open Access Journals (Sweden)

    Nguyen Hoang Thuan

    2017-06-01

    Full Text Available Crowdsourcing can be an organisational strategy to distribute work to Internet users and harness innovation, information, capacities, and a variety of business endeavours. As crowdsourcing differs from other business strategies, organisations are often unsure how best to structure different crowdsourcing activities and integrate them with other organisational business processes. To manage this problem, we design a process model that guides how to establish business process crowdsourcing. The model consists of seven components covering the main activities of crowdsourcing processes, drawn from a knowledge base incorporating diverse knowledge sources in the domain. The model is evaluated using case studies, suggesting its adequacy and utility.

  9. Nanoemulsion: process selection and application in cosmetics--a review.

    Science.gov (United States)

    Yukuyama, M N; Ghisleni, D D M; Pinto, T J A; Bou-Chacra, N A

    2016-02-01

    In recent decades, considerable and continuous growth in consumer demand in the cosmetics field has spurred the development of sophisticated formulations aiming at high performance, attractive appearance, sensorial benefit and safety. Yet despite increasing demand from consumers, the formulator faces certain restrictions regarding the optimum equilibrium between the active compound concentration and the formulation base, taking into account the nature of the skin structure and, in particular, the ideal penetration of the active compound through the natural skin barrier. An emulsion is a mixture of two immiscible phases, and interest in nanoscale emulsions has grown considerably in recent decades due to their specific attributes such as high stability, attractive appearance and drug delivery properties; performance is therefore expected to improve using a lipid-based nanocarrier. Nanoemulsions are generated by different approaches: the so-called high-energy and low-energy methods. A global overview of these mechanisms and the different alternatives for each method are presented in this paper, along with their benefits and drawbacks. As a cosmetics formulation is reflected in product delivery to consumers, nanoemulsion development with prospects for large-scale production is one of the key attributes in the method selection process. Thus, the aim of this review was to highlight the main high- and low-energy methods applicable in cosmetics and dermatological product development, their specificities, recent research on these methods in cosmetics, and considerations for optimizing the process selection. The specific processes for inorganic nanoparticle, polymer nanoparticle and nanocapsule formulation are not considered in this paper.

  10. A CONCEPTUAL MODEL FOR IMPROVED PROJECT SELECTION AND PRIORITISATION

    Directory of Open Access Journals (Sweden)

    P. J. Viljoen

    2012-01-01

    Full Text Available

    ENGLISH ABSTRACT: Project portfolio management processes are often designed and operated as a series of stages (or project phases) and gates. However, the flow of such a process is often slow, characterised by queues waiting for a gate decision and by repeated work from previous stages waiting for additional information or for re-processing. In this paper the authors propose a conceptual model that applies supply chain and constraint management principles to the project portfolio management process. An advantage of the proposed model is that it provides the ability to select and prioritise projects without undue changes to project schedules. This should result in faster flow through the system.

    AFRIKAANSE OPSOMMING (translated): Processes for managing portfolios of projects are normally designed and operated as a series of phases and gates. The flow through such a process is often slow and is characterised by queues waiting for decisions at the gates, and by rework from earlier phases waiting for further information or for reprocessing. In this article a conceptual model is proposed. The model rests on the principles of supply chains as well as constraint management, and offers the advantage that projects can be selected and prioritised without unnecessary changes to project schedules. This should lead to accelerated flow through the system.

  11. Efficiency of model selection criteria in flood frequency analysis

    Science.gov (United States)

    Calenda, G.; Volpi, E.

    2009-04-01

    … the need to select an optimum distribution among various models, calibrated using different samples, was prompted by the complex behaviour observed in flood samples, which can be ascribed to different identifiable processes contributing to the generation of the data series. REFERENCES: Di Baldassarre, G., Laio, F., Montanari, A. (2008). Design flood estimation using model selection criteria. Physics and Chemistry of the Earth, doi:10.1016/j.pce.2008.10.066. Calenda, G., Mancini, C.P., Volpi, E. Selection of the probabilistic model of extreme floods: the case of the River Tiber in Rome. Submitted to Journal of Hydrology.

  12. Ensemble feature selection integrating elitist roles and quantum game model

    Institute of Scientific and Technical Information of China (English)

    Weiping Ding; Jiandong Wang; Zhijin Guan; Quan Shi

    2015-01-01

    To accelerate the selection process of feature subsets in the rough set theory (RST), an ensemble elitist roles based quantum game (EERQG) algorithm is proposed for feature selection. Firstly, the multilevel elitist roles based dynamics equilibrium strategy is established, and both immigration and emigration of elitists are able to be self-adaptive to balance between exploration and exploitation for feature selection. Secondly, the utility matrix of trust margins is introduced to the model of multilevel elitist roles to enhance various elitist roles' performance of searching the optimal feature subsets, and the win-win utility solutions for feature selection can be attained. Meanwhile, a novel ensemble quantum game strategy is designed as an intriguing exhibiting structure to perfect the dynamics equilibrium of multilevel elitist roles. Finally, the ensemble manner of multilevel elitist roles is employed to achieve the global minimal feature subset, which will greatly improve the feasibility and effectiveness. Experiment results show the proposed EERQG algorithm has superiority compared to the existing feature selection algorithms.

  13. Transitions in a genotype selection model driven by coloured noises

    Institute of Scientific and Technical Information of China (English)

    Wang Can-Jun; Mei Dong-Cheng

    2008-01-01

    This paper investigates a genotype selection model subjected to both a multiplicative coloured noise and an additive coloured noise with different correlation times T1 and T2 by means of numerical techniques. By directly simulating the Langevin equation, the following results are obtained. (1) The multiplicative coloured noise dominates; however, the effect of the additive coloured noise cannot be neglected in the practical gene selection process. The selection rate μ decides whether the selection favours the gene A haploid or the gene B haploid. (2) The additive coloured noise intensity α and the correlation time T2 play opposite roles. It is noted that α and T2 cannot separate the single peak, while α can make the peak disappear and T2 can make the peak sharp. (3) The multiplicative coloured noise intensity D and the correlation time T1 can induce a phase transition; at the same time they play opposite roles, and a reentrance phenomenon appears. In this case, it is easy to select one type of haploid from the group by increasing D and decreasing T1.

  14. CONVERGENCE TO PROCESS ORGANIZATION BY MODEL OF PROCESS MATURITY

    Directory of Open Access Journals (Sweden)

    Blaženka Piuković Babičković

    2015-06-01

    Full Text Available Modern business process orientation is associated primarily with process thinking and a process-based organizational structure. Although business processes are increasingly written and spoken about, a lack of understanding of the concept of business process management remains a major problem in the business world, especially in countries in transition. The aim of this paper is to make a specific contribution to overcoming this problem by pointing out the significance of the concept of business process management and by presenting a model for reviewing process maturity, together with tools recommended for use in process management.

  15. A qualitative model structure sensitivity analysis method to support model selection

    Science.gov (United States)

    Van Hoey, S.; Seuntjens, P.; van der Kwast, J.; Nopens, I.

    2014-11-01

    The selection and identification of a suitable hydrological model structure is a more challenging task than fitting parameters of a fixed model structure to reproduce a measured hydrograph. The suitable model structure is highly dependent on various criteria, i.e. the modeling objective, the characteristics and the scale of the system under investigation and the available data. Flexible environments for model building are available, but need to be assisted by proper diagnostic tools for model structure selection. This paper introduces a qualitative method for model component sensitivity analysis. Traditionally, model sensitivity is evaluated for model parameters. In this paper, the concept is translated into an evaluation of model structure sensitivity. Similarly to the one-factor-at-a-time (OAT) methods for parameter sensitivity, this method varies the model structure components one at a time and evaluates the change in sensitivity towards the output variables. As such, the effect of model component variations can be evaluated towards different objective functions or output variables. The methodology is presented for a simple lumped hydrological model environment, introducing different possible model building variations. By comparing the effect of changes in model structure for different model objectives, model selection can be better evaluated. Based on the presented component sensitivity analysis of a case study, some suggestions with regard to model selection are formulated for the system under study: (1) a non-linear storage component is recommended, since it ensures more sensitive (identifiable) parameters for this component and less parameter interaction; (2) interflow is mainly important for the low flow criteria; (3) excess infiltration process is most influencing when focussing on the lower flows; (4) a more simple routing component is advisable; and (5) baseflow parameters have in general low sensitivity values, except for the low flow criteria.
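
    The component-wise, one-factor-at-a-time idea can be sketched as follows, with stand-in components and synthetic data rather than the lumped hydrological model of the study.

        import numpy as np

        # One-component-at-a-time structure sensitivity: swap each model
        # component in turn and record the change in the objective function.
        rng = np.random.default_rng(5)
        rain = rng.gamma(2.0, 2.0, size=200)
        observed = 0.6 * rain**0.9 + rng.normal(0, 0.3, rain.size)

        components = {
            "storage": {"linear":     lambda r: 0.6 * r,
                        "non_linear": lambda r: 0.6 * r**0.9},
            "routing": {"none":       lambda q: q,
                        "smoothing":  lambda q: np.convolve(q, [0.5, 0.5], "same")},
        }
        reference = {"storage": "linear", "routing": "none"}

        def run(structure):
            q = components["storage"][structure["storage"]](rain)
            q = components["routing"][structure["routing"]](q)
            return np.sqrt(np.mean((q - observed) ** 2))   # RMSE objective

        base = run(reference)
        for comp, options in components.items():
            for alt in options:
                if alt == reference[comp]:
                    continue
                trial = dict(reference, **{comp: alt})
                print(f"swap {comp} -> {alt}: dRMSE = {run(trial) - base:+.3f}")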

  16. The Hierarchical Sparse Selection Model of Visual Crowding

    Directory of Open Access Journals (Sweden)

    Wesley eChaney

    2014-09-01

    Full Text Available Because the environment is cluttered, objects rarely appear in isolation. The visual system must therefore attentionally select behaviorally relevant objects from among many irrelevant ones. A limit on our ability to select individual objects is revealed by the phenomenon of visual crowding: an object seen in the periphery, easily recognized in isolation, can become impossible to identify when surrounded by other, similar objects. The neural basis of crowding is hotly debated: while prevailing theories hold that crowded information is irrecoverable, destroyed due to over-integration in early-stage visual processing, recent evidence demonstrates otherwise. Crowding can occur between high-level, configural object representations, and crowded objects can contribute with high precision to judgments about the gist of a group of objects, even when they are individually unrecognizable. While existing models can account for the basic diagnostic criteria of crowding (e.g. specific critical spacing, spatial anisotropies, and temporal tuning), no present model explains how crowding can operate simultaneously at multiple levels in the visual processing hierarchy, including at the level of whole objects. Here, we present a new model of visual crowding, the hierarchical sparse selection (HSS) model, which accounts for object-level crowding, as well as a number of puzzling findings in the recent literature. Counter to existing theories, we posit that crowding occurs not due to degraded visual representations in the brain, but due to impoverished sampling of visual representations for the sake of perception. The HSS model unifies findings from a disparate array of visual crowding studies and makes testable predictions about how information in crowded scenes can be accessed.

  17. The hierarchical sparse selection model of visual crowding.

    Science.gov (United States)

    Chaney, Wesley; Fischer, Jason; Whitney, David

    2014-01-01

    Because the environment is cluttered, objects rarely appear in isolation. The visual system must therefore attentionally select behaviorally relevant objects from among many irrelevant ones. A limit on our ability to select individual objects is revealed by the phenomenon of visual crowding: an object seen in the periphery, easily recognized in isolation, can become impossible to identify when surrounded by other, similar objects. The neural basis of crowding is hotly debated: while prevailing theories hold that crowded information is irrecoverable - destroyed due to over-integration in early stage visual processing - recent evidence demonstrates otherwise. Crowding can occur between high-level, configural object representations, and crowded objects can contribute with high precision to judgments about the "gist" of a group of objects, even when they are individually unrecognizable. While existing models can account for the basic diagnostic criteria of crowding (e.g., specific critical spacing, spatial anisotropies, and temporal tuning), no present model explains how crowding can operate simultaneously at multiple levels in the visual processing hierarchy, including at the level of whole objects. Here, we present a new model of visual crowding-the hierarchical sparse selection (HSS) model, which accounts for object-level crowding, as well as a number of puzzling findings in the recent literature. Counter to existing theories, we posit that crowding occurs not due to degraded visual representations in the brain, but due to impoverished sampling of visual representations for the sake of perception. The HSS model unifies findings from a disparate array of visual crowding studies and makes testable predictions about how information in crowded scenes can be accessed.

  18. Event-driven process execution model for process virtual machine

    Institute of Scientific and Technical Information of China (English)

    WU Dong-yao; WEI Jun; GAO Chu-shu; DOU Wen-shen

    2012-01-01

    Current orchestration and choreography process engines only work with dedicated process languages. To solve this problem, an Event-driven Process Execution Model (EPEM) was developed. Formalization and mapping principles of the model are presented to guarantee the correctness and efficiency of process transformation. As a case study, the EPEM description of the Web Services Business Process Execution Language (WS-BPEL) is presented, and a Process Virtual Machine (PVM), OncePVM, was implemented in compliance with the EPEM.

  19. Analog modelling of obduction processes

    Science.gov (United States)

    Agard, P.; Zuo, X.; Funiciello, F.; Bellahsen, N.; Faccenna, C.; Savva, D.

    2012-04-01

    Obduction corresponds to one of plate tectonics oddities, whereby dense, oceanic rocks (ophiolites) are presumably 'thrust' on top of light, continental ones, as for the short-lived, almost synchronous Peri-Arabic obduction (which took place along thousands of km from Turkey to Oman in c. 5-10 Ma). Analog modelling experiments were performed to study the mechanisms of obduction initiation and test various triggering hypotheses (i.e., plate acceleration, slab hitting the 660 km discontinuity, ridge subduction; Agard et al., 2007). The experimental setup comprises (1) an upper mantle, modelled as a low-viscosity transparent Newtonian glucose syrup filling a rigid Plexiglas tank and (2) high-viscosity silicone plates (Rhodrosil Gomme with PDMS iron fillers to reproduce densities of continental or oceanic plates), located at the centre of the tank above the syrup to simulate the subducting and the overriding plates - and avoid friction on the sides of the tank. Convergence is simulated by pushing on a piston at one end of the model with velocities comparable to those of plate tectonics (i.e., in the range 1-10 cm/yr). The reference set-up includes, from one end to the other (~60 cm): (i) the piston, (ii) a continental margin containing a transition zone to the adjacent oceanic plate, (iii) a weakness zone with variable resistance and dip (W), (iv) an oceanic plate - with or without a spreading ridge, (v) a subduction zone (S) dipping away from the piston and (vi) an upper, active continental margin, below which the oceanic plate is being subducted at the start of the experiment (as is known to have been the case in Oman). Several configurations were tested and over thirty different parametric tests were performed. Special emphasis was placed on comparing different types of weakness zone (W) and the extent of mechanical coupling across them, particularly when plates were accelerated. Displacements, together with along-strike and across-strike internal deformation in all

  20. Business process modeling for processing classified documents using RFID technology

    Directory of Open Access Journals (Sweden)

    Koszela Jarosław

    2016-01-01

    Full Text Available The article outlines the application of the process approach to the functional description of a designed IT system supporting the operations of a secret office that processes classified documents. The article describes the application of the method of incremental modelling of business processes according to the BPMN model to the description of the processes currently implemented manually ("as is") and of the target processes ("to be") that use RFID technology for the purpose of their automation. Additionally, examples of applying structural and dynamic analysis of the processes (process simulation) to verify their correctness and efficiency are presented. An extension of the process analysis method is the possibility of applying a warehouse of processes and process mining methods.

  1. Process correlation analysis model for process improvement identification.

    Science.gov (United States)

    Choi, Su-jin; Kim, Dae-Kyoo; Park, Sooyong

    2014-01-01

    Software process improvement aims at improving the development process of software systems. It is initiated by process assessment identifying strengths and weaknesses, and based on the findings, improvement plans are developed. In general, a process reference model (e.g., CMMI) is used throughout the process of software process improvement as the base. CMMI defines a set of process areas involved in software development and what is to be carried out in these process areas in terms of goals and practices. Process areas and their elements (goals and practices) are often correlated due to the iterative nature of the software development process. However, in the current practice, correlations of process elements are often overlooked in the development of an improvement plan, which diminishes the efficiency of the plan. This is mainly attributed to the significant effort required and the lack of required expertise. In this paper, we present a process correlation analysis model that helps identify correlations of process elements from the results of process assessment. This model is defined based on CMMI and empirical data of improvement practices. We evaluate the model using industrial data.

  2. Parameter optimization model in electrical discharge machining process

    Institute of Scientific and Technical Information of China (English)

    2008-01-01

    The electrical discharge machining (EDM) process is still largely experience-driven: selected parameters are often far from the optimum, and selecting optimal parameters is costly and time consuming. In this paper, an artificial neural network (ANN) and a genetic algorithm (GA) are used together to establish a parameter optimization model. An ANN model using the Levenberg-Marquardt algorithm is set up to represent the relationship between the material removal rate (MRR) and the input parameters, and the GA is used to optimize the parameters, so that optimization results are obtained. The model is shown to be effective, and MRR is improved using the optimized machining parameters.
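
    A compact sketch of the ANN-surrogate-plus-GA scheme is given below, with synthetic training data, a deliberately simple GA, and invented parameter ranges; it illustrates the architecture rather than reproducing the paper's model.

        import numpy as np
        from sklearn.neural_network import MLPRegressor

        # Surrogate: a small network maps (current I, pulse-on time T_on) to MRR.
        # The training data are synthetic stand-ins, not real EDM measurements.
        rng = np.random.default_rng(6)
        X = rng.uniform([1.0, 10.0], [20.0, 200.0], size=(200, 2))
        mrr = 0.8 * X[:, 0] * np.log(X[:, 1]) - 0.02 * X[:, 0] ** 2 \
              + rng.normal(0, 0.5, 200)
        net = MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000,
                           random_state=0).fit(X, mrr)

        # Minimal GA: truncation selection, blend crossover, Gaussian mutation.
        pop = rng.uniform([1.0, 10.0], [20.0, 200.0], size=(40, 2))
        for _ in range(30):
            fitness = net.predict(pop)
            parents = pop[np.argsort(fitness)[-20:]]            # keep best half
            mates = parents[rng.integers(0, 20, size=(20, 2))]
            children = mates.mean(axis=1)                       # blend crossover
            children += rng.normal(0, [0.5, 5.0], size=children.shape)
            children = np.clip(children, [1.0, 10.0], [20.0, 200.0])
            pop = np.vstack([parents, children])
        best = pop[np.argmax(net.predict(pop))]
        print("predicted-optimal parameters:", np.round(best, 1))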

  3. Self-Repair and Language Selection in Bilingual Speech Processing

    Directory of Open Access Journals (Sweden)

    Inga Hennecke

    2013-07-01

    Full Text Available In psycholinguistic research the exact level of language selection in bilingual lexical access is still controversial, and current models of bilingual speech production offer conflicting statements about the mechanisms and location of language selection. This paper aims to provide a corpus analysis of self-repair mechanisms in code-switching contexts of highly fluent bilingual speakers in order to gain further insights into bilingual speech production. The present paper follows the assumptions of the Selection by Proficiency model, which claims that language proficiency and lexical robustness determine the mechanism and level of language selection. In accordance with this hypothesis, highly fluent bilinguals select languages at a prelexical level, which should influence the occurrence of self-repairs in bilingual speech. A corpus of natural speech data of highly fluent and balanced bilingual French-English speakers of the Canadian French variety Franco-Manitoban serves as the basis for a detailed analysis of different self-repair mechanisms in code-switching environments. Although the speech data contain a large amount of code-switching, results reveal that only a few speech errors and self-repairs occur in direct code-switching environments. A detailed analysis of the respective starting points of code-switching and the different repair mechanisms supports the hypothesis that highly proficient bilinguals do not select languages at the lexical level. [French abstract, translated:] The exact level of language selection in bilingual lexical access remains a controversial question in psycholinguistic research. Current models of bilingual speech production offer contradictory arguments concerning the mechanism and locus of language selection. The present research aims to provide a corpus analysis focusing on self-repair mechanisms in the context of code-switching in bilingual speech production …

  4. Selecting a Control Strategy for Plug and Process Loads

    Energy Technology Data Exchange (ETDEWEB)

    Lobato, C.; Sheppy, M.; Brackney, L.; Pless, S.; Torcellini, P.

    2012-09-01

    Plug and Process Loads (PPLs) are building loads that are not related to general lighting, heating, ventilation, cooling, and water heating, and typically do not provide comfort to the building occupants. PPLs in commercial buildings account for almost 5% of U.S. primary energy consumption. On an individual building level, they account for approximately 25% of the total electrical load in a minimally code-compliant commercial building, and can exceed 50% in an ultra-high-efficiency building such as the National Renewable Energy Laboratory's (NREL) Research Support Facility (RSF) (Lobato et al. 2010). Minimizing these loads is a primary challenge in the design and operation of an energy-efficient building. A complex array of technologies that measure and manage PPLs has emerged in the marketplace; some, however, fall short of manufacturer performance claims. NREL has been actively engaged in developing an evaluation and selection process for PPL control, and is using this process to evaluate a range of technologies for active PPL management that will cap RSF plug loads. Using a control strategy to match plug load use to users' required job functions offers a huge untapped potential for energy savings.

  5. The detection of observations possibly influential for model selection

    NARCIS (Netherlands)

    Ph.H.B.F. Franses (Philip Hans)

    1991-01-01

    textabstractModel selection can involve several variables and selection criteria. A simple method to detect observations possibly influential for model selection is proposed. The potentials of this method are illustrated with three examples, each of which is taken from related studies.

  6. ExoMars 2018 Landing Site Selection Process

    Science.gov (United States)

    Vago, Jorge L.; Kminek, Gerhard; Rodionov, Daniel

    The ExoMars 2018 mission will include two science elements: a Rover and a Surface Platform. The ExoMars Rover will carry a comprehensive suite of instruments dedicated to geology and exobiology research named after Louis Pasteur. The Rover will be able to travel several kilometres searching for traces of past and present signs of life. It will do this by collecting and analysing samples from outcrops, and from the subsurface—down to 2-m depth. The very powerful combination of mobility with the ability to access locations where organic molecules can be well preserved is unique to this mission. After the Rover will have egressed, the ExoMars Surface Platform will begin its science mission to study the surface environment at the landing location. This talk will describe the landing site selection process and introduce the scientific, planetary protection, and engineering requirements that candidate landing sites must comply with in order to be considered for the mission.

  7. Selection criteria for waste management processes in manned space missions.

    Science.gov (United States)

    Doll, S; Cothran, B; McGhee, J

    1991-10-01

    Management of waste produced during manned space exploration missions will be an important function of advanced life support systems. Waste materials can be thrown away or recovered for reuse. The first approach relies totally on external supplies to replace depleted resources while the second approach regenerates resources internally. The selection of appropriate waste management processes will be based upon criteria which include mission and hardware characteristics as well as overall system considerations. Mission characteristics discussed include destination, duration, crew size, operating environment, and transportation costs. Hardware characteristics include power, mass and volume requirements as well as suitability for a given task. Overall system considerations are essential to assure optimization for the entire mission rather than for an individual system. For example, a waste management system designed for a short trip to the moon will probably not be the best one for an extended mission to Mars. The purpose of this paper is to develop a methodology to identify and compare viable waste management options for selection of an appropriate waste management system.

  8. Selection of Vendor Based on Intuitionistic Fuzzy Analytical Hierarchy Process

    Directory of Open Access Journals (Sweden)

    Prabjot Kaur

    2014-01-01

    Full Text Available Today's business environment is characterized by intense domestic and international competition in the global market. Vendors play a key role in achieving corporate competitiveness. It is not easy, however, to identify good vendors, because evaluation is based on multiple criteria. In practice, most of the input information about the criteria in the vendor selection problem (VSP) is not known precisely. The intuitionistic fuzzy set is an extension of classical fuzzy set theory (FST) and a suitable way to deal with imprecision. In other words, the application of intuitionistic fuzzy sets instead of fuzzy sets introduces another degree of freedom, a non-membership function, into the set description. In this paper, we propose a triangular intuitionistic fuzzy number based approach for the vendor selection problem using the analytic hierarchy process. The crisp data of the vendors are represented in the form of triangular intuitionistic fuzzy numbers. By applying AHP, which involves decomposition, pairwise comparison, and deriving priorities for the various levels of the hierarchy, an overall crisp priority is obtained for ranking the best vendor. A numerical example illustrates our method. Lastly, a sensitivity analysis is performed to find the most critical criterion on the basis of which the vendor is selected.
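
    For orientation, the crisp core of the AHP step (pairwise comparison matrix, principal eigenvector, consistency check) is sketched below; the intuitionistic fuzzy extension is omitted, and the criteria and judgments are invented.

        import numpy as np

        # Crisp AHP: derive criterion weights from a pairwise comparison matrix
        # via its principal eigenvector. Judgments below are hypothetical.
        criteria = ["quality", "price", "delivery"]
        A = np.array([[1.0, 3.0, 5.0],    # quality vs. price, delivery, ...
                      [1/3, 1.0, 2.0],
                      [1/5, 1/2, 1.0]])

        eigvals, eigvecs = np.linalg.eig(A)
        principal = np.argmax(eigvals.real)
        w = np.abs(eigvecs[:, principal].real)
        w /= w.sum()

        # Consistency ratio (random index for n=3 is 0.58).
        ci = (eigvals.real[principal] - len(A)) / (len(A) - 1)
        print(dict(zip(criteria, np.round(w, 3))), " CR =", round(ci / 0.58, 3))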

  9. Supercritical boiler material selection using fuzzy analytic network process

    Directory of Open Access Journals (Sweden)

    Saikat Ranjan Maity

    2012-08-01

    Full Text Available The recent development of the world is being adversely affected by the scarcity of power and energy. To survive in the next generation, it is thus necessary to explore non-conventional energy sources and efficiently consume the available sources. For efficient exploitation of the existing energy sources, great scope lies in the use of Rankine cycle-based thermal power plants. Today, the gross efficiency of Rankine cycle-based thermal power plants is less than 28%, which has been increased up to 40% with reheating and regenerative cycles. But it can be further improved up to 47% by using supercritical power plant technology. Supercritical power plants use supercritical boilers which are able to withstand very high temperatures (650-720°C) and pressures (22.1 MPa) while producing superheated steam. The thermal efficiency of a supercritical boiler greatly depends on the materials of its different components. The supercritical boiler material should possess high creep rupture strength, high thermal conductivity, low thermal expansion, high specific heat and the ability to withstand very high temperatures. This paper considers a list of seven supercritical boiler materials whose performance is evaluated based on seven pivotal criteria. Given the intricacy and difficulty of this supercritical boiler material selection problem, with interactions and interdependencies between different criteria, this paper applies the fuzzy analytic network process to select the most appropriate material for a supercritical boiler. Rene 41 emerges as the best supercritical boiler material, whereas Haynes 230 is the least preferred choice.

  10. Selecting, weeding, and weighting biased climate model ensembles

    Science.gov (United States)

    Jackson, C. S.; Picton, J.; Huerta, G.; Nosedal Sanchez, A.

    2012-12-01

    In the Bayesian formulation, the "log-likelihood" is a test statistic for selecting, weeding, or weighting climate model ensembles with observational data. This statistic has the potential to synthesize the physical and data constraints on quantities of interest. One of the thorny issues in formulating the log-likelihood is how one should account for biases. While in the past we have included a generic discrepancy term, not all biases affect predictions of quantities of interest. We make use of a 165-member ensemble of CAM3.1/slab ocean climate models with different parameter settings to think through the issues involved in predicting each model's sensitivity to greenhouse gas forcing given what can be observed from the base state. In particular, we use multivariate empirical orthogonal functions to decompose the differences that exist among this ensemble to discover which fields and regions matter to the model's sensitivity. We find that the differences that matter are a small fraction of the total discrepancy. Moreover, weighting members of the ensemble using this knowledge does a relatively poor job of adjusting the ensemble mean toward the known answer. This points out the shortcomings of using weights to correct for biases in climate model ensembles created by a selection process that does not emphasize the priorities of the log-likelihood.
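
    The weighting step can be sketched as follows: Gaussian log-likelihoods of ensemble members against observations are converted into normalized weights. The members, observations, and error variances below are synthetic stand-ins, not the CAM3.1 ensemble.

        import numpy as np

        # Log-likelihood weighting of ensemble members against observations.
        rng = np.random.default_rng(7)
        n_members, n_obs = 165, 50
        truth = rng.normal(0, 1, n_obs)
        obs = truth + rng.normal(0, 0.3, n_obs)                   # observed field
        members = truth + rng.normal(0, 0.8, (n_members, n_obs))  # biased models
        sigma2 = 0.3**2 + 0.5**2   # obs error plus a generic discrepancy term

        loglik = -0.5 * np.sum((members - obs) ** 2, axis=1) / sigma2
        w = np.exp(loglik - loglik.max())
        w /= w.sum()               # posterior-style member weights
        print("effective ensemble size:", round(1.0 / np.sum(w**2), 1))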

  11. Characteristics of products generated by selective sintering and stereolithography rapid prototyping processes

    Science.gov (United States)

    Cariapa, Vikram

    1993-01-01

    The trend in the modern global economy towards free market policies has motivated companies to use rapid prototyping technologies not only to reduce product development cycle time but also to maintain their competitive edge. A rapid prototyping technology is one which combines computer-aided design with computer-controlled tracking of a focussed high-energy source (e.g., lasers, heat) on modern ceramic powders, metallic powders, plastics or photosensitive liquid resins in order to produce prototypes or models. At present, except for the process of shape melting, most rapid prototyping processes generate products that are only dimensionally similar to those of the desired end product. There is an urgent need, therefore, to enhance the understanding of the characteristics of these processes in order to realize their potential for production. Currently, the commercial market is dominated by four rapid prototyping processes, namely selective laser sintering, stereolithography, fused deposition modelling and laminated object manufacturing. This phase of the research has focussed on the selective laser sintering and stereolithography rapid prototyping processes. A theoretical model for these processes is under development. Different rapid prototyping sites supplied test specimens (based on ASTM 638-84, Type I) that have been measured and tested to provide a database on surface finish, dimensional variation and ultimate tensile strength. Further plans call for developing and verifying the theoretical models by carefully designed experiments. This will be a joint effort between NASA and other prototyping centers to generate a larger database, thus encouraging more widespread usage by product designers.

  12. Development of Solar Drying Model for Selected Cambodian Fish Species

    Directory of Open Access Journals (Sweden)

    Anna Hubackova

    2014-01-01

    Full Text Available Solar drying was investigated as one of the prospective techniques for fish processing in Cambodia. Solar drying was compared to conventional drying in an electric oven. Five typical Cambodian fish species were selected for this study. Mean solar drying temperature and drying air relative humidity were 55.6°C and 19.9%, respectively. The overall solar dryer efficiency was 12.37%, which is typical for natural convection solar dryers. The average evaporative capacity of the solar dryer was 0.049 kg·h−1. Based on the coefficient of determination (R2), chi-square (χ2) test, and root-mean-square error (RMSE), the most suitable models describing natural convection solar drying kinetics were the Logarithmic model, the Diffusion approximate model, and the Two-term model for climbing perch and Nile tilapia, for swamp eel and walking catfish, and for Channa fish, respectively. In the case of electric oven drying, the Modified Page 1 model shows the best results for all investigated fish species except Channa fish, for which the Two-term model is the best. Sensory evaluation shows that the most preferred fish is climbing perch, followed by Nile tilapia and walking catfish. This study brings new knowledge about the drying kinetics of freshwater fish species in Cambodia and confirms solar drying as an acceptable technology for fish processing.

  13. Development of solar drying model for selected Cambodian fish species.

    Science.gov (United States)

    Hubackova, Anna; Kucerova, Iva; Chrun, Rithy; Chaloupkova, Petra; Banout, Jan

    2014-01-01

    Solar drying was investigated as one of the prospective techniques for fish processing in Cambodia. Solar drying was compared to conventional drying in an electric oven. Five typical Cambodian fish species were selected for this study. Mean solar drying temperature and drying air relative humidity were 55.6 °C and 19.9%, respectively. The overall solar dryer efficiency was 12.37%, which is typical for natural convection solar dryers. The average evaporative capacity of the solar dryer was 0.049 kg·h−1. Based on the coefficient of determination (R2), chi-square (χ2) test, and root-mean-square error (RMSE), the most suitable models describing natural convection solar drying kinetics were the Logarithmic model, the Diffusion approximate model, and the Two-term model for climbing perch and Nile tilapia, for swamp eel and walking catfish, and for Channa fish, respectively. In the case of electric oven drying, the Modified Page 1 model shows the best results for all investigated fish species except Channa fish, for which the Two-term model is the best. Sensory evaluation shows that the most preferred fish is climbing perch, followed by Nile tilapia and walking catfish. This study brings new knowledge about the drying kinetics of freshwater fish species in Cambodia and confirms solar drying as an acceptable technology for fish processing.
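
    A minimal sketch of the model-fitting step both records describe: fit the Logarithmic thin-layer drying model MR(t) = a·exp(−kt) + c to a moisture-ratio series with SciPy and score it with R2, χ2, and RMSE. The data points are synthetic placeholders, not the Cambodian measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

# Fit the Logarithmic thin-layer drying model and score the fit.
t = np.array([0, 1, 2, 3, 4, 5, 6, 8, 10], dtype=float)       # drying time, h
mr = np.array([1.0, .74, .56, .43, .34, .27, .22, .16, .12])   # moisture ratio

def logarithmic(t, a, k, c):
    return a * np.exp(-k * t) + c

popt, _ = curve_fit(logarithmic, t, mr, p0=(1.0, 0.3, 0.0))
pred = logarithmic(t, *popt)

ss_res = np.sum((mr - pred) ** 2)
r2 = 1 - ss_res / np.sum((mr - mr.mean()) ** 2)
chi2 = ss_res / (len(mr) - len(popt))      # reduced chi-square
rmse = np.sqrt(ss_res / len(mr))
print(f"a={popt[0]:.3f}, k={popt[1]:.3f}, c={popt[2]:.3f}, "
      f"R2={r2:.4f}, chi2={chi2:.5f}, RMSE={rmse:.4f}")
```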

  14. Selection of models to calculate the LLW source term

    Energy Technology Data Exchange (ETDEWEB)

    Sullivan, T.M. (Brookhaven National Lab., Upton, NY (United States))

    1991-10-01

    Performance assessment of a LLW disposal facility begins with an estimation of the rate at which radionuclides migrate out of the facility (i.e., the source term). The focus of this work is to develop a methodology for calculating the source term. In general, the source term is influenced by the radionuclide inventory, the wasteforms and containers used to dispose of the inventory, and the physical processes that lead to release from the facility (fluid flow, container degradation, wasteform leaching, and radionuclide transport). In turn, many of these physical processes are influenced by the design of the disposal facility (e.g., infiltration of water). The complexity of the problem and the absence of appropriate data prevent development of an entirely mechanistic representation of radionuclide release from a disposal facility. Typically, a number of assumptions, based on knowledge of the disposal system, are used to simplify the problem. This document provides a brief overview of disposal practices and reviews existing source term models as background for selecting appropriate models for estimating the source term. The selection rationale and the mathematical details of the models are presented. Finally, guidance is presented for combining the inventory data with appropriate mechanisms describing release from the disposal facility. 44 refs., 6 figs., 1 tab.
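
    As a rough illustration of what a simplified source-term calculation looks like, the sketch below combines radioactive decay with a first-order wasteform leach rate that begins once the container is assumed breached. The rates and times are hypothetical and are not the specific models selected in the report.

```python
import numpy as np

# Simplified source term: radioactive decay plus first-order leaching
# after an assumed container breach. All rates/times are hypothetical.
half_life = 30.0                      # nuclide half-life, years
lam = np.log(2) / half_life           # decay constant, 1/yr
leach = 0.01                          # fractional leach rate, 1/yr
t_breach = 50.0                       # container lifetime, yr

t = np.linspace(0.0, 300.0, 601)
in_wasteform = np.exp(-lam * t) * np.where(
    t < t_breach, 1.0, np.exp(-leach * (t - t_breach)))
release_rate = np.where(t < t_breach, 0.0, leach * in_wasteform)  # per year

i = release_rate.argmax()
print(f"peak release rate {release_rate[i]:.2e}/yr at t = {t[i]:.0f} yr")
```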

  15. Selective experimental review of the Standard Model

    Energy Technology Data Exchange (ETDEWEB)

    Bloom, E.D.

    1985-02-01

    Before discussing experimental comparisons with the Standard Model (S-M), it is probably wise to define more completely what is commonly meant by this popular term. This model is a gauge theory of SU(3)_f × SU(2)_L × U(1) with 18 parameters. The parameters are α_s, α_qed, θ_W, M_W (M_Z = M_W/cos θ_W, and thus is not an independent parameter), M_Higgs; the lepton masses, M_e, M_μ, M_τ; the quark masses, M_d, M_s, M_b, and M_u, M_c, M_t; and finally, the quark mixing angles, θ_1, θ_2, θ_3, and the CP-violating phase δ. The latter four parameters appear in the quark mixing matrix for the Kobayashi-Maskawa and Maiani forms. Clearly, the present S-M covers an enormous range of physics topics, and the author can only lightly cover a few such topics in this report. The measurement of R_hadron is fundamental as a test of the running coupling constant α_s in QCD. The author will discuss a selection of recent precision measurements of R_hadron, as well as some other techniques for measuring α_s. QCD also requires the self-interaction of gluons. The search for the three-gluon vertex may be practically realized in the clear identification of gluonic mesons. The author will present a limited review of recent progress in the attempt to untangle such mesons from the plethora of q q̄ states of the same quantum numbers which exist in the same mass range. The electroweak interactions provide some of the strongest evidence supporting the S-M that exists. Given the recent progress in this subfield, and particularly with the discovery of the W and Z bosons at CERN, many recent reviews obviate the need for further discussion in this report. In attempting to validate a theory, one frequently searches for new phenomena which would clearly invalidate it. 49 references, 28 figures.

  16. Applying a Hybrid MCDM Model for Six Sigma Project Selection

    Directory of Open Access Journals (Sweden)

    Fu-Kwun Wang

    2014-01-01

    Full Text Available Six Sigma is a project-driven methodology; the projects that provide the maximum financial benefits and other impacts to the organization must be prioritized. Project selection (PS) is a type of multiple criteria decision making (MCDM) problem. In this study, we present a hybrid MCDM model combining the decision-making trial and evaluation laboratory (DEMATEL) technique, the analytic network process (ANP), and the VIKOR method to evaluate and improve Six Sigma projects for reducing performance gaps in each criterion and dimension. We consider the film printing industry of Taiwan as an empirical case. The results show that our approach not only selects the best project, but can also be used to analyze the gaps between existing performance values and aspiration levels, for improving the gaps in each dimension and criterion based on the influential network relation map.
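
    The VIKOR step at the end of that hybrid chain can be sketched compactly: compute group utility S, individual regret R, and the compromise index Q from a normalized decision matrix. The decision matrix and weights below are hypothetical; in the paper the weights come from DEMATEL/ANP rather than being assumed directly.

```python
import numpy as np

# VIKOR compromise ranking on a tiny hypothetical decision matrix.
F = np.array([[7., 8., 6.],   # project A on three benefit criteria
              [8., 6., 7.],   # project B
              [6., 7., 9.]])  # project C
w = np.array([0.5, 0.3, 0.2])  # criteria weights (from DEMATEL/ANP in paper)

f_best, f_worst = F.max(axis=0), F.min(axis=0)
d = (f_best - F) / (f_best - f_worst)   # normalized distance from aspiration
S = (w * d).sum(axis=1)                 # group utility
R = (w * d).max(axis=1)                 # individual regret
v = 0.5                                 # weight of the majority strategy
Q = v * (S - S.min()) / (S.max() - S.min()) \
    + (1 - v) * (R - R.min()) / (R.max() - R.min())

for i in np.argsort(Q):                 # lower Q = better compromise
    print(f"project {'ABC'[i]}: S={S[i]:.3f} R={R[i]:.3f} Q={Q[i]:.3f}")
```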

  17. Scaling limits of a model for selection at two scales

    Science.gov (United States)

    Luo, Shishi; Mattingly, Jonathan C.

    2017-04-01

    The dynamics of a population undergoing selection is a central topic in evolutionary biology. This question is particularly intriguing in the case where selective forces act in opposing directions at two population scales. For example, a fast-replicating virus strain outcompetes slower-replicating strains at the within-host scale. However, if the fast-replicating strain causes host morbidity and is less frequently transmitted, it can be outcompeted by slower-replicating strains at the between-host scale. Here we consider a stochastic ball-and-urn process which models this type of phenomenon. We prove the weak convergence of this process under two natural scalings. The first scaling leads to a deterministic nonlinear integro-partial differential equation on the interval [0,1] with dependence on a single parameter, λ. We show that the fixed points of this differential equation are Beta distributions and that their stability depends on λ and the behavior of the initial data around 1. The second scaling leads to a measure-valued Fleming–Viot process, an infinite dimensional stochastic process that is frequently associated with population genetics.
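
    A rough simulation in the spirit of the ball-and-urn model is sketched below: each urn (host) holds balls (pathogens) of two types, within-urn updates favor the fast type, and whole-urn resampling penalizes fast-dominated urns. The parameterization is ours, not the authors', and the scaling limits analyzed in the paper are not reproduced.

```python
import numpy as np

# Toy two-scale selection: fast type wins within urns, loses between urns.
rng = np.random.default_rng(1)
n_urns, n_balls = 200, 50
s_within, s_between = 0.10, 0.15
x = rng.uniform(0, 1, n_urns)          # fast-type fraction in each urn

for _ in range(20000):
    # within-urn Moran step, biased toward the fast type
    p_birth = x * (1 + s_within) / (x * (1 + s_within) + (1 - x))
    x += ((rng.random(n_urns) < p_birth).astype(float)
          - (rng.random(n_urns) < x).astype(float)) / n_balls
    x = np.clip(x, 0.0, 1.0)
    # between-urn step: replace a random urn by a fitness-weighted parent
    fitness = 1.0 - s_between * x
    parent = rng.choice(n_urns, p=fitness / fitness.sum())
    x[rng.integers(n_urns)] = x[parent]

print(f"mean fast-type fraction: {x.mean():.3f}")
```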

  18. Evaluation of Select Surface Processing Techniques for In Situ Application During the Additive Manufacturing Build Process

    Science.gov (United States)

    Book, Todd A.; Sangid, Michael D.

    2016-07-01

    Although additive manufacturing offers numerous performance advantages for different applications, it is not being used for critical applications due to uncertainties in structural integrity as a result of innate process variability and defects. To minimize uncertainty, the current approach relies on the concurrent utilization of process monitoring, post-processing, and non-destructive inspection in addition to an extensive material qualification process. This paper examines an alternative approach by evaluating the application of select surface process techniques, to include sliding severe plastic deformation (SPD) and fine particle shot peening, on direct metal laser sintering-produced AlSi10Mg materials. Each surface processing technique is compared to baseline as-built and post-processed samples as a proof of concept for surface enhancement. Initial results pairing sliding SPD with the manufacturer's recommended thermal stress relief cycle demonstrated uniform recrystallization of the microstructure, resulting in a more homogeneous distribution of strain among the microstructure than as-built or post-processed conditions. This result demonstrates the potential for the in situ application of various surface processing techniques during the layerwise direct metal laser sintering build process.

  19. Dealing with selection bias in educational transition models

    DEFF Research Database (Denmark)

    Holm, Anders; Jæger, Mads Meier

    2011-01-01

    This paper proposes the bivariate probit selection model (BPSM) as an alternative to the traditional Mare model for analyzing educational transitions. The BPSM accounts for selection on unobserved variables by allowing unobserved variables which affect the probability of making educational transitions to be correlated across transitions. We use simulated and real data to illustrate how the BPSM improves on the traditional Mare model in terms of correcting for selection bias and providing credible estimates of the effect of family background on educational success. We conclude that models which account for selection on unobserved variables and high-quality data are both required in order to estimate credible educational transition models.

  20. A new process sensitivity index to identify important system processes under process model and parametric uncertainty

    Energy Technology Data Exchange (ETDEWEB)

    Dai, Heng [Pacific Northwest National Laboratory, Richland Washington USA; Ye, Ming [Department of Scientific Computing, Florida State University, Tallahassee Florida USA; Walker, Anthony P. [Environmental Sciences Division and Climate Change Science Institute, Oak Ridge National Laboratory, Oak Ridge Tennessee USA; Chen, Xingyuan [Pacific Northwest National Laboratory, Richland Washington USA

    2017-04-01

    Hydrological models are always composed of multiple components that represent processes key to intended model applications. When a process can be simulated by multiple conceptual-mathematical models (process models), model uncertainty in representing the process arises. While global sensitivity analysis methods have been widely used for identifying important processes in hydrologic modeling, the existing methods consider only parametric uncertainty but ignore the model uncertainty for process representation. To address this problem, this study develops a new method to probe multimodel process sensitivity by integrating the model averaging methods into the framework of variance-based global sensitivity analysis, given that the model averaging methods quantify both parametric and model uncertainty. A new process sensitivity index is derived as a metric of relative process importance, and the index includes variance in model outputs caused by uncertainty in both process models and model parameters. For demonstration, the new index is used to evaluate the processes of recharge and geology in a synthetic study of groundwater reactive transport modeling. The recharge process is simulated by two models that convert precipitation to recharge, and the geology process is simulated by two models with different parameterizations of hydraulic conductivity; each process model has its own random parameters. The new process sensitivity index is mathematically general, and can be applied to a wide range of problems in hydrology and beyond.
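
    The following sketch shows one way to estimate such a process sensitivity index by Monte Carlo: each process is represented by alternative models with their own random parameters, and the index is the fraction of output variance explained by that process (model choice plus parameters). The toy output function and binning estimator are ours, standing in for the paper's reactive-transport application.

```python
import numpy as np

# Monte Carlo estimate of a process sensitivity index: fraction of output
# variance explained by a process, where each process has two alternative
# models with their own random parameters. Toy output function.
rng = np.random.default_rng(2)
N, bins = 200_000, 30

mA, thA = rng.integers(0, 2, N), rng.normal(0, 1, N)   # recharge: model+param
A = np.where(mA == 0, 1.0 + 0.5 * thA, 2.0 + 0.2 * thA)
mB, thB = rng.integers(0, 2, N), rng.normal(0, 1, N)   # geology: model+param
B = np.where(mB == 0, 0.5 * thB, 1.0 + 1.5 * thB)
y = A + B + 0.1 * A * B                                # toy model output

def ps_index(y, model, theta):
    # Var(E[y | process state]) / Var(y), binning the parameter per model
    b = (theta - theta.min()) / (theta.max() - theta.min() + 1e-12) * bins
    idx = model * bins + np.clip(b.astype(int), 0, bins - 1)
    means = np.bincount(idx, weights=y) / np.maximum(np.bincount(idx), 1)
    return np.var(means[idx]) / np.var(y)

print(f"PS(recharge) = {ps_index(y, mA, thA):.3f}")
print(f"PS(geology)  = {ps_index(y, mB, thB):.3f}")
```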

  1. Model application for acid mine drainage treatment processes

    Directory of Open Access Journals (Sweden)

    Nantaporn Noosai, Vineeth Vijayan, Khokiat Kengskool

    2014-01-01

    Full Text Available This paper presents the utilization of the geochemical model PHREEQC to investigate a chemical treatment system for Acid Mine Drainage (AMD) prior to discharge. The selected treatment system consists of treatment processes commonly used for AMD, including a settling pond, a vertical flow pond (VFP) and a caustic soda pond. The use of a geochemical model for treatment process analysis enhances the understanding of the changes in the AMD's chemistry (precipitation, reduction of metals, etc.) in each process; thus, the chemical requirements for the system (i.e., CaCO3 and NaOH) and the system's treatment efficiency can be determined. The selected treatment system showed that the final effluent meets the discharge standard. The utilization of a geochemical model to investigate AMD treatment processes can assist in process design.

  2. Properties of spatial Cox process models

    DEFF Research Database (Denmark)

    Møller, Jesper

    Probabilistic properties of Cox processes of relevance for statistical modelling and inference are studied. Particularly, we study the most important classes of Cox processes, including log Gaussian Cox processes, shot noise Cox processes, and permanent Cox processes. We consider moment properties...

  3. A verification and validation process for model-driven engineering

    Science.gov (United States)

    Delmas, R.; Pires, A. F.; Polacsek, T.

    2013-12-01

    Model Driven Engineering practitioners already benefit from many well established verification tools, for Object Constraint Language (OCL), for instance. Recently, constraint satisfaction techniques have been brought to Model-Driven Engineering (MDE) and have shown promising results on model verification tasks. With all these tools, it becomes possible to provide users with formal support from early model design phases to model instantiation phases. In this paper, a selection of such tools and methods is presented, and an attempt is made to define a verification and validation process for model design and instance creation centered on UML (Unified Modeling Language) class diagrams and declarative constraints, and involving the selected tools. The suggested process is illustrated with a simple example.

  4. Modeling process flow using diagrams

    NARCIS (Netherlands)

    Kemper, B.; de Mast, J.; Mandjes, M.

    2010-01-01

    In the practice of process improvement, tools such as the flowchart, the value-stream map (VSM), and a variety of ad hoc variants of such diagrams are commonly used. The purpose of this paper is to present a clear, precise, and consistent framework for the use of such flow diagrams in process improvement.

  6. Model for personal computer system selection.

    Science.gov (United States)

    Blide, L

    1987-12-01

    Successful computer software and hardware selection is best accomplished by following an organized approach such as the one described in this article. The first step is to decide what you want to be able to do with the computer. Secondly, select software that is user-friendly, well documented, bug free, and that does what you want done. Next, you select the computer, printer and other needed equipment from the group of machines on which the software will run. Key factors here are reliability and compatibility with other microcomputers in your facility. Lastly, you select a reliable vendor who will provide good, dependable service in a reasonable time. The ability to correctly select computer software and hardware is a key skill needed by medical record professionals today and in the future. Professionals can make quality computer decisions by selecting software and systems that are compatible with other computers in their facility and allow for future networking, ease of use, and adaptability for expansion as new applications are identified. The key to success is to not only provide for your present needs, but to be prepared for future rapid expansion and change in your computer usage as technology and your skills grow.

  7. Selected studies in HTGR reprocessing development. [KALC process

    Energy Technology Data Exchange (ETDEWEB)

    Notz, K.J.

    1976-03-01

    Recent work at ORNL on hot cell studies, off-gas cleanup, and waste handling is reviewed. The work includes small-scale burning tests with irradiated fuels to study fission product release, development of the KALC process for the removal of 85Kr from a CO2 stream, preliminary work on a nonfluidized bed burner, solvent extraction studies including computer modeling, characterization of reprocessing wastes, and initiation of a development program for the fixation of 14C as CaCO3. (auth)

  8. Selection of Representative Models for Decision Analysis Under Uncertainty

    Science.gov (United States)

    Meira, Luis A. A.; Coelho, Guilherme P.; Santos, Antonio Alberto S.; Schiozer, Denis J.

    2016-03-01

    The decision-making process in oil fields includes a step of risk analysis associated with the uncertainties present in the variables of the problem. Such uncertainties lead to hundreds, even thousands, of possible scenarios that are supposed to be analyzed so an effective production strategy can be selected. Given this high number of scenarios, a technique to reduce this set to a smaller, feasible subset of representative scenarios is imperative. The selected scenarios must be representative of the original set and also free of optimistic and pessimistic bias. This paper proposes an assisted methodology to identify representative models in oil fields. To do so, first a mathematical function was developed to model the representativeness of a subset of models with respect to the full set that characterizes the problem. Then, an optimization tool was implemented to identify the representative models of any problem, considering not only the cross-plots of the main output variables, but also the risk curves and the probability distribution of the attribute-levels of the problem. The proposed technique was applied to two benchmark cases and the results, evaluated by experts in the field, indicate that the obtained solutions are richer than those identified by previously adopted manual approaches. The program bytecode is available upon request.
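
    A greedy sketch of the core idea: pick a small subset of scenarios whose risk curve (here, quantiles of NPV) best matches the full ensemble. The paper's optimization is richer, also matching cross-plots and attribute-level distributions; everything below, including the lognormal NPVs, is an illustrative assumption.

```python
import numpy as np

# Greedy selection of representative scenarios by quantile matching.
rng = np.random.default_rng(3)
npv = rng.lognormal(mean=4.0, sigma=0.4, size=500)   # 500 toy scenarios
q = np.linspace(0.05, 0.95, 19)
target = np.quantile(npv, q)                         # full-ensemble risk curve

chosen = []
for _ in range(9):                                   # pick 9 representatives
    best_i, best_err = None, np.inf
    for i in range(len(npv)):
        if i in chosen:
            continue
        err = np.abs(np.quantile(npv[chosen + [i]], q) - target).mean()
        if err < best_err:
            best_i, best_err = i, err
    chosen.append(best_i)

print("representative scenarios:", sorted(chosen))
print(f"mean quantile mismatch: {best_err:.3f}")
```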

  9. The cost of ethanol production from lignocellulosic biomass -- A comparison of selected alternative processes. Final report

    Energy Technology Data Exchange (ETDEWEB)

    Grethlein, H.E.; Dill, T.

    1993-04-30

    The purpose of this report is to compare the cost of selected alternative processes for the conversion of lignocellulosic biomass to ethanol. In turn, this information will be used by the ARS/USDA to guide the management of research and development programs in biomass conversion. The report will identify where the cost leverages are for the selected alternatives and what performance parameters need to be achieved to improve the economics. The process alternatives considered here are not exhaustive, but are selected on the basis of having a reasonable potential for improving the economics of producing ethanol from biomass. When other alternatives come under consideration, they should be evaluated by the same methodology used in this report to give fair comparisons of opportunities. A generic plant design is developed for an annual production of 25 million gallons of anhydrous ethanol using corn stover as the model substrate at $30/dry ton. Standard chemical engineering techniques are used to give first-order estimates of the capital and operating costs. Following the format of the corn-to-ethanol plant, there are nine sections to the plant: feed preparation, pretreatment, hydrolysis, fermentation, distillation and dehydration, stillage evaporation, storage and denaturation, utilities, and enzyme production. There are three pretreatment alternatives considered: the AFEX process, the modified AFEX process (abbreviated as MAFEX), and the STAKETECH process. These all use enzymatic hydrolysis, and so an enzyme production section is included in the plant. STAKETECH is the only commercially available process among the alternatives.

  10. Assessing Model Selection Uncertainty Using a Bootstrap Approach: An Update

    NARCIS (Netherlands)

    Lubke, Gitta H.; Campbell, Ian; McArtor, Dan; Miller, Patrick; Luningham, Justin; van den Berg, Stéphanie Martine

    2017-01-01

    Model comparisons in the behavioral sciences often aim at selecting the model that best describes the structure in the population. Model selection is usually based on fit indexes such as Akaike’s information criterion (AIC) or Bayesian information criterion (BIC), and inference is done based on the
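
    A minimal sketch of the bootstrap approach the abstract refers to: resample the data, refit two competing regression models, and record how often AIC prefers each. The synthetic data and the linear-versus-quadratic comparison are ours, standing in for the behavioral-science models of the paper.

```python
import numpy as np

# Bootstrap the AIC-based choice between two nested regression models.
rng = np.random.default_rng(4)
n = 120
x = rng.normal(size=n)
y = 1.0 + 0.8 * x + 0.15 * x**2 + rng.normal(0.0, 1.0, n)  # true: quadratic

def aic(yv, X):
    beta, *_ = np.linalg.lstsq(X, yv, rcond=None)
    rss = np.sum((yv - X @ beta) ** 2)
    return n * np.log(rss / n) + 2 * (X.shape[1] + 1)

wins = [0, 0]
for _ in range(1000):
    idx = rng.integers(0, n, n)                    # resample with replacement
    xb, yb = x[idx], y[idx]
    X1 = np.column_stack([np.ones(n), xb])         # linear model
    X2 = np.column_stack([np.ones(n), xb, xb**2])  # quadratic model
    wins[int(aic(yb, X2) < aic(yb, X1))] += 1

print(f"linear chosen {wins[0] / 10:.1f}%, quadratic chosen {wins[1] / 10:.1f}%")
```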

  11. Assessing Model Selection Uncertainty Using a Bootstrap Approach: An Update

    NARCIS (Netherlands)

    Lubke, Gitta H.; Campbell, Ian; McArtor, Dan; Miller, Patrick; Luningham, Justin; Berg, van den Stephanie M.

    2016-01-01

    Model comparisons in the behavioral sciences often aim at selecting the model that best describes the structure in the population. Model selection is usually based on fit indexes such as Akaike’s information criterion (AIC) or Bayesian information criterion (BIC), and inference is done based on the

  13. Context Based Reasoning in Business Process Models

    OpenAIRE

    Balabko, Pavel; Wegmann, Alain

    2003-01-01

    Modeling approaches often are not adapted to human reasoning: models are ambiguous and imprecise. The same model element may have multiple meanings in different functional roles of a system. Existing modeling approaches do not relate explicitly these functional roles with model elements. A principle that can solve this problem is that model elements should be defined in a context. We believe that the explicit modeling of context is especially useful in Business Process Modeling (BPM) where the ...

  14. Process cost and facility considerations in the selection of primary cell culture clarification technology.

    Science.gov (United States)

    Felo, Michael; Christensen, Brandon; Higgins, John

    2013-01-01

    The bioreactor volume delineating the selection of primary clarification technology is not always easily defined. Development of a commercial scale process for the manufacture of therapeutic proteins requires scale-up from a few liters to thousands of liters. While the separation techniques used for protein purification are largely conserved across scales, the separation techniques for primary cell culture clarification vary with scale. Process models were developed to compare monoclonal antibody production costs using two cell culture clarification technologies. One process model was created for cell culture clarification by disc stack centrifugation with depth filtration. A second process model was created for clarification by multi-stage depth filtration. Analyses were performed to examine the influence of bioreactor volume, product titer, depth filter capacity, and facility utilization on overall operating costs. At smaller bioreactor volumes, clarification using multi-stage depth filtration offers cost savings compared to clarification using centrifugation. For bioreactor volumes >5,000 L, clarification using centrifugation followed by depth filtration offers significant cost savings. For bioreactor volumes of ∼2,000 L, clarification costs are similar between depth filtration and centrifugation. At this scale, factors including facility utilization, available capital, ease of process development, implementation timelines, and process performance characterization play an important role in clarification technology selection. In the case study presented, a multi-product facility selected multi-stage depth filtration for cell culture clarification at the 500 and 2,000 L scales of operation. Facility implementation timelines, process development activities, equipment commissioning and validation, scale-up effects, and process robustness are examined. © 2013 American Institute of Chemical Engineers.

  15. Modelling of Batch Process Operations

    DEFF Research Database (Denmark)

    2011-01-01

    Here a batch cooling crystalliser is modelled and simulated as is a batch distillation system. In the batch crystalliser four operational modes of the crystalliser are considered, namely: initial cooling, nucleation, crystal growth and product removal. A model generation procedure is shown that s...

  16. Birth/death process model

    Science.gov (United States)

    Solloway, C. B.; Wakeland, W.

    1976-01-01

    First-order Markov model developed on digital computer for population with specific characteristics. System is user interactive, self-documenting, and does not require user to have complete understanding of underlying model details. Contains thorough error-checking algorithms on input and default capabilities.
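
    A first-order birth/death Markov model of this kind can be simulated in a few lines with the Gillespie algorithm: exponential waiting times with linear birth and death rates. The parameters below are illustrative, not those of the original program.

```python
import numpy as np

# Gillespie simulation of a linear birth/death process.
rng = np.random.default_rng(5)
birth, death = 0.11, 0.10          # per-capita rates, 1/yr
n, t, t_end = 50, 0.0, 100.0

while t < t_end and n > 0:
    t += rng.exponential(1.0 / ((birth + death) * n))   # next event time
    n += 1 if rng.random() < birth / (birth + death) else -1

print(f"population at t = {min(t, t_end):.1f} yr: {n}")
```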

  17. Steady-State Process Modelling

    DEFF Research Database (Denmark)

    2011-01-01

    This chapter covers the basic principles of steady state modelling and simulation using a number of case studies. Two principal approaches are illustrated that develop the unit operation models from first principles as well as through application of standard flowsheet simulators. The approaches i...

  18. Modeling business processes: theoretical and practical aspects

    Directory of Open Access Journals (Sweden)

    V.V. Dubinina

    2015-06-01

    Full Text Available The essence of process-oriented enterprise management is examined in the article. The content and types of information technology are analyzed, given the complexity and differentiation of existing methods as well as the specificity of the language and terminology of enterprise business process modeling. The theoretical aspects of business process modeling are reviewed, and modern traditional modeling techniques that have received practical application in visualizing retailers' activity are studied. The theoretical analysis of modeling methods found that the UFO-toolkit method, developed by Ukrainian scientists, is the most suitable for structural and object analysis of retailers' business processes owing to its integrated systemological capabilities. A visualized simulation model of the business process "sales as-is" of retailers was designed using a combination of UFO elements, with the aim of further practical formalization and optimization of the given business process.

  19. A Comparative of business process modelling techniques

    Science.gov (United States)

    Tangkawarow, I. R. H. T.; Waworuntu, J.

    2016-04-01

    In this era, there are many business process modeling techniques. This article investigates the differences among business process modeling techniques. For each technique, the definition and structure are explained. This paper presents a comparative analysis of some popular business process modelling techniques. The comparative framework is based on two criteria: notation and how the technique works when implemented in Somerleyton Animal Park. Each technique is summarized with its advantages and disadvantages. The conclusion recommends business process modeling techniques that are easy to use and that serve as a basis for evaluating further modelling techniques.

  20. Multiphysics modeling of selective laser sintering/melting

    Science.gov (United States)

    Ganeriwala, Rishi Kumar

    A significant percentage of total global employment is due to the manufacturing industry. However, manufacturing also accounts for nearly 20% of total energy usage in the United States according to the EIA. In fact, manufacturing accounted for 90% of industrial energy consumption and 84% of industry carbon dioxide emissions in 2002. Clearly, advances in manufacturing technology and efficiency are necessary to curb emissions and help society as a whole. Additive manufacturing (AM) refers to a relatively recent group of manufacturing technologies whereby one can 3D print parts, which has the potential to significantly reduce waste, reconfigure the supply chain, and generally disrupt the whole manufacturing industry. Selective laser sintering/melting (SLS/SLM) is one type of AM technology with the distinct advantage of being able to 3D print metals and rapidly produce net shape parts with complicated geometries. In SLS/SLM parts are built up layer-by-layer out of powder particles, which are selectively sintered/melted via a laser. However, in order to produce defect-free parts of sufficient strength, the process parameters (laser power, scan speed, layer thickness, powder size, etc.) must be carefully optimized. Obviously, these process parameters will vary depending on material, part geometry, and desired final part characteristics. Running experiments to optimize these parameters is costly, energy intensive, and extremely material specific. Thus a computational model of this process would be highly valuable. In this work a three dimensional, reduced order, coupled discrete element - finite difference model is presented for simulating the deposition and subsequent laser heating of a layer of powder particles sitting on top of a substrate. Validation is provided and parameter studies are conducted showing the ability of this model to help determine appropriate process parameters and an optimal powder size distribution for a given material. Next, thermal stresses upon

  1. Modeling Events with Cascades of Poisson Processes

    CERN Document Server

    Simma, Aleksandr

    2012-01-01

    We present a probabilistic model of events in continuous time in which each event triggers a Poisson process of successor events. The ensemble of observed events is thereby modeled as a superposition of Poisson processes. Efficient inference is feasible under this model with an EM algorithm. Moreover, the EM algorithm can be implemented as a distributed algorithm, permitting the model to be applied to very large datasets. We apply these techniques to the modeling of Twitter messages and the revision history of Wikipedia.
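
    Simulation of such a cascade is straightforward because the model is a branching process: immigrant events arrive as a Poisson process, and each event spawns a Poisson number of offspring at exponentially distributed delays. The sketch below shows generation only; the EM inference described in the paper is not included, and all parameters are assumptions.

```python
import numpy as np

# Generate a cascade of Poisson processes (a branching construction).
rng = np.random.default_rng(6)
mu, branch, decay, t_end = 0.5, 0.8, 1.0, 20.0  # base rate, mean offspring

events = list(rng.uniform(0.0, t_end, rng.poisson(mu * t_end)))  # immigrants
queue = list(events)
while queue:
    t = queue.pop()
    for _ in range(rng.poisson(branch)):        # offspring of this event
        child = t + rng.exponential(1.0 / decay)
        if child < t_end:
            events.append(child)
            queue.append(child)

print(f"{len(events)} events; naive expectation ~ {mu * t_end / (1 - branch):.0f}")
```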

  2. The Computer-Aided Analytic Process Model. Operations Handbook for the Analytic Process Model Demonstration Package

    Science.gov (United States)

    1986-01-01

    Research Note 86-06: The Computer-Aided Analytic Process Model: Operations Handbook for the Analytic Process Model Demonstration Package. Ronald G... Keywords: Analytic Process Model; Operations Handbook; Tutorial; Apple; Systems Taxonomy Model; Training System; Bradley Infantry Fighting Vehicle; BIFV... item 20. Abstract continued in companion volume, "The Analytic Process Model for"...

  3. Cardinality constrained portfolio selection via factor models

    OpenAIRE

    Monge, Juan Francisco

    2017-01-01

    In this paper we propose and discuss different 0-1 linear models in order to solve the cardinality constrained portfolio problem by using factor models. Factor models are used to build portfolios that track indexes, together with other objectives, and they also need a smaller number of parameters to estimate than the classical Markowitz model. The addition of cardinality constraints limits the number of securities in the portfolio. Restricting the number of securities in the portfolio allows us to o...

  4. Direct selective laser sintering of high performance metals: Machine design, process development and process control

    Science.gov (United States)

    Das, Suman

    1998-11-01

    This dissertation describes the development of an advanced manufacturing technology known as Direct Selective Laser Sintering (Direct SLS). Direct SLS is a laser based rapid manufacturing technology that enables production of functional, fully dense, metal and cermet components via the direct, layerwise consolidation of constituent powders. Specifically, this dissertation focuses on a new, hybrid net shape manufacturing technique known as Selective Laser Sintering/Hot Isostatic Pressing (SLS/HIP). The objective of research presented in this dissertation was to establish the fundamental machine technology and processing science to enable direct SLS fabrication of metal components composed of high performance, high temperature metals and alloys. Several processing requirements differentiate direct SLS of metals from SLS of polymers or polymer coated powders. Perhaps the most important distinguishing characteristic is the regime of high temperatures involved in direct SLS of metals. Biasing the temperature of the feedstock powder via radiant preheat prior to and during SLS processing was shown to be beneficial. Preheating the powder significantly influenced the flow and wetting characteristics of the melt. During this work, it was conclusively established that powder cleanliness is of paramount importance for successful layerwise consolidation of metal powders by direct SLS. Sequential trials were conducted to establish optimal bake-out and degas cycles under high vacuum. These cycles agreed well with established practices in the powder metallurgy industry. A study of some of the important transport mechanisms in direct SLS of metals was undertaken to obtain a fundamental understanding of the underlying process physics. This study not only provides an explanation of phenomena observed during SLS processing of a variety of metallic materials but also helps in developing selection schemes for those materials that are most amenable to direct SLS processing. The

  5. Using Ionic Liquids in Selective Hydrocarbon Conversion Processes

    Energy Technology Data Exchange (ETDEWEB)

    Tang, Yongchun; Periana, Roy; Chen, Weiqun; van Duin, Adri; Nielsen, Robert; Shuler, Patrick; Ma, Qisheng; Blanco, Mario; Li, Zaiwei; Oxgaard, Jonas; Cheng, Jihong; Cheung, Sam; Pudar, Sanja

    2009-09-28

    This is the Final Report of the five-year project Using Ionic Liquids in Selective Hydrocarbon Conversion Processes (DE-FC36-04GO14276, July 1, 2004 - June 30, 2009), in which we present our major accomplishments with detailed descriptions of our experimental and theoretical efforts. In conducting this project, we followed our proposed work breakdown structure and completed most of the technical tasks. Finally, we have developed and demonstrated several optimized homogeneously catalytic methane conversion systems involving applications of novel ionic liquids, which show performance much superior to the Catalytica system (the best system to date) in terms of three times higher reaction rates, longer catalyst lifetimes, and much stronger resistance to water deactivation. We have developed in-depth mechanistic understanding of the complicated chemistry involved in homogeneously catalyzed methane oxidation, and have developed unique yet effective experimental protocols (reactors, analytical tools and screening methodologies) for achieving a highly efficient yet economically feasible and environmentally friendly catalytic methane conversion system. The most important findings have been published, patented, and reported to DOE in this Final Report and our 20 Quarterly Reports.

  6. Human values in the team leader selection process.

    Science.gov (United States)

    Rovira, Núria; Ozgen, Sibel; Medir, Magda; Tous, Jordi; Alabart, Joan Ramon

    2012-03-01

    The selection process of team leaders is fundamental if the effectiveness of teams is to be guaranteed. Human values have proven to be an important factor in the behaviour of individuals and leaders. The aim of this study is twofold. The first is to validate Schwartz's survey of human values. The second is to determine whether there are any relationships between the values held by individuals and their preferred roles in a team. Human values were measured by the items of the Schwartz Value Survey (SVS) and the preferred roles in a team were identified by the Belbin Self Perception Inventory (BSPI). The two questionnaires were answered by two samples of undergraduate students (183 and 177 students, respectively). As far as the first objective is concerned, Smallest Space Analysis (SSA) was performed at the outset to examine how well the two-dimensional circular structure, as postulated by Schwartz, was represented in the study population. Then, the results of this analysis were compared and contrasted with those of two other published studies; one by Schwartz (2006) and one by Ros and Grad (1991). As for the second objective, Pearson correlation coefficients were computed to assess the associations between the ratings on the SVS survey items and the ratings on the eight team roles as measured by the BSPI.

  7. Simulation analysis of a production process with selected Six Sigma indicators

    Directory of Open Access Journals (Sweden)

    Michał Dobrzyński

    2012-03-01

    Full Text Available Background: Computer technologies increasingly allow the modeling of, as well as simulation experiments on, various processes. Simulation analysis provides a better understanding of the interdependencies between various stages of production processes. Methods: The results of simulation studies are presented; their aim was to show the opportunities for analyzing a process according to the scenarios and variants developed in connection with the qualitative assessment of the process. The study was based on simulation models developed and programmed for the processing of parts on an automated production line. The results of the simulation experiments were related to the primary indicators of system performance, such as the utilization of machines and other means of production, capacity, number of defects, etc. The analysis of the process was extended by a qualitative assessment based on selected indicators used in the Six Sigma methodology. Results: A significant influence of the identification of so-called "hidden factories" in the production process on the value of the sigma level was observed. Conclusions: The application of the Six Sigma methodology and its statistical methods is of significant importance for the estimation and improvement of processes. The identification and choice of the number of inspection points are important for monitoring and evaluating the whole process. The obtained results confirmed the earlier assumption of the great importance of "hidden factories": failing to reveal them significantly degrades the measured quality of a process.

  8. A BMP selection process based on the granulometry of runoff solids ...

    African Journals Online (AJOL)

    A BMP selection process based on the granulometry of runoff solids in a ... and flow were recorded, in addition to the pollution associated with such flows. ... best management practices using the process selection diagrams is presented.

  9. Radial Domany-Kinzel models with mutation and selection

    Science.gov (United States)

    Lavrentovich, Maxim O.; Korolev, Kirill S.; Nelson, David R.

    2013-01-01

    We study the effect of spatial structure, genetic drift, mutation, and selective pressure on the evolutionary dynamics in a simplified model of asexual organisms colonizing a new territory. Under an appropriate coarse-graining, the evolutionary dynamics is related to the directed percolation processes that arise in voter models, the Domany-Kinzel (DK) model, contact process, and so on. We explore the differences between linear (flat front) expansions and the much less familiar radial (curved front) range expansions. For the radial expansion, we develop a generalized, off-lattice DK model that minimizes otherwise persistent lattice artifacts. With both simulations and analytical techniques, we study the survival probability of advantageous mutants, the spatial correlations between domains of neutral strains, and the dynamics of populations with deleterious mutations. “Inflation” at the frontier leads to striking differences between radial and linear expansions. For a colony with initial radius R0 expanding at velocity v, significant genetic demixing, caused by local genetic drift, occurs only up to a finite time t*=R0/v, after which portions of the colony become causally disconnected due to the inflating perimeter of the expanding front. As a result, the effect of a selective advantage is amplified relative to genetic drift, increasing the survival probability of advantageous mutants. Inflation also modifies the underlying directed percolation transition, introducing novel scaling functions and modifications similar to a finite-size effect. Finally, we consider radial range expansions with deflating perimeters, as might arise from colonization initiated along the shores of an island.

  10. Designing Multi-target Compound Libraries with Gaussian Process Models.

    Science.gov (United States)

    Bieler, Michael; Reutlinger, Michael; Rodrigues, Tiago; Schneider, Petra; Kriegl, Jan M; Schneider, Gisbert

    2016-05-01

    We present the application of machine learning models to selecting G protein-coupled receptor (GPCR)-focused compound libraries. The library design process was realized by ant colony optimization. A proprietary Boehringer-Ingelheim reference set consisting of 3519 compounds tested in dose-response assays at 11 GPCR targets served as training data for machine learning and activity prediction. We compared the usability of the proprietary data with a public data set from ChEMBL. Gaussian process models were trained to prioritize compounds from a virtual combinatorial library. We obtained meaningful models for three of the targets (5-HT2c, MCH, A1), which were experimentally confirmed for 12 of 15 selected and synthesized or purchased compounds. Overall, the models trained on the public data predicted the observed assay results more accurately. The results of this study motivate the use of Gaussian process regression on public data for virtual screening and target-focused compound library design.
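
    The modeling step can be sketched with scikit-learn's Gaussian process regressor: train on descriptor vectors with measured activities, then rank virtual candidates by predicted mean penalized by predictive uncertainty. The descriptors and activities below are random placeholders, not GPCR data, and the ant-colony library optimization of the paper is omitted.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# GP activity model: train on known compounds, rank a virtual library.
rng = np.random.default_rng(7)
X_train = rng.normal(size=(200, 16))             # descriptor vectors (toy)
y_train = X_train[:, 0] - 0.5 * X_train[:, 1] + rng.normal(0, 0.3, 200)

gp = GaussianProcessRegressor(kernel=RBF(1.0) + WhiteKernel(0.1),
                              normalize_y=True).fit(X_train, y_train)

X_virtual = rng.normal(size=(1000, 16))          # virtual library (toy)
mean, std = gp.predict(X_virtual, return_std=True)
picks = np.argsort(mean - std)[::-1][:15]        # pessimistic ranking
print("candidates to synthesize/purchase:", picks)
```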

  11. Automation of Endmember Pixel Selection in SEBAL/METRIC Model

    Science.gov (United States)

    Bhattarai, N.; Quackenbush, L. J.; Im, J.; Shaw, S. B.

    2015-12-01

    The commonly applied surface energy balance for land (SEBAL) and its variant, mapping evapotranspiration (ET) at high resolution with internalized calibration (METRIC) models require manual selection of endmember (i.e. hot and cold) pixels to calibrate sensible heat flux. Current approaches for automating this process are based on statistical methods and do not appear to be robust under varying climate conditions and seasons. In this paper, we introduce a new approach based on simple machine learning tools and search algorithms that provides an automatic and time efficient way of identifying endmember pixels for use in these models. The fully automated models were applied on over 100 cloud-free Landsat images with each image covering several eddy covariance flux sites in Florida and Oklahoma. Observed land surface temperatures at automatically identified hot and cold pixels were within 0.5% of those from pixels manually identified by an experienced operator (coefficient of determination, R2, ≥ 0.92, Nash-Sutcliffe efficiency, NSE, ≥ 0.92, and root mean squared error, RMSE, ≤ 1.67 K). Daily ET estimates derived from the automated SEBAL and METRIC models were in good agreement with their manual counterparts (e.g., NSE ≥ 0.91 and RMSE ≤ 0.35 mm day-1). Automated and manual pixel selection resulted in similar estimates of observed ET across all sites. The proposed approach should reduce time demands for applying SEBAL/METRIC models and allow for their more widespread and frequent use. This automation can also reduce potential bias that could be introduced by an inexperienced operator and extend the domain of the models to new users.
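
    A compact stand-in for the automated endmember search: flag cold-pixel candidates as well-vegetated and cool, and hot-pixel candidates as bare and hot, using NDVI/LST percentile rules. The paper's approach uses machine-learning tools and search algorithms; the arrays and thresholds below are synthetic assumptions.

```python
import numpy as np

# Percentile-rule stand-in for automated hot/cold pixel selection.
rng = np.random.default_rng(8)
lst = rng.normal(305.0, 5.0, (500, 500))                  # LST in kelvin (toy)
ndvi = np.clip(rng.normal(0.5, 0.2, (500, 500)), -0.1, 0.9)

cold = (ndvi > np.quantile(ndvi, 0.95)) & (lst < np.quantile(lst, 0.05))
hot = (ndvi < np.quantile(ndvi, 0.05)) & (lst > np.quantile(lst, 0.95))

print(f"cold candidates: {cold.sum()}, mean LST {lst[cold].mean():.1f} K")
print(f"hot candidates:  {hot.sum()}, mean LST {lst[hot].mean():.1f} K")
```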

  12. Systematic approach for the identification of process reference models

    CSIR Research Space (South Africa)

    Van Der Merwe, A

    2009-02-01

    Full Text Available Process models are used in different application domains to capture knowledge on the process flow. Process reference models (PRM) are used to capture reusable process models, which should simplify the identification process of process models...

  13. Ada COCOMO and the Ada Process Model

    Science.gov (United States)

    1989-01-01

    ... language, the use of incremental development, and the use of the Ada process model capitalizing on the strengths of Ada to improve the efficiency of software development. This paper presents the portions of the revised Ada COCOMO dealing with the effects of Ada and the Ada process model. The remainder of this section of the paper discusses the objectives of Ada COCOMO. Section 2 describes the Ada Process Model and its overall effects on software...

  14. Simulation Modeling of Software Development Processes

    Science.gov (United States)

    Calavaro, G. F.; Basili, V. R.; Iazeolla, G.

    1996-01-01

    A simulation modeling approach is proposed for the prediction of software process productivity indices, such as cost and time-to-market, and the sensitivity analysis of such indices to changes in the organization parameters and user requirements. The approach uses a timed Petri Net and Object Oriented top-down model specification. Results demonstrate the model representativeness, and its usefulness in verifying process conformance to expectations, and in performing continuous process improvement and optimization.

  15. Branching process models of cancer

    CERN Document Server

    Durrett, Richard

    2015-01-01

    This volume develops results on continuous time branching processes and applies them to study the rate of tumor growth, extending classic work on the Luria-Delbruck distribution. As a consequence, the authors calculate the probability that mutations that confer resistance to treatment are present at detection and quantify the extent of tumor heterogeneity. As applications, the authors evaluate ovarian cancer screening strategies and give rigorous proofs for results of Haeno and Michor concerning tumor metastasis. These notes should be accessible to students who are familiar with Poisson processes and continuous time Markov chains. Richard Durrett is a mathematics professor at Duke University, USA. He is the author of 8 books, over 200 journal articles, and has supervised more than 40 Ph.D. students. Most of his current research concerns the applications of probability to biology: ecology, genetics, and most recently cancer.

  16. A Process Model for Establishing Business Process Crowdsourcing

    OpenAIRE

    Nguyen Hoang Thuan; Pedro Antunes; David Johnstone

    2017-01-01

    Crowdsourcing can be an organisational strategy to distribute work to Internet users and harness innovation, information, capacities, and variety of business endeavours. As crowdsourcing is different from other business strategies, organisations are often unsure as to how to best structure different crowdsourcing activities and integrate them with other organisational business processes. To manage this problem, we design a process model guiding how to establish business process crowdsourcing....

  17. Total Ship Design Process Modeling

    Science.gov (United States)

    2012-04-30

    Microsoft Project® or Primavera®, and perform process simulations that can investigate risk, cost, and schedule trade-offs. Prior efforts to capture ... planning in the face of disruption, delay, and late-changing requirements. ADePT is interfaced with Primavera, the AEC industry's favorite program ...

  18. Selective imitation impairments differentially interact with language processing.

    Science.gov (United States)

    Mengotti, Paola; Corradi-Dell'Acqua, Corrado; Negri, Gioia A L; Ukmar, Maja; Pesavento, Valentina; Rumiati, Raffaella I

    2013-08-01

    Whether motor and linguistic representations of actions share common neural structures has recently been the focus of an animated debate in cognitive neuroscience. Group studies with brain-damaged patients reported association patterns of praxic and linguistic deficits whereas single case studies documented double dissociations between the correct execution of gestures and their comprehension in verbal contexts. When the relationship between language and imitation was investigated, each ability was analysed as a unique process without distinguishing between possible subprocesses. However, recent cognitive models can be successfully used to account for these inconsistencies in the extant literature. In the present study, in 57 patients with left brain damage, we tested whether a deficit at imitating either meaningful or meaningless gestures differentially impinges on three distinct linguistic abilities (comprehension, naming and repetition). Based on the dual-pathway models, we predicted that praxic and linguistic performance would be associated when meaningful gestures are processed, and would dissociate for meaningless gestures. We used partial correlations to assess the association between patients' scores while accounting for potential confounding effects of aspecific factors such age, education and lesion size. We found that imitation of meaningful gestures significantly correlated with patients' performance on naming and repetition (but not on comprehension). This was not the case for the imitation of meaningless gestures. Moreover, voxel-based lesion-symptom mapping analysis revealed that damage to the angular gyrus specifically affected imitation of meaningless gestures, independent of patients' performance on linguistic tests. Instead, damage to the supramarginal gyrus affected not only imitation of meaningful gestures, but also patients' performance on naming and repetition. Our findings clarify the apparent conflict between associations and dissociations

  19. Multiple High-Fidelity Modeling Tools for Metal Additive Manufacturing Process Development Project

    Data.gov (United States)

    National Aeronautics and Space Administration — Despite the rapid commercialization of additive manufacturing technology such as selective laser melting, SLM, there are gaps in process modeling and material...

  20. Modeling and simulation of membrane process

    Science.gov (United States)

    Staszak, Maciej

    2017-06-01

    The article presents different approaches to the mathematical modeling of polymer membranes. Traditional models based on experimental physicochemical correlations, as well as balance models, are presented in the first part. Quantum and molecular mechanics models are presented, as they are more popular for polymer membranes in fuel cells. The first part closes with neural network models, which have found use for different types of processes in polymer membranes. The second part is devoted to models of fluid dynamics. Computational fluid dynamics techniques can be divided into the solving of Navier-Stokes equations and lattice Boltzmann models. Both approaches are presented with a focus on membrane processes.

  1. GPstuff: Bayesian Modeling with Gaussian Processes

    NARCIS (Netherlands)

    Vanhatalo, J.; Riihimaki, J.; Hartikainen, J.; Jylänki, P.P.; Tolvanen, V.; Vehtari, A.

    2013-01-01

    The GPstuff toolbox is a versatile collection of Gaussian process models and computational tools required for Bayesian inference. The tools include, among others, various inference methods, sparse approximations and model assessment methods.

  2. Online Rule Generation Software Process Model

    National Research Council Canada - National Science Library

    Sudeep Marwaha; Alka Aroa; Satma M C; Rajni Jain; R C Goyal

    2013-01-01

    .... The software process model for rule generation using decision tree classifier refers to the various steps required to be executed for the development of a web based software model for decision rule generation...

  3. Modeling pellet impact drilling process

    OpenAIRE

    Kovalev, Artem Vladimirovich; Ryabchikov, Sergey Yakovlevich; Isaev, Evgeniy Dmitrievich; Ulyanova, Oksana Sergeevna

    2016-01-01

    The paper describes pellet impact drilling which could be used to increase the drilling speed and the rate of penetration when drilling hard rocks. Pellet impact drilling implies rock destruction by metal pellets with high kinetic energy in the immediate vicinity of the earth formation encountered. The pellets are circulated in the bottom hole by a high velocity fluid jet, which is the principal component of the ejector pellet impact drill bit. The experiments conducted have allowed modeling t...

  4. The Properties of Model Selection when Retaining Theory Variables

    DEFF Research Database (Denmark)

    Hendry, David F.; Johansen, Søren

    Economic theories are often fitted directly to data to avoid possible model selection biases. We show that embedding a theory model that specifies the correct set of m relevant exogenous variables, x_t, within the larger set of m+k candidate variables, (x_t, w_t), then selection over the second...

  5. Redox processes and water quality of selected principal aquifer systems

    Science.gov (United States)

    McMahon, P.B.; Chapelle, F.H.

    2008-01-01

    Reduction/oxidation (redox) conditions in 15 principal aquifer (PA) systems of the United States, and their impact on several water quality issues, were assessed from a large data base collected by the National Water-Quality Assessment Program of the USGS. The logic of these assessments was based on the observed ecological succession of electron acceptors such as dissolved oxygen, nitrate, and sulfate and threshold concentrations of these substrates needed to support active microbial metabolism. Similarly, the utilization of solid-phase electron acceptors such as Mn(IV) and Fe(III) is indicated by the production of dissolved manganese and iron. An internally consistent set of threshold concentration criteria was developed and applied to a large data set of 1692 water samples from the PAs to assess ambient redox conditions. The indicated redox conditions then were related to the occurrence of selected natural (arsenic) and anthropogenic (nitrate and volatile organic compounds) contaminants in ground water. For the natural and anthropogenic contaminants assessed in this study, considering redox conditions as defined by this framework of redox indicator species and threshold concentrations explained many water quality trends observed at a regional scale. An important finding of this study was that samples indicating mixed redox processes provide information on redox heterogeneity that is useful for assessing common water quality issues. Given the interpretive power of the redox framework and given that it is relatively inexpensive and easy to measure the chemical parameters included in the framework, those parameters should be included in routine water quality monitoring programs whenever possible.
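
    The threshold logic of such a redox framework can be sketched as a simple classifier over the measured indicator species. The cutoff values below (in mg/L) are illustrative placeholders in the spirit of the framework, not necessarily the exact criteria adopted in the paper.

```python
# Threshold-style redox classification (illustrative cutoffs in mg/L).
def redox_category(o2, no3, mn, fe, so4):
    if o2 >= 0.5:
        return "oxic"
    if no3 >= 0.5 and mn < 0.05 and fe < 0.1:
        return "nitrate-reducing"
    if mn >= 0.05 and fe < 0.1:
        return "Mn(IV)-reducing"
    if fe >= 0.1 and so4 >= 0.5:
        return "Fe(III)/sulfate-reducing (mixed)"
    if fe >= 0.1:
        return "Fe(III)-reducing"
    return "methanogenic or indeterminate"

print(redox_category(o2=0.2, no3=0.1, mn=0.02, fe=0.5, so4=20.0))
```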

  6. Selective processes in development: implications for the costs and benefits of phenotypic plasticity.

    Science.gov (United States)

    Snell-Rood, Emilie C

    2012-07-01

    Adaptive phenotypic plasticity, the ability of a genotype to develop a phenotype appropriate to the local environment, allows organisms to cope with environmental variation and has implications for predicting how organisms will respond to rapid, human-induced environmental change. This review focuses on the importance of developmental selection, broadly defined as a developmental process that involves the sampling of a range of phenotypes and feedback from the environment reinforcing high-performing phenotypes. I hypothesize that understanding the degree to which developmental selection underlies plasticity is key to predicting the costs, benefits, and consequences of plasticity. First, I review examples that illustrate that elements of developmental selection are common across the development of many different traits, from physiology and immunity to circulation and behavior. Second, I argue that developmental selection, relative to a fixed strategy or determinate (switch) mechanisms of plasticity, increases the probability that an individual will develop a phenotype best matched to the local environment. However, the exploration and environmental feedback associated with developmental selection is costly in terms of time, energy, and predation risk, resulting in major changes in life history such as increased duration of development and greater investment in individual offspring. Third, I discuss implications of developmental selection as a mechanism of plasticity, from predicting adaptive responses to novel environments to understanding conditions under which genetic assimilation may fuel diversification. Finally, I outline exciting areas of future research, in particular exploring costs of selective processes in the development of traits outside of behavior and modeling developmental selection and evolution in novel environments.

  7. Performance Measurement Model for the Supplier Selection Based on AHP

    Directory of Open Access Journals (Sweden)

    Fabio De Felice

    2015-10-01

The performance of the supplier is a crucial factor for the success or failure of any company. Rational and effective decision making in terms of the supplier selection process can help the organization to optimize cost and quality functions. The nature of supplier selection processes is generally complex, especially when the company has a large variety of products and vendors. Over the years, several solutions and methods have emerged for addressing the supplier selection problem (SSP). Experience and studies have shown that there is no best way for evaluating and selecting a specific supplier process, but that it varies from one organization to another. The aim of this research is to demonstrate how a multiple attribute decision making approach can be effectively applied for the supplier selection process.
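
    For readers unfamiliar with AHP mechanics, the sketch below shows one standard way to derive priority weights from a pairwise comparison matrix, Saaty's principal-eigenvector method; the matrix values are hypothetical, not taken from the paper.

```python
# Hypothetical AHP sketch: supplier priority weights from a pairwise
# comparison matrix via the principal eigenvector (Saaty's method).
import numpy as np

# Pairwise comparisons of three suppliers on one criterion (illustrative values).
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)            # index of the principal eigenvalue
w = np.abs(eigvecs[:, k].real)
w /= w.sum()                           # normalised priority weights

# Consistency ratio CR = CI / RI, with RI = 0.58 for a 3x3 matrix (Saaty's table).
ci = (eigvals.real[k] - len(A)) / (len(A) - 1)
cr = ci / 0.58
print("weights:", w, "consistency ratio:", cr)   # CR < 0.1 is usually acceptable
```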

  8. Interpretive and Formal Models of Discourse Processing.

    Science.gov (United States)

    Bulcock, Jeffrey W.; Beebe, Mona J.

    Distinguishing between interpretive and formal models of discourse processing and between qualitative and quantitative research, this paper argues that formal models are the analogues of interpretive models, and that the two are complementary. It observes that interpretive models of reading are being increasingly derived from qualitative research…

  9. Prediction of Farmers’ Income and Selection of Model ARIMA

    Institute of Scientific and Technical Information of China (English)

    2010-01-01

Based on the research technology of scholars' prediction of farmers' income and the data of per capita annual net income in rural households in the Henan Statistical Yearbook from 1979 to 2009, it is found that the time series of farmers' income follows an I(2) non-stationary process. The order determination and identification of the model are achieved by adopting the correlogram-based analytical method of Box-Jenkins. On the basis of comparing a group of model properties with different parameters, the model ARIMA(4,2,2) is built. The testing result shows that the residual error of the selected model is white noise and accords with the normal distribution, so the model can be used to predict farmers' income. The model prediction indicates that income in rural households will continue to increase from 2009 to 2012, reaching the values of 2 282.4, 2 502.9, 2 686.9 and 2 884.5, respectively. The growth speed will decline from fast to slow, with weak sustainability.
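
    A rough sketch of the Box-Jenkins workflow described above, using Python's statsmodels on a synthetic stand-in series; the paper's data and exact estimates are not reproduced here, only the ARIMA(4,2,2) order is taken as given.

```python
# Fit an ARIMA(4,2,2) to a synthetic I(2)-like annual series and forecast
# four steps ahead, mirroring the paper's setup on made-up data.
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(0)
# Double cumulative sum gives an I(2)-like trend (31 points, like 1979-2009).
income = np.cumsum(np.cumsum(50 + 10 * rng.standard_normal(31)))

fit = ARIMA(income, order=(4, 2, 2)).fit()   # p=4, d=2, q=2 as in the paper
print(fit.aic)
print(fit.forecast(steps=4))                 # four annual steps ahead
```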

  10. Evaluation of EOR Processes Using Network Models

    DEFF Research Database (Denmark)

    Larsen, Jens Kjell; Krogsbøll, Anette

    1998-01-01

    The report consists of the following parts: 1) Studies of wetting properties of model fluids and fluid mixtures aimed at an optimal selection of candidates for micromodel experiments. 2) Experimental studies of multiphase transport properties using physical models of porous networks (micromodels)...

  11. Investigation into the Effect of Concentration of Benzotriazole on the Selective Layer Surface in the Chemical Mechanical Planarization Process

    Science.gov (United States)

    Ilie, Filip; Laurian, Tiberiu

    2015-12-01

During selective layer chemical mechanical planarization (CMP), the surface layer is selectively oxidized and removed. The material removal rate in selective layer CMP depends on the depth of removal, pH of the solution, slurry chemistry, potential, percentage of oxidizer, and the applied load. Benzotriazole (BTA) has been used as a corrosion inhibitor in the CMP process. The role of BTA is to prevent corrosion of a pattern via a chemical reaction that forms a Cu-BTA passive film on the selective-layer surface. This paper focuses on the effect of BTA concentration in the slurry of a selective layer CMP process, by measuring the friction force during CMP and the modification of the selective layer films immersed in slurries containing various concentrations of BTA. Additionally, the friction characteristics were examined as a function of BTA concentration in the selective layer CMP slurry. The effect of BTA concentration was verified using an empirical model based on the friction energy (E_f).

  12. Astrophysical Model Selection in Gravitational Wave Astronomy

    Science.gov (United States)

    Adams, Matthew R.; Cornish, Neil J.; Littenberg, Tyson B.

    2012-01-01

    Theoretical studies in gravitational wave astronomy have mostly focused on the information that can be extracted from individual detections, such as the mass of a binary system and its location in space. Here we consider how the information from multiple detections can be used to constrain astrophysical population models. This seemingly simple problem is made challenging by the high dimensionality and high degree of correlation in the parameter spaces that describe the signals, and by the complexity of the astrophysical models, which can also depend on a large number of parameters, some of which might not be directly constrained by the observations. We present a method for constraining population models using a hierarchical Bayesian modeling approach which simultaneously infers the source parameters and population model and provides the joint probability distributions for both. We illustrate this approach by considering the constraints that can be placed on population models for galactic white dwarf binaries using a future space-based gravitational wave detector. We find that a mission that is able to resolve approximately 5000 of the shortest period binaries will be able to constrain the population model parameters, including the chirp mass distribution and a characteristic galaxy disk radius to within a few percent. This compares favorably to existing bounds, where electromagnetic observations of stars in the galaxy constrain disk radii to within 20%.

  13. On Optimal Input Design and Model Selection for Communication Channels

    Energy Technology Data Exchange (ETDEWEB)

    Li, Yanyan [ORNL; Djouadi, Seddik M [ORNL; Olama, Mohammed M [ORNL

    2013-01-01

In this paper, the optimal model (structure) selection and input design which minimize the worst-case identification error for communication systems are provided. The problem is formulated using metric complexity theory in a Hilbert space setting. It is pointed out that model selection and input design can be handled independently. The Kolmogorov n-width is used to characterize the representation error introduced by model selection, while Gel'fand and time n-widths are used to represent the inherent error introduced by input design. After the model is selected, an optimal input which minimizes the worst-case identification error is shown to exist. In particular, it is proven that the optimal model for reducing the representation error is a Finite Impulse Response (FIR) model, and the optimal input is an impulse at the start of the observation interval. FIR models are widely popular in communication systems, such as in Orthogonal Frequency Division Multiplexing (OFDM) systems.

  14. Model and Variable Selection Procedures for Semiparametric Time Series Regression

    Directory of Open Access Journals (Sweden)

    Risa Kato

    2009-01-01

Semiparametric regression models are very useful for time series analysis. They facilitate the detection of features resulting from external interventions. The complexity of semiparametric models poses new challenges for issues of nonparametric and parametric inference and model selection that frequently arise from time series data analysis. In this paper, we propose penalized least squares estimators which can simultaneously select significant variables and estimate unknown parameters. An innovative class of variable selection procedure is proposed to select significant variables and basis functions in a semiparametric model. The asymptotic normality of the resulting estimators is established. Information criteria for model selection are also proposed. We illustrate the effectiveness of the proposed procedures with numerical simulations.
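
    As a rough illustration of penalized least squares performing simultaneous variable selection and estimation, the following uses a plain LASSO from scikit-learn on synthetic data; the paper's semiparametric penalty and basis-function selection are more elaborate than this sketch.

```python
# Penalised least squares with an l1 penalty: nonzero coefficients are the
# selected variables; the penalty weight is chosen by cross-validation.
import numpy as np
from sklearn.linear_model import LassoCV

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))
beta = np.array([1.5, 0.0, -2.0, 0.0, 0.0, 0.8, 0.0, 0.0, 0.0, 0.0])
y = X @ beta + 0.5 * rng.normal(size=200)

fit = LassoCV(cv=5).fit(X, y)
selected = np.flatnonzero(fit.coef_)        # variables with nonzero coefficients
print("selected variables:", selected)
```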

  15. Development of a Comprehensive Weld Process Model

    Energy Technology Data Exchange (ETDEWEB)

    Radhakrishnan, B.; Zacharia, T.

    1997-05-01

This cooperative research and development agreement (CRADA) between Concurrent Technologies Corporation (CTC) and Lockheed Martin Energy Systems (LMES) combines CTC's expertise in the welding area and that of LMES to develop computer models and simulation software for welding processes. This development is of significant impact to the industry, including materials producers and fabricators. The main thrust of the research effort was to develop a comprehensive welding simulation methodology. A substantial amount of work has been done by several researchers to numerically model several welding processes. The primary drawback of most of the existing models is the lack of sound linkages between the mechanistic aspects (e.g., heat transfer, fluid flow, and residual stress) and the metallurgical aspects (e.g., microstructure development and control). A comprehensive numerical model which can be used to elucidate the effect of welding parameters/conditions on the temperature distribution, weld pool shape and size, solidification behavior, and microstructure development, as well as stresses and distortion, does not exist. It was therefore imperative to develop a comprehensive model which would predict all of the above phenomena during welding. The CRADA built upon an already existing three-dimensional (3-D) welding simulation model developed by LMES, which is capable of predicting weld pool shape and the temperature history in 3-D single-pass welds. However, the model does not account for multipass welds, microstructural evolution, distortion and residual stresses. Additionally, the model requires large resources of computing time, which limits its use for practical applications. To overcome this, CTC and LMES have developed through this CRADA the comprehensive welding simulation model described above. The following technical tasks have been accomplished as part of the CRADA. 1. The LMES welding code has been ported to the Intel Paragon parallel computer at

  16. NVC Based Model for Selecting Effective Requirement Elicitation Technique

    Directory of Open Access Journals (Sweden)

    Md. Rizwan Beg

    2012-10-01

The Requirements Engineering process starts with the gathering of requirements, i.e., requirements elicitation. Requirements elicitation (RE) is the base building block for a software project and also has a very high impact on the subsequent design and build phases. Failure to accurately capture system requirements is a major factor in the failure of most software projects. Due to the criticality and impact of this phase, it is very important to perform requirements elicitation in no less than a perfect manner. One of the most difficult jobs for the elicitor is to select the appropriate technique for eliciting the requirements. Interviewing and interacting with stakeholders during elicitation is a communication-intensive activity involving verbal and non-verbal communication (NVC). The elicitor should give emphasis to non-verbal communication along with verbal communication so that requirements are recorded more efficiently and effectively. In this paper we propose a model in which stakeholders are classified by observing non-verbal communication, which is then used as a base for elicitation technique selection. We also propose an efficient plan for requirements elicitation which intends to overcome the constraints faced by the elicitor.

  17. Using multilevel models to quantify heterogeneity in resource selection

    Science.gov (United States)

    Wagner, T.; Diefenbach, D.R.; Christensen, S.A.; Norton, A.S.

    2011-01-01

Models of resource selection are being used increasingly to predict or model the effects of management actions rather than simply quantifying habitat selection. Multilevel, or hierarchical, models are an increasingly popular method to analyze animal resource selection because they impose a relatively weak stochastic constraint to model heterogeneity in habitat use and also account for unequal sample sizes among individuals. However, few studies have used multilevel models to model coefficients as a function of predictors that may influence habitat use at different scales or quantify differences in resource selection among groups. We used an example with white-tailed deer (Odocoileus virginianus) to illustrate how to model resource use as a function of distance to road that varies among deer by road density at the home range scale. We found that deer avoidance of roads decreased as road density increased. Also, we used multilevel models with sika deer (Cervus nippon) and white-tailed deer to examine whether resource selection differed between species. We failed to detect differences in resource use between these two species and showed how information-theoretic and graphical measures can be used to assess how resource use may have differed. Multilevel models can improve our understanding of how resource selection varies among individuals and provides an objective, quantifiable approach to assess differences or changes in resource selection. © The Wildlife Society, 2011.

  18. Evolutionary games in a generalized Moran process with arbitrary selection strength and mutation

    Institute of Scientific and Technical Information of China (English)

    Quan Jia; Wang Xian-Jia

    2011-01-01

By using a generalized fitness-dependent Moran process, an evolutionary model for symmetric 2×2 games in a well-mixed population with a finite size is investigated. In the model, the individuals' payoff accumulated from games is mapped into fitness using an exponential function. Both selection strength β and mutation rate ε are considered. The process is an ergodic birth-death process. Based on the limit distribution of the process, we give analytical results for which strategy will be favoured when ε is small enough. The results depend not only on the payoff matrix of the game, but also on the population size. In particular, we prove that natural selection favours the strategy which is risk-dominant when the population size is large enough. For arbitrary β and ε values, the 'Hawk-Dove' game and the 'Coordinate' game are used to illustrate our model. We give the evolutionary stable strategy (ESS) of the games and compare the results with those of the replicator dynamics in the infinite population. The results are determined by simulation experiments.
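
    A simulation sketch of a generalized Moran process of this kind is given below; the payoff matrix, β, ε and population size are illustrative assumptions, while the exponential payoff-to-fitness mapping follows the description above.

```python
# Birth-death simulation of a fitness-dependent Moran process for a symmetric
# 2x2 game: fitness = exp(beta * payoff), mutation with probability eps.
import numpy as np

def moran_step(i, N, payoff, beta, eps, rng):
    """One step; i = number of players using strategy 0 (e.g. Hawk)."""
    a, b, c, d = payoff                       # payoff matrix [[a, b], [c, d]]
    # Expected payoffs against the rest of the population (no self-play).
    pi0 = (a * (i - 1) + b * (N - i)) / (N - 1)
    pi1 = (c * i + d * (N - i - 1)) / (N - 1)
    f0, f1 = np.exp(beta * pi0), np.exp(beta * pi1)
    # Reproduce proportionally to fitness, mutate with probability eps.
    p0 = i * f0 / (i * f0 + (N - i) * f1)
    born = 0 if rng.random() < p0 else 1
    if rng.random() < eps:
        born = 1 - born
    dead = 0 if rng.random() < i / N else 1   # uniform random death
    return i + (born == 0) - (dead == 0)

rng = np.random.default_rng(1)
N, i = 50, 25
counts = np.zeros(N + 1)
for _ in range(200_000):                      # approximate the limit distribution
    i = moran_step(i, N, (0, 3, 1, 2), beta=0.1, eps=0.01, rng=rng)
    counts[i] += 1
print(counts / counts.sum())
```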

  19. Process modelling on a canonical basis

    Energy Technology Data Exchange (ETDEWEB)

    Siepmann, Volker

    2006-12-20

Based on an equation oriented solving strategy, this thesis investigates a new approach to process modelling. Homogeneous thermodynamic state functions represent consistent mathematical models of thermodynamic properties. Such state functions of solely extensive canonical state variables are the basis of this work, as they are natural objective functions in optimisation nodes to calculate thermodynamic equilibrium regarding phase-interaction and chemical reactions. Analytical state function derivatives are utilised within the solution process as well as interpreted as physical properties. By this approach, only a limited range of imaginable process constraints are considered, namely linear balance equations of state variables. A second-order update of source contributions to these balance equations is obtained by an additional constitutive equation system. These equations are generally dependent on state variables and first-order sensitivities, and therefore cover practically all potential process constraints. Symbolic computation technology efficiently provides sparsity and derivative information of active equations to avoid performance problems regarding robustness and computational effort. A benefit of detaching the constitutive equation system is that the structure of the main equation system remains unaffected by these constraints, and a priori information allows implementing an efficient solving strategy and a concise error diagnosis. A tailor-made linear algebra library handles the sparse recursive block structures efficiently. The optimisation principle for single modules of thermodynamic equilibrium is extended to host entire process models. State variables of different modules interact through balance equations, representing material flows from one module to the other. To account for reusability and encapsulation of process module details, modular process modelling is supported by a recursive module structure. The second-order solving algorithm makes it

  20. A Network Analysis Model for Selecting Sustainable Technology

    Directory of Open Access Journals (Sweden)

    Sangsung Park

    2015-09-01

Most companies develop technologies to improve their competitiveness in the marketplace. Typically, they then patent these technologies around the world in order to protect their intellectual property. Other companies may use patented technologies to develop new products, but must pay royalties to the patent holders or owners. Should they fail to do so, this can result in legal disputes in the form of patent infringement actions between companies. To avoid such situations, companies attempt to research and develop necessary technologies before their competitors do so. An important part of this process is analyzing existing patent documents in order to identify emerging technologies. In such analyses, extracting sustainable technology from patent data is important, because sustainable technology drives technological competition among companies and, thus, the development of new technologies. In addition, selecting sustainable technologies makes it possible to plan their R&D (research and development) efficiently. In this study, we propose a network model that can be used to select sustainable technologies from patent documents, based on the centrality and degree measures of social network analysis. To verify the performance of the proposed model, we carry out a case study using actual patent data from patent databases.
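
    The following sketch illustrates the centrality-based idea with networkx on a hypothetical keyword co-occurrence network; the paper's actual patent analysis is more involved, and the keyword sets below are invented for illustration.

```python
# Build a keyword co-occurrence network from (hypothetical) patent documents
# and rank technology keywords by degree centrality.
import itertools
import networkx as nx

# Hypothetical keyword sets extracted from four patent documents.
patents = [
    {"battery", "electrode", "lithium"},
    {"battery", "anode", "lithium"},
    {"solar", "cell", "electrode"},
    {"lithium", "electrolyte", "anode"},
]

G = nx.Graph()
for keywords in patents:
    # Connect every pair of keywords appearing in the same document.
    G.add_edges_from(itertools.combinations(sorted(keywords), 2))

centrality = nx.degree_centrality(G)
ranked = sorted(centrality.items(), key=lambda kv: kv[1], reverse=True)
print(ranked[:3])   # the most central keywords point to candidate technologies
```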

  1. Optimization of process configuration and strain selection for microalgae-based biodiesel production.

    Science.gov (United States)

    Yu, Nan; Dieu, Linus Tao Jie; Harvey, Simon; Lee, Dong-Yup

    2015-10-01

    A mathematical model was developed for the design of microalgae-based biodiesel production system by systematically integrating all the production stages and strain properties. Through the hypothetical case study, the model suggested the most economical system configuration for the selected microalgae strains from the available processes at each stage, thus resulting in the cheapest biodiesel production cost, S$2.66/kg, which is still higher than the current diesel price (S$1.05/kg). Interestingly, the microalgae strain properties, such as lipid content, effective diameter and productivity, were found to be one of the major factors that significantly affect the production cost as well as system configuration.

  2. The site selection process for a spent fuel repository in Finland. Summary report

    Energy Technology Data Exchange (ETDEWEB)

    McEwen, T. [EnvirosQuantiSci (United Kingdom); Aeikaes, T. [Posiva Oy, Helsinki (Finland)

    2000-12-01

    This Summary Report describes the Finnish programme for the selection and characterisation of potential sites for the deep disposal of spent nuclear fuel and explains the process by which Olkiluoto has been selected as the single site proposed for the development of a spent fuel disposal facility. Its aim is to provide an overview of this process, initiated almost twenty years ago, which has entered its final phase. It provides information in three areas: a review of the early site selection criteria, a description of the site selection process, including all the associated site characterisation work, up to the point at which a single site was selected and an outline of the proposed work, in particular that proposed underground, to characterise further the Olkiluoto site. In 1983 the Finnish Government made a policy decision on the management of nuclear waste in which the main goals and milestones for the site selection programme for the deep disposal of spent fuel were presented. According to this decision several site candidates, whose selection was to be based on careful studies of the whole country, should be characterised and the site for the repository selected by the end of the year 2000. This report describes the process by which this policy decision has been achieved. The report begins with a discussion of the definition of the geological and environmental site selection criteria and how they were applied in order to select a small number of sites, five in all, that were to be the subject of the preliminary investigations. The methods used to investigate these sites and the results of these investigations are described, as is the evaluation of the results of these investigations and the process used to discard two of the sites and continue more detailed investigations at the remaining three. The detailed site investigations that commenced in 1993 are described with respect to the overall strategy followed and the investigation techniques applied. The

  3. An Evaluation Model To Select an Integrated Learning System in a Large, Suburban School District.

    Science.gov (United States)

    Curlette, William L.; And Others

    The systematic evaluation process used in Georgia's DeKalb County School System to purchase comprehensive instructional software--an integrated learning system (ILS)--is described, and the decision-making model for selection is presented. Selection and implementation of an ILS were part of an instructional technology plan for the DeKalb schools…

  4. An Extension to the Weibull Process Model

    Science.gov (United States)

    1981-11-01

[OCR residue from the report documentation page omitted.] ... indicating its importance to applications. AN EXTENSION TO THE WEIBULL PROCESS MODEL. 1. INTRODUCTION: Recent papers by Bain and Engelhardt (1980) and Crow

  5. Bayesian Model Selection for LISA Pathfinder

    CERN Document Server

Karnesis, Nikolaos; Sopuerta, Carlos F; Gibert, Ferran; Armano, Michele; Audley, Heather; Congedo, Giuseppe; Diepholz, Ingo; Ferraioli, Luigi; Hewitson, Martin; Hueller, Mauro; Korsakova, Natalia; Plagnol, Eric; Vitale, Stefano

    2013-01-01

The main goal of the LISA Pathfinder (LPF) mission is to fully characterize the acceleration noise models and to test key technologies for future space-based gravitational-wave observatories similar to the LISA/eLISA concept. The Data Analysis (DA) team has developed complex three-dimensional models of the LISA Technology Package (LTP) experiment on-board LPF. These models are used for simulations, but more importantly, they will be used for parameter estimation purposes during flight operations. One of the tasks of the DA team is to identify the physical effects that contribute significantly to the properties of the instrument noise. One way of approaching this problem is to recover the essential parameters of the LTP which describe the data. Thus, we want to define the simplest model that efficiently explains the observations. To do so, adopting a Bayesian framework, one has to estimate the so-called Bayes factor between two competing models. In our analysis, we use three different methods to estimate...

  6. Verification Techniques for Parameter Selection and Bayesian Model Calibration Presented for an HIV Model

    Science.gov (United States)

    Wentworth, Mami Tonoe

Uncertainty quantification plays an important role when making predictive estimates of model responses. In this context, uncertainty quantification is defined as quantifying and reducing uncertainties, and the objective is to quantify uncertainties in parameters, models and measurements, and propagate the uncertainties through the model, so that one can make a predictive estimate with quantified uncertainties. Two of the aspects of uncertainty quantification that must be performed prior to propagating uncertainties are model calibration and parameter selection. There are several efficient techniques for these processes; however, the accuracy of these methods is often not verified. This is the motivation for our work, and in this dissertation, we present and illustrate verification frameworks for model calibration and parameter selection in the context of biological and physical models. First, HIV models, developed and improved by [2, 3, 8], describe the viral infection dynamics of an HIV disease. These are also used to make predictive estimates of viral loads and T-cell counts and to construct an optimal control for drug therapy. Estimating input parameters is an essential step prior to uncertainty quantification. However, not all the parameters are identifiable, implying that they cannot be uniquely determined by the observations. These unidentifiable parameters can be partially removed by performing parameter selection, a process in which parameters that have minimal impacts on the model response are determined. We provide verification techniques for Bayesian model calibration and parameter selection for an HIV model. As an example of a physical model, we employ a heat model with experimental measurements presented in [10]. A steady-state heat model represents a prototypical behavior for the heat conduction and diffusion processes involved in a thermal-hydraulic model, which is a part of nuclear reactor models. We employ this simple heat model to illustrate verification

  7. Hybrid modelling of anaerobic wastewater treatment processes.

    Science.gov (United States)

    Karama, A; Bernard, O; Genovesi, A; Dochain, D; Benhammou, A; Steyer, J P

    2001-01-01

    This paper presents a hybrid approach for the modelling of an anaerobic digestion process. The hybrid model combines a feed-forward network, describing the bacterial kinetics, and the a priori knowledge based on the mass balances of the process components. We have considered an architecture which incorporates the neural network as a static model of unmeasured process parameters (kinetic growth rate) and an integrator for the dynamic representation of the process using a set of dynamic differential equations. The paper contains a description of the neural network component training procedure. The performance of this approach is illustrated with experimental data.

  8. A Review Paper : Noise Models in Digital Image Processing

    Directory of Open Access Journals (Sweden)

    Ajay Kumar Boyat

    2015-04-01

Noise is always present in digital images during the image acquisition, coding, transmission, and processing steps. Noise is very difficult to remove from digital images without prior knowledge of the noise model. That is why a review of noise models is essential in the study of image denoising techniques. In this paper, we present a brief overview of various noise models. These noise models can be distinguished by analysis of their origin. In this way, we present a complete and quantitative analysis of the noise models available in digital images.

  9. Model selection in kernel ridge regression

    DEFF Research Database (Denmark)

    Exterkate, Peter

    2013-01-01

Kernel ridge regression is a technique to perform ridge regression with a potentially infinite number of nonlinear transformations of the independent variables as regressors. This method is gaining popularity as a data-rich nonlinear forecasting tool, which is applicable in many different contexts. The influence of the choice of kernel and the setting of tuning parameters on forecast accuracy is investigated. Several popular kernels are reviewed, including polynomial kernels, the Gaussian kernel, and the Sinc kernel. The latter two kernels are interpreted in terms of their smoothing properties, and the tuning parameters associated to all these kernels are related to smoothness measures of the prediction function and to the signal-to-noise ratio. Based on these interpretations, guidelines are provided for selecting the tuning parameters from small grids using cross-validation. A Monte Carlo study...

  10. Model Selection in Kernel Ridge Regression

    DEFF Research Database (Denmark)

    Exterkate, Peter

Kernel ridge regression is gaining popularity as a data-rich nonlinear forecasting tool, which is applicable in many different contexts. This paper investigates the influence of the choice of kernel and the setting of tuning parameters on forecast accuracy. We review several popular kernels, including polynomial kernels, the Gaussian kernel, and the Sinc kernel. We interpret the latter two kernels in terms of their smoothing properties, and we relate the tuning parameters associated to all these kernels to smoothness measures of the prediction function and to the signal-to-noise ratio. Based on these interpretations, we provide guidelines for selecting the tuning parameters from small grids using cross-validation. A Monte Carlo study confirms the practical usefulness of these rules of thumb. Finally, the flexible and smooth functional forms provided by the Gaussian and Sinc kernels makes them widely...
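
    In scikit-learn terms, the recommended small-grid, cross-validated tuning of a Gaussian (RBF) kernel ridge regression might look like the sketch below; the grid values and data are illustrative, not the paper's.

```python
# Kernel ridge regression with an RBF kernel; alpha (ridge penalty) and
# gamma (kernel width) are selected from a small grid by 5-fold CV.
import numpy as np
from sklearn.kernel_ridge import KernelRidge
from sklearn.model_selection import GridSearchCV

rng = np.random.default_rng(42)
X = rng.uniform(-3, 3, size=(150, 1))
y = np.sinc(X).ravel() + 0.1 * rng.normal(size=150)

grid = {"alpha": [1e-3, 1e-2, 1e-1, 1.0],   # ridge penalty (noise level)
        "gamma": [0.1, 0.5, 1.0, 5.0]}      # RBF width (smoothness)
search = GridSearchCV(KernelRidge(kernel="rbf"), grid, cv=5)
search.fit(X, y)
print(search.best_params_)
```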

  11. Development of SPAWM: selection program for available watershed models.

    Science.gov (United States)

    Cho, Yongdeok; Roesner, Larry A

    2014-01-01

    A selection program for available watershed models (also known as SPAWM) was developed. Thirty-three commonly used watershed models were analyzed in depth and classified in accordance to their attributes. These attributes consist of: (1) land use; (2) event or continuous; (3) time steps; (4) water quality; (5) distributed or lumped; (6) subsurface; (7) overland sediment; and (8) best management practices. Each of these attributes was further classified into sub-attributes. Based on user selected sub-attributes, the most appropriate watershed model is selected from the library of watershed models. SPAWM is implemented using Excel Visual Basic and is designed for use by novices as well as by experts on watershed modeling. It ensures that the necessary sub-attributes required by the user are captured and made available in the selected watershed model.

  12. Parametric or nonparametric? A parametricness index for model selection

    CERN Document Server

    Liu, Wei; 10.1214/11-AOS899

    2012-01-01

In the model selection literature, two classes of criteria perform well asymptotically in different situations: the Bayesian information criterion (BIC) (as a representative) is consistent in selection when the true model is finite dimensional (parametric scenario); Akaike's information criterion (AIC) performs well in terms of asymptotic efficiency when the true model is infinite dimensional (nonparametric scenario). But there is little work that addresses whether it is possible, and how, to detect which situation a specific model selection problem is in. In this work, we differentiate the two scenarios theoretically under some conditions. We develop a measure, the parametricness index (PI), to assess whether a model selected by a potentially consistent procedure can be practically treated as the true model, which also hints at whether AIC or BIC is better suited for the data for the goal of estimating the regression function. A consequence is that by switching between AIC and BIC based on the PI, the resulting regression estimator is si...
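
    For concreteness, the sketch below computes AIC and BIC for nested polynomial regression models from the Gaussian log-likelihood; the parametricness index itself is not implemented here, and the data are synthetic.

```python
# Compare AIC and BIC across candidate polynomial degrees; BIC penalises
# model size more heavily (k*log(n) versus 2k).
import numpy as np

def aic_bic(y, y_hat, k):
    """Information criteria from the residual sum of squares, k parameters."""
    n = len(y)
    rss = np.sum((y - y_hat) ** 2)
    ll = -0.5 * n * (np.log(2 * np.pi * rss / n) + 1)   # Gaussian log-likelihood
    return 2 * k - 2 * ll, k * np.log(n) - 2 * ll

rng = np.random.default_rng(7)
x = rng.uniform(0, 1, 100)
y = 1 + 2 * x + 0.2 * rng.normal(size=100)               # true model is linear

for degree in (1, 2, 3):                                 # candidate models
    coef = np.polyfit(x, y, degree)
    aic, bic = aic_bic(y, np.polyval(coef, x), degree + 1)
    print(degree, round(aic, 1), round(bic, 1))
```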

  13. Boosting model performance and interpretation by entangling preprocessing selection and variable selection.

    Science.gov (United States)

    Gerretzen, Jan; Szymańska, Ewa; Bart, Jacob; Davies, Antony N; van Manen, Henk-Jan; van den Heuvel, Edwin R; Jansen, Jeroen J; Buydens, Lutgarde M C

    2016-09-28

    The aim of data preprocessing is to remove data artifacts-such as a baseline, scatter effects or noise-and to enhance the contextually relevant information. Many preprocessing methods exist to deliver one or more of these benefits, but which method or combination of methods should be used for the specific data being analyzed is difficult to select. Recently, we have shown that a preprocessing selection approach based on Design of Experiments (DoE) enables correct selection of highly appropriate preprocessing strategies within reasonable time frames. In that approach, the focus was solely on improving the predictive performance of the chemometric model. This is, however, only one of the two relevant criteria in modeling: interpretation of the model results can be just as important. Variable selection is often used to achieve such interpretation. Data artifacts, however, may hamper proper variable selection by masking the true relevant variables. The choice of preprocessing therefore has a huge impact on the outcome of variable selection methods and may thus hamper an objective interpretation of the final model. To enhance such objective interpretation, we here integrate variable selection into the preprocessing selection approach that is based on DoE. We show that the entanglement of preprocessing selection and variable selection not only improves the interpretation, but also the predictive performance of the model. This is achieved by analyzing several experimental data sets of which the true relevant variables are available as prior knowledge. We show that a selection of variables is provided that complies more with the true informative variables compared to individual optimization of both model aspects. Importantly, the approach presented in this work is generic. Different types of models (e.g. PCR, PLS, …) can be incorporated into it, as well as different variable selection methods and different preprocessing methods, according to the taste and experience of
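
    One generic way to entangle the choice of preprocessing with variable selection is to search over both jointly in one cross-validated grid, sketched here with a scikit-learn pipeline rather than the authors' DoE-based procedure; step names and grid values are assumptions for illustration.

```python
# Each preprocessing option is evaluated jointly with the variable selector
# and the final estimator, instead of fixing the preprocessing beforehand.
from sklearn.datasets import make_regression
from sklearn.feature_selection import SelectFromModel
from sklearn.linear_model import Lasso, Ridge
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler, MinMaxScaler

X, y = make_regression(n_samples=200, n_features=30, n_informative=5, noise=5.0)

pipe = Pipeline([
    ("scale", StandardScaler()),                      # placeholder step
    ("select", SelectFromModel(Lasso(alpha=0.1))),    # variable selection
    ("model", Ridge()),
])
grid = {"scale": [StandardScaler(), MinMaxScaler()],  # candidate preprocessings
        "select__estimator__alpha": [0.01, 0.1, 1.0]}
search = GridSearchCV(pipe, grid, cv=5).fit(X, y)
print(search.best_params_)
```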

  14. Quantile hydrologic model selection and model structure deficiency assessment: 2. Applications

    NARCIS (Netherlands)

    Pande, S.

    2013-01-01

Quantile hydrologic model selection and structure deficiency assessment is applied in three case studies. The performance of the quantile model selection approach is rigorously evaluated using a model structure on the French Broad river basin data set. The case study shows that quantile model selection

  15. VARTM Process Modeling of Aerospace Composite Structures

    Science.gov (United States)

    Song, Xiao-Lan; Grimsley, Brian W.; Hubert, Pascal; Cano, Roberto J.; Loos, Alfred C.

    2003-01-01

A three-dimensional model was developed to simulate the VARTM composite manufacturing process. The model considers the two important mechanisms that occur during the process: resin flow, and compaction and relaxation of the preform. The model was used to simulate infiltration of a carbon preform with an epoxy resin by the VARTM process. The model-predicted flow patterns and preform thickness changes agreed qualitatively with the measured values. However, the predicted total infiltration times were much longer than measured, most likely due to the inaccurate preform permeability values used in the simulation.

  16. Declarative business process modelling: principles and modelling languages

    Science.gov (United States)

    Goedertier, Stijn; Vanthienen, Jan; Caron, Filip

    2015-02-01

    The business process literature has proposed a multitude of business process modelling approaches or paradigms, each in response to a different business process type with a unique set of requirements. Two polar paradigms, i.e. the imperative and the declarative paradigm, appear to define the extreme positions on the paradigm spectrum. While imperative approaches focus on explicitly defining how an organisational goal should be reached, the declarative approaches focus on the directives, policies and regulations restricting the potential ways to achieve the organisational goal. In between, a variety of hybrid-paradigms can be distinguished, e.g. the advanced and adaptive case management. This article focuses on the less-exposed declarative approach on process modelling. An outline of the declarative process modelling and the modelling approaches is presented, followed by an overview of the observed declarative process modelling principles and an evaluation of the declarative process modelling approaches.

  17. Process and Context in Choice Models

    DEFF Research Database (Denmark)

    Ben-Akiva, Moshe; Palma, André de; McFadden, Daniel

    2012-01-01

    We develop a general framework that extends choice models by including an explicit representation of the process and context of decision making. Process refers to the steps involved in decision making. Context refers to factors affecting the process, focusing in this paper on social networks. The...

  18. Will Rule based BPM obliterate Process Models?

    NARCIS (Netherlands)

    Joosten, S.; Joosten, H.J.M.

    2007-01-01

    Business rules can be used directly for controlling business processes, without reference to a business process model. In this paper we propose to use business rules to specify both business processes and the software that supports them. Business rules expressed in smart mathematical notations bring

  19. Adapting AIC to conditional model selection

    NARCIS (Netherlands)

    M. van Ommen (Matthijs)

    2012-01-01

    textabstractIn statistical settings such as regression and time series, we can condition on observed information when predicting the data of interest. For example, a regression model explains the dependent variables $y_1, \\ldots, y_n$ in terms of the independent variables $x_1, \\ldots, x_n$.

  20. Random effect selection in generalised linear models

    DEFF Research Database (Denmark)

    Denwood, Matt; Houe, Hans; Forkman, Björn;

    We analysed abattoir recordings of meat inspection codes with possible relevance to onfarm animal welfare in cattle. Random effects logistic regression models were used to describe individual-level data obtained from 461,406 cattle slaughtered in Denmark. Our results demonstrate that the largest...

  1. A hybrid analytical network process and fuzzy goal programming for supplier selection: A case study of auto part maker

    OpenAIRE

    Hesam Zande Hesami; Mohammad Ali Afshari; Seyed Ali Ayazi; Javad Siahkali Moradi

    2011-01-01

The aim of this research is to present a hybrid model to select auto part suppliers. The proposed method of this paper uses factor analysis to find the factors most influencing part maker selection, and the results are validated using different statistical tests such as Cronbach's alpha and the Kaiser-Meyer-Olkin test. The hybrid model uses the analytical network process to rank different part maker suppliers and fuzzy goal programming to choose the appropriate alternative among various choices. The implementa...

  2. Forging process modeling of cone-shaped posts

    Institute of Scientific and Technical Information of China (English)

    Xuefeng Liu; Lingyun Wang; Li Zhang

    2004-01-01

Using the rigid visco-plastic Finite Element Method (FEM), the process of forging long cone-shaped posts made of aluminum alloys was modeled, and the corresponding distributions of the field variables were obtained by considering grid aberrance (distortion), dynamic boundary conditions, the non-stable process, coupled thermo-mechanical behavior and other special problems. The difficulties in equipment selection and die analysis caused by the long cone shape of the post, as well as by pressure calculation, were solved.

  3. Measures of Quality in Business Process Modelling

    Directory of Open Access Journals (Sweden)

    Radek Hronza

    2015-06-01

Business process modelling and analysis is undoubtedly one of the most important parts of applied (business) informatics. The quality of business process models (diagrams) is crucial for any purpose in this area. The goal of a process analyst's work is to create generally understandable, explicit and error-free models. If a process is properly described, the resulting models can be used as an input into deep analysis and optimization. It can be assumed that properly designed business process models (similarly to correctly written algorithms) contain characteristics that can be mathematically described, and that it should therefore be possible to create a tool that helps process analysts design proper models. As part of this work, a systematic literature review was conducted in order to find and analyse business process model design and quality measures. It was found that this area has already been the subject of research investigation in the past. Thirty-three suitable scientific publications and twenty-two quality measures were found. The analysed publications and existing quality measures do not reflect all important attributes of business process model clarity, simplicity and completeness. Therefore it would be appropriate to add new measures of quality.

  4. Modeling of percolation process in hemicellulose hydrolysis.

    Science.gov (United States)

    Cahela, D R; Lee, Y Y; Chambers, R P

    1983-01-01

A mathematical model was developed for a percolation reactor in connection with consecutive first-order reactions. The model was designed to simulate acid-catalyzed cellulose or hemicellulose hydrolysis. The modeling process resulted in an analytically derived reactor equation, including mass-transfer effects, which was found to be useful in process design and reactor optimization. The model was verified by experimental data obtained from hemicellulose hydrolysis.
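
    The consecutive first-order reaction scheme underlying such models (hemicellulose to sugar to degradation products) can be sketched as follows; the rate constants are illustrative, and the percolation (flow-through) aspect of the reactor is omitted from this batch-kinetics sketch.

```python
# Integrate A -> B -> C consecutive first-order kinetics with SciPy:
# hemicellulose hydrolyses to sugar (k1), which then degrades (k2).
import numpy as np
from scipy.integrate import solve_ivp

k1, k2 = 0.05, 0.01          # illustrative rate constants (1/min)

def rhs(t, c):
    h, s = c                 # hemicellulose and sugar concentrations
    return [-k1 * h, k1 * h - k2 * s]

sol = solve_ivp(rhs, (0, 120), [1.0, 0.0], dense_output=True)
t = np.linspace(0, 120, 5)
print(sol.sol(t)[1])         # sugar yield over time: rises, peaks, then degrades
```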

  5. Hybrid Sludge Modeling in Water Treatment Processes

    OpenAIRE

    Brenda, Marian

    2015-01-01

    Sludge occurs in many waste water and drinking water treatment processes. The numeric modeling of sludge is therefore crucial for developing and optimizing water treatment processes. Numeric single-phase sludge models mainly include settling and viscoplastic behavior. Even though many investigators emphasize the importance of modeling the rheology of sludge for good simulation results, it is difficult to measure, because of settling and the viscoplastic behavior. In this thesis, a new method ...

  6. On the computational modeling of FSW processes

    OpenAIRE

    Agelet de Saracibar Bosch, Carlos; Chiumenti, Michèle; Santiago, Diego de; Cervera Ruiz, Miguel; Dialami, Narges; Lombera, Guillermo

    2010-01-01

    This work deals with the computational modeling and numerical simulation of Friction Stir Welding (FSW) processes. Here a quasi-static, transient, mixed stabilized Eulerian formulation is used. Norton-Hoff and Sheppard-Wright rigid thermoplastic material models have been considered. A product formula algorithm, leading to a staggered solution scheme, has been used. The model has been implemented into the in-house developed FE code COMET. Results obtained in the simulation of FSW process are c...

  7. Nanowire growth process modeling and reliability models for nanodevices

    Science.gov (United States)

    Fathi Aghdam, Faranak

... This work is an early attempt that uses a physical-statistical modeling approach to studying selective nanowire growth for the improvement of process yield. In the second research work, the reliability of nano-dielectrics is investigated. As electronic devices get smaller, reliability issues pose new challenges due to unknown underlying physics of failure (i.e., failure mechanisms and modes). This necessitates new reliability analysis approaches related to nano-scale devices. One of the most important nano-devices is the transistor that is subject to various failure mechanisms. Dielectric breakdown is known to be the most critical one and has become a major barrier for reliable circuit design in nano-scale. Due to the need for aggressive downscaling of transistors, dielectric films are being made extremely thin, and this has led to adopting high permittivity (k) dielectrics as an alternative to widely used SiO2 in recent years. Since most time-dependent dielectric breakdown test data on bilayer stacks show significant deviations from a Weibull trend, we have proposed two new approaches to modeling the time to breakdown of bi-layer high-k dielectrics. In the first approach, we have used a marked space-time self-exciting point process to model the defect generation rate. A simulation algorithm is used to generate defects within the dielectric space, and an optimization algorithm is employed to minimize the Kullback-Leibler divergence between the empirical distribution obtained from the real data and the one based on the simulated data to find the best parameter values and to predict the total time to failure. The novelty of the presented approach lies in using a conditional intensity for trap generation in dielectric that is a function of time, space and size of the previous defects. In addition, in the second approach, a k-out-of-n system framework is proposed to estimate the total failure time after the generation of more than one soft breakdown.

  8. Model selection in systems biology depends on experimental design.

    Science.gov (United States)

    Silk, Daniel; Kirk, Paul D W; Barnes, Chris P; Toni, Tina; Stumpf, Michael P H

    2014-06-01

    Experimental design attempts to maximise the information available for modelling tasks. An optimal experiment allows the inferred models or parameters to be chosen with the highest expected degree of confidence. If the true system is faithfully reproduced by one of the models, the merit of this approach is clear - we simply wish to identify it and the true parameters with the most certainty. However, in the more realistic situation where all models are incorrect or incomplete, the interpretation of model selection outcomes and the role of experimental design needs to be examined more carefully. Using a novel experimental design and model selection framework for stochastic state-space models, we perform high-throughput in-silico analyses on families of gene regulatory cascade models, to show that the selected model can depend on the experiment performed. We observe that experimental design thus makes confidence a criterion for model choice, but that this does not necessarily correlate with a model's predictive power or correctness. Finally, in the special case of linear ordinary differential equation (ODE) models, we explore how wrong a model has to be before it influences the conclusions of a model selection analysis.

  9. Model-based design of peptide chromatographic purification processes.

    Science.gov (United States)

    Gétaz, David; Stroehlein, Guido; Butté, Alessandro; Morbidelli, Massimo

    2013-04-05

    In this work we present a general procedure for the model-based optimization of a polypeptide crude mixture purification process through its application to a case of industrial relevance. This is done to show how much modeling can be beneficial to optimize complex chromatographic processes in the industrial environment. The target peptide elution profile was modeled with a two sites adsorption equilibrium isotherm exhibiting two inflection points. The variation of the isotherm parameters with the modifier concentration was accounted for. The adsorption isotherm parameters of the target peptide were obtained by the inverse method. The elution of the impurities was approximated by lumping them into pseudo-impurities and by regressing their adsorption isotherm parameters directly as a function of the corresponding parameters of the target peptide. After model calibration and validation by comparison with suitable experimental data, Pareto optimizations of the process were carried out so as to select the optimal batch process.

  10. Selected Tools for Risk Analysis in Logistics Processes

    Science.gov (United States)

    Kulińska, Ewa

    2012-03-01

As each organization aims at managing effective logistics processes, risk factors can and should be controlled through a proper system of risk management. Implementation of a complex approach to risk management allows for the following: - evaluation of significant risk groups associated with logistics processes implementation, - composition of integrated strategies of risk management, - composition of tools for risk analysis in logistics processes.

  11. Neural underpinnings of decision strategy selection: a review and a theoretical model

    Directory of Open Access Journals (Sweden)

    Szymon Wichary

    2016-11-01

In multi-attribute choice, decision makers use various decision strategies to arrive at the final choice. What are the neural mechanisms underlying decision strategy selection? The first goal of this paper is to provide a literature review on the neural underpinnings and cognitive models of decision strategy selection and thus set the stage for a unifying neurocognitive model of this process. The second goal is to outline such a unifying, mechanistic model that can explain the impact of noncognitive factors (e.g., affect, stress) on strategy selection. To this end, we review the evidence for the factors influencing strategy selection, the neural basis of strategy use and the cognitive models explaining this process. We also present the neurocognitive Bottom-Up Model of Strategy Selection (BUMSS). The model assumes that the use of the rational, normative Weighted Additive strategy and the boundedly rational heuristic Take The Best can be explained by one unifying, neurophysiologically plausible mechanism, based on the interaction of the frontoparietal network, orbitofrontal cortex, anterior cingulate cortex and the brainstem nucleus locus coeruleus. According to BUMSS, there are three processes that form the bottom-up mechanism of decision strategy selection and lead to the final choice: (1) cue weight computation, (2) gain modulation, and (3) weighted additive evaluation of alternatives. We discuss how these processes might be implemented in the brain, and how this knowledge allows us to formulate novel predictions linking strategy use and neurophysiological indices.

  12. Neural Underpinnings of Decision Strategy Selection: A Review and a Theoretical Model

    Science.gov (United States)

    Wichary, Szymon; Smolen, Tomasz

    2016-01-01

    In multi-attribute choice, decision makers use decision strategies to arrive at the final choice. What are the neural mechanisms underlying decision strategy selection? The first goal of this paper is to provide a literature review on the neural underpinnings and cognitive models of decision strategy selection and thus set the stage for a neurocognitive model of this process. The second goal is to outline such a unifying, mechanistic model that can explain the impact of noncognitive factors (e.g., affect, stress) on strategy selection. To this end, we review the evidence for the factors influencing strategy selection, the neural basis of strategy use and the cognitive models of this process. We also present the Bottom-Up Model of Strategy Selection (BUMSS). The model assumes that the use of the rational Weighted Additive strategy and the boundedly rational heuristic Take The Best can be explained by one unifying, neurophysiologically plausible mechanism, based on the interaction of the frontoparietal network, orbitofrontal cortex, anterior cingulate cortex and the brainstem nucleus locus coeruleus. According to BUMSS, there are three processes that form the bottom-up mechanism of decision strategy selection and lead to the final choice: (1) cue weight computation, (2) gain modulation, and (3) weighted additive evaluation of alternatives. We discuss how these processes might be implemented in the brain, and how this knowledge allows us to formulate novel predictions linking strategy use and neural signals. PMID:27877103
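
    The two strategies contrasted by BUMSS can be sketched for binary cues and two alternatives as follows; the cue validities and alternative profiles are hypothetical, chosen so that the strategies disagree.

```python
# Weighted Additive (WADD) sums validity-weighted cue values; Take The Best
# (TTB) decides on the single most valid cue that discriminates.
import numpy as np

validities = np.array([0.9, 0.8, 0.7, 0.6])          # cue validities, best first

def wadd(a, b, w=validities):
    score_a, score_b = w @ a, w @ b
    return "A" if score_a > score_b else "B" if score_b > score_a else "tie"

def take_the_best(a, b, w=validities):
    for i in np.argsort(-w):                          # inspect cues by validity
        if a[i] != b[i]:                              # first discriminating cue wins
            return "A" if a[i] > b[i] else "B"
    return "tie"

alt_a = np.array([1, 0, 0, 1])
alt_b = np.array([0, 1, 1, 1])
print(wadd(alt_a, alt_b), take_the_best(alt_a, alt_b))  # here: B versus A
```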

  13. Extending Model Checking to Object Process Validation

    NARCIS (Netherlands)

    Rein, van H.

    2002-01-01

    Object-oriented techniques allow the gathering and modelling of system requirements in terms of an application area. The expression of data and process models at that level is a great asset in communication with non-technical people in that area, but it does not necessarily lead to consistent models

  14. Preform Characterization in VARTM Process Model Development

    Science.gov (United States)

    Grimsley, Brian W.; Cano, Roberto J.; Hubert, Pascal; Loos, Alfred C.; Kellen, Charles B.; Jensen, Brian J.

    2004-01-01

Vacuum-Assisted Resin Transfer Molding (VARTM) is a Liquid Composite Molding (LCM) process where both resin injection and fiber compaction are achieved under pressures of 101.3 kPa or less. Originally developed over a decade ago for marine composite fabrication, VARTM is now considered a viable process for the fabrication of aerospace composites (1,2). In order to optimize and further improve the process, a finite element analysis (FEA) process model is being developed to include the coupled phenomena of resin flow, preform compaction and resin cure. The model input parameters are obtained from resin and fiber-preform characterization tests. In this study, the compaction behavior and the Darcy permeability of a commercially available carbon fabric are characterized. The resulting empirical model equations are input to the 3-Dimensional Infiltration, version 5 (3DINFILv.5) process model to simulate infiltration of a composite panel.

  15. Asset pricing model selection: Indonesian Stock Exchange

    OpenAIRE

    Pasaribu, Rowland Bismark Fernando

    2010-01-01

    The Capital Asset Pricing Model (CAPM) has dominated finance theory for over thirty years; it suggests that the market beta alone is sufficient to explain stock returns. However evidence shows that the cross-section of stock returns cannot be described solely by the one-factor CAPM. Therefore, the idea is to add other factors in order to complete the beta in explaining the price movements in the stock exchange. The Arbitrage Pricing Theory (APT) has been proposed as the first multifactor succ...

  16. A mixed model reduction method for preserving selected physical information

    Science.gov (United States)

    Zhang, Jing; Zheng, Gangtie

    2017-03-01

    A new model reduction method in the frequency domain is presented. By combining model reduction techniques from both the time domain and the frequency domain, the dynamic model is condensed to selected physical coordinates, and the contribution of slave degrees of freedom is taken as a modification to the model in the form of the effective modal mass of virtually constrained modes. The reduced model preserves the physical information related to the selected physical coordinates, such as the physical parameters and physical space positions of the corresponding structure components. For cases of non-classical damping, the method is extended to model reduction in the state space while still containing only the selected physical coordinates. Numerical results are presented to validate the method and show the effectiveness of the model reduction.
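
    For orientation, the sketch below implements classical Guyan (static) condensation onto selected master coordinates, the textbook form of physical-coordinate reduction that the paper's mixed method refines with effective modal mass corrections. The 3-DOF example system is invented for illustration.

```python
# Background sketch: classical Guyan (static) condensation onto selected
# "master" physical DOFs. Not the paper's mixed method, just its starting point.
import numpy as np

def guyan_reduce(K, M, master):
    """Condense stiffness K and mass M onto the selected master DOFs."""
    n = K.shape[0]
    slave = [i for i in range(n) if i not in master]
    idx = list(master) + slave                    # reorder: masters first
    Kp, Mp = K[np.ix_(idx, idx)], M[np.ix_(idx, idx)]
    nm = len(master)
    Kss, Ksm = Kp[nm:, nm:], Kp[nm:, :nm]
    T = np.vstack([np.eye(nm), -np.linalg.solve(Kss, Ksm)])  # static reduction basis
    return T.T @ Kp @ T, T.T @ Mp @ T             # reduced K and M

# 3-DOF spring chain; keep DOFs 0 and 2 as the physical coordinates
K = np.array([[ 2., -1.,  0.], [-1.,  2., -1.], [ 0., -1.,  1.]])
M = np.eye(3)
Kr, Mr = guyan_reduce(K, M, [0, 2])
print(Kr)
```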

  17. Two-step variable selection in quantile regression models

    Directory of Open Access Journals (Sweden)

    FAN Yali

    2015-06-01

    Full Text Available We propose a two-step variable selection procedure for high dimensional quantile regressions, in which the dimension of the covariates, p_n, is much larger than the sample size n. In the first step, we apply an l1 penalty, and we demonstrate that the first-step penalized estimator with the LASSO penalty can reduce the model from an ultra-high dimensional one to a model whose size has the same order as that of the true model, and the selected model can cover the true model. The second step excludes the remaining irrelevant covariates by applying the adaptive LASSO penalty to the reduced model obtained from the first step. Under some regularity conditions, we show that our procedure enjoys model selection consistency. We conduct a simulation study and a real data analysis to evaluate the finite sample performance of the proposed approach.
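
    A hedged sketch of the two-step idea follows, using least-squares loss rather than quantile loss for brevity: step one screens with a LASSO penalty, step two applies an adaptive (weighted) LASSO to the reduced model. The data, penalty levels, and the use of scikit-learn are illustrative assumptions, not the authors' implementation.

```python
# Two-step selection sketch: LASSO screening, then adaptive LASSO refit.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(1)
n, p = 100, 500                                   # p_n >> n
X = rng.normal(size=(n, p))
beta = np.zeros(p); beta[:5] = 2.0                # true model has 5 covariates
y = X @ beta + rng.normal(size=n)

step1 = Lasso(alpha=0.2).fit(X, y)                # step 1: l1 screening
kept = np.flatnonzero(step1.coef_)                # reduced model
w = 1.0 / np.abs(step1.coef_[kept])               # adaptive weights
step2 = Lasso(alpha=0.1).fit(X[:, kept] / w, y)   # step 2: adaptive LASSO
selected = kept[np.flatnonzero(step2.coef_)]      # (rescaling columns by 1/w
print("selected covariates:", selected)           #  is equivalent to weighting)
```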

  18. Piecewise deterministic processes in biological models

    CERN Document Server

    Rudnicki, Ryszard

    2017-01-01

    This book presents a concise introduction to piecewise deterministic Markov processes (PDMPs), with particular emphasis on their applications to biological models. Further, it presents examples of biological phenomena, such as gene activity and population growth, where different types of PDMPs appear: continuous time Markov chains, deterministic processes with jumps, processes with switching dynamics, and point processes. Subsequent chapters present the necessary tools from the theory of stochastic processes and semigroups of linear operators, as well as theoretical results concerning the long-time behaviour of stochastic semigroups induced by PDMPs and their applications to biological models. As such, the book offers a valuable resource for mathematicians and biologists alike. The first group will find new biological models that lead to interesting and often new mathematical questions, while the second can observe how to include seemingly disparate biological processes into a unified mathematical theory, and...

  19. Software-Engineering Process Simulation (SEPS) model

    Science.gov (United States)

    Lin, C. Y.; Abdel-Hamid, T.; Sherif, J. S.

    1992-01-01

    The Software Engineering Process Simulation (SEPS) model, developed at JPL, is described. SEPS is a dynamic simulation model of the software project development process. It uses the feedback principles of system dynamics to simulate the dynamic interactions among various software life cycle development activities and management decision making processes. The model is designed to be a planning tool to examine tradeoffs of cost, schedule, and functionality, and to test the implications of different managerial policies on a project's outcome. Furthermore, SEPS will enable software managers to gain a better understanding of the dynamics of software project development and perform post-mortem assessments.

  20. Selection of probability based weighting models for Boolean retrieval system

    Energy Technology Data Exchange (ETDEWEB)

    Ebinuma, Y. (Japan Atomic Energy Research Inst., Tokai, Ibaraki. Tokai Research Establishment)

    1981-09-01

    Automatic weighting models based on probability theory were studied to determine whether they can be applied to Boolean search logic, including the logical sum. The INIS database was searched with one particular search formula. Among sixteen models, three with good ranking performance were selected. These three models were then applied to searches with nine formulas in the same database. Two of the models showed slightly better average ranking performance, while the third and simplest model also appears practical.
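
    The sketch below illustrates the general idea, though not any of the paper's sixteen models: documents satisfying a Boolean query are ranked by a simple probabilistic term weight (inverse document frequency), so that the Boolean result set gains an ordering. The toy collection is invented.

```python
# Illustrative sketch: rank Boolean query hits by a probabilistic term weight.
import math

docs = {1: {"reactor", "fuel"}, 2: {"reactor"},
        3: {"fuel", "waste"}, 4: {"reactor", "fuel", "waste"}}
N = len(docs)
terms = {"reactor", "fuel", "waste"}

def idf(term):
    """Inverse document frequency: rarer terms weigh more."""
    n_t = sum(term in d for d in docs.values())
    return math.log(N / n_t)

# Boolean query: reactor AND (fuel OR waste); weights then rank the hits
hits = [i for i, d in docs.items() if "reactor" in d and (d & {"fuel", "waste"})]
ranked = sorted(hits, key=lambda i: -sum(idf(t) for t in terms & docs[i]))
print(ranked)   # doc 4 outranks doc 1
```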

  1. Online Learning of Hierarchical Pitman-Yor Process Mixture of Generalized Dirichlet Distributions With Feature Selection.

    Science.gov (United States)

    Fan, Wentao; Sallay, Hassen; Bouguila, Nizar

    2016-06-09

    In this paper, a novel statistical generative model based on hierarchical Pitman-Yor process and generalized Dirichlet distributions (GDs) is presented. The proposed model allows us to perform joint clustering and feature selection thanks to the interesting properties of the GD distribution. We develop an online variational inference algorithm, formulated in terms of the minimization of a Kullback-Leibler divergence, of our resulting model that tackles the problem of learning from high-dimensional examples. This variational Bayes formulation allows simultaneously estimating the parameters, determining the model's complexity, and selecting the appropriate relevant features for the clustering structure. Moreover, the proposed online learning algorithm allows data instances to be processed in a sequential manner, which is critical for large-scale and real-time applications. Experiments conducted using challenging applications, namely, scene recognition and video segmentation, where our approach is viewed as an unsupervised technique for visual learning in high-dimensional spaces, showed that the proposed approach is suitable and promising.

  2. Model Selection Through Sparse Maximum Likelihood Estimation

    CERN Document Server

    Banerjee, Onureena; D'Aspremont, Alexandre

    2007-01-01

    We consider the problem of estimating the parameters of a Gaussian or binary distribution in such a way that the resulting undirected graphical model is sparse. Our approach is to solve a maximum likelihood problem with an added l_1-norm penalty term. The problem as formulated is convex but the memory requirements and complexity of existing interior point methods are prohibitive for problems with more than tens of nodes. We present two new algorithms for solving problems with at least a thousand nodes in the Gaussian case. Our first algorithm uses block coordinate descent, and can be interpreted as recursive l_1-norm penalized regression. Our second algorithm, based on Nesterov's first order method, yields a complexity estimate with a better dependence on problem size than existing interior point methods. Using a log determinant relaxation of the log partition function (Wainwright & Jordan (2006)), we show that these same algorithms can be used to solve an approximate sparse maximum likelihood problem for...
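
    The l1-penalized Gaussian maximum likelihood problem described here is now available off the shelf; the following minimal sketch uses scikit-learn's GraphicalLasso on synthetic data rather than either of the paper's two algorithms.

```python
# Sparse inverse-covariance (precision) estimation via l1-penalized MLE.
import numpy as np
from sklearn.covariance import GraphicalLasso

rng = np.random.default_rng(2)
true_prec = np.array([[2.0, 0.6, 0.0],
                      [0.6, 2.0, 0.0],
                      [0.0, 0.0, 1.0]])           # sparse precision matrix
cov = np.linalg.inv(true_prec)
X = rng.multivariate_normal(np.zeros(3), cov, size=500)

model = GraphicalLasso(alpha=0.05).fit(X)
print(np.round(model.precision_, 2))              # near-zero where the true graph has no edge
```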

  3. Fuel Conditioning Facility Electrorefiner Process Model

    Energy Technology Data Exchange (ETDEWEB)

    DeeEarl Vaden

    2005-10-01

    The Fuel Conditioning Facility at the Idaho National Laboratory processes spent nuclear fuel from the Experimental Breeder Reactor II using electro-metallurgical treatment. To process fuel without waiting for periodic sample analyses to assess process conditions, an electrorefiner process model predicts the composition of the electrorefiner inventory and effluent streams. For the chemical equilibrium portion of the model, the two common methods for solving chemical equilibrium problems, stoichiometric and non-stoichiometric, were investigated. The stoichiometric method produced equilibrium compositions close to the measured results, whereas the non-stoichiometric method did not.

  4. Engineered Barrier System Degradation, Flow, and Transport Process Model Report

    Energy Technology Data Exchange (ETDEWEB)

    E.L. Hardin

    2000-07-17

    The Engineered Barrier System Degradation, Flow, and Transport Process Model Report (EBS PMR) is one of nine PMRs supporting the Total System Performance Assessment (TSPA) being developed by the Yucca Mountain Project for the Site Recommendation Report (SRR). The EBS PMR summarizes the development and abstraction of models for processes that govern the evolution of conditions within the emplacement drifts of a potential high-level nuclear waste repository at Yucca Mountain, Nye County, Nevada. Details of these individual models are documented in 23 supporting Analysis/Model Reports (AMRs). Nineteen of these AMRs are for process models, and the remaining four describe the abstraction of results for application in TSPA. The process models themselves cluster around four major topics: the Water Distribution and Removal Model, the Physical and Chemical Environment Model, the Radionuclide Transport Model, and the Multiscale Thermohydrologic Model. One AMR (Engineered Barrier System-Features, Events, and Processes/Degradation Modes Analysis) summarizes the formal screening analysis used to select the Features, Events, and Processes (FEPs) included in TSPA and those excluded from further consideration. Performance of a potential Yucca Mountain high-level radioactive waste repository depends on both the natural barrier system (NBS) and the engineered barrier system (EBS) and on their interactions. Although the waste packages are generally considered as components of the EBS, the EBS as defined in the EBS PMR includes all engineered components outside the waste packages. The principal function of the EBS is to complement the geologic system in limiting the amount of water contacting the nuclear waste. The Project considered a number of alternative EBS designs that could provide better performance than the design analyzed for the Viability Assessment. The design concept selected was Enhanced Design Alternative II (EDA II).

  5. Sensitivity of resource selection and connectivity models to landscape definition

    Science.gov (United States)

    Katherine A. Zeller; Kevin McGarigal; Samuel A. Cushman; Paul Beier; T. Winston Vickers; Walter M. Boyce

    2017-01-01

    Context: The definition of the geospatial landscape is the underlying basis for species-habitat models, yet sensitivity of habitat use inference, predicted probability surfaces, and connectivity models to landscape definition has received little attention. Objectives: We evaluated the sensitivity of resource selection and connectivity models to four landscape...

  6. MULTI-SCALE GAUSSIAN PROCESSES MODEL

    Institute of Scientific and Technical Information of China (English)

    Zhou Yatong; Zhang Taiyi; Li Xiaohe

    2006-01-01

    A novel model named Multi-scale Gaussian Processes (MGP) is proposed. Motivated by the idea of multi-scale representations in wavelet theory, in the new model a Gaussian process is represented at each scale by a linear basis composed of a scale function and its different translations. Finally, the distribution of the targets of the given samples can be obtained at different scales. Compared with the standard Gaussian Processes (GP) model, the MGP model can control its complexity conveniently just by adjusting the scale parameter, so it can rapidly trade off generalization ability against empirical risk. Experiments verify the feasibility of the MGP model and show that its performance is superior to the GP model if appropriate scales are chosen.
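
    As a loose analogue of the multi-scale idea, the sketch below builds a Gaussian process whose kernel is a sum of RBF kernels at two length scales, so the prior mixes coarse and fine structure. This mimics the spirit of MGP with standard tools; it is not the paper's wavelet-basis construction.

```python
# Multi-scale flavour via a sum of RBF kernels at different length scales.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

X = np.linspace(0, 10, 60)[:, None]
y = np.sin(X).ravel() + 0.3 * np.sin(8 * X).ravel()   # coarse + fine components

kernel = 1.0 * RBF(length_scale=3.0) + 0.3 * RBF(length_scale=0.3)
gp = GaussianProcessRegressor(kernel=kernel, alpha=1e-3).fit(X, y)
print(gp.kernel_)                                      # fitted scales
```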

  7. Hybrid modelling of a sugar boiling process

    CERN Document Server

    Lauret, Alfred Jean Philippe; Gatina, Jean Claude

    2012-01-01

    The first and perhaps most important step in designing a model-based predictive controller is to develop a model that is as accurate as possible and valid under a wide range of operating conditions. The sugar boiling process is a strongly nonlinear and nonstationary process, whose main nonlinearities are represented by the crystal growth rate. This paper addresses the development of the crystal growth rate model according to two approaches. The first approach is classical and consists of determining the parameters of the empirical expressions of the growth rate through a nonlinear programming optimization technique. The second is a novel modeling strategy that combines an artificial neural network (ANN) as an approximator of the growth rate with prior knowledge represented by the mass balance of sucrose crystals. The first results show that the first type of model performs local fitting while the second offers greater flexibility. The two models were developed with industrial data...

  8. Probabilistic models of language processing and acquisition.

    Science.gov (United States)

    Chater, Nick; Manning, Christopher D

    2006-07-01

    Probabilistic methods are providing new explanatory approaches to fundamental cognitive science questions of how humans structure, process and acquire language. This review examines probabilistic models defined over traditional symbolic structures. Language comprehension and production involve probabilistic inference in such models; and acquisition involves choosing the best model, given innate constraints and linguistic and other input. Probabilistic models can account for the learning and processing of language, while maintaining the sophistication of symbolic models. A recent burgeoning of theoretical developments and online corpus creation has enabled large models to be tested, revealing probabilistic constraints in processing, undermining acquisition arguments based on a perceived poverty of the stimulus, and suggesting fruitful links with probabilistic theories of categorization and ambiguity resolution in perception.
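
    A toy instance of the probabilistic view, for concreteness: a bigram model with add-one smoothing assigns higher probability to a well-formed word order than to a scrambled one. The corpus is invented and far too small for anything but illustration.

```python
# Tiny bigram language model with add-one (Laplace) smoothing.
from collections import Counter
import math

corpus = "the dog chased the cat . the cat saw the dog .".split()
bigrams = Counter(zip(corpus, corpus[1:]))
unigrams = Counter(corpus)
V = len(unigrams)                                 # vocabulary size

def logprob(sentence):
    """Sum of smoothed bigram log-probabilities."""
    words = sentence.split()
    return sum(math.log((bigrams[(a, b)] + 1) / (unigrams[a] + V))
               for a, b in zip(words, words[1:]))

print(logprob("the dog saw the cat"), logprob("cat the saw dog the"))
```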

  9. A genetic algorithm based global search strategy for population pharmacokinetic/pharmacodynamic model selection.

    Science.gov (United States)

    Sale, Mark; Sherer, Eric A

    2015-01-01

    The current algorithm for selecting a population pharmacokinetic/pharmacodynamic model is based on the well-established forward addition/backward elimination method. A central strength of this approach is the opportunity for a modeller to continuously examine the data and postulate new hypotheses to explain observed biases. This algorithm has served the modelling community well, but the model selection process has essentially remained unchanged for the last 30 years. During this time, more robust approaches to model selection have been made feasible by new technology and dramatic increases in computation speed. We review these methods, with emphasis on genetic algorithm approaches and discuss the role these methods may play in population pharmacokinetic/pharmacodynamic model selection.
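
    A minimal genetic-algorithm sketch in the spirit of this review follows: candidate models are bit strings (covariate on or off), fitness is AIC, and selection, crossover, and mutation search the model space globally. All settings are illustrative and not those of any published population PK/PD tool.

```python
# GA-based global model search on a toy linear-regression model space.
import numpy as np

rng = np.random.default_rng(3)
n, p = 200, 8
X = rng.normal(size=(n, p))
y = X[:, 0] - 2 * X[:, 3] + rng.normal(size=n)    # true model uses covariates 0 and 3

def aic(mask):
    """AIC of the least-squares fit using the covariates switched on in mask."""
    k = mask.sum()
    if k == 0:
        return np.inf
    Xs = X[:, mask.astype(bool)]
    resid = y - Xs @ np.linalg.lstsq(Xs, y, rcond=None)[0]
    return n * np.log(resid @ resid / n) + 2 * k

pop = rng.integers(0, 2, (20, p))                 # initial random population
for gen in range(40):
    fit = np.array([aic(m) for m in pop])
    parents = pop[np.argsort(fit)[:10]]           # truncation selection
    cut = rng.integers(1, p, 10)
    kids = np.array([np.concatenate([parents[i % 10][:c], parents[(i + 1) % 10][c:]])
                     for i, c in enumerate(cut)])  # one-point crossover
    kids ^= (rng.random(kids.shape) < 0.05).astype(kids.dtype)  # mutation
    pop = np.vstack([parents, kids])
print("best model:", pop[np.argmin([aic(m) for m in pop])])
```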

  10. Eligibility Worker Selection Process: Biographical Inventory Validation Study.

    Science.gov (United States)

    Darany, Theodore; And Others

    One way for agencies to reduce fiscal stress is to minimize employee turnover. A project undertaken by San Bernardino County (California) to reduce employee turnover through the development, validation, and use of a non-traditional worker selection instrument (biographical inventory) is described. This project was aimed at the specific…

  11. Cultural Influence on Selective Attention Processes among Nigerian Adolescents.

    Science.gov (United States)

    Uba, Anselm

    Three experiments in auditory selective attention form the basis of this investigation of cross-cultural differences among the Ibo and Yoruba ethnic groups of Nigeria. A sample of 200 16-year-olds was randomly drawn from four secondary schools. Yoruba adolescents showed superior performance in a task involving the repetition of…

  12. A selection-quotient process for packed word Hopf algebra

    CERN Document Server

    Duchamp, G H E; Tanasa, A

    2013-01-01

    In this paper, we define a Hopf algebra structure on the vector space spanned by packed words using a selection-quotient coproduct. We show that this algebra is free on its irreducible packed words. Finally, we briefly explain the Maple code we have used.

  13. A Working Model of Natural Selection Illustrated by Table Tennis

    Science.gov (United States)

    Dinc, Muhittin; Kilic, Selda; Aladag, Caner

    2013-01-01

    Natural selection is one of the most important topics in biology and it helps to clarify the variety and complexity of organisms. However, students in almost every stage of education find it difficult to understand the mechanism of natural selection and they can develop misconceptions about it. This article provides an active model of natural…

  14. Elementary Teachers' Selection and Use of Visual Models

    Science.gov (United States)

    Lee, Tammy D.; Gail Jones, M.

    2017-07-01

    As science grows in complexity, science teachers face an increasing challenge of helping students interpret models that represent complex science systems. Little is known about how teachers select and use models when planning lessons. This mixed methods study investigated the pedagogical approaches and visual models used by elementary in-service and preservice teachers in the development of a science lesson about a complex system (e.g., water cycle). Sixty-seven elementary in-service and 69 elementary preservice teachers completed a card sort task designed to document the types of visual models (e.g., images) that teachers choose when planning science instruction. Quantitative and qualitative analyses were conducted to analyze the card sort task. Semistructured interviews were conducted with a subsample of teachers to elicit the rationale for image selection. Results from this study showed that both experienced in-service teachers and novice preservice teachers tended to select similar models and use similar rationales for images to be used in lessons. Teachers tended to select models that were aesthetically pleasing and simple in design and illustrated specific elements of the water cycle. The results also showed that teachers were not likely to select images that represented the less obvious dimensions of the water cycle. Furthermore, teachers selected visual models more as a pedagogical tool to illustrate specific elements of the water cycle and less often as a tool to promote student learning related to complex systems.

  15. Fluctuating selection models and McDonald-Kreitman type analyses.

    Directory of Open Access Journals (Sweden)

    Toni I Gossmann

    Full Text Available It is likely that the strength of selection acting upon a mutation varies through time due to changes in the environment. However, most population genetic theory assumes that the strength of selection remains constant. Here we investigate the consequences of fluctuating selection pressures on the quantification of adaptive evolution using McDonald-Kreitman (MK) style approaches. In agreement with previous work, we show that fluctuating selection can generate evidence of adaptive evolution even when the expected strength of selection on a mutation is zero. However, we also find that the mutations which contribute to both polymorphism and divergence tend, on average, to be positively selected during their lifetime under fluctuating selection models. This is because mutations that fluctuate, by chance, to positively selected values tend to reach higher frequencies in the population than those that fluctuate towards negative values. Hence the evidence of positive adaptive evolution detected by MK type approaches under a fluctuating selection model is genuine, since fixed mutations tend to be advantageous on average during their lifetime. Nevertheless, we show that these methods tend to underestimate the rate of adaptive evolution when selection fluctuates.
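
    For reference, the quantity at stake in MK-style analyses is the proportion of adaptive substitutions, commonly estimated as alpha = 1 - (Ds * Pn) / (Dn * Ps) from nonsynonymous/synonymous polymorphism (Pn, Ps) and divergence (Dn, Ds) counts; the sketch below computes it on invented counts.

```python
# MK-style estimate of the proportion of adaptive substitutions.
Pn, Ps = 20, 60     # nonsynonymous/synonymous polymorphism counts (hypothetical)
Dn, Ds = 40, 50     # nonsynonymous/synonymous divergence counts (hypothetical)

alpha = 1 - (Ds * Pn) / (Dn * Ps)
print(f"alpha = {alpha:.2f}")   # > 0 reads as evidence of adaptive evolution,
                                # which fluctuating selection can mimic or bias
```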

  16. The Optimal Portfolio Selection Model under g-Expectation

    National Research Council Canada - National Science Library

    Li Li

    2014-01-01

    This paper solves the optimal portfolio selection model under the framework of the prospect theory proposed by Kahneman and Tversky in the 1970s, with the decision rule replaced by the g-expectation introduced by Peng...

  17. Robust Decision-making Applied to Model Selection

    Energy Technology Data Exchange (ETDEWEB)

    Hemez, Francois M. [Los Alamos National Laboratory

    2012-08-06

    The scientific and engineering communities are relying more and more on numerical models to simulate increasingly complex phenomena. Selecting a model, from among a family of models that meets the simulation requirements, presents a challenge to modern-day analysts. To address this concern, a framework anchored in info-gap decision theory is adopted. The framework proposes to select models by examining the trade-offs between prediction accuracy and sensitivity to epistemic uncertainty. The framework is demonstrated on two structural engineering applications by asking the following question: Which model, of several numerical models, approximates the behavior of a structure when the parameters that define each of those models are unknown? One observation is that models that are nominally more accurate are not necessarily more robust, and their accuracy can deteriorate greatly depending upon the assumptions made. It is posited that, as reliance on numerical models increases, establishing robustness will become as important as demonstrating accuracy.

  18. Information-theoretic model selection applied to supernovae data

    CERN Document Server

    Biesiada, M

    2007-01-01

    Several different theoretical ideas have been invoked to explain dark energy, with relatively little guidance as to which of them might be right. Therefore the emphasis of ongoing and forthcoming research in this field is shifting from estimating specific parameters of a cosmological model to model selection. In this paper we apply an information-theoretic model selection approach based on the Akaike criterion as an estimator of Kullback-Leibler entropy. In particular, we present the proper way of ranking competing models based on Akaike weights (in Bayesian language, the posterior probabilities of the models). Out of many particular models of dark energy we focus on four: quintessence, quintessence with a time-varying equation of state, brane-world and the generalized Chaplygin gas model, and test them on Riess' Gold sample. We find that the best model, in terms of the Akaike criterion, is the quintessence model. The odds suggest that although there exist differences in the support given to specific scenario...
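
    The ranking device named in the abstract, Akaike weights, is w_i = exp(-Delta_i / 2) / sum_j exp(-Delta_j / 2) with Delta_i = AIC_i - AIC_min; the sketch below computes the weights for four hypothetical AIC values (the numbers are made up, not the paper's results).

```python
# Akaike weights from AIC values (hypothetical AICs for illustration).
import numpy as np

models = ["quintessence", "var-EoS quintessence", "brane-world", "Chaplygin gas"]
aic = np.array([182.1, 183.9, 185.2, 186.0])      # hypothetical AIC values

delta = aic - aic.min()                           # Delta_i = AIC_i - AIC_min
w = np.exp(-delta / 2) / np.exp(-delta / 2).sum() # Akaike weights
for m, wi in zip(models, w):
    print(f"{m:22s} w = {wi:.2f}")
```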

  19. Towards a universal competitive intelligence process model

    Directory of Open Access Journals (Sweden)

    Rene Pellissier

    2013-07-01

    Full Text Available Background: Competitive intelligence (CI) provides actionable intelligence, which gives enterprises a competitive edge. However, without a proper process, it is difficult to develop actionable intelligence. There are disagreements about how the CI process should be structured. For CI professionals to focus on producing actionable intelligence, and to do so with simplicity, they need a common CI process model. Objectives: The purpose of this research is to review the current literature on CI with the aim of identifying and analysing CI process models, and finally to propose a universal CI process model. Method: The study was qualitative in nature and content analysis was conducted on all identified sources establishing and analysing CI process models. To identify relevant literature, academic databases and search engines were used. Moreover, a review of references in related studies led to more relevant sources, the references of which were further reviewed and analysed. To ensure reliability, only peer-reviewed articles were used. Results: The findings reveal that the majority of scholars view the CI process as a cycle of interrelated phases in which the output of one phase is the input of the next. Conclusion: The CI process is a cycle of interrelated phases influenced by the following factors: decision makers, process and structure, organisational awareness and culture, and feedback.
