WorldWideScience

Sample records for model iu algorithms

  1. Cloud Model Bat Algorithm

    OpenAIRE

    Yongquan Zhou; Jian Xie; Liangliang Li; Mingzhi Ma

    2014-01-01

    Bat algorithm (BA) is a novel stochastic global optimization algorithm. Cloud model is an effective tool in transforming between qualitative concepts and their quantitative representation. Based on the bat echolocation mechanism and excellent characteristics of cloud model on uncertainty knowledge representation, a new cloud model bat algorithm (CBA) is proposed. This paper focuses on remodeling echolocation model based on living and preying characteristics of bats, utilizing the transformati...

  2. Cloud model bat algorithm.

    Science.gov (United States)

    Zhou, Yongquan; Xie, Jian; Li, Liangliang; Ma, Mingzhi

    2014-01-01

    Bat algorithm (BA) is a novel stochastic global optimization algorithm. Cloud model is an effective tool in transforming between qualitative concepts and their quantitative representation. Based on the bat echolocation mechanism and excellent characteristics of cloud model on uncertainty knowledge representation, a new cloud model bat algorithm (CBA) is proposed. This paper focuses on remodeling echolocation model based on living and preying characteristics of bats, utilizing the transformation theory of cloud model to depict the qualitative concept: "bats approach their prey." Furthermore, Lévy flight mode and population information communication mechanism of bats are introduced to balance the advantage between exploration and exploitation. The simulation results show that the cloud model bat algorithm has good performance on functions optimization.
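
    As a concrete illustration of the mechanics this record describes, here is a minimal Python sketch of the standard bat algorithm with a Lévy-flight local walk around the global best, which is the mechanism CBA builds on. All parameter values, bounds, and the Mantegna recipe for Lévy step lengths are illustrative assumptions, not the authors' cloud-model variant.

      import numpy as np
      from math import gamma, sin, pi

      def levy_step(dim, beta=1.5):
          # Mantegna's recipe for heavy-tailed Levy-flight step lengths.
          sigma = (gamma(1 + beta) * sin(pi * beta / 2) /
                   (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
          u = np.random.normal(0.0, sigma, dim)
          v = np.random.normal(0.0, 1.0, dim)
          return u / np.abs(v) ** (1 / beta)

      def bat_algorithm(f, dim, n_bats=20, iters=500, f_min=0.0, f_max=2.0,
                        alpha=0.9, gam=0.9, lb=-10.0, ub=10.0):
          x = np.random.uniform(lb, ub, (n_bats, dim))   # bat positions
          v = np.zeros((n_bats, dim))                    # bat velocities
          loud = np.ones(n_bats)                         # loudness A_i
          rate = np.zeros(n_bats)                        # pulse emission rate r_i
          fit = np.array([f(xi) for xi in x])
          g = int(fit.argmin())
          best, best_fit = x[g].copy(), fit[g]
          for t in range(1, iters + 1):
              for i in range(n_bats):
                  freq = f_min + (f_max - f_min) * np.random.rand()  # echolocation frequency
                  v[i] += (x[i] - best) * freq
                  cand = x[i] + v[i]
                  if np.random.rand() > rate[i]:
                      # local Levy-flight walk around the current global best
                      cand = best + 0.01 * levy_step(dim)
                  cand = np.clip(cand, lb, ub)
                  fc = f(cand)
                  if fc <= fit[i] and np.random.rand() < loud[i]:
                      x[i], fit[i] = cand, fc
                      loud[i] *= alpha                   # bat gets quieter near its prey
                      rate[i] = 1.0 - np.exp(-gam * t)   # and emits pulses more often
                  if fit[i] < best_fit:
                      best, best_fit = x[i].copy(), fit[i]
          return best, best_fit

      # Example: minimize the 5-dimensional sphere function.
      best_x, best_f = bat_algorithm(lambda z: float(np.sum(z ** 2)), dim=5)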

  3. Cloud Model Bat Algorithm

    Directory of Open Access Journals (Sweden)

    Yongquan Zhou

    2014-01-01

    Full Text Available Bat algorithm (BA) is a novel stochastic global optimization algorithm. Cloud model is an effective tool in transforming between qualitative concepts and their quantitative representation. Based on the bat echolocation mechanism and excellent characteristics of cloud model on uncertainty knowledge representation, a new cloud model bat algorithm (CBA) is proposed. This paper focuses on remodeling echolocation model based on living and preying characteristics of bats, utilizing the transformation theory of cloud model to depict the qualitative concept: “bats approach their prey.” Furthermore, Lévy flight mode and population information communication mechanism of bats are introduced to balance the advantage between exploration and exploitation. The simulation results show that the cloud model bat algorithm has good performance on functions optimization.

  4. Multiagent scheduling models and algorithms

    CERN Document Server

    Agnetis, Alessandro; Gawiejnowicz, Stanisław; Pacciarelli, Dario; Soukhal, Ameur

    2014-01-01

    This book presents multi-agent scheduling models in which subsets of jobs sharing the same resources are evaluated by different criteria. It discusses complexity results, approximation schemes, heuristics and exact algorithms.

  5. Parallel Algorithms for Model Checking

    NARCIS (Netherlands)

    van de Pol, Jaco; Mousavi, Mohammad Reza; Sgall, Jiri

    2017-01-01

    Model checking is an automated verification procedure, which checks that a model of a system satisfies certain properties. These properties are typically expressed in some temporal logic, like LTL and CTL. Algorithms for LTL model checking (linear time logic) are based on automata theory and graph

  6. Algorithmic Issues in Modeling Motion

    DEFF Research Database (Denmark)

    Agarwal, P. K; Guibas, L. J; Edelsbrunner, H.

    2003-01-01

    This article is a survey of research areas in which motion plays a pivotal role. The aim of the article is to review current approaches to modeling motion together with related data structures and algorithms, and to summarize the challenges that lie ahead in producing a more unified theory...

  7. ICRH studies in TJ-IU torsatron

    International Nuclear Information System (INIS)

    Castejon, F.; Longinov, A.V.; Rodriguez R, L.

    1993-01-01

    Preliminary studies for Ion Cyclotron Resonance Heating (ICRH) in the frequency range f=3-150 MHz are presented for the TJ-IU torsatron. This wide range implies the use of two different theoretical models: the first is valid at high frequency, where the WKB approximation is applicable, and the second solves the full wave equation in one dimension. The high-frequency calculations have been made using a ray-tracing code, taking into account the 3-D inhomogeneity of the magnetic field and plasma. The results obtained in this case are presented in the first paper of this report, the most important being the criterion to avoid Fast Wave (FW)-Slow Wave (SW) coupling at the Lower Hybrid Resonance near the plasma edge, and the existence of so-called Localized Modes. For the low-frequency range the wavelength is of the order of the plasma radius; therefore, the WKB approximation cannot be used. In this case a 1-D model which disregards toroidal effects is used to study the main available heating scenarios, which are presented in the second work of this report. The studies are made for hydrogen, deuterium and mixed plasmas, with and without an He3 minority. Finally, the antenna designs to reach these several scenarios are presented in the third paper. Two different antenna models are provided for SW excitation, one of the current type and the other of the potential type. A third antenna, designed to excite the FW, is similar to the current-type antenna for SW but rotated by 90 degrees

  8. ICRH studies in TJ-IU torsatron

    International Nuclear Information System (INIS)

    Castejon, F.

    1993-01-01

    Preliminary studies for Ion Cyclotron Resonance Heating (ICRH) in the frequency range f=3-150 MHz are presented for the TJ-IU torsatron. This wide range implies the use of two different theoretical models: the first is valid at high frequency, where the WKB approximation is applicable, and the second solves the full wave equation in one dimension. The high-frequency calculations have been made using a ray-tracing code, taking into account the 3-D inhomogeneity of the magnetic field and plasma. The results obtained in this case are presented in the first paper of this report, the most important being the criterion to avoid Fast Wave (FW)-Slow Wave (SW) coupling at the Lower Hybrid Resonance near the plasma edge, and the existence of so-called Localized Modes. For the low-frequency range the wavelength is of the order of the plasma radius; therefore, the WKB approximation cannot be used. In this case a 1-D model which disregards toroidal effects is used to study the main available heating scenarios, which are presented in the second work of this report. The studies are made for hydrogen, deuterium and mixed plasmas, with and without an He3 minority. Finally, the antenna designs to reach these several scenarios are presented in the third paper. Two different antenna models are provided for SW excitation, one of the current type and the other of the potential type. A third antenna, designed to excite the FW, is similar to the current-type antenna for SW but rotated by 90 degrees. (Author) 11 refs

  9. ICRH studies in TJ-IU torsatron

    Energy Technology Data Exchange (ETDEWEB)

    Castejon, F.

    1993-07-01

    Preliminary studies for Ion Cyclotron Resonance Heating (ICRH) in the frequency range f=3-150 MHz are presented for the TJ-IU torsatron. This wide range implies the use of two different theoretical models: the first is valid at high frequency, where the WKB approximation is applicable, and the second solves the full wave equation in one dimension. The high-frequency calculations have been made using a ray-tracing code, taking into account the 3-D inhomogeneity of the magnetic field and plasma. The results obtained in this case are presented in the first paper of this report, the most important being the criterion to avoid Fast Wave (FW)-Slow Wave (SW) coupling at the Lower Hybrid Resonance near the plasma edge, and the existence of so-called Localized Modes. For the low-frequency range the wavelength is of the order of the plasma radius; therefore, the WKB approximation cannot be used. In this case a 1-D model which disregards toroidal effects is used to study the main available heating scenarios, which are presented in the second work of this report. The studies are made for hydrogen, deuterium and mixed plasmas, with and without an He3 minority. Finally, the antenna designs to reach these several scenarios are presented in the third paper. Two different antenna models are provided for SW excitation, one of the current type and the other of the potential type. A third antenna, designed to excite the FW, is similar to the current-type antenna for SW but rotated by 90 degrees. (Author) 11 refs.

  10. Relate@IU>>>Share@IU: A New and Different Computer-Based Communications Paradigm.

    Science.gov (United States)

    Frick, Theodore W.; Roberto, Joseph; Korkmaz, Ali; Oh, Jeong-En; Twal, Riad

    The purpose of this study was to examine problems with the current computer-based electronic communication systems and to initially test and revise a new and different paradigm for e-collaboration, Relate@IU. Understanding the concept of sending links to resources, rather than sending the resource itself, is at the core of how Relate@IU differs…

  11. Complex fluids modeling and algorithms

    CERN Document Server

    Saramito, Pierre

    2016-01-01

    This book presents a comprehensive overview of the modeling of complex fluids, including many common substances, such as toothpaste, hair gel, mayonnaise, liquid foam, cement and blood, which cannot be described by Navier-Stokes equations. It also offers an up-to-date mathematical and numerical analysis of the corresponding equations, as well as several practical numerical algorithms and software solutions for the approximation of the solutions. It discusses industrial (molten plastics, forming process), geophysical (mud flows, volcanic lava, glaciers and snow avalanches), and biological (blood flows, tissues) modeling applications. This book is a valuable resource for undergraduate students and researchers in applied mathematics, mechanical engineering and physics.

  12. Information Dynamics in Networks: Models and Algorithms

    Science.gov (United States)

    2016-09-13

    In this project, we investigated how network structure interplays with higher-level processes in online social networks. Related publication: “A Note on Modeling Retweet Cascades on Twitter,” Workshop on Algorithms and Models for the Web Graph, 09-DEC-15.

  13. Feedback model predictive control by randomized algorithms

    NARCIS (Netherlands)

    Batina, Ivo; Stoorvogel, Antonie Arij; Weiland, Siep

    2001-01-01

    In this paper we present a further development of an algorithm for stochastic disturbance rejection in model predictive control with input constraints based on randomized algorithms. The algorithm presented in our work can solve the problem of stochastic disturbance rejection approximately but with

  14. A Robustly Stabilizing Model Predictive Control Algorithm

    Science.gov (United States)

    Acikmese, A. Behcet; Carson, John M., III

    2007-01-01

    A model predictive control (MPC) algorithm that differs from prior MPC algorithms has been developed for controlling an uncertain nonlinear system. This algorithm guarantees the resolvability of an associated finite-horizon optimal-control problem in a receding-horizon implementation.

  15. Modeling and Engineering Algorithms for Mobile Data

    DEFF Research Database (Denmark)

    Blunck, Henrik; Hinrichs, Klaus; Sondern, Joëlle

    2006-01-01

    In this paper, we present an object-oriented approach to modeling mobile data and algorithms operating on such data. Our model is general enough to capture any kind of continuous motion while at the same time allowing for encompassing algorithms optimized for specific types of motion. Such motion...

  16. Algorithms to solve the Sutherland model

    OpenAIRE

    Langmann, Edwin

    2001-01-01

    We give a self-contained presentation and comparison of two different algorithms to explicitly solve quantum many-body models of indistinguishable particles moving on a circle and interacting with two-body potentials of $1/\sin^2$-type. The first algorithm is due to Sutherland and well-known; the second one is a limiting case of a novel algorithm to solve the elliptic generalization of the Sutherland model. These two algorithms are different in several details. We show that they are equivalen...

  17. LCD motion blur: modeling, analysis, and algorithm.

    Science.gov (United States)

    Chan, Stanley H; Nguyen, Truong Q

    2011-08-01

    Liquid crystal display (LCD) devices are well known for their slow responses due to the physical limitations of liquid crystals. Therefore, fast-moving objects in a scene are often perceived as blurred. This effect is known as LCD motion blur. In order to reduce LCD motion blur, an accurate LCD model and an efficient deblurring algorithm are needed. However, existing LCD motion blur models are insufficient to reflect the limitations of the human eye-tracking system. Also, the spatiotemporal equivalence in LCD motion blur models has not been proven directly in the discrete 2-D spatial domain, although it is widely used. There are three main contributions of this paper: modeling, analysis, and algorithm. First, a comprehensive LCD motion blur model is presented, in which human eye-tracking limits are taken into consideration. Second, a complete analysis of spatiotemporal equivalence is provided and verified using real video sequences. Third, an LCD motion blur reduction algorithm is proposed. The proposed algorithm solves an l1-norm regularized least-squares minimization problem using a subgradient projection method. Numerical results show that the proposed algorithm gives higher peak SNR, lower temporal error, and lower spatial error than motion-compensated inverse filtering and the Lucy-Richardson deconvolution algorithm, which are two state-of-the-art LCD deblurring algorithms.
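
    The deblurring step described here reduces to an l1-regularized least-squares problem solved by subgradient projection. A generic sketch of such a solver follows; the matrix A, the data b, the box constraint standing in for a pixel-intensity range, and the step-size rule are placeholder assumptions, not the paper's blur operator or settings.

      import numpy as np

      def l1_regularized_ls(A, b, lam=0.5, iters=500, box=(0.0, 1.0)):
          # Minimize 0.5*||Ax - b||^2 + lam*||x||_1 by projected subgradient descent.
          x = np.zeros(A.shape[1])
          for k in range(1, iters + 1):
              g = A.T @ (A @ x - b) + lam * np.sign(x)  # a subgradient of the objective
              x = x - (1.0 / k) * g                     # diminishing step size
              x = np.clip(x, box[0], box[1])            # projection onto the box
          return x

      # Toy usage: recover a sparse signal in [0, 1] from noisy measurements.
      rng = np.random.default_rng(0)
      A = rng.normal(size=(80, 40))
      x_true = np.zeros(40)
      x_true[[3, 17, 29]] = [0.9, 0.5, 0.7]
      b = A @ x_true + 0.01 * rng.normal(size=80)
      x_hat = l1_regularized_ls(A, b)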

  18. Model Checking Algorithms for CTMDPs

    DEFF Research Database (Denmark)

    Buchholz, Peter; Hahn, Ernst Moritz; Hermanns, Holger

    2011-01-01

    Continuous Stochastic Logic (CSL) can be interpreted over continuous-time Markov decision processes (CTMDPs) to specify quantitative properties of stochastic systems that allow some external control. Model checking CSL formulae over CTMDPs then requires the computation of optimal control strategies...

  19. Rethinking exchange market models as optimization algorithms

    Science.gov (United States)

    Luquini, Evandro; Omar, Nizam

    2018-02-01

    The exchange market model has mainly been used to study the inequality problem. Although the problem of inequality in human society is very important, the dynamics of exchange market models up to the stationary state, and their capability of ranking individuals, are interesting in themselves. This study considers the hypothesis that the exchange market model can be understood as an optimization procedure. We present herein the implications for algorithmic optimization and also the possibility of a new family of exchange market models.
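
    For readers unfamiliar with this model class, a minimal kinetic wealth-exchange simulation shows the relaxation to a stationary state and the ranking behaviour mentioned above. The uniform-split exchange rule below is one common variant from the econophysics literature, not necessarily the one the authors study.

      import numpy as np

      def exchange_market(n_agents=1000, steps=100_000, seed=1):
          # Random pairwise exchange: two agents pool their wealth and split it
          # at a uniform random fraction; total wealth is conserved.
          rng = np.random.default_rng(seed)
          w = np.ones(n_agents)                      # everyone starts equal
          for _ in range(steps):
              i, j = rng.integers(0, n_agents, size=2)
              if i == j:
                  continue
              pool = w[i] + w[j]
              eps = rng.random()
              w[i], w[j] = eps * pool, (1.0 - eps) * pool
          return np.sort(w)                          # the stationary state ranks the agents

      wealth = exchange_market()
      # After many exchanges the wealth distribution is approximately
      # exponential, and sorting it yields the ranking of individuals.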

  20. Fuzzy audit risk modeling algorithm

    Directory of Open Access Journals (Sweden)

    Zohreh Hajihaa

    2011-07-01

    Full Text Available Fuzzy logic has created suitable mathematics for making decisions in uncertain environments, including professional judgments. One such situation is the assessment of auditee risks. During recent years, risk-based audit (RBA) has been regarded as one of the main tools to fight fraud. The main issue in RBA is to determine the overall audit risk an auditor accepts, which impacts the efficiency of an audit. The primary objective of this research is to redesign the audit risk model (ARM) proposed by auditing standards. The proposed model uses fuzzy inference systems (FIS) based on the judgments of audit experts. The implementation of the proposed fuzzy technique uses triangular fuzzy numbers to express the inputs, and the Mamdani method with center-of-gravity defuzzification is incorporated. The proposed model uses three FISs for audit, inherent and control risks, with five levels of linguistic variables for the outputs. The FISs include 25, 25 and 81 if-then rules, respectively, and Iranian audit experts confirmed all the rules.
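
    The ingredients named here (triangular fuzzy numbers, Mamdani inference, center-of-gravity defuzzification) can be illustrated with a two-rule toy FIS. The rule base, membership parameters and the 0-1 risk scale are invented for illustration; the paper's FISs use 25, 25 and 81 rules.

      import numpy as np

      def tri(x, a, b, c):
          # Triangular membership function with support [a, c] and peak at b.
          x = np.asarray(x, dtype=float)
          return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

      def mamdani_risk(inherent, control, x=np.linspace(0.0, 1.0, 1001)):
          # Degrees to which the crisp inputs satisfy "risk is high".
          inh_high = float(tri(inherent, 0.4, 1.0, 1.6))
          ctl_high = float(tri(control, 0.4, 1.0, 1.6))
          w = min(inh_high, ctl_high)                    # AND = min (Mamdani)
          # Rule 1: IF inherent risk high AND control risk high THEN audit risk high.
          r1 = np.minimum(w, tri(x, 0.5, 1.0, 1.5))
          # Rule 2: otherwise audit risk low (complementary firing strength).
          r2 = np.minimum(1.0 - w, tri(x, -0.5, 0.0, 0.5))
          agg = np.maximum(r1, r2)                       # Mamdani max-aggregation
          return float(np.sum(x * agg) / np.sum(agg))    # center of gravity

      print(mamdani_risk(inherent=0.8, control=0.7))     # high inputs -> high risk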

  1. Model Checking Algorithms for Markov Reward Models

    NARCIS (Netherlands)

    Cloth, Lucia

    2006-01-01

    Model checking Markov reward models unites two different approaches of model-based system validation. On the one hand, Markov reward models have a long tradition in model-based performance and dependability evaluation. On the other hand, a formal method like model checking allows for the precise

  2. Worm algorithm for the CP^(N−1) model

    Directory of Open Access Journals (Sweden)

    Tobias Rindlisbacher

    2017-05-01

    Full Text Available The CP^(N−1) model in 2D is an interesting toy model for 4D QCD as it possesses confinement, asymptotic freedom and a non-trivial vacuum structure. Due to the lower dimensionality and the absence of fermions, the computational cost for simulating 2D CP^(N−1) on the lattice is much lower than that for simulating 4D QCD. However, to our knowledge, no efficient algorithm for simulating the lattice CP^(N−1) model for N>2 which also works at finite density has been tested so far. To this end we propose a new type of worm algorithm which is appropriate to simulate the lattice CP^(N−1) model in a dual, flux-variable-based representation, in which the introduction of a chemical potential does not give rise to any complications. In addition to the usual worm moves where a defect is just moved from one lattice site to the next, our algorithm additionally allows for worm-type moves in the internal variable space of single links, which accelerates the Monte Carlo evolution. We use our algorithm to compare the two popular CP^(N−1) lattice actions and exhibit marked differences in their approach to the continuum limit.

  3. Algorithms and Models for the Web Graph

    NARCIS (Netherlands)

    Gleich, David F.; Komjathy, Julia; Litvak, Nelli

    2015-01-01

    This volume contains the papers presented at WAW2015, the 12th Workshop on Algorithms and Models for the Web-Graph held during December 10–11, 2015, in Eindhoven. There were 24 submissions. Each submission was reviewed by at least one, and on average two, Program Committee members. The committee

  4. Model based development of engine control algorithms

    NARCIS (Netherlands)

    Dekker, H.J.; Sturm, W.L.

    1996-01-01

    Model based development of engine control systems has several advantages. The development time and costs are strongly reduced because much of the development and optimization work is carried out by simulating both engine and control system. After optimizing the control algorithm it can be executed

  5. Optimization in engineering models and algorithms

    CERN Document Server

    Sioshansi, Ramteen

    2017-01-01

    This textbook covers the fundamentals of optimization, including linear, mixed-integer linear, nonlinear, and dynamic optimization techniques, with a clear engineering focus. It carefully describes classical optimization models and algorithms using an engineering problem-solving perspective, and emphasizes modeling issues using many real-world examples related to a variety of application areas. Providing an appropriate blend of practical applications and optimization theory makes the text useful to both practitioners and students, and gives the reader a good sense of the power of optimization and the potential difficulties in applying optimization to modeling real-world systems. The book is intended for undergraduate and graduate-level teaching in industrial engineering and other engineering specialties. It is also of use to industry practitioners, due to the inclusion of real-world applications, opening the door to advanced courses on both modeling and algorithm development within the industrial engineering ...

  6. Modeling of Nonlinear Systems using Genetic Algorithm

    Science.gov (United States)

    Hayashi, Kayoko; Yamamoto, Toru; Kawada, Kazuo

    In this paper, a new modeling scheme using a Genetic Algorithm (GA) is proposed. The GA is an evolutionary computational method that simulates the mechanisms of heredity and evolution of living things, and it is utilized in optimization and in searching for optimized solutions. Most process systems have nonlinearities, so it is necessary to model such systems accurately. However, it is difficult to construct a suitable model for nonlinear systems, because most of them have a complex structure. Therefore, the newly proposed modeling method for nonlinear systems uses a GA. According to the proposed scheme, the optimal structure and parameters of the nonlinear model are generated automatically.
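
    A minimal sketch of the GA machinery described here, evolving only the parameters of an assumed fixed nonlinear structure (the paper also evolves the structure itself; the model form, selection scheme and mutation rates below are illustrative assumptions):

      import numpy as np

      rng = np.random.default_rng(0)

      def model(theta, u):
          # Assumed fixed nonlinear structure; only its parameters are evolved.
          a, b, c = theta
          return a * u + b * u ** 2 + c * np.tanh(u)

      def fitness(theta, u, y):
          return -np.mean((model(theta, u) - y) ** 2)   # negative MSE, higher is better

      def ga(u, y, pop_size=60, gens=200, sigma=0.3):
          pop = rng.normal(0.0, 1.0, (pop_size, 3))
          for _ in range(gens):
              fit = np.array([fitness(ind, u, y) for ind in pop])
              elite = pop[np.argsort(fit)[::-1][: pop_size // 2]]   # truncation selection
              pa = elite[rng.integers(0, len(elite), pop_size)]
              pb = elite[rng.integers(0, len(elite), pop_size)]
              mask = rng.random((pop_size, 3)) < 0.5                # uniform crossover
              pop = np.where(mask, pa, pb)
              hits = rng.random((pop_size, 3)) < 0.2                # sparse Gaussian mutation
              pop = pop + hits * rng.normal(0.0, sigma, (pop_size, 3))
              pop[0] = elite[0]                                     # elitism
          fit = np.array([fitness(ind, u, y) for ind in pop])
          return pop[fit.argmax()]

      # Toy identification data from a "true" system of the same structure.
      u = np.linspace(-2.0, 2.0, 100)
      y = 0.5 * u + 0.2 * u ** 2 + 1.5 * np.tanh(u) + 0.01 * rng.normal(size=u.size)
      print(ga(u, y))   # should approach (0.5, 0.2, 1.5)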

  7. Markov chains models, algorithms and applications

    CERN Document Server

    Ching, Wai-Ki; Ng, Michael K; Siu, Tak-Kuen

    2013-01-01

    This new edition of Markov Chains: Models, Algorithms and Applications has been completely reformatted as a text, complete with end-of-chapter exercises, a new focus on management science, new applications of the models, and new examples with applications in financial risk management and modeling of financial data.This book consists of eight chapters.  Chapter 1 gives a brief introduction to the classical theory on both discrete and continuous time Markov chains. The relationship between Markov chains of finite states and matrix theory will also be highlighted. Some classical iterative methods

  8. Absorption kinetics of two highly concentrated preparations of growth hormone: 12 IU/ml compared to 56 IU/ml

    DEFF Research Database (Denmark)

    Laursen, Torben; Susgaard, Søren; Jensen, Flemming Steen

    1994-01-01

    The purpose of this study was to compare the relative bioavailability of two highly concentrated (12 IU/ml versus 56 IU/ml) formulations of biosynthetic human growth hormone administered subcutaneously. After pretreatment with growth hormone for at least four weeks, nine growth-hormone-deficient patients with a mean age of 26.2 years (range 17-43) were studied twice in a randomized design, the two studies being separated by at least one week. At the start of each study period (7 p.m.), growth hormone was injected subcutaneously in a dosage of 3 IU/m2. The 12 IU/ml preparation of growth hormone was administered on one occasion...

  9. Modelling Evolutionary Algorithms with Stochastic Differential Equations.

    Science.gov (United States)

    Heredia, Jorge Pérez

    2017-11-20

    There has been renewed interest in modelling the behaviour of evolutionary algorithms (EAs) by more traditional mathematical objects, such as ordinary differential equations or Markov chains. The advantage is that the analysis becomes greatly facilitated due to the existence of well established methods. However, this typically comes at the cost of disregarding information about the process. Here, we introduce the use of stochastic differential equations (SDEs) for the study of EAs. SDEs can produce simple analytical results for the dynamics of stochastic processes, unlike Markov chains which can produce rigorous but unwieldy expressions about the dynamics. On the other hand, unlike ordinary differential equations (ODEs), they do not discard information about the stochasticity of the process. We show that these are especially suitable for the analysis of fixed budget scenarios and present analogues of the additive and multiplicative drift theorems from runtime analysis. In addition, we derive a new more general multiplicative drift theorem that also covers non-elitist EAs. This theorem simultaneously allows for positive and negative results, providing information on the algorithm's progress even when the problem cannot be optimised efficiently. Finally, we provide results for some well-known heuristics namely Random Walk (RW), Random Local Search (RLS), the (1+1) EA, the Metropolis Algorithm (MA), and the Strong Selection Weak Mutation (SSWM) algorithm.
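
    For orientation, the classical discrete-time multiplicative drift theorem that the abstract's SDE analogue generalizes can be stated as follows (quoted from the runtime-analysis literature, e.g. Doerr, Johannsen and Winzen, not from this paper):

      % Multiplicative drift, classical discrete-time form.
      Let $(X_t)_{t \ge 0}$ be a stochastic process on $\{0\} \cup [x_{\min}, \infty)$,
      $x_{\min} > 0$, and let $T = \min\{ t : X_t = 0 \}$ be the optimisation time.
      If there is a $\delta > 0$ with
      \[
        \mathbb{E}[\, X_t - X_{t+1} \mid X_t \,] \;\ge\; \delta X_t
        \qquad \text{whenever } X_t > 0,
      \]
      then
      \[
        \mathbb{E}[\, T \mid X_0 \,] \;\le\; \frac{1 + \ln( X_0 / x_{\min} )}{\delta}.
      \]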

  10. Sparse modeling theory, algorithms, and applications

    CERN Document Server

    Rish, Irina

    2014-01-01

    ""A comprehensive, clear, and well-articulated book on sparse modeling. This book will stand as a prime reference to the research community for many years to come.""-Ricardo Vilalta, Department of Computer Science, University of Houston""This book provides a modern introduction to sparse methods for machine learning and signal processing, with a comprehensive treatment of both theory and algorithms. Sparse Modeling is an ideal book for a first-year graduate course.""-Francis Bach, INRIA - École Normale Supřieure, Paris

  11. Link mining models, algorithms, and applications

    CERN Document Server

    Yu, Philip S; Faloutsos, Christos

    2010-01-01

    This book presents in-depth surveys and systematic discussions on models, algorithms and applications for link mining. Link mining is an important field of data mining. Traditional data mining focuses on 'flat' data in which each data object is represented as a fixed-length attribute vector. However, many real-world data sets are much richer in structure, involving objects of multiple types that are related to each other. Hence, recently link mining has become an emerging field of data mining, which has a high impact in various important applications such as text mining, social network analysi

  12. Genetic Algorithms Principles Towards Hidden Markov Model

    Directory of Open Access Journals (Sweden)

    Nabil M. Hewahi

    2011-10-01

    Full Text Available In this paper we propose a general approach based on Genetic Algorithms (GAs) to evolve Hidden Markov Models (HMMs). The problem appears when experts assign probability values for an HMM using only some limited inputs: the assigned probability values might not be accurate for other cases in the same domain. We introduce a GA-based approach to find suitable probability values for the HMM so that it is correct in more cases than those that were used to assign the probability values.
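
    A sketch of the encoding such an approach needs: the HMM probabilities laid out as a flat chromosome, renormalized row-wise so the matrices stay stochastic, with the forward-algorithm log-likelihood as fitness. The GA operators and the fixed uniform initial distribution are illustrative assumptions, not the paper's exact scheme.

      import numpy as np

      rng = np.random.default_rng(0)

      def decode(chrom, n_states, n_symbols):
          # Map a flat positive chromosome to row-stochastic HMM matrices.
          cut = n_states * n_states
          A = chrom[:cut].reshape(n_states, n_states)
          B = chrom[cut:].reshape(n_states, n_symbols)
          return A / A.sum(axis=1, keepdims=True), B / B.sum(axis=1, keepdims=True)

      def log_likelihood(obs, A, B, pi):
          # Scaled forward algorithm for a discrete-output HMM.
          alpha = pi * B[:, obs[0]]
          ll = np.log(alpha.sum())
          alpha /= alpha.sum()
          for o in obs[1:]:
              alpha = (alpha @ A) * B[:, o]
              ll += np.log(alpha.sum())
              alpha /= alpha.sum()
          return ll

      def evolve(obs, n_states=3, n_symbols=4, pop=50, gens=100):
          pi = np.full(n_states, 1.0 / n_states)   # fixed uniform start (assumption)
          size = n_states * (n_states + n_symbols)
          P = rng.random((pop, size)) + 1e-6       # keep all entries positive
          for _ in range(gens):
              fit = np.array([log_likelihood(obs, *decode(c, n_states, n_symbols), pi)
                              for c in P])
              elite = P[np.argsort(fit)[::-1][: pop // 4]]
              P = elite[rng.integers(0, len(elite), pop)]      # selection + cloning
              P = P * np.exp(rng.normal(0.0, 0.1, P.shape))    # multiplicative mutation
              P[0] = elite[0]                                  # elitism
          fit = np.array([log_likelihood(obs, *decode(c, n_states, n_symbols), pi)
                          for c in P])
          return decode(P[fit.argmax()], n_states, n_symbols)

      A, B = evolve(obs=rng.integers(0, 4, 200))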

  13. SPECIAL LIBRARIES OF FRAGMENTS OF ALGORITHMIC NETWORKS TO AUTOMATE THE DEVELOPMENT OF ALGORITHMIC MODELS

    Directory of Open Access Journals (Sweden)

    V. E. Marley

    2015-01-01

    Full Text Available Summary. The concept of algorithmic models arose from the algorithmic approach, in which the simulated object or phenomenon is represented as a process governed by the strict rules of an algorithm. An algorithmic model is a formalized description of a subject specialist's scenario for the simulated process, whose structure mirrors the causal and temporal relationships between events of the process being modeled, together with all information necessary for its software implementation. Algorithmic networks are used to represent the structure of algorithmic models. They are normally defined as loaded finite directed graphs whose vertices correspond to operators and whose arcs correspond to the variables bound by those operators. The language of algorithmic networks is expressive: the algorithms it can represent cover the class of all random algorithms. Existing systems for automated modeling based on algorithmic networks mainly use operators working with real numbers. Although this reduces their power, it is sufficient for modeling a wide class of problems related to the economy, the environment, transport, and technical processes. The task of modeling the execution of schedules and network diagrams is relevant and useful. Many systems can compute network graphs; however, monitoring based on the analysis of gaps and deadlines in such graphs provides no prediction of schedule execution. The library described here is designed to build such predictive models: from the source data a set of projections is obtained, from which one is chosen and taken as the new plan.

  14. Models and Algorithms for Tracking Target with Coordinated Turn Motion

    Directory of Open Access Journals (Sweden)

    Xianghui Yuan

    2014-01-01

    Full Text Available Tracking a target with coordinated turn (CT) motion is highly dependent on the models and algorithms. First, the widely used models are compared in this paper: the coordinated turn (CT) model with known turn rate, the augmented coordinated turn (ACT) model with Cartesian velocity, the ACT model with polar velocity, the CT model using a kinematic constraint, and the maneuver-centered circular motion model. Then, in the single-model tracking framework, the tracking algorithms for the last four models are compared and suggestions on the choice of models for different practical target tracking problems are given. Finally, in the multiple-model (MM) framework, an algorithm based on the expectation-maximization (EM) algorithm is derived, including both the batch form and the recursive form. Compared with the widely used interacting multiple model (IMM) algorithm, the EM algorithm shows its effectiveness.
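
    The first model in the comparison, the CT model with known turn rate, has a standard closed-form state transition; for reference, in the usual form from the tracking literature (process noise w_k left unspecified; the ACT variants instead append the turn rate ω to the state and estimate it):

      % CT model with known turn rate \omega for s = (x, \dot x, y, \dot y)^\top
      % and sampling period T:
      \[
        s_{k+1} =
        \begin{pmatrix}
          1 & \frac{\sin \omega T}{\omega}     & 0 & -\frac{1 - \cos \omega T}{\omega} \\
          0 & \cos \omega T                    & 0 & -\sin \omega T \\
          0 & \frac{1 - \cos \omega T}{\omega} & 1 & \frac{\sin \omega T}{\omega} \\
          0 & \sin \omega T                    & 0 & \cos \omega T
        \end{pmatrix}
        s_k + w_k .
      \]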

  15. Ischemic postconditioning: experimental models and protocol algorithms.

    Science.gov (United States)

    Skyschally, Andreas; van Caster, Patrick; Iliodromitis, Efstathios K; Schulz, Rainer; Kremastinos, Dimitrios T; Heusch, Gerd

    2009-09-01

    Ischemic postconditioning, a simple mechanical maneuver at the onset of reperfusion, reduces infarct size after ischemia/reperfusion. After its first description in 2003 by Zhao et al., numerous experimental studies have investigated this protective phenomenon. Whereas the underlying mechanisms and signal transduction are not yet understood in detail, infarct size reduction by ischemic postconditioning has been confirmed in all species tested so far, including man. We have now reviewed the literature with a focus on experimental models and protocols, to better understand the determinants of protection by ischemic postconditioning or the lack of it. Only studies with infarct size as an unequivocal endpoint were considered. In all species and models, the duration of index ischemia and the protective protocol algorithm impact the outcome of ischemic postconditioning; gender, age, and myocardial temperature also contribute.

  16. Warehouse Optimization Model Based on Genetic Algorithm

    Directory of Open Access Journals (Sweden)

    Guofeng Qin

    2013-01-01

    Full Text Available This paper takes the Bao Steel logistics automated warehouse system as an example. The premise is to keep the center of gravity of the shelf below half of the shelf height; as a result, the time cost of getting or putting goods on the shelf is reduced, and the distance between goods of the same kind is also reduced. A multiobjective optimization model is constructed and solved with a genetic algorithm, yielding a locally optimal solution. Before optimization, the average time to get or put goods is 4.52996 s, and the average distance between goods of the same kind is 2.35318 m. After optimization, the average time is 4.28859 s, and the average distance is 1.97366 m. From this analysis we conclude that the model can improve the efficiency of cargo storage.

  17. Nonlinear model predictive control theory and algorithms

    CERN Document Server

    Grüne, Lars

    2017-01-01

    This book offers readers a thorough and rigorous introduction to nonlinear model predictive control (NMPC) for discrete-time and sampled-data systems. NMPC schemes with and without stabilizing terminal constraints are detailed, and intuitive examples illustrate the performance of different NMPC variants. NMPC is interpreted as an approximation of infinite-horizon optimal control so that important properties like closed-loop stability, inverse optimality and suboptimality can be derived in a uniform manner. These results are complemented by discussions of feasibility and robustness. An introduction to nonlinear optimal control algorithms yields essential insights into how the nonlinear optimization routine—the core of any nonlinear model predictive controller—works. Accompanying software in MATLAB® and C++ (downloadable from extras.springer.com/), together with an explanatory appendix in the book itself, enables readers to perform computer experiments exploring the possibilities and limitations of NMPC. T...
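
    To make the receding-horizon idea in this blurb concrete: at each sampling instant the controller solves a finite-horizon optimal control problem and applies only the first input. A minimal sketch under assumed toy dynamics, an assumed quadratic cost with a terminal penalty, and simple input bounds (this is not the book's accompanying MATLAB/C++ software):

      import numpy as np
      from scipy.optimize import minimize

      def dynamics(x, u):
          # Assumed discrete-time nonlinear plant (illustrative only).
          return np.array([x[0] + 0.1 * x[1],
                           x[1] + 0.1 * (u - np.sin(x[0]))])

      def horizon_cost(us, x0, N):
          # Finite-horizon quadratic cost; the terminal penalty stands in for
          # a stabilizing terminal constraint.
          x, J = np.array(x0, dtype=float), 0.0
          for k in range(N):
              J += x @ x + 0.1 * us[k] ** 2
              x = dynamics(x, us[k])
          return J + 10.0 * (x @ x)

      def nmpc_step(x0, N=10):
          res = minimize(horizon_cost, np.zeros(N), args=(x0, N),
                         method="SLSQP", bounds=[(-2.0, 2.0)] * N)  # input constraints
          return res.x[0]            # receding horizon: apply only the first input

      # Closed loop: re-solve the horizon problem at every sampling instant.
      x = np.array([1.0, 0.0])
      for _ in range(50):
          x = dynamics(x, nmpc_step(x))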

  18. Adaptive numerical algorithms in space weather modeling

    Science.gov (United States)

    Tóth, Gábor; van der Holst, Bart; Sokolov, Igor V.; De Zeeuw, Darren L.; Gombosi, Tamas I.; Fang, Fang; Manchester, Ward B.; Meng, Xing; Najib, Dalal; Powell, Kenneth G.; Stout, Quentin F.; Glocer, Alex; Ma, Ying-Juan; Opher, Merav

    2012-02-01

    Space weather describes the various processes in the Sun-Earth system that present danger to human health and technology. The goal of space weather forecasting is to provide an opportunity to mitigate these negative effects. Physics-based space weather modeling is characterized by disparate temporal and spatial scales as well as by different relevant physics in different domains. A multi-physics system can be modeled by a software framework comprising several components. Each component corresponds to a physics domain, and each component is represented by one or more numerical models. The publicly available Space Weather Modeling Framework (SWMF) can execute and couple together several components distributed over a parallel machine in a flexible and efficient manner. The framework also allows resolving disparate spatial and temporal scales with independent spatial and temporal discretizations in the various models. Several of the computationally most expensive domains of the framework are modeled by the Block-Adaptive Tree Solarwind Roe-type Upwind Scheme (BATS-R-US) code that can solve various forms of the magnetohydrodynamic (MHD) equations, including Hall, semi-relativistic, multi-species and multi-fluid MHD, anisotropic pressure, radiative transport and heat conduction. Modeling disparate scales within BATS-R-US is achieved by a block-adaptive mesh both in Cartesian and generalized coordinates. Most recently we have created a new core for BATS-R-US: the Block-Adaptive Tree Library (BATL) that provides a general toolkit for creating, load balancing and message passing in a 1, 2 or 3 dimensional block-adaptive grid. We describe the algorithms of BATL and demonstrate its efficiency and scaling properties for various problems. BATS-R-US uses several time-integration schemes to address multiple time-scales: explicit time stepping with fixed or local time steps, partially steady-state evolution, point-implicit, semi-implicit, explicit/implicit, and fully implicit

  19. Adaptive numerical algorithms in space weather modeling

    International Nuclear Information System (INIS)

    Tóth, Gábor; Holst, Bart van der; Sokolov, Igor V.; De Zeeuw, Darren L.; Gombosi, Tamas I.; Fang, Fang; Manchester, Ward B.; Meng Xing; Najib, Dalal; Powell, Kenneth G.; Stout, Quentin F.; Glocer, Alex; Ma, Ying-Juan; Opher, Merav

    2012-01-01

    Space weather describes the various processes in the Sun–Earth system that present danger to human health and technology. The goal of space weather forecasting is to provide an opportunity to mitigate these negative effects. Physics-based space weather modeling is characterized by disparate temporal and spatial scales as well as by different relevant physics in different domains. A multi-physics system can be modeled by a software framework comprising several components. Each component corresponds to a physics domain, and each component is represented by one or more numerical models. The publicly available Space Weather Modeling Framework (SWMF) can execute and couple together several components distributed over a parallel machine in a flexible and efficient manner. The framework also allows resolving disparate spatial and temporal scales with independent spatial and temporal discretizations in the various models. Several of the computationally most expensive domains of the framework are modeled by the Block-Adaptive Tree Solarwind Roe-type Upwind Scheme (BATS-R-US) code that can solve various forms of the magnetohydrodynamic (MHD) equations, including Hall, semi-relativistic, multi-species and multi-fluid MHD, anisotropic pressure, radiative transport and heat conduction. Modeling disparate scales within BATS-R-US is achieved by a block-adaptive mesh both in Cartesian and generalized coordinates. Most recently we have created a new core for BATS-R-US: the Block-Adaptive Tree Library (BATL) that provides a general toolkit for creating, load balancing and message passing in a 1, 2 or 3 dimensional block-adaptive grid. We describe the algorithms of BATL and demonstrate its efficiency and scaling properties for various problems. BATS-R-US uses several time-integration schemes to address multiple time-scales: explicit time stepping with fixed or local time steps, partially steady-state evolution, point-implicit, semi-implicit, explicit/implicit, and fully implicit

  20. Adaptive Numerical Algorithms in Space Weather Modeling

    Science.gov (United States)

    Toth, Gabor; vanderHolst, Bart; Sokolov, Igor V.; DeZeeuw, Darren; Gombosi, Tamas I.; Fang, Fang; Manchester, Ward B.; Meng, Xing; Najib, Dalal; Powell, Kenneth G.; et al.

    2010-01-01

    Space weather describes the various processes in the Sun-Earth system that present danger to human health and technology. The goal of space weather forecasting is to provide an opportunity to mitigate these negative effects. Physics-based space weather modeling is characterized by disparate temporal and spatial scales as well as by different physics in different domains. A multi-physics system can be modeled by a software framework comprising of several components. Each component corresponds to a physics domain, and each component is represented by one or more numerical models. The publicly available Space Weather Modeling Framework (SWMF) can execute and couple together several components distributed over a parallel machine in a flexible and efficient manner. The framework also allows resolving disparate spatial and temporal scales with independent spatial and temporal discretizations in the various models. Several of the computationally most expensive domains of the framework are modeled by the Block-Adaptive Tree Solar wind Roe Upwind Scheme (BATS-R-US) code that can solve various forms of the magnetohydrodynamics (MHD) equations, including Hall, semi-relativistic, multi-species and multi-fluid MHD, anisotropic pressure, radiative transport and heat conduction. Modeling disparate scales within BATS-R-US is achieved by a block-adaptive mesh both in Cartesian and generalized coordinates. Most recently we have created a new core for BATS-R-US: the Block-Adaptive Tree Library (BATL) that provides a general toolkit for creating, load balancing and message passing in a 1, 2 or 3 dimensional block-adaptive grid. We describe the algorithms of BATL and demonstrate its efficiency and scaling properties for various problems. BATS-R-US uses several time-integration schemes to address multiple time-scales: explicit time stepping with fixed or local time steps, partially steady-state evolution, point-implicit, semi-implicit, explicit/implicit, and fully implicit numerical

  1. Engineering of Algorithms for Hidden Markov models and Tree Distances

    DEFF Research Database (Denmark)

    Sand, Andreas

    speed up all the classical algorithms for analyses and training of hidden Markov models. I also show how two particularly important algorithms, the forward algorithm and the Viterbi algorithm, can be accelerated through a reformulation of the algorithms and a somewhat more complicated parallelization. ... Lastly, I show how hidden Markov models can be trained orders of magnitude faster on a given input by rethinking the forward algorithm such that it can automatically adapt itself to the input. Together, these optimizations have enabled us to perform analysis of full genomes in a few minutes and thereby...

  2. A genetic algorithm for solving supply chain network design model

    Science.gov (United States)

    Firoozi, Z.; Ismail, N.; Ariafar, S. H.; Tang, S. H.; Ariffin, M. K. M. A.

    2013-09-01

    Network design is by nature costly, and optimization models play a significant role in reducing the unnecessary cost components of a distribution network. This study proposes a genetic algorithm to solve a distribution network design model. The structure of the chromosome in the proposed algorithm is defined in a novel way that, in addition to producing feasible solutions, also reduces the computational complexity of the algorithm. Computational results are presented to show the algorithm's performance.

  3. Genetic Algorithm Approaches to Prebiotic Chemistry Modeling

    Science.gov (United States)

    Lohn, Jason; Colombano, Silvano

    1997-01-01

    We model an artificial chemistry comprised of interacting polymers by specifying two initial conditions: a distribution of polymers and a fixed set of reversible catalytic reactions. A genetic algorithm is used to find a set of reactions that exhibit a desired dynamical behavior. Such a technique is useful because it allows an investigator to determine whether a specific pattern of dynamics can be produced, and if it can, the reaction network found can be then analyzed. We present our results in the context of studying simplified chemical dynamics in theorized protocells - hypothesized precursors of the first living organisms. Our results show that given a small sample of plausible protocell reaction dynamics, catalytic reaction sets can be found. We present cases where this is not possible and also analyze the evolved reaction sets.

  4. Efficient Parallel Algorithms for Landscape Evolution Modelling

    Science.gov (United States)

    Moresi, L. N.; Mather, B.; Beucher, R.

    2017-12-01

    Landscape erosion and the deposition of sediments by river systems are strongly controlled by topography, rainfall patterns, and the susceptibility of the basement to the action of running water. It is well understood that each of these processes depends on the others, for example: topography results from active tectonic processes; deformation, metamorphosis and exhumation alter the competence of the basement; rainfall patterns depend on topography; uplift and subsidence in response to tectonic stress can be amplified by erosion and sediment deposition. We typically gain understanding of such coupled systems through forward models which capture the essential interactions of the various components and attempt to parameterise those parts of the individual system that are unresolvable at the scale of the interaction. Here we address the problem of predicting erosion and deposition rates at a continental scale with a resolution of tens to hundreds of metres in a dynamic, Lagrangian framework. This is a typical requirement for a code to interface with a mantle/lithosphere dynamics model, and it demands an efficient, unstructured, parallel implementation. We address this through a very general algorithm that treats all parts of the landscape evolution equations in sparse-matrix form, including those for stream-flow accumulation, dam-filling and catchment determination. This gives us considerable flexibility in developing unstructured, parallel code and in creating a modular package that can be configured by users to work at different temporal and spatial scales; it also has potential advantages in treating the non-linear parts of the problem in a general manner.
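
    The sparse-matrix treatment of stream-flow accumulation can be made concrete: if a sparse matrix D routes each node's discharge to its downstream receiver, the accumulated flow a satisfies (I - D)a = r, where r is the local runoff. A toy sketch follows; the drainage network, the routing rule and the direct solver are illustrative assumptions, not the authors' code.

      import numpy as np
      import scipy.sparse as sp
      from scipy.sparse.linalg import spsolve

      # Tiny synthetic drainage network: node j drains to node receiver[j]
      # (-1 marks the outlet). In a real model this comes from steepest
      # descent on the topography.
      receiver = np.array([1, 2, 5, 4, 5, -1])
      runoff = np.ones(len(receiver))           # local runoff at each node

      edges = [(int(i), j) for j, i in enumerate(receiver) if i >= 0]
      rows, cols = zip(*edges)                  # node j sends its discharge to node i
      D = sp.csc_matrix((np.ones(len(edges)), (rows, cols)),
                        shape=(len(receiver), len(receiver)))

      # Accumulated flow a satisfies a = runoff + D a, i.e. (I - D) a = runoff.
      a = spsolve(sp.identity(len(receiver), format="csc") - D, runoff)
      print(a)   # the outlet (last node) accumulates the whole catchment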

  5. Model order reduction using eigen algorithm

    African Journals Online (AJOL)

    DR OKE

    to use either for design or analysis. Hence, it is ... directly from the Eigen algorithm, while the zeros are determined through the factor division algorithm to obtain the reduced-order system. ... V. Singh, D. Chandra and H. Kar, “Improved Routh-Padé approximants: A computer-aided approach”, IEEE Transactions on Automatic Control ...

  6. Algorithm Development for the Two-Fluid Plasma Model

    National Research Council Canada - National Science Library

    Shumlak, Uri

    2002-01-01

    A preliminary algorithm based on the two-fluid plasma model is developed to investigate the possibility of simulating plasmas with a more physically accurate model than the MHD (magnetohydrodynamic) model...

  7. CAMAC Software for TJ-I and TJ-IU

    International Nuclear Information System (INIS)

    Milligen, B. Ph. van

    1994-01-01

    A user-friendly software package for control of CAMAC data acquisition modules for the TJ-I and TJ-IU experiments at the Asociacion CIEMAT para Fusion has been developed. The CAMAC control software operates in synchronisation with the pre-existing VME-based data acquisition system. It controls the setup of the CAMAC modules and manages the data flow from the taking of data to its storage. Data file management is performed largely automatically. Further, user software is provided for viewing and analysing the data. (Author) 9 refs

  8. Camac Software for TJ-I and TJ-IU

    International Nuclear Information System (INIS)

    Milligen, B. Ph. van.

    1994-01-01

    A user-friendly software package for control of CAMAC data acquisition modules for the TJ-I and TJ-IU experiments at the Asociacion CIEMAT para Fusion has been developed. The CAMAC control software operates in synchronisation with the pre-existing VME-based data acquisition system. It controls the setup of the CAMAC modules and manages the data flow from the taking of data to its storage. Data file management is performed largely automatically. Further, user software is provided for viewing and analysing the data

  9. Efficient Implementation Algorithms for Homogenized Energy Models

    National Research Council Canada - National Science Library

    Braun, Thomas R; Smith, Ralph C

    2005-01-01

    ... for real-time control implementation. In this paper, we develop algorithms employing lookup tables which permit the high speed implementation of formulations which incorporate relaxation mechanisms and electromechanical coupling...

  10. Loop algorithms for quantum simulations of fermion models on lattices

    International Nuclear Information System (INIS)

    Kawashima, N.; Gubernatis, J.E.; Evertz, H.G.

    1994-01-01

    Two cluster algorithms, based on constructing and flipping loops, are presented for world-line quantum Monte Carlo simulations of fermions and are tested on the one-dimensional repulsive Hubbard model. We call these algorithms the loop-flip and loop-exchange algorithms. For these two algorithms and the standard world-line algorithm, we calculated the autocorrelation times for various physical quantities and found that the ordinary world-line algorithm, which uses only local moves, suffers from very long correlation times that make difficult not only the estimate of the error but also the estimate of the average values themselves. These difficulties are especially severe in the low-temperature, large-U regime. In contrast, we find that the new algorithms, when used alone or in combination with themselves and the standard algorithm, can have significantly smaller autocorrelation times, in some cases smaller by three orders of magnitude. The new algorithms, which use nonlocal moves, are discussed from the point of view of a general prescription for developing cluster algorithms. The loop-flip algorithm is also shown to be ergodic and to belong to the grand canonical ensemble. Extensions to other models and higher dimensions are briefly discussed

  11. Fireworks algorithm for mean-VaR/CVaR models

    Science.gov (United States)

    Zhang, Tingting; Liu, Zhifeng

    2017-10-01

    Intelligent algorithms have been widely applied to portfolio optimization problems. In this paper, we introduce a novel intelligent algorithm, the fireworks algorithm, to solve the mean-VaR/CVaR model for the first time. The results show that, compared with the classical genetic algorithm, the fireworks algorithm not only improves the optimization accuracy and speed, but also makes the optimal solution more stable. We repeat our experiments at different confidence levels and different degrees of risk aversion, and the results are robust. This suggests that the fireworks algorithm has more advantages than the genetic algorithm in solving the portfolio optimization problem, and that it is feasible and promising to apply it in this field.
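
    For readers new to the objective being optimized: below is an empirical VaR/CVaR computation and a mean-CVaR fitness of the kind a fireworks or genetic algorithm would search over. The trade-off weight and the scenario data are illustrative assumptions.

      import numpy as np

      def var_cvar(returns, alpha=0.95):
          # Historical VaR and CVaR; losses are negated returns.
          losses = -np.asarray(returns)
          var = np.quantile(losses, alpha)      # loss exceeded with prob. 1 - alpha
          cvar = losses[losses >= var].mean()   # expected loss beyond VaR
          return var, cvar

      def mean_cvar_objective(weights, scenario_returns, alpha=0.95, lam=1.0):
          # Fitness to minimize: -(mean return) + lam * CVaR of the portfolio.
          port = scenario_returns @ weights
          _, cvar = var_cvar(port, alpha)
          return -port.mean() + lam * cvar

      rng = np.random.default_rng(0)
      scenarios = rng.normal(0.001, 0.02, size=(1000, 4))   # 1000 days, 4 assets
      w = np.full(4, 0.25)                                  # candidate portfolio
      print(mean_cvar_objective(w, scenarios))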

  12. Methodology and basic algorithms of the Livermore Economic Modeling System

    Energy Technology Data Exchange (ETDEWEB)

    Bell, R.B.

    1981-03-17

    The methodology and the basic pricing algorithms used in the Livermore Economic Modeling System (EMS) are described. The report explains the derivations of the EMS equations in detail; however, it could also serve as a general introduction to the modeling system. The first part gives a brief but comprehensive explanation of what EMS is and does, and how it does it. The second part examines the basic pricing algorithms currently implemented in EMS. Each algorithm's function is analyzed and a detailed derivation of the actual mathematical expressions used to implement the algorithm is presented. EMS is an evolving modeling system; improvements in existing algorithms are constantly under development and new submodels are being introduced. A snapshot of the standard version of EMS is provided, and areas currently under study and development are considered briefly.

  13. Comparison of parameter estimation algorithms in hydrological modelling

    DEFF Research Database (Denmark)

    Blasone, Roberta-Serena; Madsen, Henrik; Rosbjerg, Dan

    2006-01-01

    for these types of models, although at a more expensive computational cost. The main purpose of this study is to investigate the performance of a global and a local parameter optimization algorithm, respectively, the Shuffled Complex Evolution (SCE) algorithm and the gradient-based Gauss-Marquardt-Levenberg algorithm (implemented in the PEST software), when applied to a steady-state and a transient groundwater model.

  14. Evaluation of models generated via hybrid evolutionary algorithms ...

    African Journals Online (AJOL)

    2016-04-02

    Evaluation of models generated via hybrid evolutionary algorithms for the prediction of Microcystis ... hybrid evolutionary algorithms (HEA) proved to be highly applicable to the hypertrophic reservoirs of South Africa. ... discovered and optimised using a large-scale parallel computational device and relevant software.

  15. Models and algorithms for biomolecules and molecular networks

    CERN Document Server

    DasGupta, Bhaskar

    2016-01-01

    By providing expositions to modeling principles, theories, computational solutions, and open problems, this reference presents a full scope on relevant biological phenomena, modeling frameworks, technical challenges, and algorithms. * Up-to-date developments of structures of biomolecules, systems biology, advanced models, and algorithms * Sampling techniques for estimating evolutionary rates and generating molecular structures * Accurate computation of probability landscape of stochastic networks, solving discrete chemical master equations * End-of-chapter exercises

  16. Algorithms

    Indian Academy of Sciences (India)

    have been found in Vedic Mathematics which are dated much before Euclid's algorithm. A programming language is used to describe an algorithm for execution on a computer. An algorithm expressed using a programming language is called a program. From activities 1-3, we can observe that: • Each activity is a command.

  17. Insertion algorithms for network model database management systems

    Science.gov (United States)

    Mamadolimov, Abdurashid; Khikmat, Saburov

    2017-12-01

    The network model is a database model conceived as a flexible way of representing objects and their relationships. Its distinguishing feature is that the schema, viewed as a graph in which object types are nodes and relationship types are arcs, forms a partial order. When a database is large and query comparisons are expensive, the efficiency requirement for managing algorithms is to minimize the number of query comparisons. We consider the updating operation for network model database management systems and develop a new sequential algorithm for it. We also suggest a distributed version of the algorithm.

  18. An Automatic Registration Algorithm for 3D Maxillofacial Model

    Science.gov (United States)

    Qiu, Luwen; Zhou, Zhongwei; Guo, Jixiang; Lv, Jiancheng

    2016-09-01

    3D image registration aims at aligning two 3D data sets in a common coordinate system, and has been widely used in computer vision, pattern recognition and computer-assisted surgery. One challenging problem in 3D registration is that point-wise correspondences between two point sets are often unknown a priori. In this work, we develop an automatic algorithm for the registration of 3D maxillofacial models, including facial surface models and skull models. Our proposed registration algorithm can achieve a good alignment between partial and whole maxillofacial models in spite of ambiguous matching, which has a potential application in oral and maxillofacial reparative and reconstructive surgery. The proposed algorithm includes three steps: (1) 3D-SIFT feature extraction and FPFH descriptor construction; (2) feature matching using SAC-IA; (3) coarse rigid alignment and refinement by ICP. Experiments on facial surfaces and mandible skull models demonstrate the efficiency and robustness of our algorithm.
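
    Step (3), refinement by ICP, admits a compact self-contained sketch: alternate nearest-neighbour correspondences with the SVD-based (Kabsch) rigid alignment. This is a generic ICP, not the authors' implementation; the toy check at the end is an assumption for demonstration.

      import numpy as np
      from scipy.spatial import cKDTree

      def icp(src, dst, iters=30):
          # Rigid ICP: match each source point to its nearest destination point,
          # then solve for the best rotation/translation via SVD (Kabsch).
          src = src.copy()
          R_tot, t_tot = np.eye(3), np.zeros(3)
          tree = cKDTree(dst)
          for _ in range(iters):
              _, idx = tree.query(src)                # nearest-neighbour correspondences
              m = dst[idx]
              mu_s, mu_m = src.mean(axis=0), m.mean(axis=0)
              H = (src - mu_s).T @ (m - mu_m)         # cross-covariance
              U, _, Vt = np.linalg.svd(H)
              d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflections
              R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
              t = mu_m - R @ mu_s
              src = src @ R.T + t
              R_tot, t_tot = R @ R_tot, R @ t_tot + t # accumulate the transform
          return R_tot, t_tot

      # Toy check: recover a known rotation + translation of a random cloud.
      rng = np.random.default_rng(0)
      P = rng.random((200, 3))
      c, s = np.cos(0.2), np.sin(0.2)
      Rz = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
      R_est, t_est = icp(P, P @ Rz.T + 0.1)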

  19. Algorithmic detectability threshold of the stochastic block model

    Science.gov (United States)

    Kawamoto, Tatsuro

    2018-03-01

    The assumption that the values of model parameters are known or correctly learned, i.e., the Nishimori condition, is one of the requirements for the detectability analysis of the stochastic block model in statistical inference. In practice, however, there is no example demonstrating that we can know the model parameters beforehand, and there is no guarantee that the model parameters can be learned accurately. In this study, we consider the expectation-maximization (EM) algorithm with belief propagation (BP) and derive its algorithmic detectability threshold. Our analysis is not restricted to the community structure but includes general modular structures. Because the algorithm cannot always learn the planted model parameters correctly, the algorithmic detectability threshold is qualitatively different from the one with the Nishimori condition.

  20. Survey of chemically amplified resist models and simulator algorithms

    Science.gov (United States)

    Croffie, Ebo H.; Yuan, Lei; Cheng, Mosong; Neureuther, Andrew R.

    2001-08-01

    Modeling has become an indispensable tool for chemically amplified resist (CAR) evaluation. It has been used extensively to study acid diffusion and its effects on resist image formation. Several commercial and academic simulators have been developed for CAR process simulation. Commercial simulators such as PROLITH (Finle Technologies) and Solid-C (Sigma-C) allow the user to choose between an empirical model and a concentration-dependent diffusion model. The empirical model is faster but not very accurate for 2-dimensional resist simulations; in this case there is a trade-off between the speed of the simulator and the accuracy of the results. An academic simulator such as STORM (U.C. Berkeley) gives the user a choice of different algorithms, including the Fast Imaging 2nd-order finite difference algorithm and the Moving Boundary finite element algorithm. A user interested in simulating volume shrinkage and polymer stress effects during post-exposure bake will need the Moving Boundary algorithm, whereas a user interested in latent image formation without polymer deformation will find the Fast Imaging algorithm more appropriate. The Fast Imaging algorithm is generally faster and requires less computer memory. This choice of algorithm presents a trade-off between speed and the level of detail in resist profile prediction. This paper surveys the different models and simulator algorithms available in the literature, including contributions to the characterization of CAR exposure and post-exposure bake (PEB) processes for different resist systems. Several numerical algorithms and their performance will also be discussed.

  1. A Developed Artificial Bee Colony Algorithm Based on Cloud Model

    Directory of Open Access Journals (Sweden)

    Ye Jin

    2018-04-01

    Full Text Available The Artificial Bee Colony (ABC) algorithm is a bionic intelligent optimization method. The cloud model is an uncertainty conversion model between a qualitative concept T~, expressed in natural language, and its quantitative representation; it integrates probability theory and fuzzy mathematics. A developed ABC algorithm based on the cloud model is proposed to enhance the accuracy of the basic ABC algorithm and avoid getting trapped in local optima, by introducing a new selection mechanism, replacing the onlooker bees’ search formula and changing the scout bees’ updating formula. Experiments on CEC15 show that the new algorithm has a faster convergence speed and higher accuracy than the basic ABC and some cloud-model-based ABC variants.
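
    As a hedged illustration of the cloud-model ingredient, the forward normal cloud generator draws "drops" around an expectation Ex with second-order uncertainty controlled by entropy En and hyper-entropy He; cloud-model ABC variants typically use such drops to perturb food sources. The parameter values below are arbitrary.

      import numpy as np

      def normal_cloud_drops(Ex, En, He, n):
          # Forward normal cloud generator: En_i ~ N(En, He^2), x_i ~ N(Ex, En_i^2).
          En_i = np.random.normal(En, He, size=n)
          return np.random.normal(Ex, np.abs(En_i))

      # e.g. generate candidate solutions around the current best food source
      candidates = normal_cloud_drops(Ex=0.7, En=0.1, He=0.01, n=10)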

  2. Comparison of parameter estimation algorithms in hydrological modelling

    DEFF Research Database (Denmark)

    Blasone, Roberta-Serena; Madsen, Henrik; Rosbjerg, Dan

    2006-01-01

    Local search methods have been applied successfully in calibration of simple groundwater models, but might fail in locating the optimum for models of increased complexity, due to the more complex shape of the response surface. Global search algorithms have been demonstrated to perform well for these types of models, although at a more expensive computational cost. The main purpose of this study is to investigate the performance of a global and a local parameter optimization algorithm, respectively the Shuffled Complex Evolution (SCE) algorithm and the gradient-based Gauss-Marquardt-Levenberg algorithm (implemented in the PEST software), when applied to a steady-state and a transient groundwater model. The results show that PEST can have severe problems in locating the global optimum and in being trapped in local regions of attraction. The global SCE procedure is, in general, more effective...

  3. Testing algorithms for a passenger train braking performance model.

    Science.gov (United States)

    2011-09-01

    "The Federal Railroad Administrations Office of Research and Development funded a project to establish performance model to develop, analyze, and test positive train control (PTC) braking algorithms for passenger train operations. With a good brak...

  4. Quantitative Methods in Supply Chain Management Models and Algorithms

    CERN Document Server

    Christou, Ioannis T

    2012-01-01

    Quantitative Methods in Supply Chain Management presents some of the most important methods and tools available for modeling and solving problems arising in the context of supply chain management. In the context of this book, “solving problems” usually means designing efficient algorithms for obtaining high-quality solutions. The first chapter is an extensive optimization review covering continuous unconstrained and constrained linear and nonlinear optimization algorithms, as well as dynamic programming and discrete optimization exact methods and heuristics. The second chapter presents time-series forecasting methods together with prediction market techniques for demand forecasting of new products and services. The third chapter details models and algorithms for planning and scheduling with an emphasis on production planning and personnel scheduling. The fourth chapter presents deterministic and stochastic models for inventory control with a detailed analysis on periodic review systems and algorithmic dev...

  5. Algorithms

    Indian Academy of Sciences (India)

    algorithms such as synthetic (polynomial) division have been found in Vedic Mathematics, which are dated much before Euclid's algorithm. A programming language ... [Figure 2: symbols used in the flowchart language to represent Assignment, Read and Print.]

  6. Algorithms

    Indian Academy of Sciences (India)

    In the previous articles, we have discussed various common data-structures such as arrays, lists, queues and trees and illustrated the widely used algorithm design paradigm referred to as 'divide-and-conquer'. Although there has been a large effort in realizing efficient algorithms, there are not many universally accepted ...

  7. DiamondTorre Algorithm for High-Performance Wave Modeling

    Directory of Open Access Journals (Sweden)

    Vadim Levchenko

    2016-08-01

    Full Text Available Effective algorithms for the numerical modeling of physical media are discussed. The computation rate of such problems is limited by memory bandwidth if implemented with traditional algorithms. The numerical solution of the wave equation is considered. A finite-difference scheme with a cross stencil and a high order of approximation is used. The DiamondTorre algorithm is constructed with regard to the specifics of the GPGPU’s (general-purpose graphics processing unit) memory hierarchy and parallelism. The advantages of this algorithm are a high level of data localization and the property of asynchrony, which allows one to effectively utilize all levels of GPGPU parallelism. The computational intensity of the algorithm is greater than that of the best traditional algorithms with stepwise synchronization. As a consequence, it becomes possible to overcome the above-mentioned limitation. The algorithm is implemented with CUDA. For the scheme with the second order of approximation, a calculation performance of 50 billion cells per second is achieved. This exceeds the result of the best traditional algorithm by a factor of five.
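
    The DiamondTorre traversal itself is a GPGPU-specific reordering of the sweep; the update it reorders is an ordinary explicit cross-stencil step. A minimal CPU reference of such a step for the 2D wave equation (second-order version, periodic boundaries assumed) might look like:

      import numpy as np

      def wave_step(u_prev, u, c=1.0, dt=0.1, dx=1.0):
          # Explicit leapfrog step: u_next = 2u - u_prev + (c*dt)^2 * Laplacian(u),
          # with the Laplacian from a 5-point cross stencil.
          lap = (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
                 np.roll(u, 1, 1) + np.roll(u, -1, 1) - 4.0 * u) / dx**2
          return 2.0 * u - u_prev + (c * dt)**2 * lap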

  8. Drexel University Shell Model (DUSM) algorithm

    Science.gov (United States)

    Valliéres, Michel; Novoselsky, Akiva

    1994-03-01

    This lecture is devoted to the Drexel University Shell Model (DUSM) code, a new shell-model code based on a separation of the various subspaces in which the single-particle wavefunctions are defined. This is achieved via extensive use of permutation group concepts and a redefinition of the Coefficients of Fractional Parentage (CFP) to include permutation labels. This leads to a modern and efficient approach to the nuclear shell model.

  9. Drexel University Shell Model (DUSM) algorithm

    Energy Technology Data Exchange (ETDEWEB)

    Vallieres, M. (Drexel Univ., Philadelphia, PA (United States). Dept. of Physics and Atmospheric Science); Novoselsky, A. (Hebrew Univ., Jerusalem (Israel). Dept. of Physics)

    1994-03-28

    This lecture is devoted to the Drexel University Shell Model (DUSM) code, a new shell-model code based on a separation of the various subspaces in which the single-particle wavefunctions are defined. This is achieved via extensive use of permutation group concepts and a redefinition of the Coefficients of Fractional Parentage (CFP) to include permutation labels. This leads to a modern and efficient approach to the nuclear shell model. (orig.)

  10. Improved CHAID algorithm for document structure modelling

    Science.gov (United States)

    Belaïd, A.; Moinel, T.; Rangoni, Y.

    2010-01-01

    This paper proposes a technique for the logical labelling of document images. It makes use of a decision-tree based approach to learn and then recognise the logical elements of a page. A state-of-the-art OCR gives the physical features needed by the system. Each block of text is extracted during the layout analysis and raw physical features are collected and stored in the ALTO format. The data-mining method employed here is the "Improved CHi-squared Automatic Interaction Detection" (I-CHAID). The contribution of this work is the insertion of logical rules extracted from the logical layout knowledge to support the decision tree. Two setups have been tested; the first uses one tree per logical element, the second one uses a single tree for all the logical elements we want to recognise. The main system, implemented in Java, coordinates the third-party tools (Omnipage for the OCR part, and SIPINA for the I-CHAID algorithm) using XML and XSL transforms. It was tested on around 1000 documents belonging to the ICPR'04 and ICPR'08 conference proceedings, representing about 16,000 blocks. The final error rate for determining the logical labels (among 9 different ones) is less than 6%.

  11. Immune System Model Calibration by Genetic Algorithm

    NARCIS (Netherlands)

    Presbitero, A.; Krzhizhanovskaya, V.; Mancini, E.; Brands, R.; Sloot, P.

    2016-01-01

    We aim to develop a mathematical model of the human immune system for advanced individualized healthcare, where the medication plan is fine-tuned to fit a patient's conditions through monitored biochemical processes. One of the challenges is calibrating model parameters to satisfy existing experimental

  12. Approximation Algorithms for Model-Based Diagnosis

    NARCIS (Netherlands)

    Feldman, A.B.

    2010-01-01

    Model-based diagnosis is an area of abductive inference that uses a system model, together with observations about system behavior, to isolate sets of faulty components (diagnoses) that explain the observed behavior, according to some minimality criterion. This thesis presents greedy approximation

  13. Stochastic cluster algorithms for discrete Gaussian (SOS) models

    International Nuclear Information System (INIS)

    Evertz, H.G.; Hamburg Univ.; Hasenbusch, M.; Marcu, M.; Tel Aviv Univ.; Pinn, K.; Muenster Univ.; Solomon, S.

    1990-10-01

    We present new Monte Carlo cluster algorithms which eliminate critical slowing down in the simulation of solid-on-solid models. In this letter we focus on the two-dimensional discrete Gaussian model. The algorithms are based on reflecting the integer valued spin variables with respect to appropriately chosen reflection planes. The proper choice of the reflection plane turns out to be crucial in order to obtain a small dynamical exponent z. Actually, the successful versions of our algorithm are a mixture of two different procedures for choosing the reflection plane, one of them ergodic but slow, the other one non-ergodic and also slow when combined with a Metropolis algorithm. (orig.)

  14. Applications of Flocking Algorithms to Input Modeling for Agent Movement

    Science.gov (United States)

    2011-12-01

    We apply the following flocking algorithm to this leading boid to generate followers, who will then be mapped... due to the paths crossing. [Figure 2: plot of the path of a boid generated by the Group 4 flocking algorithm] ...on the possible inputs. This method uses techniques from agent-based modeling to generate a flock of boids that follow the data. In this paper, we

  15. An Algorithm for Optimally Fitting a Wiener Model

    Directory of Open Access Journals (Sweden)

    Lucas P. Beverlin

    2011-01-01

    Full Text Available The purpose of this work is to present a new methodology for fitting Wiener networks to datasets with a large number of variables. Wiener networks have the ability to model a wide range of data types, and their structures can yield parameters with phenomenological meaning. There are several challenges to fitting such a model: model stiffness, the nonlinear nature of a Wiener network, possible overfitting, and the large number of parameters inherent with large input sets. This work describes a methodology that overcomes these challenges by using several iterative algorithms under supervised learning and fitting subsets of the parameters at a time. The methodology is applied to Wiener networks that are used to predict blood glucose concentrations. For models fit to four subjects, the predictions on validation sets yielded a higher correlation between observed and predicted values than other algorithms, including the Gauss-Newton and Levenberg-Marquardt algorithms.
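
    A Wiener network is, at its core, a linear dynamic block followed by a static nonlinearity. The sketch below fits the simplest such structure (a first-order filter feeding a cubic polynomial) by ordinary nonlinear least squares on synthetic data; the paper's supervised, subset-wise iterative scheme is not reproduced here.

      import numpy as np
      from scipy.optimize import least_squares

      def wiener_predict(theta, u):
          a, b, c1, c2, c3 = theta
          x = np.zeros_like(u)
          for k in range(1, len(u)):        # linear block: x[k] = a*x[k-1] + b*u[k-1]
              x[k] = a * x[k - 1] + b * u[k - 1]
          return c1 * x + c2 * x**2 + c3 * x**3   # static polynomial nonlinearity

      u = np.random.randn(200)
      y = wiener_predict([0.8, 0.5, 1.0, 0.2, -0.1], u) + 0.01 * np.random.randn(200)
      fit = least_squares(lambda th: wiener_predict(th, u) - y, x0=np.ones(5))
      print(fit.x)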

  16. How to incorporate generic refraction models into multistatic tracking algorithms

    Science.gov (United States)

    Crouse, D. F.

    The vast majority of literature published on target tracking ignores the effects of atmospheric refraction. When refraction is considered, the solutions are generally tailored to a simple exponential atmospheric refraction model. This paper discusses how arbitrary refraction models can be incorporated into tracking algorithms. Attention is paid to multistatic tracking problems, where uncorrected refractive effects can worsen track accuracy and consistency in centralized tracking algorithms, and can lead to difficulties in track-to-track association in distributed tracking filters. Monostatic and bistatic track initialization using refraction-corrupted measurements is discussed. The results are demonstrated using an exponential refractive model, though an arbitrary refraction profile can be substituted.

  17. Co-clustering models, algorithms and applications

    CERN Document Server

    Govaert, Gérard

    2013-01-01

    Cluster or co-cluster analyses are important tools in a variety of scientific areas. The introduction of this book presents a state of the art of already well-established, as well as more recent methods of co-clustering. The authors mainly deal with the two-mode partitioning under different approaches, but pay particular attention to a probabilistic approach. Chapter 1 concerns clustering in general and the model-based clustering in particular. The authors briefly review the classical clustering methods and focus on the mixture model. They present and discuss the use of different mixture

  18. Economic Models and Algorithms for Distributed Systems

    CERN Document Server

    Neumann, Dirk; Altmann, Jorn; Rana, Omer F

    2009-01-01

    Distributed computing models for sharing resources such as Grids, Peer-to-Peer systems, or voluntary computing are becoming increasingly popular. This book intends to discover fresh avenues of research and amendments to existing technologies, aiming at the successful deployment of commercial distributed systems

  19. Robust Return Algorithm for Anisotropic Plasticity Models

    DEFF Research Database (Denmark)

    Tidemann, L.; Krenk, Steen

    2017-01-01

    Plasticity models can be defined by an energy potential, a plastic flow potential and a yield surface. The energy potential defines the relation between the observable elastic strains ϒe and the energy conjugate stresses Τe and between the non-observable internal strains i and the energy conjugat...

  20. Data mining concepts models methods and algorithms

    CERN Document Server

    Kantardzic, Mehmed

    2011-01-01

    This book reviews state-of-the-art methodologies and techniques for analyzing enormous quantities of raw data in high-dimensional data spaces, to extract new information for decision making. The goal of this book is to provide a single introductory source, organized in a systematic way, in which we could direct the readers in analysis of large data sets, through the explanation of basic concepts, models and methodologies developed in recent decades.

  1. Algorithms for Optimal Model Distributions in Adaptive Switching Control Schemes

    Directory of Open Access Journals (Sweden)

    Debarghya Ghosh

    2016-03-01

    Full Text Available Several multiple model adaptive control architectures have been proposed in the literature. Despite many advances in theory, the crucial question of how to synthesize the pairs model/controller in a structurally optimal way is to a large extent not addressed. In particular, it is not clear how to place the pairs model/controller in such a way that the properties of the switching algorithm (e.g., number of switches, learning transient, final performance) are optimal with respect to some criteria. In this work, we focus on the so-called multi-model unfalsified adaptive supervisory switching control (MUASSC) scheme; we define a suitable structural optimality criterion and develop algorithms for synthesizing the pairs model/controller in such a way that they are optimal with respect to the structural optimality criterion we defined. The peculiarity of the proposed optimality criterion and algorithms is that the optimization is carried out so as to optimize the entire behavior of the adaptive algorithm, i.e., both the learning transient and the steady-state response. A comparison is made with respect to the model distribution of the robust multiple model adaptive control (RMMAC), where the optimization considers only the steady-state ideal response and neglects any learning transient.

  2. A tuning algorithm for model predictive controllers based on genetic algorithms and fuzzy decision making.

    Science.gov (United States)

    van der Lee, J H; Svrcek, W Y; Young, B R

    2008-01-01

    Model Predictive Control is a valuable tool for the process control engineer in a wide variety of applications. Because of this, the structure of an MPC can vary dramatically from application to application. A number of works have been dedicated to MPC tuning for specific cases. Since MPCs can differ significantly, these tuning methods become inapplicable and a trial-and-error tuning approach must be used, which can be quite time-consuming and can result in non-optimum tuning. In an attempt to resolve this, a generalized automated tuning algorithm for MPCs was developed. This approach is numerically based and combines a genetic algorithm with multi-objective fuzzy decision-making. The key advantages of this approach are that genetic algorithms are not problem-specific and only need to be adapted to account for the number and ranges of tuning parameters for a given MPC. As well, multi-objective fuzzy decision-making can handle qualitative statements of what optimum control is, in addition to being able to use multiple inputs to determine tuning parameters that best match the desired results. This is particularly useful for multi-input, multi-output (MIMO) cases where the definition of "optimum" control is subject to the opinion of the control engineer tuning the system. A case study is presented to illustrate the use of the tuning algorithm, including how different definitions of "optimum" control can arise and how they are accounted for in the multi-objective decision-making algorithm. The resulting tuning parameters from each of the definition sets are compared, showing that the tuning parameters vary in order to meet each definition of optimum control, and thus that the generalized automated tuning approach for MPCs is feasible.
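
    The combination can be sketched as follows: each performance measure is mapped to a fuzzy membership in [0, 1], the memberships are aggregated with a min operator, and a generic genetic algorithm maximizes the aggregate. Everything here is illustrative; simulate_mpc is a hypothetical stand-in for a closed-loop simulation returning overshoot and settling time, and the membership breakpoints are arbitrary.

      import numpy as np

      def membership(value, good, bad):
          # Linear fuzzy membership: 1 at or below 'good', 0 at or above 'bad'.
          return float(np.clip((bad - value) / (bad - good), 0.0, 1.0))

      def fitness(params):
          overshoot, settling = simulate_mpc(params)   # hypothetical closed-loop run
          # Multi-objective fuzzy decision: raise the worst-satisfied criterion.
          return min(membership(overshoot, 0.02, 0.30),
                     membership(settling, 5.0, 60.0))

      def ga(bounds, pop=30, gens=50, sigma=0.1):
          lo, hi = bounds[:, 0], bounds[:, 1]
          P = lo + np.random.rand(pop, len(lo)) * (hi - lo)
          for _ in range(gens):
              f = np.array([fitness(p) for p in P])
              parents = P[np.argsort(f)[-(pop // 2):]]     # truncation selection
              kids = parents + sigma * np.random.randn(*parents.shape)  # mutation
              P = np.vstack([parents, np.clip(kids, lo, hi)])
          return P[np.argmax([fitness(p) for p in P])]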

  3. Approach and plan for cleanup actions in the 100-IU-2 and 100-IU-6 Operable Units of the Hanford Site

    International Nuclear Information System (INIS)

    1996-10-01

    The purpose of this document is to summarize waste site information gathered to date relating to the 100-IU-2 and 100-IU-6 Operable Units (located at the Hanford Site in Richland, Washington), and to plan the extent of evaluation necessary to make cleanup decisions for identified waste sites under the Comprehensive Environmental Response, Compensation, and Liability Act of 1980. This is a streamlined approach to the decision-making process, reducing the time and costs for document preparation and review

  4. Modeling Algorithms in SystemC and ACL2

    Directory of Open Access Journals (Sweden)

    John W. O'Leary

    2014-06-01

    Full Text Available We describe the formal language MASC, based on a subset of SystemC and intended for modeling algorithms to be implemented in hardware. By means of a special-purpose parser, an algorithm coded in SystemC is converted to a MASC model for the purpose of documentation, which in turn is translated to ACL2 for formal verification. The parser also generates a SystemC variant that is suitable as input to a high-level synthesis tool. As an illustration of this methodology, we describe a proof of correctness of a simple 32-bit radix-4 multiplier.

  5. Methodology, models and algorithms in thermographic diagnostics

    CERN Document Server

    Živčák, Jozef; Madarász, Ladislav; Rudas, Imre J

    2013-01-01

    This book presents the methodology and techniques of thermographic applications, with a focus primarily on medical thermography implemented for parametrizing the diagnostics of the human body. The first part of the book describes the basics of infrared thermography, the possibilities of thermographic diagnostics and the physical nature of thermography. The second half includes tools of intelligent engineering applied to the solving of selected applications and projects. Thermographic diagnostics was applied to the problems of paraplegia, tetraplegia and carpal tunnel syndrome (CTS). The results of the research activities were created with the cooperation of four projects within the Ministry of Education, Science, Research and Sport of the Slovak Republic entitled Digital control of complex systems with two degrees of freedom, Progressive methods of education in the area of control and modeling of complex object oriented systems on aircraft turbocompressor engines, Center for research of control of te...

  6. Introduction to genetic algorithms as a modeling tool

    International Nuclear Information System (INIS)

    Wildberger, A.M.; Hickok, K.A.

    1990-01-01

    Genetic algorithms are search and classification techniques modeled on natural adaptive systems. This is an introduction to their use as a modeling tool with emphasis on prospects for their application in the power industry. It is intended to provide enough background information for its audience to begin to follow technical developments in genetic algorithms and to recognize those which might impact on electric power engineering. Beginning with a discussion of genetic algorithms and their origin as a model of biological adaptation, their advantages and disadvantages are described in comparison with other modeling tools such as simulation and neural networks in order to provide guidance in selecting appropriate applications. In particular, their use is described for improving expert systems from actual data and they are suggested as an aid in building mathematical models. Using the Thermal Performance Advisor as an example, it is suggested how genetic algorithms might be used to make a conventional expert system and mathematical model of a power plant adapt automatically to changes in the plant's characteristics

  7. Calibration of microscopic traffic simulation models using metaheuristic algorithms

    Directory of Open Access Journals (Sweden)

    Miao Yu

    2017-06-01

    Full Text Available This paper presents several metaheuristic algorithms for calibrating a microscopic traffic simulation model. The genetic algorithm (GA), Tabu Search (TS), and a combination of the GA and TS (i.e., warmed GA and warmed TS) are implemented and compared. A set of traffic data collected from the I-5 Freeway, Los Angeles, California, is used. Objective functions, built from flow and speed, are defined to minimize the difference between simulated and field traffic data. Several car-following parameters in VISSIM, which can significantly affect the simulation outputs, are selected for calibration. A better match to the field measurements is reached with the GA, TS, and warmed GA and TS than with the default parameters in VISSIM alone. Overall, TS performs very well and can be used to calibrate parameters. Combining metaheuristic algorithms clearly performs better and is therefore highly recommended for calibrating microscopic traffic simulation models.

  8. Algorithms

    Indian Academy of Sciences (India)

    In the program shown in Figure 1, we have repeated the algorithm M times, and we can make the following observations. Each block is essentially a different instance of "code"; that is, the objects differ by the value to which N is initialized before the execution of the "code" block. Thus, we can now avoid the repetition of the ...

  9. Algorithms

    Indian Academy of Sciences (India)

    algorithms built into the computer corresponding to the logic-circuit rules that are used to ... For the purpose of carrying out arithmetic or logical operations the memory is organized in terms ... In fixed-point representation, one essentially uses integer arithmetic operators, assuming the binary point to be at some point other ...

  10. An Interactive Personalized Recommendation System Using the Hybrid Algorithm Model

    Directory of Open Access Journals (Sweden)

    Yan Guo

    2017-10-01

    Full Text Available With the rapid development of e-commerce, the contradiction between the disorder of business information and customer demand is increasingly prominent. This study aims to make e-commerce shopping more convenient and avoid information overload through an interactive personalized recommendation system using a hybrid algorithm model. The proposed model first uses various recommendation algorithms to obtain a list of original recommendation results. Combined with the customer’s feedback in an interactive manner, it then establishes the weights of the corresponding recommendation algorithms. Finally, the synthetic formula of evidence theory is used to fuse the original results and obtain the final recommended products. The recommendation performance of the proposed method is compared with that of traditional methods. The results of an experimental study on a Taobao online dress shop clearly show that the proposed method increases consumer coverage, consumer discovery accuracy and recommendation recall in data mining. The hybrid recommendation algorithm complements the advantages of the existing recommendation algorithms in data mining. The interactive assigned-weight method meets consumer demand better and alleviates the problem of information overload. Meanwhile, our study offers important implications for e-commerce platform providers regarding the design of product recommendation systems.
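
    The "synthetic formula of evidence theory" referred to here is commonly Dempster's rule of combination; whether the authors use exactly this variant is an assumption. A minimal sketch for singleton mass functions, with illustrative product names and masses:

      def dempster_combine(m1, m2, frame):
          # Dempster's rule for singleton masses: m(A) = m1(A)*m2(A) / (1 - K),
          # where K is the total conflicting mass.
          conflict = sum(m1.get(a, 0.0) * m2.get(b, 0.0)
                         for a in frame for b in frame if a != b)
          norm = 1.0 - conflict
          return {a: m1.get(a, 0.0) * m2.get(a, 0.0) / norm for a in frame}

      m_cf = {"p1": 0.6, "p2": 0.3, "p3": 0.1}   # e.g. collaborative filtering
      m_cb = {"p1": 0.5, "p2": 0.4, "p3": 0.1}   # e.g. content-based recommender
      print(dempster_combine(m_cf, m_cb, ["p1", "p2", "p3"]))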

  11. A randomised controlled trial of oxytocin 5IU and placebo infusion versus oxytocin 5IU and 30IU infusion for the control of blood loss at elective caesarean section--pilot study. ISRCTN 40302163.

    LENUS (Irish Health Repository)

    Murphy, Deirdre J

    2012-02-01

    OBJECTIVE: To compare the blood loss at elective lower segment caesarean section with administration of oxytocin 5IU bolus versus oxytocin 5IU bolus and oxytocin 30IU infusion and to establish whether a large multi-centre trial is feasible. STUDY DESIGN: Women booked for an elective caesarean section were recruited to a pilot randomised controlled trial and randomised to either oxytocin 5IU bolus and placebo infusion or oxytocin 5IU bolus and oxytocin 30IU infusion. We wished to establish whether the study design was feasible and acceptable and to establish sample size estimates for a definitive multi-centre trial. The outcome measures were total estimated blood loss at caesarean section and in the immediate postpartum period and the need for an additional uterotonic agent. RESULTS: A total of 115 women were randomised and 110 were suitable for analysis (5 protocol violations). Despite strict exclusion criteria 84% of the target population were considered eligible for study participation and of those approached only 15% declined to participate and 11% delivered prior to the planned date. The total mean estimated blood loss was lower in the oxytocin infusion arm compared to placebo (567 ml versus 624 ml) and fewer women had a major haemorrhage (>1000 ml, 14% versus 17%) or required an additional uterotonic agent (5% versus 11%). A sample size of 1500 in each arm would be required to demonstrate a 3% absolute reduction in major haemorrhage (from baseline 10%) with >80% power. CONCLUSION: An additional oxytocin infusion at elective caesarean section may reduce blood loss and warrants evaluation in a large multi-centre trial.

  12. Optimisation of Hidden Markov Model using Baum–Welch algorithm ...

    Indian Academy of Sciences (India)

    Optimisation of Hidden Markov Model using Baum–Welch algorithm for prediction of maximum and minimum temperature over Indian Himalaya. J C Joshi, Tankeshwar Kumar, Sunita Srivastava and Divya Sachdeva.

  13. Optimisation of Hidden Markov Model using Baum–Welch algorithm

    Indian Academy of Sciences (India)

    Optimisation of Hidden Markov Model using Baum–Welch algorithm for prediction of maximum and minimum temperature over Indian Himalaya. J C Joshi, Tankeshwar Kumar, Sunita Srivastava and Divya Sachdeva. Volume 126 Issue 1 February 2017 ...
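
    For readers wanting to experiment, the hmmlearn package implements Baum–Welch re-estimation behind GaussianHMM.fit. A hedged sketch with synthetic stand-in temperature observations (the predictors and state count used in the article may differ):

      import numpy as np
      from hmmlearn import hmm

      # Rows of [t_max, t_min] observations; synthetic stand-in data.
      X = np.column_stack([15 + 5 * np.random.randn(365),
                           2 + 4 * np.random.randn(365)])

      model = hmm.GaussianHMM(n_components=3, covariance_type="diag", n_iter=200)
      model.fit(X)                # Baum-Welch (EM) parameter re-estimation
      states = model.predict(X)   # Viterbi decoding of the hidden regimes
      print(model.means_)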

  14. Epidemic Processes on Complex Networks : Modelling, Simulation and Algorithms

    NARCIS (Netherlands)

    Van de Bovenkamp, R.

    2015-01-01

    Local interactions on a graph will lead to global dynamic behaviour. In this thesis we focus on two types of dynamic processes on graphs: the Susceptible-Infected-Susceptible (SIS) virus spreading model, and gossip-style epidemic algorithms. The largest part of this thesis is devoted to the SIS

  15. Optimisation of Transfer Function Models using Genetic Algorithms ...

    African Journals Online (AJOL)

    In order to obtain an optimum transfer function estimate, open-source software based on a genetic algorithm was developed. The software was developed with the Visual Basic programming language. In order to test the software, a transfer function model was developed from data obtained from industry. The forecast obtained ...

  16. Stochastic disturbance rejection in model predictive control by randomized algorithms

    NARCIS (Netherlands)

    Batina, Ivo; Stoorvogel, Antonie Arij; Weiland, Siep

    2001-01-01

    In this paper we consider model predictive control with stochastic disturbances and input constraints. We present an algorithm which can solve this problem approximately but with arbitrarily high accuracy. The optimization at each time step is a closed-loop optimization and therefore takes into

  17. Iteration Capping For Discrete Choice Models Using the EM Algorithm

    NARCIS (Netherlands)

    Kabatek, J.

    2013-01-01

    The Expectation-Maximization (EM) algorithm is a well-established estimation procedure which is used in many domains of econometric analysis. A recent application in a discrete choice framework (Train, 2008) facilitated estimation of latent class models allowing for very flexible treatment of unobserved

  18. Evolving the Topology of Hidden Markov Models using Evolutionary Algorithms

    DEFF Research Database (Denmark)

    Thomsen, Réne

    2002-01-01

    Hidden Markov models (HMM) are widely used for speech recognition and have recently gained a lot of attention in the bioinformatics community, because of their ability to capture the information buried in biological sequences. Usually, heuristic algorithms such as Baum-Welch are used to estimate ...

  19. Models and algorithms for Integration of Vehicle and Crew Scheduling

    NARCIS (Netherlands)

    R. Freling (Richard); D. Huisman (Dennis); A.P.M. Wagelmans (Albert)

    2000-01-01

    textabstractThis paper deals with models, relaxations and algorithms for an integrated approach to vehicle and crew scheduling. We discuss potential benefits of integration and provide an overview of the literature, which considers mainly partial integration. Our approach is new in the sense that we

  20. Heterogenous Agents Model with the Worst Out Algorithm

    Czech Academy of Sciences Publication Activity Database

    Vácha, Lukáš; Vošvrda, Miloslav

    -, č. 8 (2006), s. 3-19 ISSN 1801-5999 Institutional research plan: CEZ:AV0Z10750506 Keywords : efficient market hypothesis * fractal market hypothesis * agents' investment horizons * agents' trading strategies * technical trading rules * heterogeneous agent model with stochastic memory * Worst out algorithm Subject RIV: AH - Economics

  1. Application of genetic algorithm in radio ecological models parameter determination

    International Nuclear Information System (INIS)

    Pantelic, G.

    2006-01-01

    The method of genetic algorithms was used to determine the biological half-life of 137Cs in cow milk after the accident in Chernobyl. Methodologically, genetic algorithms are based on the fact that natural processes tend to optimize themselves, and this method should therefore be efficient in providing optimal solutions in the modeling of radioecological and environmental events. The calculated biological half-life of 137Cs in milk is (32 ± 3) days and the transfer coefficient from grass to milk is (0.019 ± 0.005). (authors)
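
    The underlying fit can be illustrated with a minimal real-coded genetic algorithm: candidate pairs (C0, T) parameterize C(t) = C0 * exp(-ln 2 * t / T), and selection plus Gaussian mutation minimizes the squared error. The data below are synthetic; this is not the authors' code.

      import numpy as np

      t = np.arange(0.0, 120.0, 7.0)     # days after deposition (synthetic)
      c_obs = 80 * np.exp(-np.log(2) * t / 32) * (1 + 0.05 * np.random.randn(t.size))

      def sse(p):                        # p = (C0, T_bio)
          return np.sum((p[0] * np.exp(-np.log(2) * t / p[1]) - c_obs) ** 2)

      pop = np.column_stack([np.random.uniform(10, 200, 60),
                             np.random.uniform(5, 100, 60)])
      for _ in range(200):
          pop = pop[np.argsort([sse(p) for p in pop])][:30]   # keep the best half
          children = pop + np.random.randn(*pop.shape) * [2.0, 1.0]  # Gaussian mutation
          pop = np.vstack([pop, np.abs(children)])
      print(pop[0])   # best (C0, T_bio); T_bio should come out near 32 days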

  2. Application of genetic algorithm in radio ecological models parameter determination

    Energy Technology Data Exchange (ETDEWEB)

    Pantelic, G. [Institute of Occupational Health and Radiological Protection 'Dr Dragomir Karajovic', Belgrade (Serbia)]

    2006-07-01

    The method of genetic algorithms was used to determine the biological half-life of 137Cs in cow milk after the accident in Chernobyl. Methodologically, genetic algorithms are based on the fact that natural processes tend to optimize themselves, and this method should therefore be efficient in providing optimal solutions in the modeling of radioecological and environmental events. The calculated biological half-life of 137Cs in milk is (32 ± 3) days and the transfer coefficient from grass to milk is (0.019 ± 0.005). (authors)

  3. Fuzzy model predictive control algorithm applied in nuclear power plant

    International Nuclear Information System (INIS)

    Zuheir, Ahmad

    2006-01-01

    The aim of this paper is to design a predictive controller based on a fuzzy model. The Takagi-Sugeno fuzzy model with an adaptive B-splines neuro-fuzzy implementation is used and incorporated as a predictor in a predictive controller. An optimization approach with a simplified gradient technique is used to calculate predictions of the future control actions. In this approach, adaptation of the fuzzy model using dynamic process information is carried out to build the predictive controller. The easy description of the fuzzy model and the easy computation of the gradient vector during the optimization procedure are the main advantages of the computation algorithm. The algorithm is applied to the control of a U-tube steam generation unit (UTSG) used for electricity generation. (author)
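
    A Takagi-Sugeno predictor of the kind used here combines local linear (ARX-like) models through normalized rule firing strengths. A minimal one-step-ahead sketch with assumed Gaussian antecedents and arbitrary coefficients:

      import numpy as np

      def ts_predict(y_prev, u_prev, centers, widths, theta):
          # y_hat = sum_i w_i * (a_i*y_prev + b_i*u_prev + c_i) / sum_i w_i,
          # with Gaussian rule firing strengths w_i evaluated at z = [y_prev, u_prev].
          z = np.array([y_prev, u_prev])
          w = np.exp(-np.sum((z - centers) ** 2 / widths ** 2, axis=1))
          local = theta[:, 0] * y_prev + theta[:, 1] * u_prev + theta[:, 2]
          return np.dot(w, local) / w.sum()

      centers = np.array([[0.0, 0.0], [1.0, 0.5]])           # rule centres (assumed)
      widths = np.ones((2, 2))
      theta = np.array([[0.9, 0.1, 0.0], [0.6, 0.3, 0.1]])   # local model coefficients
      print(ts_predict(0.5, 0.2, centers, widths, theta))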

  4. Model-based Bayesian signal extraction algorithm for peripheral nerves

    Science.gov (United States)

    Eggers, Thomas E.; Dweiri, Yazan M.; McCallum, Grant A.; Durand, Dominique M.

    2017-10-01

    Objective. Multi-channel cuff electrodes have recently been investigated for extracting fascicular-level motor commands from mixed neural recordings. Such signals could provide volitional, intuitive control over a robotic prosthesis for amputee patients. Recent work has demonstrated success in extracting these signals in acute and chronic preparations using spatial filtering techniques. These extracted signals, however, had low signal-to-noise ratios, which limited their utility to binary classification. In this work a new algorithm is proposed which combines previous source localization approaches to create a model-based method that operates in real time. Approach. To validate this algorithm, a saline benchtop setup was created to allow the precise placement of artificial sources within a cuff and interference sources outside the cuff. The artificial source was taken from five seconds of chronic neural activity to replicate realistic recordings. The proposed algorithm, hybrid Bayesian signal extraction (HBSE), is then compared to previous algorithms, beamforming and a Bayesian spatial filtering method, on this test data. An example chronic neural recording is also analyzed with all three algorithms. Main results. The proposed algorithm improved the signal-to-noise and signal-to-interference ratios of extracted test signals two- to three-fold, and increased the correlation coefficient between the original and recovered signals by 10–20%. These improvements translated to the chronic recording example and increased the calculated bit rate between the recovered signals and the recorded motor activity. Significance. HBSE significantly outperforms previous algorithms in extracting realistic neural signals, even in the presence of external noise sources. These results demonstrate the feasibility of extracting dynamic motor signals from a multi-fascicled intact nerve trunk, which in turn could extract motor command signals from an amputee for the end goal of

  5. Performance modeling of parallel algorithms for solving neutron diffusion problems

    International Nuclear Information System (INIS)

    Azmy, Y.Y.; Kirk, B.L.

    1995-01-01

    Neutron diffusion calculations are the most common computational methods used in the design, analysis, and operation of nuclear reactors and related activities. Here, mathematical performance models are developed for the parallel algorithm used to solve the neutron diffusion equation on message passing and shared memory multiprocessors represented by the Intel iPSC/860 and the Sequent Balance 8000, respectively. The performance models are validated through several test problems, and these models are used to estimate the performance of each of the two considered architectures in situations typical of practical applications, such as fine meshes and a large number of participating processors. While message passing computers are capable of producing speedup, the parallel efficiency deteriorates rapidly as the number of processors increases. Furthermore, the speedup fails to improve appreciably for massively parallel computers so that only small- to medium-sized message passing multiprocessors offer a reasonable platform for this algorithm. In contrast, the performance model for the shared memory architecture predicts very high efficiency over a wide range of number of processors reasonable for this architecture. Furthermore, the model efficiency of the Sequent remains superior to that of the hypercube if its model parameters are adjusted to make its processors as fast as those of the iPSC/860. It is concluded that shared memory computers are better suited for this parallel algorithm than message passing computers
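
    Performance models of this kind typically take the generic form T(p) = t_serial + t_parallel/p + t_comm(p); the toy version below, with made-up constants, reproduces the qualitative finding that message-passing efficiency deteriorates as the processor count grows.

      def speedup(p, t_serial=1.0, t_parallel=99.0, t_comm=0.05):
          # Cost model: T(p) = t_serial + t_parallel/p + t_comm*p,
          # where communication overhead grows with the number of processors.
          t1 = t_serial + t_parallel
          tp = t_serial + t_parallel / p + t_comm * p
          return t1 / tp

      for p in (1, 8, 64, 256):
          print(p, round(speedup(p), 1), round(speedup(p) / p, 2))  # speedup, efficiency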

  6. Potts-model grain growth simulations: Parallel algorithms and applications

    Energy Technology Data Exchange (ETDEWEB)

    Wright, S.A.; Plimpton, S.J.; Swiler, T.P. [and others]

    1997-08-01

    Microstructural morphology and grain boundary properties often control the service properties of engineered materials. This report uses the Potts-model to simulate the development of microstructures in realistic materials. Three areas of microstructural morphology simulations were studied: the development of massively parallel algorithms for Potts-model grain growth simulations, modeling of mass transport via diffusion in these simulated microstructures, and the development of a gradient-dependent Hamiltonian to simulate columnar grain growth. Potts grain growth models for massively parallel supercomputers were developed for the conventional Potts-model in both two and three dimensions. Simulations using these parallel codes showed self-similar grain growth and no finite size effects for previously unapproachable large-scale problems. In addition, new enhancements to the conventional Metropolis algorithm used in the Potts-model were developed to accelerate the calculations. These techniques enable both the sequential and parallel algorithms to run faster and use essentially an infinite number of grain orientation values to avoid non-physical grain coalescence events. Mass transport phenomena in polycrystalline materials were studied in two dimensions using numerical diffusion techniques on microstructures generated using the Potts-model. The results of the mass transport modeling showed excellent quantitative agreement with one-dimensional diffusion problems; however, the results also suggest that transient multi-dimensional diffusion effects cannot be parameterized as the product of the grain boundary diffusion coefficient and the grain boundary width. Instead, both properties are required. Gradient-dependent grain growth mechanisms were included in the Potts-model by adding an extra term to the Hamiltonian. Under normal grain growth, the primary driving term is the curvature of the grain boundary, which is included in the standard Potts-model Hamiltonian.
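
    The conventional Metropolis dynamics of such a Potts grain-growth model fits in a few lines; the energy simply counts unlike nearest-neighbour pairs (grain boundaries). A plain 2D sketch with assumed parameters, without the paper's parallel decomposition or acceleration tricks:

      import numpy as np

      def potts_sweep(spins, q, T=0.5):
          # One Metropolis sweep of the 2D Potts model on a periodic lattice.
          L = spins.shape[0]
          for _ in range(L * L):
              i, j = np.random.randint(L, size=2)
              nbrs = [spins[(i + 1) % L, j], spins[(i - 1) % L, j],
                      spins[i, (j + 1) % L], spins[i, (j - 1) % L]]
              old, new = spins[i, j], np.random.randint(q)
              dE = sum(n != new for n in nbrs) - sum(n != old for n in nbrs)
              if dE <= 0 or np.random.rand() < np.exp(-dE / T):
                  spins[i, j] = new
          return spins

      grains = potts_sweep(np.random.randint(50, size=(64, 64)), q=50)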

  7. Statistical behaviour of adaptive multilevel splitting algorithms in simple models

    International Nuclear Information System (INIS)

    Rolland, Joran; Simonnet, Eric

    2015-01-01

    Adaptive multilevel splitting algorithms have been introduced rather recently for estimating tail distributions in a fast and efficient way. In particular, they can be used for computing the so-called reactive trajectories corresponding to direct transitions from one metastable state to another. The algorithm is based on successive selection–mutation steps performed on the system in a controlled way. It has two intrinsic parameters, the number of particles/trajectories and the reaction coordinate used for discriminating good or bad trajectories. We first investigate the convergence in law of the algorithm as a function of the timestep for several simple stochastic models. Second, we consider the average duration of reactive trajectories, for which no theoretical predictions exist. The most important aspect of this work concerns some systems with two degrees of freedom. They are studied in detail as a function of the reaction coordinate in the asymptotic regime where the number of trajectories goes to infinity. We show that during phase transitions, the statistics of the algorithm deviate significantly from known theoretical results when using non-optimal reaction coordinates. In this case, the variance of the algorithm peaks at the transition and the convergence of the algorithm can be much slower than the usual expected central limit behaviour. The duration of trajectories is affected as well. Moreover, reactive trajectories do not correspond to the most probable ones. Such behaviour disappears when using the optimal reaction coordinate, called the committor, as predicted by the theory. We finally investigate a three-state Markov chain which reproduces this phenomenon and show logarithmic convergence of the trajectory durations

  8. Improving permafrost distribution modelling using feature selection algorithms

    Science.gov (United States)

    Deluigi, Nicola; Lambiel, Christophe; Kanevski, Mikhail

    2016-04-01

    The availability of an increasing number of spatial data on the occurrence of mountain permafrost allows the employment of machine learning (ML) classification algorithms for modelling the distribution of the phenomenon. One of the major problems when dealing with high-dimensional datasets is the number of input features (variables) involved. Application of ML classification algorithms to this large number of variables leads to the risk of overfitting, with the consequence of poor generalization/prediction. For this reason, applying feature selection (FS) techniques helps simplify the number of factors required and improves the knowledge of the adopted features and their relation with the studied phenomenon. Moreover, removing irrelevant or redundant variables from the dataset effectively improves the quality of the ML prediction. This research deals with a comparative analysis of permafrost distribution models supported by FS variable importance assessment. The input dataset (dimension = 20-25, 10 m spatial resolution) was constructed using landcover maps, climate data and DEM-derived variables (altitude, aspect, slope, terrain curvature, solar radiation, etc.). It was completed with permafrost evidence (geophysical and thermal data and rock glacier inventories) that serves as training permafrost data. The FS algorithms used indicate which variables appear less statistically important for permafrost presence/absence. Three different algorithms were compared: Information Gain (IG), Correlation-based Feature Selection (CFS) and Random Forest (RF). IG is a filter technique that evaluates the worth of a predictor by measuring the information gain with respect to permafrost presence/absence. Conversely, CFS is a wrapper technique that evaluates the worth of a subset of predictors by considering the individual predictive ability of each variable along with the degree of redundancy between them. Finally, RF is a ML algorithm that performs FS as part of its
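
    Of the three methods, IG is the quickest to reproduce: scikit-learn's mutual_info_classif scores each predictor by its information gain (mutual information) with respect to the class. A sketch on synthetic stand-in data, since the real terrain and climate predictors are not reproduced here:

      import numpy as np
      from sklearn.feature_selection import mutual_info_classif

      # X: terrain predictors (altitude, slope, radiation, ...), y: permafrost 0/1.
      X = np.random.rand(500, 20)
      y = (X[:, 0] + 0.5 * X[:, 3] + 0.1 * np.random.randn(500) > 0.8).astype(int)

      ig = mutual_info_classif(X, y)        # information gain per predictor
      ranking = np.argsort(ig)[::-1]
      print("most informative predictors:", ranking[:5])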

  9. A Genetic Algorithm Approach for Modeling a Grounding Electrode

    Science.gov (United States)

    Mishra, Arbind Kumar; Nagaoka, Naoto; Ametani, Akihiro

    This paper proposes a genetic algorithm based approach to determine a grounding electrode model circuit composed of resistances, inductances and capacitances. The proposed methodology determines the model circuit parameters, based on a general ladder circuit, directly from a measured result. Transient voltages of several electrodes were measured when applying a step-like current. An EMTP simulation of the transient voltage on the grounding electrode has been carried out by adopting the proposed model circuits. The accuracy of the proposed method has been confirmed to be high in comparison with the measured transient voltage.

  10. A comparison of updating algorithms for large $N$ reduced models

    CERN Document Server

    Pérez, Margarita García; Keegan, Liam; Okawa, Masanori; Ramos, Alberto

    2015-01-01

    We investigate Monte Carlo updating algorithms for simulating $SU(N)$ Yang-Mills fields on a single-site lattice, such as for the Twisted Eguchi-Kawai model (TEK). We show that performing only over-relaxation (OR) updates of the gauge links is a valid simulation algorithm for the Fabricius and Haan formulation of this model, and that this decorrelates observables faster than using heat-bath updates. We consider two different methods of implementing the OR update: either updating the whole $SU(N)$ matrix at once, or iterating through $SU(2)$ subgroups of the $SU(N)$ matrix, we find the same critical exponent in both cases, and only a slight difference between the two.

  11. Sustainable logistics and transportation optimization models and algorithms

    CERN Document Server

    Gakis, Konstantinos; Pardalos, Panos

    2017-01-01

    Focused on the logistics and transportation operations within a supply chain, this book brings together the latest models, algorithms, and optimization possibilities. Logistics and transportation problems are examined within a sustainability perspective to offer a comprehensive assessment of environmental, social, ethical, and economic performance measures. Featured models, techniques, and algorithms may be used to construct policies on alternative transportation modes and technologies, green logistics, and incentives by the incorporation of environmental, economic, and social measures. Researchers, professionals, and graduate students in urban regional planning, logistics, transport systems, optimization, supply chain management, business administration, information science, mathematics, and industrial and systems engineering will find the real life and interdisciplinary issues presented in this book informative and useful.

  12. Dynamic greedy algorithms for the Edwards-Anderson model

    Science.gov (United States)

    Schnabel, Stefan; Janke, Wolfhard

    2017-11-01

    To provide a novel tool for the investigation of the energy landscape of the Edwards-Anderson spin-glass model, we introduce an algorithm that allows an efficient execution of a greedy optimization based on data from a previously performed optimization for a similar configuration. As an application we show how the technique can be used to perform higher-order greedy optimizations and simulated annealing searches with improved performance.
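
    The baseline such an algorithm accelerates is the plain greedy quench: repeatedly flip any spin whose local field opposes it until no single flip lowers the energy. A direct, unaccelerated 2D sketch with Gaussian couplings (not the authors' data-reuse scheme):

      import numpy as np

      def greedy_quench(J, s):
          # Greedy descent for the 2D Edwards-Anderson model on an L x L torus.
          # J = (J_right, J_down) bond couplings; flip s[i,j] whenever it opposes
          # its local field h (i.e. s*h < 0, so flipping lowers E = -sum J s s).
          Jr, Jd = J
          L = s.shape[0]
          improved = True
          while improved:
              improved = False
              for i in range(L):
                  for j in range(L):
                      h = (Jr[i, j] * s[i, (j + 1) % L]
                           + Jr[i, (j - 1) % L] * s[i, (j - 1) % L]
                           + Jd[i, j] * s[(i + 1) % L, j]
                           + Jd[(i - 1) % L, j] * s[(i - 1) % L, j])
                      if s[i, j] * h < 0:
                          s[i, j] = -s[i, j]
                          improved = True
          return s

      L = 16
      J = (np.random.randn(L, L), np.random.randn(L, L))
      s = greedy_quench(J, np.random.choice([-1, 1], size=(L, L)))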

  13. Managing and learning with multiple models: Objectives and optimization algorithms

    Science.gov (United States)

    Probert, William J. M.; Hauser, C.E.; McDonald-Madden, E.; Runge, M.C.; Baxter, P.W.J.; Possingham, H.P.

    2011-01-01

    The quality of environmental decisions should be gauged according to managers' objectives. Management objectives generally seek to maximize quantifiable measures of system benefit, for instance population growth rate. Reaching these goals often requires a certain degree of learning about the system. Learning can occur by using management action in combination with a monitoring system. Furthermore, actions can be chosen strategically to obtain specific kinds of information. Formal decision-making tools can choose actions to favor such learning in two ways: implicitly via the optimization algorithm that is used when there is a management objective (for instance, when using adaptive management), or explicitly by quantifying knowledge and using it as the fundamental project objective, an approach new to conservation. This paper outlines three conservation project objectives - a pure management objective, a pure learning objective, and an objective that is a weighted mixture of these two. We use eight optimization algorithms to choose actions that meet project objectives and illustrate them in a simulated conservation project. The algorithms provide a taxonomy of decision-making tools in conservation management when there is uncertainty surrounding competing models of system function. The algorithms build upon each other such that their differences are highlighted and practitioners may see where their decision-making tools can be improved. © 2010 Elsevier Ltd.

  14. Software Piracy Detection Model Using Ant Colony Optimization Algorithm

    Science.gov (United States)

    Astiqah Omar, Nor; Zakuan, Zeti Zuryani Mohd; Saian, Rizauddin

    2017-06-01

    The Internet enables information to be accessible anytime and anywhere. This scenario creates an environment whereby information can be easily copied. Easy access to the Internet is one of the factors contributing towards piracy in Malaysia as well as the rest of the world. According to the Compliance Gap BSA Global Software Survey (2013), 43 percent of the software installed on PCs around the world was not properly licensed; the commercial value of the unlicensed installations worldwide was reported to be 62.7 billion. Piracy can happen anywhere, including universities. Malaysia, as well as other countries in the world, is faced with issues of piracy committed by students in universities. Piracy in universities concerns the theft of intellectual property. It can take the form of software piracy, music piracy, movie piracy and piracy of intellectual materials such as books, articles and journals. This scenario affects the owners of intellectual property, as their property is in jeopardy. This study has developed a classification model for detecting software piracy. The model was developed using a swarm intelligence algorithm called the Ant Colony Optimization algorithm. The data for training were collected through a study conducted in Universiti Teknologi MARA (Perlis). Experimental results show that the model's detection accuracy rate is better than that of the J48 algorithm.

  15. Motion Model Employment using interacting Motion Model Algorithm

    DEFF Research Database (Denmark)

    Hussain, Dil Muhammad Akbar

    2006-01-01

    model being correct is computed through a likelihood function for each model.  The study presented a simple technique to introduce additional models into the system using deterministic acceleration which basically defines the dynamics of the system.  Therefore, based on this value more motion models can...

  16. Earthquake forecast models for Italy based on the RI algorithm

    Directory of Open Access Journals (Sweden)

    Kazuyoshi Z. Nanjo

    2010-11-01

    Full Text Available This study provides an overview of relative-intensity (RI)-based earthquake forecast models that have been submitted for the 5-year and 10-year testing classes and the 3-month class of the Italian experiment within the Collaboratory for the Study of Earthquake Predictability (CSEP). The RI algorithm starts as a binary forecast system based on the working assumption that future large earthquakes are likely to occur at sites of higher past seismic activity. The measure of RI is a simple count of the number of past earthquakes, known as the RI of seismicity. To improve the RI forecast performance, we first expand the RI algorithm into a general class of smoothed seismicity models. We then convert the RI representation from a binary system into a testable CSEP model that forecasts the numbers of earthquakes for predefined magnitudes. Our parameter tuning for the CSEP models is based on past seismicity. The final submission is a set of two numerical data files created by the tuned 5-year and 10-year models and an executable computer code of a tuned 3-month model, to examine which testing class is more meaningful in terms of the RI hypothesis. The main purpose of our participation is to better understand the importance (or lack of importance) of the RI of seismicity for earthquake forecastability.
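
    The core of an RI forecast is easy to sketch: bin past epicentres onto a grid, optionally smooth the counts (the smoothed-seismicity generalization mentioned above), and scale the map to the expected number of events. The grid extents and smoothing width below are illustrative assumptions.

      import numpy as np
      from scipy.ndimage import gaussian_filter

      def ri_forecast(lons, lats, grid, sigma=1.0, n_expected=100.0):
          # Count past epicentres per cell, smooth, then scale so the map
          # sums to the expected number of future events.
          lon_edges, lat_edges = grid
          counts, _, _ = np.histogram2d(lons, lats, bins=(lon_edges, lat_edges))
          rate = gaussian_filter(counts, sigma)
          return rate / rate.sum() * n_expected

      grid = (np.arange(6.0, 19.1, 0.1), np.arange(36.0, 47.1, 0.1))  # ~0.1 deg cells
      # forecast = ri_forecast(catalog_lons, catalog_lats, grid)  # with a real catalogue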

  17. Model parameters estimation and sensitivity by genetic algorithms

    International Nuclear Information System (INIS)

    Marseguerra, Marzio; Zio, Enrico; Podofillini, Luca

    2003-01-01

    In this paper we illustrate the possibility of extracting qualitative information on the importance of the parameters of a model in the course of a Genetic Algorithm (GA) optimization procedure for the estimation of such parameters. The Genetic Algorithm's search for the optimal solution is performed according to procedures that resemble those of natural selection and genetics: an initial population of alternative solutions evolves within the search space through the four fundamental operations of parent selection, crossover, replacement and mutation. During the search, the algorithm examines a large number of solution points, which possibly carry relevant information on the underlying model characteristics. One possible use of this information is to create and update an archive of the best solutions found at each generation and then to analyze the evolution of the statistics of the archive along the successive generations. From this analysis one can retrieve information regarding the speed of convergence and stabilization of the different control (decision) variables of the optimization problem. In this work we analyze the evolution strategy followed by a GA in its search for the optimal solution, with the aim of extracting information on the importance of the control (decision) variables of the optimization with respect to the sensitivity of the objective function. The study refers to a GA search for optimal estimates of the effective parameters in a lumped nuclear reactor model from the literature. The supporting observation is that, as most optimization procedures do, the GA search evolves towards convergence in such a way as to stabilize first the most important parameters of the model and later those which influence the model outputs little. In this sense, besides estimating the parameter values efficiently, the optimization approach also allows us to provide a qualitative ranking of their importance in contributing to the model output. The

  18. Development and evaluation of thermal model reduction algorithms for spacecraft

    Science.gov (United States)

    Deiml, Michael; Suderland, Martin; Reiss, Philipp; Czupalla, Markus

    2015-05-01

    This paper is concerned with the reduction of thermal models of spacecraft. The work presented here has been conducted in cooperation with the company OHB AG, formerly Kayser-Threde GmbH, and the Institute of Astronautics at Technische Universität München, with the goal of shortening and automating the time-consuming and manual process of thermal model reduction. The reduction of thermal models can be divided into the simplification of the geometry model, for calculation of external heat flows and radiative couplings, and the reduction of the underlying mathematical model. For simplification, a method has been developed which approximates the reduced geometry model with the help of an optimization algorithm. Different linear and nonlinear model reduction techniques have been evaluated for their applicability to the reduction of the mathematical model. Compatibility with the thermal analysis tool ESATAN-TMS is a major concern, which restricts the useful application of these methods. Additional model reduction methods have been developed which account for these constraints. The Matrix Reduction method allows the approximation of the differential equation to reference values exactly, except for numerical errors. The Summation method enables a useful, applicable reduction of thermal models that can be used in industry. In this work a framework for the reduction of thermal models has been created, which can be used together with a newly developed graphical user interface for the reduction of thermal models in industry.

  19. A Multiple Model Prediction Algorithm for CNC Machine Wear PHM

    Directory of Open Access Journals (Sweden)

    Huimin Chen

    2011-01-01

    Full Text Available The 2010 PHM data challenge focuses on the remaining useful life (RUL) estimation for cutters of a high-speed CNC milling machine using measurements from dynamometer, accelerometer, and acoustic emission sensors. We present a multiple model approach for wear depth estimation of milling machine cutters using the provided data. The feature selection, initial wear estimation and multiple model fusion components of the proposed algorithm are explained in detail and compared with several alternative methods using the training data. The final submission ranked #2 among professional and student participants, and the method is applicable to other data-driven PHM problems.

  20. Linguistically motivated statistical machine translation models and algorithms

    CERN Document Server

    Xiong, Deyi

    2015-01-01

    This book provides a wide variety of algorithms and models to integrate linguistic knowledge into Statistical Machine Translation (SMT). It helps advance conventional SMT to linguistically motivated SMT by enhancing the following three essential components: translation, reordering and bracketing models. It also serves the purpose of promoting the in-depth study of the impacts of linguistic knowledge on machine translation. Finally it provides a systematic introduction of Bracketing Transduction Grammar (BTG) based SMT, one of the state-of-the-art SMT formalisms, as well as a case study of linguistically motivated SMT on a BTG-based platform.

  1. Comparison of evolutionary algorithms in gene regulatory network model inference.

    LENUS (Irish Health Repository)

    2010-01-01

    ABSTRACT: BACKGROUND: The evolution of high-throughput technologies that measure gene expression levels has created a database for inferring GRNs (a process also known as reverse engineering of GRNs). However, the nature of these data has made this process very difficult. At the moment, several methods exist for discovering qualitative causal relationships between genes with high accuracy from microarray data, but large-scale quantitative analysis on real biological datasets cannot be performed to date, as existing approaches are not suitable for real microarray data, which are noisy and insufficient. RESULTS: This paper performs an analysis of several existing evolutionary algorithms for quantitative gene regulatory network modelling. The aim is to present the techniques used and to offer a comprehensive comparison of approaches under a common framework. Algorithms are applied to both synthetic and real gene expression data from DNA microarrays, and their ability to reproduce biological behaviour, scalability and robustness to noise are assessed and compared. CONCLUSIONS: Presented is a comparison framework for the assessment of evolutionary algorithms used to infer gene regulatory networks. Promising methods are identified and a platform for the development of appropriate model formalisms is established.

  2. IIR Filter Modeling Using an Algorithm Inspired on Electromagnetism

    Directory of Open Access Journals (Sweden)

    Cuevas-Jiménez E.

    2013-01-01

    Full Text Available Infinite-impulse-response (IIR) filtering provides a powerful approach for solving a variety of problems. However, its design is a complicated task: since the error surface of IIR filters is generally multimodal, global optimization techniques are required in order to avoid local minima. In this paper, a new method based on the Electromagnetism-Like Optimization Algorithm (EMO) is proposed for IIR filter modeling. EMO originates from the electromagnetism theory of physics, treating potential solutions as electrically charged particles spread around the solution space. The charge of each particle depends on its objective function value. The algorithm employs a collective attraction-repulsion mechanism to move the particles towards optimality. The experimental results confirm the high performance of the proposed method in solving various benchmark identification problems.
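
    A rough Python sketch of the charge-based attraction-repulsion move; the objective (here a generic multimodal test function rather than an IIR error surface), the charge formula and all constants follow common EMO descriptions and are assumptions, not the paper's exact scheme.

```python
import numpy as np

rng = np.random.default_rng(2)

def f(x):                                  # hypothetical multimodal objective
    return np.sum(x**2 - 10.0 * np.cos(2.0 * np.pi * x) + 10.0, axis=-1)

n, dim, lo, hi = 20, 2, -5.12, 5.12
X = rng.uniform(lo, hi, (n, dim))

for it in range(200):
    vals = f(X)
    ib = int(vals.argmin())
    xb = X[ib].copy()                      # elitism: keep the best particle
    # Charge of each particle: better objective value -> larger charge.
    q = np.exp(-dim * (vals - vals[ib]) / max(vals.sum() - n * vals[ib], 1e-12))
    F = np.zeros_like(X)
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            d = X[j] - X[i]
            r2 = float(d @ d) + 1e-12
            # Attraction toward better particles, repulsion from worse ones.
            F[i] += (1 if vals[j] < vals[i] else -1) * q[i] * q[j] * d / r2
    step = rng.random((n, 1))
    X = np.clip(X + step * F / (np.linalg.norm(F, axis=1, keepdims=True) + 1e-12),
                lo, hi)
    X[ib] = xb

print("best value found:", f(X).min())
```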

  3. High speed railway track dynamics models, algorithms and applications

    CERN Document Server

    Lei, Xiaoyan

    2017-01-01

    This book systematically summarizes the latest research findings on high-speed railway track dynamics, made by the author and his research team over the past decade. It explores cutting-edge issues concerning the basic theory of high-speed railways, covering the dynamic theories, models, algorithms and engineering applications of the high-speed train and track coupling system. Presenting original concepts, systematic theories and advanced algorithms, the book places great emphasis on the precision and completeness of its content. The chapters are interrelated yet largely self-contained, allowing readers to either read through the book as a whole or focus on specific topics. It also combines theories with practice to effectively introduce readers to the latest research findings and developments in high-speed railway track dynamics. It offers a valuable resource for researchers, postgraduates and engineers in the fields of civil engineering, transportation, highway & railway engineering.

  4. A model of algorithmic representation of a business process

    Directory of Open Access Journals (Sweden)

    E. I. Koshkarova

    2014-01-01

    Full Text Available This article presents and justifies the possibility of developing a method for the estimation and optimization of enterprise business processes, based on the identity of two notions: an algorithm and a business process. The described method relies on extracting a recursive model from the business process, illustrated by the example of one process automated in a BPM system, and on the subsequent estimation and optimization of that process using estimation and optimization techniques applied to algorithms. The results of this investigation could be used by experts working in the field of reengineering of enterprise business processes, automation of business processes, and development of enterprise information systems.

  5. Model order reduction using eigen algorithm | Singh | International ...

    African Journals Online (AJOL)

    Model order reduction of large-scale dynamic systems, where the denominator polynomial is determined through the Eigen algorithm and the numerator polynomial via the factor division algorithm. In the Eigen algorithm, the most dominant Eigen value of both original and reduced order ...

  6. Modelling Paleoearthquake Slip Distributions using a Genetic Algorithm

    Science.gov (United States)

    Lindsay, Anthony; Simão, Nuno; McCloskey, John; Nalbant, Suleyman; Murphy, Shane; Bhloscaidh, Mairead Nic

    2013-04-01

    Along the Sunda trench, the annual growth rings of coral microatolls store long-term records of tectonic deformation. Spread over large areas of an active megathrust fault, they offer the possibility of high-resolution reconstructions of slip for a number of paleo-earthquakes. These data are complex, with spatial and temporal variations in uncertainty. Rather than assuming that any one model will uniquely fit the data, Monte Carlo Slip Estimation (MCSE) modelling produces a catalogue of possible models for each event. From each earthquake's catalogue, a model is selected and a possible history of slip along the fault reconstructed. By generating multiple histories, then finding the average slip during each earthquake, a probabilistic history of slip along the fault can be generated and areas that may have a large slip deficit identified. However, the MCSE technique requires the production of many hundreds of billions of models to yield the few models that fit the observed coral data. In an attempt to accelerate this process, we have designed a Genetic Algorithm (GA). The GA uses evolutionary operators to recombine the information held by a population of possible slip models to produce a set of new models, based on how well they reproduce a set of coral deformation data. Repeated iterations of the algorithm produce populations of improved models, each generation better satisfying the coral data. Preliminary results have shown the GA to be capable of recovering synthetically generated slip distributions, based on their displacements of sets of corals, faster than the MCSE technique. The results of systematic testing of the GA technique and its performance on both synthetic and observed coral displacement data will be presented.

  7. Exploration Of Deep Learning Algorithms Using Openacc Parallel Programming Model

    KAUST Repository

    Hamam, Alwaleed A.

    2017-03-13

    Deep learning is based on a set of algorithms that attempt to model high-level abstractions in data. Specifically, RBM is a deep learning algorithm that is used in this project to improve its time performance through an efficient parallel implementation with the OpenACC tool, applying the best possible optimizations to the RBM to harness the massively parallel power of NVIDIA GPUs. GPU development in the last few years has contributed to the growth of deep learning. OpenACC is a directive-based approach to computing, where directives provide compiler hints to accelerate code. The traditional Restricted Boltzmann Machine is a stochastic neural network that essentially performs a binary version of factor analysis. RBM is a useful neural-network basis for larger modern deep learning models, such as the Deep Belief Network. RBM parameters are estimated using an efficient training method called Contrastive Divergence. Parallel implementations of RBM are available using different models such as OpenMP and CUDA, but this project has been the first attempt to apply the OpenACC model to RBM.
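
    OpenACC itself is a C/Fortran directive layer, so as a language-neutral illustration, here is a minimal NumPy sketch of the CD-1 training step that such parallel implementations accelerate (dimensions, learning rate and the stand-in data are placeholders).

```python
import numpy as np

rng = np.random.default_rng(3)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

n_vis, n_hid, lr = 6, 4, 0.1
W = 0.01 * rng.standard_normal((n_vis, n_hid))
a = np.zeros(n_vis)                                    # visible biases
b = np.zeros(n_hid)                                    # hidden biases

data = rng.integers(0, 2, (100, n_vis)).astype(float)  # stand-in binary data

for epoch in range(50):
    for v0 in data:
        # Positive phase: hidden activations given the data.
        ph0 = sigmoid(v0 @ W + b)
        h0 = (rng.random(n_hid) < ph0).astype(float)
        # Negative phase: one Gibbs step (this is what makes it CD-1).
        pv1 = sigmoid(h0 @ W.T + a)
        v1 = (rng.random(n_vis) < pv1).astype(float)
        ph1 = sigmoid(v1 @ W + b)
        # Contrastive-divergence parameter updates.
        W += lr * (np.outer(v0, ph0) - np.outer(v1, ph1))
        a += lr * (v0 - v1)
        b += lr * (ph0 - ph1)
```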

  8. Focuss algorithm application in kinetic compartment modeling for PET tracer

    International Nuclear Information System (INIS)

    Huang Xinrui; Bao Shanglian

    2004-01-01

    Molecular imaging is an emerging discipline. Its application largely depends on the molecular discovery process of imaging probes and drugs, from the mouse to the patient, from research to clinical practice. Positron emission tomography (PET) can non-invasively monitor pharmacokinetic and functional processes of drugs in intact organisms at tracer concentrations by kinetic modeling. It is known that for all biological systems, linear or nonlinear, if the system is injected with a tracer in a steady state, the distribution of the tracer follows the kinetics of a linear compartmental system, which has sums of exponentials as solutions. Based on the general compartmental description of the tracer's fate in vivo, we present a novel kinetic modeling approach for the quantification of in vivo tracer studies with dynamic positron emission tomography (PET), which can determine a parsimonious model consistent with the measured data. This kinetic modeling technique allows for the estimation of parametric images from a voxel-based analysis and requires no a priori decision about the tracer's fate in vivo, instead determining the most appropriate model from the information contained within the kinetic data. Choosing a set of exponential functions, convolved with the plasma input function, as basis functions, the time activity curve of a region or a pixel can be written as a linear combination of the basis functions with corresponding coefficients. The number of non-zero coefficients returned corresponds to the model order, which is related to the number of tissue compartments. The system macro parameters are then determined using the focal underdetermined system solver (FOCUSS) algorithm, a nonparametric recursive linear estimation procedure for finding localized energy solutions from limited data. The FOCUSS algorithm usually converges very fast and thus requires only a few iterations. The effectiveness is verified by simulation and clinical
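
    As a rough illustration of the basis-function idea (not the authors' FOCUSS implementation), the Python sketch below builds exponential basis functions convolved with a hypothetical plasma input and recovers a sparse, non-negative coefficient vector; SciPy's nnls stands in for the sparse solve, and the input function, rate grid and coefficients are invented.

```python
import numpy as np
from scipy.optimize import nnls

t = np.linspace(0, 60, 121)                # minutes
dt = t[1] - t[0]
Cp = t * np.exp(-t / 4.0)                  # hypothetical plasma input function

# Basis: exponentials convolved with the plasma input, over a grid of rates.
thetas = np.logspace(-3, 0.5, 40)
B = np.stack([dt * np.convolve(np.exp(-th * t), Cp)[: len(t)] for th in thetas],
             axis=1)

# Synthetic tissue time activity curve from two "compartments", plus noise.
c_true = np.zeros(len(thetas)); c_true[[8, 25]] = [0.3, 0.1]
y = B @ c_true + 0.01 * np.random.default_rng(4).standard_normal(len(t))

# Sparse non-negative fit; the number of non-zero coefficients indicates the
# model order, i.e. the number of tissue compartments.
c, _ = nnls(B, y)
print("non-zero basis functions:", np.flatnonzero(c > 1e-3))
```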

  9. 300,000 IU or 600,000 IU of oral vitamin D3 for treatment of nutritional rickets: a randomized controlled trial.

    Science.gov (United States)

    Mittal, Hema; Rai, Sunita; Shah, Dheeraj; Madhu, S V; Mehrotra, Gopesh; Malhotra, Rajeev Kumar; Gupta, Piyush

    2014-04-01

    To evaluate the non-inferiority of a lower therapeutic dose (300,000 IU) in comparison to the standard dose (600,000 IU) of vitamin D for increasing serum 25(OH)D levels and achieving radiological recovery in nutritional rickets. Randomized, open-labeled, controlled trial. Tertiary care hospital. 76 children (median age 12 mo) with clinically and radiologically confirmed rickets. Oral vitamin D3 as 300,000 IU (Group 1; n=38) or 600,000 IU (Group 2; n=38) in a single day. Primary: serum 25(OH)D, 12 weeks after administration of vitamin D3; Secondary: radiological healing and serum parathormone at 12 weeks, and clinical and biochemical adverse effects. Serum 25(OH)D levels [geometric mean (95% CI)] increased significantly from baseline to 12 weeks after therapy in both groups [Group 1: 7.58 (5.50–10.44) to 16.06 (12.71–20.29) ng/mL]; the groups were comparable in serum parathormone and alkaline phosphatase levels at 12 weeks. Relative changes [ratio of geometric means (95% CI)] in serum PTH and alkaline phosphatase, 12 weeks after therapy, were 0.98 (0.7–1.47) and 0.92 (0.72–1.19), respectively. Serum 25(OH)D levels remained deficient (<20 ng/mL) in 63% (38/60) of children after 12 weeks of intervention [Group 1: 20/32 (62.5%); Group 2: 18/28 (64.3%)]. No major clinical adverse effects were noticed in any of the children. Hypercalcemia was documented in 2 children at 4 weeks (1 in each group) and 3 children at 12 weeks (1 in Group 1 and 2 in Group 2). None of the participants had hypercalciuria or hypervitaminosis D. A dose of 300,000 IU of vitamin D3, administered orally over a single day, is comparable to 600,000 IU for treating rickets in under-five children, although there is an unacceptably high risk of hypercalcemia in both groups. Neither regimen is effective in normalizing vitamin D status in the majority of patients 3 months after administration of the therapeutic dose.

  10. From Point Clouds to Architectural Models: Algorithms for Shape Reconstruction

    Science.gov (United States)

    Canciani, M.; Falcolini, C.; Saccone, M.; Spadafora, G.

    2013-02-01

    The use of terrestrial laser scanners in architectural survey applications has become more and more common. Raw data complexity, as given by scanner restitution, leads to several problems in design and 3D-modelling starting from point clouds. In this context we present a study on architectural sections and mathematical algorithms for their shape reconstruction, according to known or definite geometrical rules, focusing on shapes of different complexity. Each step of the semi-automatic algorithm has been developed using Mathematica software and CAD, integrating both programs in order to reconstruct a geometrical CAD model of the object. Our study is motivated by the fact that, for architectural survey, most three-dimensional modelling procedures concerning point clouds produce superabundant, but often unnecessary, information and are also very expensive in terms of CPU time, using ever more sophisticated hardware and software. On the contrary, it is important to simplify/decimate the point cloud in order to recognize a particular form out of a set of definite geometric/architectonic shapes. Such a process consists of several steps: first, the definition of plane sections and characterization of their architecture; second, the construction of a continuous plane curve depending on some parameters. In the third step we allow the selection on the curve of some nodal points with given specific characteristics (symmetry, tangency conditions, shadowing exclusion, corners, …). The fourth and last step is the construction of a best shape defined by comparison with an abacus of known geometrical elements, such as moulding profiles, leading to a precise architectonic section. The algorithms have been developed and tested in very different situations and are presented in a case study of complex geometries, such as some moulding profiles in the Church of San Carlo alle Quattro Fontane.

  11. Modeling the Swift Bat Trigger Algorithm with Machine Learning

    Science.gov (United States)

    Graff, Philip B.; Lien, Amy Y.; Baker, John G.; Sakamoto, Takanori

    2016-01-01

    To draw inferences about gamma-ray burst (GRB) source populations based on Swift observations, it is essential to understand the detection efficiency of the Swift Burst Alert Telescope (BAT). This study considers the problem of modeling the Swift/BAT triggering algorithm for long GRBs, a computationally expensive procedure, and models it using machine learning algorithms. A large sample of simulated GRBs from Lien et al. is used to train various models: random forests, boosted decision trees (with AdaBoost), support vector machines, and artificial neural networks. The best models have accuracies of ≥97% (≤3% error), a significant improvement on a cut in GRB flux, which has an accuracy of 89.6% (10.4% error). These models are then used to measure the detection efficiency of Swift as a function of redshift z, which is used to perform Bayesian parameter estimation on the GRB rate distribution. We find a local GRB rate density of n_0 ≈ 0.48^{+0.41}_{-0.23} Gpc^{-3} yr^{-1}, with power-law indices of n_1 ≈ 1.7^{+0.6}_{-0.5} and n_2 ≈ -5.9^{+5.7}_{-0.1} for GRBs above and below a break point of z_1 ≈ 6.8^{+2.8}_{-3.2}. This methodology improves upon earlier studies by more accurately modeling Swift detection and using it for fully Bayesian model fitting.
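
    A hedged sketch of the emulation step in Python with scikit-learn; the features, the toy "trigger" rule and all numbers are placeholders, not the Lien et al. sample or the real BAT trigger.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(5)

# Stand-in for the simulated GRB sample: peak flux, duration and background
# rate as features (all hypothetical here).
n = 5000
X = np.column_stack([
    rng.lognormal(0.0, 1.0, n),     # peak photon flux
    rng.lognormal(3.0, 1.0, n),     # burst duration
    rng.normal(1.0, 0.2, n),        # background level
])
# A toy "expensive trigger": fires when flux is high relative to background.
y = (X[:, 0] / X[:, 2] > 1.5).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print(f"emulator accuracy: {clf.score(X_te, y_te):.3f}")
# Detection efficiency as a function of, e.g., redshift then follows by
# averaging clf.predict over simulated bursts in each redshift bin.
```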

  12. Numerical model updating technique for structures using firefly algorithm

    Science.gov (United States)

    Sai Kubair, K.; Mohan, S. C.

    2018-03-01

    Numerical model updating is a technique for updating numerical models of existing structures in civil, mechanical, automotive, marine, aerospace and other engineering fields. The basic concept behind the technique is to update the numerical model so that it closely matches experimental data obtained from real or prototype test structures. The present work involves the development of a numerical model using MATLAB as a computational tool, with mathematical equations that define the experimental model. The firefly algorithm is used as the optimization tool in this study. In the updating process, a response parameter of the structure has to be chosen, which helps to correlate the numerical model with the experimental results. The variables for the updating can be material properties, geometrical properties of the model, or both. To verify the proposed technique, a cantilever beam is analyzed for its tip deflection and a space frame is analyzed for its natural frequencies. Both models are updated with their respective response values obtained from experimental results. The numerical results after updating show that a close relationship can be established between the experimental and numerical models.
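
    A minimal Python sketch of firefly-based model updating under assumed values: the cantilever tip-deflection formula delta = P L^3 / (3 EI) is standard, but the "measured" value, search range and algorithm constants are ours, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(6)

# Hypothetical updating problem: find the flexural rigidity EI of a cantilever
# so that the model tip deflection P*L^3 / (3*EI) matches a measured value.
P, L, delta_meas = 1.0e3, 2.0, 0.012          # N, m, m (assumed measurement)

def objective(EI):
    return (P * L**3 / (3.0 * EI) - delta_meas) ** 2

n, alpha, beta0, gamma = 15, 0.05, 1.0, 0.01
x = rng.uniform(1e5, 1e6, n)                  # candidate EI values
scale = 1e5                                   # step scale for this search range

for it in range(100):
    f = objective(x)
    for i in range(n):
        for j in range(n):
            if f[j] < f[i]:                   # j is brighter: move i toward j
                r2 = ((x[i] - x[j]) / scale) ** 2
                x[i] += beta0 * np.exp(-gamma * r2) * (x[j] - x[i]) \
                        + alpha * scale * (rng.random() - 0.5)
        f[i] = objective(x[i])                # refresh brightness after moving

best = x[np.argmin(objective(x))]
print(f"updated EI = {best:.3e}  (exact: {P * L**3 / (3 * delta_meas):.3e})")
```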

  13. Development of modelling algorithm of technological systems by statistical tests

    Science.gov (United States)

    Shemshura, E. A.; Otrokov, A. V.; Chernyh, V. G.

    2018-03-01

    The paper tackles the problem of economic assessment of design efficiency for various technological systems at the stage of their operation. The modelling algorithm for a technological system, built on statistical tests and accounting for the reliability index, allows estimating the level of machinery technical excellence and assessing the efficiency of design reliability against performance. The economic feasibility of its application is to be determined on the basis of the service quality of a technological system, with further forecasting of the volumes and range of spare parts supply.

  14. Heterogeneous Agents Model with the Worst Out Algorithm

    Czech Academy of Sciences Publication Activity Database

    Vošvrda, Miloslav; Vácha, Lukáš

    I, č. 1 (2007), s. 54-66 ISSN 1802-4696 R&D Projects: GA MŠk(CZ) LC06075; GA ČR(CZ) GA402/06/0990 Grant - others:GA UK(CZ) 454/2004/A-EK/FSV Institutional research plan: CEZ:AV0Z10750506 Keywords : Efficient Markets Hypothesis * Fractal Market Hypothesis * agents' investment horizons * agents' trading strategies * technical trading rules * heterogeneous agent model with stochastic memory * Worst out Algorithm Subject RIV: AH - Economics

  15. Two-Stage Electricity Demand Modeling Using Machine Learning Algorithms

    Directory of Open Access Journals (Sweden)

    Krzysztof Gajowniczek

    2017-10-01

    Full Text Available Forecasting of electricity demand has become one of the most important areas of research in the electric power industry, as it is a critical component of cost-efficient power system management and planning. In this context, accurate and robust load forecasting plays a key role in reducing generation costs and in maintaining the reliability of the power system. However, demand peaks in the power system make forecasts inaccurate and error-prone. In this paper, our contributions comprise a proposed data-mining scheme for demand modeling through peak detection, as well as the use of this information to feed the forecasting system. For this purpose, we have taken a different approach from that of time series forecasting, representing the task as a two-stage pattern recognition problem. We have developed a peak classification model followed by a forecasting model to estimate an aggregated demand volume. We have utilized a set of machine learning algorithms to benefit from both accurate detection of peaks and precise forecasts, as applied to the Polish power system. The key finding is that the algorithms can detect 96.3% of electricity peaks (load value equal to or above the 99th percentile of the load distribution) and deliver accurate forecasts, with a mean absolute percentage error (MAPE) of 3.10% and a resistant mean absolute percentage error (r-MAPE) of 2.70% for the 24 h forecasting horizon.

  16. Using genetic algorithms to calibrate a water quality model.

    Science.gov (United States)

    Liu, Shuming; Butler, David; Brazier, Richard; Heathwaite, Louise; Khu, Soon-Thiam

    2007-03-15

    With the increasing concern over the impact of diffuse pollution on water bodies, many diffuse pollution models have been developed in the last two decades. A common obstacle in using such models is determining the values of the model parameters. This is especially true when a model has a large number of parameters, which makes a full-range calibration expensive in terms of computing time. Compared with conventional optimisation approaches, soft computing techniques often have a faster convergence speed and are more efficient at global optimum searches. This paper presents an attempt to calibrate a diffuse pollution model using a genetic algorithm (GA). Designed to simulate the export of phosphorus from diffuse sources (agricultural land) and point sources (human), the Phosphorus Indicators Tool (PIT) version 1.1, on which this paper is based, consists of 78 parameters. Previous studies have indicated the difficulty of full-range model calibration due to the number of parameters involved. In this paper, a GA was employed to carry out the model calibration with all parameters involved. A sensitivity analysis was also performed to investigate the impact of the GA's operators on its effectiveness in optimum searching. The calibration yielded satisfactory results and required reasonable computing time. The application of the PIT model to the Windrush catchment with optimum parameter values is demonstrated. The annual P loss was predicted as 4.4 kg P/ha/yr, which agrees well with the observed value.

  17. A new parallelization algorithm of ocean model with explicit scheme

    Science.gov (United States)

    Fu, X. D.

    2017-08-01

    This paper focuses on the parallelization of an ocean model with an explicit scheme, one of the most commonly used schemes in the discretization of the governing equations of ocean models. The characteristic of the explicit scheme is that the calculation is simple and that the value at a given grid point depends only on values from the previous time step, which means that one does not need to solve sparse linear equations when integrating the governing equations of the ocean model. Exploiting these characteristics, this paper designs a parallel algorithm named halo cells update, which requires only a tiny modification of the original ocean model and little change to its space and time steps, and which parallelizes the model by introducing a transmission module between sub-domains. This paper takes GRGO (Global Reduced Gravity Ocean model) as an example to implement the parallelization with halo update. The results demonstrate that high speedups can be achieved for different problem sizes.
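
    A minimal sketch of the halo-cells idea using mpi4py (to be launched under mpirun; the 1-D toy diffusion stencil and all names are ours, not the GRGO code): each rank owns an interior block plus one ghost cell per side, and neighbours exchange halos before every explicit step.

```python
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

nloc = 100                                   # interior points per sub-domain
u = np.zeros(nloc + 2)                       # one halo cell on each side
u[1:-1] = rank                               # toy initial condition

left = rank - 1 if rank > 0 else MPI.PROC_NULL
right = rank + 1 if rank < size - 1 else MPI.PROC_NULL

for step in range(100):
    # Exchange halo cells with neighbours before each explicit update.
    comm.Sendrecv(u[1:2], dest=left, recvbuf=u[-1:], source=right)
    comm.Sendrecv(u[-2:-1], dest=right, recvbuf=u[0:1], source=left)
    # Explicit diffusion step: each value depends only on the previous step.
    u[1:-1] = u[1:-1] + 0.25 * (u[:-2] - 2.0 * u[1:-1] + u[2:])
```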

  18. Genetic Algorithm Optimization of Artificial Neural Networks for Hydrological Modelling

    Science.gov (United States)

    Abrahart, R. J.

    2004-05-01

    This paper will consider the case for genetic algorithm optimization in the development of an artificial neural network model. It will provide a methodological evaluation of reported investigations with respect to hydrological forecasting and prediction. The intention in such operations is to develop a superior modelling solution that will be: more accurate in terms of output precision and model estimation skill; more tractable in terms of personal requirements and end-user control; and/or more robust in terms of conceptual and mechanical power with respect to adverse conditions. The genetic algorithm optimization toolbox can perform a number of specific roles, and it is the harmonious and supportive relationship between neural networks and genetic algorithms that will be highlighted and assessed. There are several neural network mechanisms and procedures that could be enhanced, and potential benefits are possible at different stages in the design and construction of an operational hydrological model, e.g. division of inputs; identification of structure; initialization of connection weights; calibration of connection weights; breeding operations between successful models; and output fusion associated with the development of ensemble solutions. Each set of opportunities will be discussed and evaluated. Two strategic questions will also be considered: [i] should optimization be conducted as a set of small individual procedures or as one large holistic operation; [ii] what specific function or set of weighted vectors should be optimized in a complex software product, e.g. timings, volumes, or quintessential hydrological attributes related to the 'problem situation' that might require the development of flood forecasting, drought estimation, or record infilling applications. The paper will conclude with a consideration of hydrological forecasting solutions developed on the combined methodologies of co-operative co-evolution and

  19. A MATLAB GUI based algorithm for modelling Magnetotelluric data

    Science.gov (United States)

    Timur, Emre; Onsen, Funda

    2016-04-01

    The magnetotelluric method is an electromagnetic survey technique that images the electrical resistivity distribution of layers at subsurface depths. It simultaneously measures the total electromagnetic field components, i.e. the time-varying magnetic field B(t) and the induced electric field E(t). Forward modelling of the magnetotelluric method is very useful for survey planning, for understanding the method (especially for students), and as part of the iteration process in inverting measured data. The MTINV program can be used to model and interpret geophysical electromagnetic (EM) magnetotelluric (MT) measurements using a horizontally layered earth model. This program uses either the apparent resistivity and phase components of the MT data together or the apparent resistivity data alone. Parameter optimization, based on a linearized inversion method, can be utilized in 1D interpretations. In this study, a new MATLAB GUI based algorithm has been written for the 1D forward modelling of the magnetotelluric response function for multiple layers, for use in educational studies. The code also includes an automatic Gaussian noise option with a user-specified ratio. Numerous applications were carried out for 2-, 3- and 4-layer models, and the resulting theoretical data were interpreted using MTINV in order to evaluate the initial parameters and the effect of noise. Keywords: Education, Forward Modelling, Inverse Modelling, Magnetotelluric
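
    For illustration, a possible Python implementation of the standard 1D MT forward recursion for a layered half-space; the three-layer example model and the noise level are arbitrary, and this is not the MTINV code.

```python
import numpy as np

def mt1d(rho, h, freqs):
    """Apparent resistivity and phase of a layered half-space (impedance recursion).

    rho : layer resistivities in ohm-m (last entry is the half-space)
    h   : thicknesses in m of the layers above the half-space
    """
    mu0 = 4e-7 * np.pi
    rho_a, phase = [], []
    for f in freqs:
        w = 2.0 * np.pi * f
        Z = np.sqrt(1j * w * mu0 * rho[-1])        # bottom half-space impedance
        for j in range(len(h) - 1, -1, -1):        # recurse upward through layers
            k = np.sqrt(1j * w * mu0 / rho[j])
            Z0 = 1j * w * mu0 / k                  # intrinsic impedance of layer j
            th = np.tanh(k * h[j])
            Z = Z0 * (Z + Z0 * th) / (Z0 + Z * th)
        rho_a.append(abs(Z) ** 2 / (w * mu0))
        phase.append(np.degrees(np.angle(Z)))
    return np.array(rho_a), np.array(phase)

freqs = np.logspace(-3, 3, 40)
rho_a, phase = mt1d(rho=[100.0, 10.0, 1000.0], h=[500.0, 1000.0], freqs=freqs)
# Optional Gaussian noise, mimicking the GUI's noise option:
rho_noisy = rho_a * (1 + 0.05 * np.random.default_rng(7).standard_normal(rho_a.size))
```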

  20. "Updates to Model Algorithms & Inputs for the Biogenic ...

    Science.gov (United States)

    We have developed new canopy emission algorithms and land use data for BEIS. Simulations with BEIS v3.4 and these updates in CMAQ v5.0.2 are compared to the Model of Emissions of Gases and Aerosols from Nature (MEGAN) and evaluated against observations. This has resulted in improvements in model evaluations of modeled isoprene, NOx, and O3. The National Exposure Research Laboratory (NERL) Atmospheric Modeling and Analysis Division (AMAD) conducts research in support of the EPA's mission to protect human health and the environment. The AMAD research program is engaged in developing and evaluating predictive atmospheric models on all spatial and temporal scales for forecasting air quality and for assessing changes in air quality and air pollutant exposures, as affected by changes in ecosystem management and regulatory decisions. AMAD is responsible for providing a sound scientific and technical basis for regulatory policies based on air quality models to improve ambient air quality. The models developed by AMAD are being used by EPA, NOAA, and the air pollution community in understanding and forecasting not only the magnitude of the air pollution problem, but also in developing emission control policies and regulations for air quality improvements.

  1. Epidemic Modelling by Ripple-Spreading Network and Genetic Algorithm

    Directory of Open Access Journals (Sweden)

    Jian-Qin Liao

    2013-01-01

    Full Text Available Mathematical analysis and modelling is central to infectious disease epidemiology. This paper, inspired by the natural ripple-spreading phenomenon, proposes a novel ripple-spreading network model for the study of infectious disease transmission. The new epidemic model naturally has good potential for capturing many spatial and temporal features observed in the outbreak of plagues. In particular, a stochastic ripple-spreading process simulates well the effect of random contacts and movements of individuals on the probability of infection, which is usually a challenging issue in epidemic modeling. Some ripple-spreading related parameters, such as the threshold and amplifying factor of nodes, are well suited to describing the importance of individuals' physical fitness and immunity. The new model is rich in parameters to incorporate many real factors, such as public health services and policies, and it is highly flexible to modifications. A genetic algorithm is used to tune the parameters of the model by referring to historical data of an epidemic. The well-tuned model can then be used for analysis and forecasting purposes. The effectiveness of the proposed method is illustrated by simulation results.

  2. Geometric algorithms for electromagnetic modeling of large scale structures

    Science.gov (United States)

    Pingenot, James

    With the rapid increase in the speed and complexity of integrated circuit designs, 3D full wave and time domain simulation of chip, package, and board systems becomes more and more important for the engineering of modern designs. Much effort has been applied to the problem of electromagnetic (EM) simulation of such systems in recent years. Major advances in boundary element EM simulations have led to O(n log n) simulations using iterative methods and advanced Fast Fourier Transform (FFT), Multi-Level Fast Multi-pole Methods (MLFMM), and low-rank matrix compression techniques. These advances have been augmented with an explosion of multi-core and distributed computing technologies; however, realization of the full scale of these capabilities has been hindered by cumbersome and inefficient geometric processing. Anecdotal evidence from industry suggests that users may spend around 80% of turn-around time manipulating the geometric model and mesh. This dissertation addresses this problem by developing fast and efficient data structures and algorithms for 3D modeling of chips, packages, and boards. The methods proposed here harness the regular, layered 2D nature of the models (often referred to as "2.5D") to optimize these systems for large geometries. First, an architecture is developed for efficient storage and manipulation of 2.5D models. The architecture gives special attention to native representation of structures across various input models and special issues particular to 3D modeling. The 2.5D structure is then used to optimize the mesh systems. First, circuit/EM co-simulation techniques are extended to provide electrical connectivity between objects. This concept is used to connect independently meshed layers, allowing simple and efficient 2D mesh algorithms to be used in creating a 3D mesh. Here, adaptive meshing is used to ensure that the mesh accurately models the physical unknowns (current and charge). Utilizing the regularized nature of 2.5D objects and

  3. Using the fuzzy modeling for the retrieval algorithms

    International Nuclear Information System (INIS)

    Mohamed, A.H

    2010-01-01

    A rapid growth in the number and size of images in databases and on the World Wide Web (WWW) has created a strong need for more efficient search and retrieval systems to exploit the benefits of this large amount of information. However, the collection of this information is now based on image technology, and a limitation of current image analysis techniques forces most image retrieval systems to use some form of text description provided by users as the basis to index and retrieve images. To overcome this problem, the proposed system introduces the use of fuzzy modeling to describe images through linguistic ambiguities. The proposed system can also include vague or fuzzy terms when modeling queries to match the image descriptions in the retrieval process. This facilitates the indexing and retrieval process, increases performance, and decreases computational time. Therefore, the proposed system can improve the performance of traditional image retrieval algorithms.

  4. Integer programming model for optimizing bus timetable using genetic algorithm

    Science.gov (United States)

    Wihartiko, F. D.; Buono, A.; Silalahi, B. P.

    2017-01-01

    A bus timetable gives passengers information and ensures the availability of bus services. The timetable is optimal when the bus trip frequency adapts to passenger demand: in peak hours the number of bus trips should be larger than in off-peak hours. If trips are more frequent than the optimal condition requires, the bus operator incurs high operating costs; conversely, if there are fewer trips than optimal, passengers receive poor service quality. In this paper, the bus timetabling problem is solved by an integer programming model with a modified genetic algorithm. The modifications concern the chromosome design, the initial population recovery technique, chromosome reconstruction, and chromosome extermination at specific generations. The model yields the optimal solution with an accuracy of 99.1%.
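
    A minimal Python sketch of the flavour of such a GA (an integer chromosome giving trips per time slot; the demand figures, costs and operators are placeholders, and the paper's specific modifications are not reproduced).

```python
import numpy as np

rng = np.random.default_rng(8)

slots = 18                                       # hourly slots in a service day
demand = rng.integers(50, 600, slots)            # hypothetical passengers/slot
cap, cost_per_trip, penalty = 60, 100.0, 5.0     # bus capacity and cost weights

def fitness(chrom):                              # chrom[i] = trips in slot i
    unserved = np.maximum(demand - cap * chrom, 0)
    return -(cost_per_trip * chrom.sum() + penalty * unserved.sum())

pop = rng.integers(1, 12, (60, slots))
for gen in range(300):
    f = np.array([fitness(c) for c in pop])
    # Tournament selection.
    i, j = rng.integers(0, 60, (2, 60))
    parents = np.where((f[i] > f[j])[:, None], pop[i], pop[j])
    # One-point crossover producing both complementary children.
    cut = rng.integers(1, slots)
    children = np.vstack([
        np.concatenate([parents[::2, :cut], parents[1::2, cut:]], axis=1),
        np.concatenate([parents[1::2, :cut], parents[::2, cut:]], axis=1),
    ])
    # Integer mutation: bump a slot up or down by one trip.
    m = rng.random(children.shape) < 0.05
    pop = np.clip(children + m * rng.integers(-1, 2, children.shape), 0, 15)

best = pop[np.argmax([fitness(c) for c in pop])]
print("trips per slot:", best)
```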

  5. Computational Analysis of 3D Ising Model Using Metropolis Algorithms

    International Nuclear Information System (INIS)

    Sonsin, A F; Cortes, M R; Nunes, D R; Gomes, J V; Costa, R S

    2015-01-01

    We simulate the Ising model with the Monte Carlo method and use the Metropolis algorithm to update the distribution of spins. We found that, in the specific case of the three-dimensional Ising model, the Metropolis method is efficient. Studying the system near the phase transition point, we observe that the magnetization goes to zero. In our simulations we analyzed the behavior of the magnetization and the magnetic susceptibility to verify the paramagnetic-to-ferromagnetic phase transition. The behavior of the magnetization and of the magnetic susceptibility as a function of temperature suggests a phase transition around kT/J ≈ 4.5, and the finite-size problem of the lattice was evident, motivating work with larger lattices. (paper)
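
    A compact Python sketch of the Metropolis update for the 3D Ising model; the lattice size and sweep counts are kept deliberately small for illustration.

```python
import numpy as np

rng = np.random.default_rng(9)
L, J = 16, 1.0
spins = rng.choice([-1, 1], size=(L, L, L))

def sweep(s, beta):
    """One Metropolis sweep over the 3D lattice (periodic boundaries)."""
    for _ in range(s.size):
        i, j, k = rng.integers(0, L, 3)
        nb = (s[(i+1) % L, j, k] + s[(i-1) % L, j, k]
              + s[i, (j+1) % L, k] + s[i, (j-1) % L, k]
              + s[i, j, (k+1) % L] + s[i, j, (k-1) % L])
        dE = 2.0 * J * s[i, j, k] * nb            # energy cost of flipping
        if dE <= 0 or rng.random() < np.exp(-beta * dE):
            s[i, j, k] *= -1

for kT in (3.5, 4.5, 5.5):                        # below, near and above kT/J ~ 4.5
    s = spins.copy()
    for _ in range(200):
        sweep(s, 1.0 / kT)
    print(f"kT/J = {kT}: |m| = {abs(s.mean()):.3f}")
```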

  6. Dataflow-Driven Crowdsourcing: Relational Models and Algorithms

    Directory of Open Access Journals (Sweden)

    D. A. Ustalov

    2016-01-01

    Full Text Available Recently, microtask crowdsourcing has become a popular approach for addressing various data mining problems. Crowdsourcing workflows for approaching such problems are composed of several data processing stages which require consistent representation for making the work reproducible. This paper is devoted to the problem of reproducibility and formalization of the microtask crowdsourcing process. A computational model for microtask crowdsourcing based on an extended relational model and a dataflow computational model has been proposed. The proposed collaborative dataflow computational model is designed for processing the input data sources by executing annotation stages and automatic synchronization stages simultaneously. Data processing stages and connections between them are expressed by using collaborative computation workflows represented as loosely connected directed acyclic graphs. A synchronous algorithm for executing such workflows has been described. The computational model has been evaluated by applying it to two tasks from the computational linguistics field: concept lexicalization refining in electronic thesauri and establishing hierarchical relations between such concepts. The “Add–Remove–Confirm” procedure is designed for adding the missing lexemes to the concepts while removing the odd ones. The “Genus–Species–Match” procedure is designed for establishing “is-a” relations between the concepts provided with the corresponding word pairs. The experiments involving both volunteers from popular online social networks and paid workers from crowdsourcing marketplaces confirm applicability of these procedures for enhancing lexical resources. 

  7. Toward Developing Genetic Algorithms to Aid in Critical Infrastructure Modeling

    Energy Technology Data Exchange (ETDEWEB)

    2007-05-01

    Today's society relies upon an array of complex national and international infrastructure networks, such as transportation, telecommunications, finance and energy. Understanding these interdependencies is necessary in order to protect our critical infrastructure. The Critical Infrastructure Modeling System, CIMS©, examines the interrelationships between infrastructure networks. CIMS© development is sponsored by the National Security Division at the Idaho National Laboratory (INL) in its ongoing mission of providing critical infrastructure protection and preparedness. A genetic algorithm (GA) is an optimization technique based on Darwin's theory of evolution. A GA can be coupled with CIMS© to search for optimum ways to protect infrastructure assets. This includes identifying the optimum assets to reinforce or protect, testing additions or changes to infrastructure before implementation, or finding the optimum response to an emergency for response planning. This paper describes the addition of a GA to infrastructure modeling for infrastructure planning. It first introduces the CIMS© infrastructure modeling software used as the modeling engine to support the GA. Next, the GA techniques and parameters are defined. Then a test scenario illustrates the integration with CIMS© and the preliminary results.

  8. Variable selection in Logistic regression model with genetic algorithm.

    Science.gov (United States)

    Zhang, Zhongheng; Trevino, Victor; Hoseini, Sayed Shahabuddin; Belciug, Smaranda; Boopathi, Arumugam Manivanna; Zhang, Ping; Gorunescu, Florin; Subha, Velappan; Dai, Songshi

    2018-02-01

    Variable or feature selection is one of the most important steps in model specification. Especially in the case of medical decision-making, the direct use of a medical database, without a previous analysis and preprocessing step, is often counterproductive. In this way, variable selection represents the method of choosing the most relevant attributes from the database in order to build robust learning models and, thus, to improve the performance of the models used in the decision process. In biomedical research, the purpose of variable selection is to select clinically important and statistically significant variables, while excluding unrelated or noise variables. A variety of methods exist for variable selection, but none of them is without limitations. For example, the widely used stepwise approach adds the best variable in each cycle, generally producing an acceptable set of variables; nevertheless, it is limited by the fact that it is commonly trapped in local optima. The best subset approach can systematically search the entire covariate pattern space, but the solution pool can be extremely large with tens to hundreds of variables, which is the case in today's clinical data. Genetic algorithms (GA) are heuristic optimization approaches and can be used for variable selection in multivariable regression models. This tutorial paper aims to provide a step-by-step approach to the use of GA in variable selection. The R code provided in the text can be extended and adapted to other data analysis needs.
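
    The tutorial's own code is in R; as an independent Python sketch with scikit-learn (synthetic data, arbitrary rates and population sizes), each chromosome is a binary inclusion mask scored by the cross-validated AUC of a logistic regression.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(10)
X, y = make_classification(n_samples=400, n_features=30, n_informative=5,
                           random_state=0)

def fitness(mask):
    if not mask.any():
        return 0.0
    model = LogisticRegression(max_iter=1000)
    return cross_val_score(model, X[:, mask], y, cv=5, scoring="roc_auc").mean()

pop = rng.random((30, 30)) < 0.5                  # binary chromosomes
for gen in range(25):
    f = np.array([fitness(m) for m in pop])
    i, j = rng.integers(0, 30, (2, 30))
    parents = np.where((f[i] > f[j])[:, None], pop[i], pop[j])   # tournament
    mates = parents[rng.permutation(30)]
    cross = rng.random(pop.shape) < 0.5
    children = np.where(cross, parents, mates)                   # uniform crossover
    pop = children ^ (rng.random(pop.shape) < 0.02)              # bit-flip mutation

best = pop[np.argmax([fitness(m) for m in pop])]
print("selected variables:", np.flatnonzero(best))
```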

  9. An efficient algorithm for corona simulation with complex chemical models

    Science.gov (United States)

    Villa, Andrea; Barbieri, Luca; Gondola, Marco; Leon-Garzon, Andres R.; Malgesini, Roberto

    2017-05-01

    The simulation of cold plasma discharges is a leading field of applied sciences with many applications ranging from pollutant control to surface treatment. Many of these applications call for the development of novel numerical techniques to implement fully three-dimensional corona solvers that can utilize complex and physically detailed chemical databases. This is a challenging task since it multiplies the difficulties inherent to a three-dimensional approach by the complexity of databases comprising tens of chemical species and hundreds of reactions. In this paper a novel approach, capable of reducing significantly the computational burden, is developed. The proposed method is based on a proper time stepping algorithm capable of decomposing the original problem into simpler ones: each of them has then been tackled with either finite element, finite volume or ordinary differential equations solvers. This last solver deals with the chemical model and its efficient implementation is one of the main contributions of this work.

  10. Two New PRP Conjugate Gradient Algorithms for Minimization Optimization Models.

    Directory of Open Access Journals (Sweden)

    Gonglin Yuan

    Full Text Available Two new PRP conjugate gradient algorithms are proposed in this paper, based on two modified PRP conjugate gradient methods: the first algorithm is proposed for solving unconstrained optimization problems, and the second for solving nonlinear equations. The first method uses two kinds of information: function values and gradient values. Both methods possess some good properties: (1) β_k ≥ 0; (2) the search direction has the trust-region property without the use of any line search method; (3) the search direction has the sufficient descent property without the use of any line search method. Under some suitable conditions, we establish the global convergence of the two algorithms. We conduct numerical experiments to evaluate our algorithms. The numerical results indicate that the first algorithm is effective and competitive for solving unconstrained optimization problems and that the second algorithm is effective for solving large-scale nonlinear equations.
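
    For orientation, a generic Python sketch of nonlinear conjugate gradient with the PRP+ rule β_k = max(β_k^PRP, 0), which enforces property (1); note that the paper's methods obtain the trust-region and sufficient-descent properties without any line search, whereas this sketch uses a simple Armijo backtracking search.

```python
import numpy as np

def prp_plus_cg(f, grad, x0, iters=5000, tol=1e-8):
    """Nonlinear CG with beta_k = max(beta_k^PRP, 0) and Armijo backtracking."""
    x = x0.astype(float).copy()
    g = grad(x)
    d = -g
    for _ in range(iters):
        if np.linalg.norm(g) < tol:
            break
        if g @ d >= 0:                      # safeguard: restart along -g
            d = -g
        t, fx = 1.0, f(x)
        while f(x + t * d) > fx + 1e-4 * t * (g @ d) and t > 1e-12:
            t *= 0.5                        # Armijo backtracking line search
        x_new = x + t * d
        g_new = grad(x_new)
        beta = max(g_new @ (g_new - g) / (g @ g), 0.0)   # PRP+, so beta_k >= 0
        d = -g_new + beta * d
        x, g = x_new, g_new
    return x

# Example: the Rosenbrock function, minimum at (1, 1).
f = lambda x: (1 - x[0])**2 + 100 * (x[1] - x[0]**2)**2
grad = lambda x: np.array([-2 * (1 - x[0]) - 400 * x[0] * (x[1] - x[0]**2),
                           200 * (x[1] - x[0]**2)])
print(prp_plus_cg(f, grad, np.array([-1.2, 1.0])))
```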

  11. Calibration of Uncertainty Analysis of the SWAT Model Using Genetic Algorithms and Bayesian Model Averaging

    Science.gov (United States)

    In this paper, the Genetic Algorithms (GA) and Bayesian model averaging (BMA) were combined to simultaneously conduct calibration and uncertainty analysis for the Soil and Water Assessment Tool (SWAT). In this hybrid method, several SWAT models with different structures are first selected; next GA i...

  12. Maximum Likelihood in a Generalized Linear Finite Mixture Model by Using the EM Algorithm

    NARCIS (Netherlands)

    Jansen, R.C.

    A generalized linear finite mixture model and an EM algorithm to fit the model to data are described. By this approach the finite mixture model is embedded within the general framework of generalized linear models (GLMs). Implementation of the proposed EM algorithm can be readily done in statistical
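
    As the simplest concrete instance of the approach (an ordinary two-component Gaussian mixture rather than the paper's GLM embedding), a short Python EM loop:

```python
import numpy as np

rng = np.random.default_rng(11)
# Synthetic data from two univariate Gaussians.
x = np.concatenate([rng.normal(-2, 1, 300), rng.normal(3, 0.5, 200)])

# Initial guesses for the weights, means and variances.
w, mu, var = np.array([0.5, 0.5]), np.array([-1.0, 1.0]), np.array([1.0, 1.0])

for it in range(100):
    # E-step: responsibility of each component for each point.
    dens = w * np.exp(-(x[:, None] - mu) ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)
    r = dens / dens.sum(axis=1, keepdims=True)
    # M-step: weighted maximum-likelihood updates.
    nk = r.sum(axis=0)
    w = nk / len(x)
    mu = (r * x[:, None]).sum(axis=0) / nk
    var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / nk

print("weights:", w, "means:", mu, "variances:", var)
```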

  13. GRAVITATIONAL LENS MODELING WITH GENETIC ALGORITHMS AND PARTICLE SWARM OPTIMIZERS

    International Nuclear Information System (INIS)

    Rogers, Adam; Fiege, Jason D.

    2011-01-01

    Strong gravitational lensing of an extended object is described by a mapping from source to image coordinates that is nonlinear and cannot generally be inverted analytically. Determining the structure of the source intensity distribution also requires a description of the blurring effect due to a point-spread function. This initial study uses an iterative gravitational lens modeling scheme based on the semilinear method to determine the linear parameters (source intensity profile) of a strongly lensed system. Our 'matrix-free' approach avoids construction of the lens and blurring operators while retaining the least-squares formulation of the problem. The parameters of an analytical lens model are found through nonlinear optimization by an advanced genetic algorithm (GA) and particle swarm optimizer (PSO). These global optimization routines are designed to explore the parameter space thoroughly, mapping model degeneracies in detail. We develop a novel method that determines the L-curve for each solution automatically, which represents the trade-off between the image χ² and regularization effects, and allows an estimate of the optimally regularized solution for each lens parameter set. In the final step of the optimization procedure, the lens model with the lowest χ² is used while the global optimizer solves for the source intensity distribution directly. This allows us to accurately determine the number of degrees of freedom in the problem to facilitate comparison between lens models and enforce positivity on the source profile. In practice, we find that the GA conducts a more thorough search of the parameter space than the PSO.

  14. The use of genetic algorithms to model protoplanetary discs

    Science.gov (United States)

    Hetem, Annibal; Gregorio-Hetem, Jane

    2007-12-01

    The protoplanetary discs of T Tauri and Herbig Ae/Be stars have previously been studied using geometric disc models to fit their spectral energy distribution (SED). The simulations provide a means to reproduce the signatures of various circumstellar structures, which are related to different levels of infrared excess. With the aim of improving our previous model, which assumed a simple flat-disc configuration, we adopt here a reprocessing flared-disc model that assumes hydrostatic, radiative equilibrium. We have developed a method to optimize the parameter estimation based on genetic algorithms (GAs). This paper describes the implementation of the new code, which has been applied to Herbig stars from the Pico dos Dias Survey catalogue, in order to illustrate the quality of the fitting for a variety of SED shapes. The star AB Aur was used as a test of the GA parameter estimation, and demonstrates that the new code reproduces successfully a canonical example of the flared-disc model. The GA method gives a good quality of fit, but the range of input parameters must be chosen with caution, as unrealistic disc parameters can be derived. It is confirmed that the flared-disc model fits the flattened SEDs typical of Herbig stars; however, embedded objects (increasing SED slope) and debris discs (steeply decreasing SED slope) are not well fitted with this configuration. Even considering the limitation of the derived parameters, the automatic process of SED fitting provides an interesting tool for the statistical analysis of the circumstellar luminosity of large samples of young stars.

  15. The Support Reduction Algorithm for Computing Non-Parametric Function Estimates in Mixture Models

    OpenAIRE

    GROENEBOOM, PIET; JONGBLOED, GEURT; WELLNER, JON A.

    2008-01-01

    In this paper, we study an algorithm (which we call the support reduction algorithm) that can be used to compute non-parametric M-estimators in mixture models. The algorithm is compared with natural competitors in the context of convex regression and the ‘Aspect problem’ in quantum physics.

  16. A simple and efficient parallel FFT algorithm using the BSP model

    NARCIS (Netherlands)

    Bisseling, R.H.; Inda, M.A.

    2000-01-01

    In this paper we present a new parallel radix FFT algorithm based on the BSP model. Our parallel algorithm uses the group-cyclic distribution family, which makes it simple to understand and easy to implement. We show how to reduce the communication cost of the algorithm by a factor of three in the case

  17. A Stress Update Algorithm for Constitutive Models of Glassy Polymers

    Science.gov (United States)

    Danielsson, Mats

    2013-06-01

    A semi-implicit stress update algorithm is developed for the elastic-viscoplastic behavior of glassy polymers. The case of near rate-insensitivity is addressed, and the stress update algorithm is designed to handle this case robustly. A consistent tangent stiffness matrix is derived based on a full linearization of the internal virtual work. The stress update algorithm and (a slightly modified) tangent stiffness matrix are implemented in a commercial finite element program. The stress update algorithm is tested on a large boundary value problem for illustrative purposes.

  18. Genetic algorithm based optimization of advanced solar cell designs modeled in Silvaco Atlas™

    OpenAIRE

    Utsler, James

    2006-01-01

    A genetic algorithm was used to optimize the power output of multi-junction solar cells. Solar cell operation was modeled using the Silvaco ATLAS™ software. The output of the ATLAS™ simulation runs served as the input to the genetic algorithm. The genetic algorithm was run as a diffusing computation on a network of eighteen dual-processor nodes. Results showed that the genetic algorithm produced better power output optimizations when compared with the results obtained using the hill cli...

  19. A Cost-Effective Tracking Algorithm for Hypersonic Glide Vehicle Maneuver Based on Modified Aerodynamic Model

    Directory of Open Access Journals (Sweden)

    Yu Fan

    2016-10-01

    Full Text Available In order to defend against the hypersonic glide vehicle (HGV), a cost-effective single-model tracking algorithm using the Cubature Kalman filter (CKF) is proposed in this paper, based on a modified aerodynamic model (MAM) as the process equation and a radar measurement model as the measurement equation. In the existing aerodynamic model, the two control variables, attack angle and bank angle, cannot be measured by existing radar equipment, and their control laws are not known to defenders. To establish the process equation, the MAM for HGV tracking is proposed by using additive white noise to model the rates of change of the two control variables. For ease of comparison, several multiple-model algorithms based on the CKF are presented, including the interacting multiple model (IMM) algorithm, the adaptive grid interacting multiple model (AGIMM) algorithm and the hybrid grid multiple model (HGMM) algorithm. The performances of these algorithms are compared and analyzed according to the simulation results. The simulation results indicate that the proposed tracking algorithm based on the modified aerodynamic model has the best accuracy and the least computational cost among all the tracking algorithms in this paper, making it cost-effective for HGV tracking.

  20. Introducing Elitist Black-Box Models: When Does Elitist Selection Weaken the Performance of Evolutionary Algorithms?

    OpenAIRE

    Doerr, Carola; Lengler, Johannes

    2015-01-01

    Black-box complexity theory provides lower bounds for the runtime of black-box optimizers like evolutionary algorithms and serves as an inspiration for the design of new genetic algorithms. Several black-box models covering different classes of algorithms exist, each highlighting a different aspect of the algorithms under consideration. In this work we add to the existing black-box notions a new elitist black-box model, in which algorithms are required to base all decisions solely on ...

  1. Numerical Algorithms for Deterministic Impulse Control Models with Applications

    NARCIS (Netherlands)

    Grass, D.; Chahim, M.

    2012-01-01

    Abstract: In this paper we describe three different algorithms, of which two are (as far as we know) new in the literature. We take both the sizes of the jumps and the jump times as decision variables. The first (new) algorithm considers an Impulse Control problem as a (multipoint) Boundary Value

  2. Hybrid nested sampling algorithm for Bayesian model selection applied to inverse subsurface flow problems

    International Nuclear Information System (INIS)

    Elsheikh, Ahmed H.; Wheeler, Mary F.; Hoteit, Ibrahim

    2014-01-01

    A Hybrid Nested Sampling (HNS) algorithm is proposed for efficient Bayesian model calibration and prior model selection. The proposed algorithm combines the Nested Sampling (NS) algorithm, Hybrid Monte Carlo (HMC) sampling and gradient estimation using the Stochastic Ensemble Method (SEM). NS is an efficient sampling algorithm that can be used for Bayesian calibration and for estimating the Bayesian evidence for prior model selection, and it has the advantage of computational feasibility. Within the nested sampling algorithm, a constrained sampling step is performed. For this step, we utilize HMC to reduce the correlation between successive sampled states. HMC relies on the gradient of the logarithm of the posterior distribution, which we estimate using a stochastic ensemble method based on an ensemble of directional derivatives. SEM requires only forward model runs; the simulator is used as a black box and no adjoint code is needed. The developed HNS algorithm is successfully applied to Bayesian calibration and prior model selection for several nonlinear subsurface flow problems
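
    A bare-bones Python sketch of the NS evidence loop under toy assumptions (a Gaussian likelihood on a uniform prior); the constrained step here uses plain rejection sampling, whereas the paper replaces it with HMC driven by SEM gradient estimates.

```python
import numpy as np

rng = np.random.default_rng(12)
sigma = 0.1

def loglike(theta):                      # toy Gaussian likelihood on [0,1]^2
    return -0.5 * np.sum((theta - 0.5) ** 2) / sigma**2

n_live, n_iter = 100, 600
live = rng.random((n_live, 2))           # draws from the uniform prior
live_logL = np.array([loglike(t) for t in live])

logZ, logX = -np.inf, 0.0                # log evidence, log prior volume
for i in range(n_iter):
    worst = int(live_logL.argmin())
    logX_new = -(i + 1) / n_live         # E[log X] shrinks by 1/n per iteration
    logw = logX + np.log1p(-np.exp(logX_new - logX))   # log(X_i - X_{i+1})
    logZ = np.logaddexp(logZ, logw + live_logL[worst])
    logX = logX_new
    # Constrained replacement: draw a new point with L > L_worst.
    while True:
        cand = rng.random(2)
        if loglike(cand) > live_logL[worst]:
            live[worst], live_logL[worst] = cand, loglike(cand)
            break

# Add the remaining live-point contribution and compare with the analytic value.
logZ = np.logaddexp(logZ, logX + np.log(np.mean(np.exp(live_logL))))
print(f"log Z = {logZ:.3f}  (analytic ~ {np.log(2 * np.pi * sigma**2):.3f})")
```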

  3. Algorithms for Bayesian network modeling and reliability assessment of infrastructure systems

    International Nuclear Information System (INIS)

    Tien, Iris; Der Kiureghian, Armen

    2016-01-01

    Novel algorithms are developed to enable the modeling of large, complex infrastructure systems as Bayesian networks (BNs). These include a compression algorithm that significantly reduces the memory storage required to construct the BN model, and an updating algorithm that performs inference on compressed matrices. These algorithms address one of the major obstacles to widespread use of BNs for system reliability assessment, namely the exponentially increasing amount of information that needs to be stored as the number of components in the system increases. The proposed compression and inference algorithms are described and applied to example systems to investigate their performance compared to that of existing algorithms. Orders of magnitude savings in memory storage requirement are demonstrated using the new algorithms, enabling BN modeling and reliability analysis of larger infrastructure systems. - Highlights: • Novel algorithms developed for Bayesian network modeling of infrastructure systems. • Algorithm presented to compress information in conditional probability tables. • Updating algorithm presented to perform inference on compressed matrices. • Algorithms applied to example systems to investigate their performance. • Orders of magnitude savings in memory storage requirement demonstrated.

  4. Parallel algorithms for interactive manipulation of digital terrain models

    Science.gov (United States)

    Davis, E. W.; Mcallister, D. F.; Nagaraj, V.

    1988-01-01

    Interactive three-dimensional graphics applications, such as terrain data representation and manipulation, require extensive arithmetic processing. Massively parallel machines are attractive for this application since they offer high computational rates, and grid connected architectures provide a natural mapping for grid based terrain models. Presented here are algorithms for data movement on the Massively Parallel Processor (MPP) in support of pan and zoom functions over large data grids. This is an extension of earlier work that demonstrated real-time performance of graphics functions on grids that were equal in size to the physical dimensions of the MPP. When the dimensions of a data grid exceed the processing array size, data is packed in the array memory. Windows of the total data grid are interactively selected for processing. Movement of packed data is needed to distribute items across the array for efficient parallel processing. Execution time for data movement was found to exceed that for arithmetic aspects of graphics functions. Performance figures are given for routines written in MPP Pascal.

  5. Parallelization of the model-based iterative reconstruction algorithm DIRA

    International Nuclear Information System (INIS)

    Oertenberg, A.; Sandborg, M.; Alm Carlsson, G.; Malusek, A.; Magnusson, M.

    2016-01-01

    New paradigms for parallel programming have been devised to simplify software development on multi-core processors and many-core graphical processing units (GPU). Despite their obvious benefits, the parallelization of existing computer programs is not an easy task. In this work, the use of the Open Multiprocessing (OpenMP) and Open Computing Language (OpenCL) frameworks is considered for the parallelization of the model-based iterative reconstruction algorithm DIRA, with the aim of significantly shortening the code's execution time. Selected routines were parallelized using the OpenMP and OpenCL libraries; some routines were converted from MATLAB to C and optimised. Parallelization of the code with OpenMP was easy and resulted in an overall speedup of 15 on a 16-core computer. Parallelization with OpenCL was more difficult owing to differences between the central processing unit and GPU architectures. The resulting speedup was substantially lower than the theoretical peak performance of the GPU; the cause is explained. (authors)

  6. "Updates to Model Algorithms & Inputs for the Biogenic Emissions Inventory System (BEIS) Model"

    Science.gov (United States)

    We have developed new canopy emission algorithms and land use data for BEIS. Simulations with BEIS v3.4 and these updates in CMAQ v5.0.2 are compared to the Model of Emissions of Gases and Aerosols from Nature (MEGAN) and evaluated against observatio...

  7. Algorithms for a parallel implementation of Hidden Markov Models with a small state space

    DEFF Research Database (Denmark)

    Nielsen, Jesper; Sand, Andreas

    2011-01-01

    Two of the most important algorithms for Hidden Markov Models are the forward and the Viterbi algorithms. We show how formulating these using linear algebra naturally lends itself to parallelization. Although the obtained algorithms are slow for Hidden Markov Models with large state spaces, they require very little communication between processors, and are fast in practice on models with a small state space. We have tested our implementation against two other implementations on artificial data and observe a speed-up of roughly a factor of 5 for the forward algorithm and more than 6 for the Viterbi algorithm. We also tested our algorithm in the Coalescent Hidden Markov Model framework, where it gave a significant speed-up.
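
    A minimal NumPy sketch of the linear-algebra formulation referred to here: each forward recursion step is a matrix-vector product (plus a rescale), which is what lends itself to parallelization. The 2-state model parameters below are assumed for illustration.

    ```python
    import numpy as np

    A = np.array([[0.9, 0.1],
                  [0.2, 0.8]])   # transition matrix (assumed toy values)
    B = np.array([[0.7, 0.3],
                  [0.1, 0.9]])   # B[state, symbol] emission probabilities
    pi = np.array([0.5, 0.5])

    def forward_loglik(obs):
        """Scaled forward algorithm written as repeated matrix-vector products."""
        alpha = pi * B[:, obs[0]]
        c = alpha.sum(); alpha = alpha / c
        log_lik = np.log(c)
        for o in obs[1:]:
            alpha = (alpha @ A) * B[:, o]   # one recursion step = one mat-vec
            c = alpha.sum(); alpha = alpha / c
            log_lik += np.log(c)
        return log_lik

    print(forward_loglik([0, 0, 1, 1, 0]))
    ```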

  8. Leakage detection algorithm integrating water distribution networks hydraulic model

    CSIR Research Space (South Africa)

    Adedeji, K

    2017-06-01

    Full Text Available and estimation is vital for effective water service. For effective detection of background leakages, a hydraulic analysis of flow characteristics in water piping networks is indispensable for appraising such type of leakage. A leakage detection algorithm...

  9. Performance comparison of genetic algorithms and particle swarm optimization for model integer programming bus timetabling problem

    Science.gov (United States)

    Wihartiko, F. D.; Wijayanti, H.; Virgantari, F.

    2018-03-01

    Genetic Algorithm (GA) is a common algorithm used to solve optimization problems with an artificial intelligence approach, as is the Particle Swarm Optimization (PSO) algorithm. Both algorithms have different advantages and disadvantages when applied to the optimization of the Model Integer Programming for Bus Timetabling Problem (MIPBTP), in which the optimal number of trips must be found subject to various constraints. The comparison results show that the PSO algorithm is superior in terms of complexity, accuracy, iteration count, and program simplicity in finding the optimal solution.
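
    For reference, a minimal continuous PSO loop on a toy quadratic objective is sketched below; the MIPBTP objective, integer encoding, and constraints of the paper are not reproduced, and the coefficients are common textbook values.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def cost(x):
        # Stand-in objective; a timetabling problem would instead score integer
        # trip counts against its constraints here.
        return np.sum((x - 3.0) ** 2, axis=1)

    n, dim, iters = 30, 5, 200
    w, c1, c2 = 0.7, 1.5, 1.5                       # inertia and acceleration (assumed)
    x = rng.uniform(-10, 10, (n, dim))
    v = np.zeros_like(x)
    pbest, pbest_cost = x.copy(), cost(x)
    gbest = pbest[np.argmin(pbest_cost)]

    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = x + v
        c_now = cost(x)
        better = c_now < pbest_cost
        pbest[better], pbest_cost[better] = x[better], c_now[better]
        gbest = pbest[np.argmin(pbest_cost)]

    print("best cost:", pbest_cost.min())
    ```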

  10. Research on compressive sensing reconstruction algorithm based on total variation model

    Science.gov (United States)

    Gao, Yu-xuan; Sun, Huayan; Zhang, Tinghua; Du, Lin

    2017-12-01

    Compressed sensing, which breaks through the Nyquist sampling theorem, provides a strong theoretical foundation for carrying out sampling and compression of image signals simultaneously. In imaging procedures that use compressed sensing theory, not only can the storage space be reduced, but the demands on detector resolution can also be reduced greatly. By exploiting the sparsity of the image signal and solving the mathematical model of inverse reconstruction, super-resolution imaging is realized. The reconstruction algorithm is the most critical part of compressive sensing and to a large extent determines the accuracy of the reconstructed image. Reconstruction algorithms based on the total variation (TV) model are well suited to the compressive reconstruction of two-dimensional images, and better edge information can be obtained. In order to verify the performance of the algorithm, the reconstruction results of the TV-based algorithm are simulated and analyzed under different coding modes to verify the stability of the algorithm, and typical reconstruction algorithms are compared in the same coding mode. On the basis of the minimum total variation algorithm, an augmented Lagrangian function term is added and the optimal value is solved by the alternating direction method. Experimental results show that, compared with traditional classical TV-based algorithms, the proposed reconstruction algorithm has great advantages and can quickly and accurately recover the target image at low measurement rates.

  11. Making the error-controlling algorithm of observable operator models constructive.

    Science.gov (United States)

    Zhao, Ming-Jie; Jaeger, Herbert; Thon, Michael

    2009-12-01

    Observable operator models (OOMs) are a class of models for stochastic processes that properly subsumes the class that can be modeled by finite-dimensional hidden Markov models (HMMs). One of the main advantages of OOMs over HMMs is that they admit asymptotically correct learning algorithms. A series of learning algorithms has been developed, with increasing computational and statistical efficiency, whose recent culmination was the error-controlling (EC) algorithm developed by the first author. The EC algorithm is an iterative, asymptotically correct algorithm that yields (and minimizes) an assured upper bound on the modeling error. The run time is faster by at least one order of magnitude than EM-based HMM learning algorithms and yields significantly more accurate models than the latter. Here we present a significant improvement of the EC algorithm: the constructive error-controlling (CEC) algorithm. CEC inherits from EC the main idea of minimizing an upper bound on the modeling error but is constructive where EC needs iterations. As a consequence, we obtain further gains in learning speed without loss in modeling accuracy.

  12. Parallel Algorithm for Solving TOV Equations for Sequence of Cold and Dense Nuclear Matter Models

    Science.gov (United States)

    Ayriyan, Alexander; Buša, Ján; Grigorian, Hovik; Poghosyan, Gevorg

    2018-04-01

    We have introduced a parallel algorithm for the simulation of neutron star configurations for a set of equation of state (EoS) models. The performance of the parallel algorithm has been investigated on a testing set of EoS models on two computational systems. It scales well when used with MPI on modern CPUs, and this investigation also allowed us to compare two different types of computational nodes.

  13. A Numerical Algorithm for the Solution of a Phase-Field Model of Polycrystalline Materials

    Energy Technology Data Exchange (ETDEWEB)

    Dorr, M R; Fattebert, J; Wickett, M E; Belak, J F; Turchi, P A

    2008-12-04

    We describe an algorithm for the numerical solution of a phase-field model (PFM) of microstructure evolution in polycrystalline materials. The PFM system of equations includes a local order parameter, a quaternion representation of local orientation and a species composition parameter. The algorithm is based on the implicit integration of a semidiscretization of the PFM system using a backward difference formula (BDF) temporal discretization combined with a Newton-Krylov algorithm to solve the nonlinear system at each time step. The BDF algorithm is combined with a coordinate projection method to maintain quaternion unit length, which is related to an important solution invariant. A key element of the Newton-Krylov algorithm is the selection of a preconditioner to accelerate the convergence of the Generalized Minimum Residual algorithm used to solve the Jacobian linear system in each Newton step. Results are presented for the application of the algorithm to 2D and 3D examples.
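
    The implicit-step structure described here (a BDF temporal discretization with a Newton-Krylov inner solve) can be sketched on a toy problem with SciPy's newton_krylov; the quaternion variables, projection step, and preconditioner of the paper are omitted, and the small reaction-diffusion system below is assumed for illustration only.

    ```python
    import numpy as np
    from scipy.optimize import newton_krylov

    n, dx, dt = 50, 1.0 / 50, 1e-3

    def F(u):
        # Toy semidiscrete right-hand side: 1-D diffusion plus a cubic reaction term.
        lap = np.zeros_like(u)
        lap[1:-1] = (u[2:] - 2 * u[1:-1] + u[:-2]) / dx ** 2
        return lap + u - u ** 3

    def bdf1_residual(u_new, u_old):
        # First-order BDF (backward Euler): u_new - u_old - dt * F(u_new) = 0.
        return u_new - u_old - dt * F(u_new)

    u = np.sin(np.linspace(0.0, np.pi, n))
    for step in range(10):
        # A Newton-Krylov iteration solves the nonlinear system at each implicit step.
        u = newton_krylov(lambda x: bdf1_residual(x, u), u)

    print("max u after 10 implicit steps:", u.max())
    ```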

  14. Mathematical model and coordination algorithms for ensuring complex security of an organization

    Science.gov (United States)

    Novoseltsev, V. I.; Orlova, D. E.; Dubrovin, A. S.; Irkhin, V. P.

    2018-03-01

    The mathematical model of coordination for ensuring the complex security of an organization is considered. On the basis of a random search method, three types of effective coordination algorithms, matched to the level of mismatch concerning security, are developed: a coordination algorithm in which the coordinator's instructions dominate; a coordination algorithm in which the performers' decisions dominate; and a coordination algorithm with parity between the interests of the coordinator and the performers. The convergence of these algorithms was assessed by carrying out a computational experiment. The described coordination algorithms have the property of convergence in the sense stated above, and the following regularity is revealed: the structurally simpler the algorithm, the smaller the number of iterations needed for its convergence.

  15. A Path Planning Algorithm using Generalized Potential Model for Hyper- Redundant Robots with 2-DOF Joints

    Directory of Open Access Journals (Sweden)

    Chien-Chou Lin

    2011-06-01

    Full Text Available This paper proposes a potential-based path planning algorithm for articulated robots with 2-DOF joints. The algorithm is an extension of a previous algorithm developed for 3-DOF joints. While 3-DOF joints result in a more straightforward potential minimization algorithm, 2-DOF joints are obviously more practical for active operations. The proposed approach computes the repulsive force and torque between charged objects by using a generalized potential model. A collision-free path can be obtained by locally adjusting the robot configuration to search for minimum potential configurations using these forces and torques. The optimization of path safety, through the innovative potential minimization algorithm, makes the proposed approach unique. In order to speed up the computation, a sequential planning strategy is adopted. Simulation results show that the proposed algorithm works well compared with the 3-DOF-joint algorithm, in terms of collision avoidance and computational efficiency.

  16. A Decomposition Algorithm for Mean-Variance Economic Model Predictive Control of Stochastic Linear Systems

    DEFF Research Database (Denmark)

    Sokoler, Leo Emil; Dammann, Bernd; Madsen, Henrik

    2014-01-01

    This paper presents a decomposition algorithm for solving the optimal control problem (OCP) that arises in Mean-Variance Economic Model Predictive Control of stochastic linear systems. The algorithm applies the alternating direction method of multipliers to a reformulation of the OCP that decompo...

  17. Algorithms and Methods for High-Performance Model Predictive Control

    DEFF Research Database (Denmark)

    Frison, Gianluca

    routines employed in the numerical tests. The main focus of this thesis is on linear MPC problems. In this thesis, both the algorithms and their implementation are equally important. About the implementation, a novel implementation strategy for the dense linear algebra routines in embedded optimization is proposed, aiming at improving the computational performance in case of small matrices. About the algorithms, they are built on top of the proposed linear algebra, and they are tailored to exploit the high-level structure of the MPC problems, with special care on reducing the computational complexity.

  18. A hybrid algorithm and its applications to fuzzy logic modeling of nonlinear systems

    Science.gov (United States)

    Wang, Zhongjun

    System models allow us to simulate and analyze system dynamics efficiently. Most importantly, system models allow us to make predictions about system behaviors and to perform system parametric variation analysis without having to build the actual systems. The fuzzy logic modeling technique has recently been applied successfully to complex nonlinear system modeling, such as unsteady aerodynamics modeling. However, the current forward search algorithm to identify fuzzy logic model structures is very time-consuming. It is not unusual to spend several days or even a few weeks of computer CPU time to obtain better nonlinear system model structures by this forward search. Moreover, how to speed up the fuzzy logic model parameter identification process is also challenging when the number of influencing variables of nonlinear systems is large. To solve these problems, a hybrid algorithm for nonlinear system modeling is proposed, formalized, implemented, and evaluated in this dissertation. By combining the fuzzy logic modeling technique with genetic algorithms, the developed hybrid algorithm is applied to both fuzzy logic model structure identification and model parameter identification. In the model structure identification process, the hybrid algorithm has the ability to find feasible structures more efficiently and effectively than the forward search. In the model parameter identification process (by using the Newton gradient descent algorithm), the proposed hybrid algorithm incorporates a genetic search algorithm to dynamically select convergence factors. It has the advantage of quick search yet maintains the monotonically convergent properties of the Newton gradient descent algorithm. To evaluate the properties of the developed hybrid algorithm, a nonlinear, unsteady aerodynamic normal force model of a complex system involving fourteen influencing variables is established from flight data. The results show that this hybrid algorithm can identify the aerodynamic

  19. Model predictive control algorithms and their application to a continuous fermenter

    Directory of Open Access Journals (Sweden)

    R. G. SILVA

    1999-06-01

    Full Text Available In many continuous fermentation processes, the control objective is to maximize productivity per unit time. The optimum operational point in the steady state can be obtained by maximizing the productivity rate using feed substrate concentration as the independent variable, with the equations of the static model as constraints. In the present study, three model-based control schemes have been developed and implemented for a continuous fermenter. The first method modifies the well-known dynamic matrix control (DMC) algorithm by making it adaptive. The other two use nonlinear model predictive control (NMPC) algorithms for the calculation of control actions. The NMPC1 algorithm, which uses orthogonal collocation on finite elements, performed similarly to NMPC2, which uses equidistant collocation. These algorithms are compared with DMC. The results obtained show the good performance of the nonlinear algorithms.

  20. Development and performance analysis of model-based fault detection and diagnosis algorithm

    International Nuclear Information System (INIS)

    Kim, Jung Taek; Park, Jae Chang; Lee, Jung Woon; Kim, Kyung Youn; Lee, In Soo; Kim, Bong Seok; Kang, Sook In

    2002-05-01

    It is important to note that an effective means to assure the reliability and security of a nuclear power plant is to detect and diagnose faults (failures) as soon and as accurately as possible. The objective of the project is to develop a model-based fault detection and diagnosis (FDD) algorithm for the pressurized water reactor and to evaluate the performance of the developed algorithm. The scope of the work can be classified into two categories. The first is a state-space model-based FDD algorithm based on the interacting multiple model (IMM) algorithm. The second is an input-output model-based FDD algorithm based on the ART neural network. Extensive computer simulations are carried out to evaluate the performance in terms of speed and accuracy.

  1. Analyzing Traffic Problem Model With Graph Theory Algorithms

    OpenAIRE

    Tan, Yong

    2014-01-01

    This paper contributes to a practical problem, urban traffic. We investigate its features, try to simplify the complexity, and formalize this dynamic system. The contents mainly cover how to analyze a decision problem with combinatorial methods and graph theory algorithms, and how to optimize our strategy to obtain a feasible solution by employing other principles of computer science.

  2. A face recognition algorithm based on multiple individual discriminative models

    DEFF Research Database (Denmark)

    Fagertun, Jens; Gomez, David Delgado; Ersbøll, Bjarne Kjær

    2005-01-01

    Abstract—In this paper, a novel algorithm for facial recognition is proposed. The technique combines the color texture and geometrical configuration provided by face images. Landmarks and pixel intensities are used by Principal Component Analysis and Fisher Linear Discriminant Analysis to associa...... as an accurate and robust tool for facial identification and unknown detection....

  3. Model-based remote sensing algorithms for particulate organic ...

    Indian Academy of Sciences (India)

    PCA algorithms based on the first three, four, and five modes accounted for 90, 95, and 98% of total variance and yielded significant correlations with POC with R2 = 0.89, 0.92, and 0.93. These full-waveband approaches provided robust estimates of POC in various water types. Three different analyses (root mean square ...

  4. An Iterative Algorithm to Determine the Dynamic User Equilibrium in a Traffic Simulation Model

    Science.gov (United States)

    Gawron, C.

    An iterative algorithm to determine the dynamic user equilibrium with respect to link costs defined by a traffic simulation model is presented. Each driver's route choice is modeled by a discrete probability distribution which is used to select a route in the simulation. After each simulation run, the probability distribution is adapted to minimize the travel costs. Although the algorithm does not depend on the simulation model, a queuing model is used for performance reasons. The stability of the algorithm is analyzed for a simple example network. As an application example, a dynamic version of Braess's paradox is studied.

  5. Modelling and genetic algorithm based optimisation of inverse supply chain

    Science.gov (United States)

    Bányai, T.

    2009-04-01

    (Recycling of household appliances with emphasis on reuse options). The purpose of this paper is the presentation of a possible method for avoiding unnecessary environmental risk and landscape use through an unjustifiably large supply chain for the collection systems of recycling processes. In the first part of the paper the author presents the mathematical model of recycling-related collection systems (applied especially to wastes of electric and electronic products), and in the second part of the work a genetic-algorithm-based optimisation method is demonstrated, by the aid of which it is possible to determine the optimal structure of the inverse supply chain from the point of view of economic, ecological, and logistic objective functions. The model of the inverse supply chain is based on a multi-level, hierarchical collection system. In the case of this static model it is assumed that technical conditions are permanent. The total costs consist of three parts: total infrastructure costs, total material handling costs, and environmental risk costs. The infrastructure-related costs depend only on the specific fixed costs and the specific unit costs of the operation points (collection, pre-treatment, treatment, recycling, and reuse plants). The costs of warehousing and transportation are represented by the material handling related costs. The most important factors determining the level of environmental risk cost are the number of products recycled (treated or reused) out of time, the number of supply chain objects, and the length of transportation routes. The objective function is the minimization of the total cost taking the constraints into consideration. A lot of research work has discussed the design of supply chains [8], but most of it concentrates on linear cost functions. In the case of this model, non-linear cost functions were used. The non-linear cost functions and the possibly high number of objects in the inverse supply chain led to the problem of choosing a

  6. Modeling skin collimation using the electron pencil beam redefinition algorithm.

    Science.gov (United States)

    Chi, Pai-Chun M; Hogstrom, Kenneth R; Starkschall, George; Antolak, John A; Boyd, Robert A

    2005-11-01

    Skin collimation is an important tool for electron beam therapy that is used to minimize the penumbra when treating near critical structures, at extended treatment distances, with bolus, or using arc therapy. It is usually made of lead or lead alloy material that conforms to and is placed on the patient surface. Presently, commercially available treatment-planning systems lack the ability to model skin collimation and to accurately calculate dose in its presence. The purpose of the present work was to evaluate the use of the pencil beam redefinition algorithm (PBRA) in calculating dose in the presence of skin collimation. Skin collimation was incorporated into the PBRA by terminating the transport of electrons once they enter the skin collimator. Both fixed- and arced-beam dose calculations were evaluated by comparing them with measured dose distributions for 10- and 15-MeV beams. Fixed-beam dose distributions were measured in water at 88-cm source-to-surface distance with an air gap of 32 cm. The 6 x 20-cm2 field (dimensions projected to isocenter) had a 10-mm thick lead collimator placed on the surface of the water with its edge 5 cm inside the field's edge located at +10 cm. Arced-beam dose distributions were measured in a 13.5-cm radius polystyrene circular phantom. The beam was arced 90 degrees (-45 degrees to +45 degrees), and 10-mm thick lead collimation was placed at +/- 30 degrees. For the fixed beam at 10 MeV, the PBRA-calculated dose agreed with the measured dose to within 2.0-mm distance to agreement (DTA) in regions of high dose gradient and 2.0% in regions of low dose gradient. At 15 MeV, the PBRA agreed to within a 2.0-mm DTA in regions of high dose gradient; however, the PBRA underestimated the dose by as much as 5.3% over small regions at depths less than 2 cm because it did not model electrons scattered from the edge of the skin collimation. For arced beams at 10 MeV, the agreement was 1-mm DTA in the high dose gradient

  7. Study on solitary word based on HMM model and Baum-Welch algorithm

    Directory of Open Access Journals (Sweden)

    Junxia CHEN

    Full Text Available This paper introduces the principle of the Hidden Markov Model (HMM), which is used to describe a Markov process with unknown parameters and is a probability model describing the statistical properties of a random process. On this basis, a solitary word detection experiment based on the HMM model is designed. By optimizing the experimental model and using the Baum-Welch algorithm to solve the training problem of the HMM model, the estimate of the HMM model parameter λ is found, which in this view is mathematically equivalent to other linear prediction coefficients. This experiment reduces unnecessary HMM training and, at the same time, reduces the algorithm complexity. In order to test the effectiveness of the Baum-Welch algorithm, the experimental data were simulated; the results show that the algorithm is effective.
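
    For reference, a compact NumPy sketch of Baum-Welch (scaled forward-backward plus reestimation) for a discrete-observation HMM on toy data; the isolated-word acoustic front end of the experiment is not modeled here.

    ```python
    import numpy as np

    def baum_welch(obs, n_states, n_symbols, n_iter=50, seed=0):
        """Estimate HMM parameters (pi, A, B) from one discrete observation sequence."""
        rng = np.random.default_rng(seed)
        A = rng.random((n_states, n_states)); A /= A.sum(1, keepdims=True)
        B = rng.random((n_states, n_symbols)); B /= B.sum(1, keepdims=True)
        pi = np.full(n_states, 1.0 / n_states)
        T = len(obs)
        for _ in range(n_iter):
            # E-step: scaled forward-backward recursions.
            alpha = np.zeros((T, n_states)); beta = np.zeros((T, n_states)); c = np.zeros(T)
            alpha[0] = pi * B[:, obs[0]]; c[0] = alpha[0].sum(); alpha[0] /= c[0]
            for t in range(1, T):
                alpha[t] = (alpha[t - 1] @ A) * B[:, obs[t]]
                c[t] = alpha[t].sum(); alpha[t] /= c[t]
            beta[-1] = 1.0
            for t in range(T - 2, -1, -1):
                beta[t] = (A @ (B[:, obs[t + 1]] * beta[t + 1])) / c[t + 1]
            gamma = alpha * beta; gamma /= gamma.sum(1, keepdims=True)
            xi = np.zeros((n_states, n_states))
            for t in range(T - 1):
                x = alpha[t][:, None] * A * (B[:, obs[t + 1]] * beta[t + 1])[None, :]
                xi += x / x.sum()
            # M-step: reestimate parameters from expected counts.
            pi = gamma[0]
            A = xi / gamma[:-1].sum(0)[:, None]
            for k in range(n_symbols):
                B[:, k] = gamma[obs == k].sum(0)
            B /= gamma.sum(0)[:, None]
        return pi, A, B

    obs = np.array([0, 1, 0, 0, 1, 1, 1, 0, 1, 1])
    pi, A, B = baum_welch(obs, n_states=2, n_symbols=2)
    print(np.round(A, 2))
    ```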

  8. Portfolio optimization by using linear programing models based on genetic algorithm

    Science.gov (United States)

    Sukono; Hidayat, Y.; Lesmana, E.; Putra, A. S.; Napitupulu, H.; Supian, S.

    2018-01-01

    In this paper, we discuss investment portfolio optimization using a linear programming model based on genetic algorithms. It is assumed that the portfolio risk is measured by absolute standard deviation, and each investor has a risk tolerance on the investment portfolio. To solve the investment portfolio optimization problem, the issue is arranged into a linear programming model. Furthermore, the determination of the optimum solution for the linear programming is done by using a genetic algorithm. As a numerical illustration, we analyze some of the stocks traded on the capital market in Indonesia. Based on the analysis, it is shown that the portfolio optimization performed by the genetic algorithm approach produces a more optimal efficient portfolio, compared to the portfolio optimization performed by a linear programming algorithm approach. Therefore, genetic algorithms can be considered as an alternative for determining the investment portfolio optimization, particularly when using linear programming models.

  9. SPICE Modeling and Simulation of a MPPT Algorithm

    Directory of Open Access Journals (Sweden)

    Miona Andrejević Stošović

    2014-06-01

    Full Text Available One among several equally important subsystems of a standalone photovoltaic (PV) system is the circuit for maximum power point tracking (MPPT). There are several algorithms that may be used for it. In this paper we choose such an algorithm based on the maximum simplicity criterion. Then we make some small modifications to it in order to make it more robust. We synthesize a circuit built out of elements from the list of elements recognized by SPICE. The inputs are the voltage and the current at the PV panel to DC-DC converter interface. Its task is to generate a pulse-width-modulated pulse train whose duty ratio is defined to keep the input impedance of the DC-DC converter at the optimal value.
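
    One algorithm that scores well on a maximum-simplicity criterion is perturb-and-observe; the sketch below shows a generic P&O duty-ratio update against a toy power curve, as an assumption-laden stand-in for the authors' SPICE circuit.

    ```python
    def pv_power(duty):
        # Toy unimodal power-vs-duty-ratio curve with its maximum at duty = 0.6 (assumed).
        return max(0.0, 100.0 * duty * (1.2 - duty))

    duty, step, last_p = 0.3, 0.01, 0.0
    for _ in range(100):
        p = pv_power(duty)
        if p < last_p:
            step = -step              # power dropped: reverse the perturbation direction
        last_p = p
        duty = min(max(duty + step, 0.0), 1.0)

    print(f"duty ratio settled near {duty:.2f} (true maximum power point at 0.60)")
    ```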

  10. Parameter Optimization of Single-Diode Model of Photovoltaic Cell Using Memetic Algorithm

    Directory of Open Access Journals (Sweden)

    Yourim Yoon

    2015-01-01

    Full Text Available This study proposes a memetic approach for optimally determining the parameter values of the single-diode-equivalent solar cell model. The memetic algorithm, which combines metaheuristic and gradient-based techniques, has the merit of good performance in both global and local searches. First, 10 single algorithms were considered, including the genetic algorithm, simulated annealing, particle swarm optimization, harmony search, differential evolution, cuckoo search, the least squares method, and pattern search; then their final solutions were used as initial vectors for the generalized reduced gradient technique. From this memetic approach, we could further improve the accuracy of the estimated solar cell parameters when compared with single-algorithm approaches.
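
    The gradient-based half of the memetic idea can be sketched by fitting the five single-diode parameters with a trust-region least-squares solver on synthetic data; the ten metaheuristic initializers and the generalized reduced gradient step of the paper are not reproduced here, and all numbers below are assumed.

    ```python
    import numpy as np
    from scipy.optimize import least_squares

    # Single-diode model: I = Iph - I0*(exp((V + I*Rs)/(n*Vt)) - 1) - (V + I*Rs)/Rsh.
    Vt = 0.0259  # thermal voltage near room temperature

    def residuals(params, V, I):
        Iph, I0, Rs, Rsh, n = params
        return Iph - I0 * (np.exp((V + I * Rs) / (n * Vt)) - 1) - (V + I * Rs) / Rsh - I

    # Synthetic I-V data from assumed "true" parameters, generated by fixed-point
    # iteration on the implicit model equation.
    true = np.array([0.76, 3e-7, 0.036, 53.0, 1.48])
    V = np.linspace(0.0, 0.57, 25)
    I = np.full_like(V, 0.7)
    for _ in range(200):
        I = true[0] - true[1] * (np.exp((V + I * true[2]) / (true[4] * Vt)) - 1) \
            - (V + I * true[2]) / true[3]

    x0 = np.array([0.7, 1e-6, 0.01, 100.0, 1.5])
    fit = least_squares(residuals, x0, args=(V, I),
                        bounds=([0, 0, 0, 1, 1], [1, 1e-4, 1, 500, 2]))
    print(np.round(fit.x, 4))   # should land near the assumed true parameters
    ```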

  11. Event-chain algorithm for the Heisenberg model: Evidence for z≃1 dynamic scaling.

    Science.gov (United States)

    Nishikawa, Yoshihiko; Michel, Manon; Krauth, Werner; Hukushima, Koji

    2015-12-01

    We apply the event-chain Monte Carlo algorithm to the three-dimensional ferromagnetic Heisenberg model. The algorithm is rejection-free and also realizes an irreversible Markov chain that satisfies global balance. The autocorrelation functions of the magnetic susceptibility and the energy indicate a dynamical critical exponent z≈1 at the critical temperature, while that of the magnetization does not measure the performance of the algorithm. We show that the event-chain Monte Carlo algorithm substantially reduces the dynamical critical exponent from the conventional value of z≃2.

  12. Atmosphere Clouds Model Algorithm for Solving Optimal Reactive Power Dispatch Problem

    Directory of Open Access Journals (Sweden)

    Lenin Kanagasabai

    2014-04-01

    Full Text Available In this paper, a new method, called the Atmosphere Clouds Model (ACM) algorithm, is used for solving the optimal reactive power dispatch problem. ACM is a stochastic optimization algorithm inspired by the behavior of clouds in nature. ACM replicates the generation, shift, and extension behaviors of clouds. The proposed ACM algorithm has been tested on the standard IEEE 30-bus test system, and the simulation results clearly show the superior performance of the proposed algorithm in reducing the real power loss.

  13. Covariance Structure Model Fit Testing under Missing Data: An Application of the Supplemented EM Algorithm

    Science.gov (United States)

    Cai, Li; Lee, Taehun

    2009-01-01

    We apply the Supplemented EM algorithm (Meng & Rubin, 1991) to address a chronic problem with the "two-stage" fitting of covariance structure models in the presence of ignorable missing data: the lack of an asymptotically chi-square distributed goodness-of-fit statistic. We show that the Supplemented EM algorithm provides a…

  14. Inferring the structure of latent class models using a genetic algorithm

    NARCIS (Netherlands)

    van der Maas, H.L.J.; Raijmakers, M.E.J.; Visser, I.

    2005-01-01

    Present optimization techniques in latent class analysis apply the expectation maximization algorithm or the Newton-Raphson algorithm for optimizing the parameter values of a prespecified model. These techniques can be used to find maximum likelihood estimates of the parameters, given the specified

  15. Global Convergence of the EM Algorithm for Unconstrained Latent Variable Models with Categorical Indicators

    Science.gov (United States)

    Weissman, Alexander

    2013-01-01

    Convergence of the expectation-maximization (EM) algorithm to a global optimum of the marginal log likelihood function for unconstrained latent variable models with categorical indicators is presented. The sufficient conditions under which global convergence of the EM algorithm is attainable are provided in an information-theoretic context by…

  16. Automated Test Assembly for Cognitive Diagnosis Models Using a Genetic Algorithm

    Science.gov (United States)

    Finkelman, Matthew; Kim, Wonsuk; Roussos, Louis A.

    2009-01-01

    Much recent psychometric literature has focused on cognitive diagnosis models (CDMs), a promising class of instruments used to measure the strengths and weaknesses of examinees. This article introduces a genetic algorithm to perform automated test assembly alongside CDMs. The algorithm is flexible in that it can be applied whether the goal is to…

  17. Efficient cache oblivious algorithms for randomized divide-and-conquer on the multicore model

    OpenAIRE

    Sharma, Neeraj; Sen, Sandeep

    2012-01-01

    In this paper we present randomized algorithms for sorting and convex hull that achieve optimal performance (for speed-up and cache misses) on the multicore model with private caches. Our algorithms are cache oblivious and generalize the randomized divide and conquer strategy given by Reischuk and by Reif and Sen. Although the approach yielded optimal speed-up in the PRAM model, we require additional techniques to optimize cache misses in an oblivious setting. Under a mild assumption on in...

  18. MOESHA: A genetic algorithm for automatic calibration and estimation of parameter uncertainty and sensitivity of hydrologic models

    Science.gov (United States)

    Characterization of uncertainty and sensitivity of model parameters is an essential and often overlooked facet of hydrological modeling. This paper introduces an algorithm called MOESHA that combines input parameter sensitivity analyses with a genetic algorithm calibration routin...

  19. Optimal parallel algorithms for problems modeled by a family of intervals

    Science.gov (United States)

    Olariu, Stephan; Schwing, James L.; Zhang, Jingyuan

    1992-01-01

    A family of intervals on the real line provides a natural model for a vast number of scheduling and VLSI problems. Recently, a number of parallel algorithms to solve a variety of practical problems on such a family of intervals have been proposed in the literature. Computational tools are developed, and it is shown how they can be used for the purpose of devising cost-optimal parallel algorithms for a number of interval-related problems including finding a largest subset of pairwise nonoverlapping intervals, a minimum dominating subset of intervals, along with algorithms to compute the shortest path between a pair of intervals and, based on the shortest path, a parallel algorithm to find the center of the family of intervals. More precisely, with an arbitrary family of n intervals as input, all algorithms run in O(log n) time using O(n) processors in the EREW-PRAM model of computation.
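
    For reference, the classical sequential greedy for the first problem listed here (a largest subset of pairwise nonoverlapping intervals) is sketched below; the paper's contribution is the cost-optimal O(log n)-time, O(n)-processor EREW-PRAM version, which this sketch does not attempt.

    ```python
    def max_nonoverlapping(intervals):
        """Largest set of pairwise nonoverlapping intervals: greedy by right endpoint."""
        chosen, last_end = [], float("-inf")
        for left, right in sorted(intervals, key=lambda iv: iv[1]):
            if left >= last_end:          # treats intervals as half-open, so touching is OK
                chosen.append((left, right))
                last_end = right
        return chosen

    jobs = [(1, 4), (3, 5), (0, 6), (5, 7), (3, 9), (5, 9), (6, 10), (8, 11)]
    print(max_nonoverlapping(jobs))       # [(1, 4), (5, 7), (8, 11)]
    ```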

  20. TWO-STEP ALGORITHM OF TRAINING INITIALIZATION FOR ACOUSTIC MODELS BASED ON DEEP NEURAL NETWORKS

    Directory of Open Access Journals (Sweden)

    I. P. Medennikov

    2016-03-01

    Full Text Available This paper presents a two-step initialization algorithm for training of acoustic models based on deep neural networks. The algorithm is focused on reducing the impact of the non-speech segments on the acoustic model training. The idea of the proposed algorithm is to reduce the percentage of non-speech examples in the training set. Effectiveness evaluation of the algorithm has been carried out on the example of English spontaneous telephone speech recognition (Switchboard. The application of the proposed algorithm has led to 3% relative word error rate reduction, compared with the training initialization by restricted Boltzmann machines. The results presented in the paper can be applied in the development of automatic speech recognition systems.

  1. Normative data for modified Thorington phorias and prism bar vergences from the Benton-IU study.

    Science.gov (United States)

    Lyon, Don W; Goss, David A; Horner, Douglas; Downey, John P; Rainey, Bill

    2005-10-01

    The use of a phoropter for measuring phorias and vergences in children is common in the optometric profession. For young children, the use of the phoropter can be confusing, making it difficult to obtain accurate measurements. Free space testing allows for direct observation of the eyes in a natural environment and makes it easier for children to understand the directions. The normal values for phorias and vergences used with children are derived from phoropter testing or free space measurements with mostly adult patients. The Benton-IU Project was a large multidisciplinary study of factors affecting school performance conducted by the Indiana University School of Optometry and the Indiana University Department of Speech and Hearing with the cooperation of the Benton Community School Corporation (Benton County, Indiana). This project allowed the authors to obtain data on modified Thorington phorias and prism bar vergences from a nonselected group of first and fourth graders as part of an eye/vision examination. In this report, central tendency and variability statistics for modified Thorington phorias and prism bar vergences are reported based on the data from the Benton-IU Study. The data presented in this report can be used by optometrists when deciding if a patient's phorias and vergences are within normal limits for children in the first through fourth grades.

  2. A self-organizing algorithm for modeling protein loops.

    Directory of Open Access Journals (Sweden)

    Pu Liu

    2009-08-01

    Full Text Available Protein loops, the flexible short segments connecting two stable secondary structural units in proteins, play a critical role in protein structure and function. Constructing chemically sensible conformations of protein loops that seamlessly bridge the gap between the anchor points without introducing any steric collisions remains an open challenge. A variety of algorithms have been developed to tackle the loop closure problem, ranging from inverse kinematics to knowledge-based approaches that utilize pre-existing fragments extracted from known protein structures. However, many of these approaches focus on the generation of conformations that mainly satisfy the fixed end point condition, leaving the steric constraints to be resolved in subsequent post-processing steps. In the present work, we describe a simple solution that simultaneously satisfies not only the end point and steric conditions, but also chirality and planarity constraints. Starting from random initial atomic coordinates, each individual conformation is generated independently by using a simple alternating scheme of pairwise distance adjustments of randomly chosen atoms, followed by fast geometric matching of the conformationally rigid components of the constituent amino acids. The method is conceptually simple, numerically stable and computationally efficient. Very importantly, additional constraints, such as those derived from NMR experiments, hydrogen bonds or salt bridges, can be incorporated into the algorithm in a straightforward and inexpensive way, making the method ideal for solving more complex multi-loop problems. The remarkable performance and robustness of the algorithm are demonstrated on a set of protein loops of length 4, 8, and 12 that have been used in previous studies.

  3. Behavioral Modeling for Mental Health using Machine Learning Algorithms.

    Science.gov (United States)

    Srividya, M; Mohanavalli, S; Bhalaji, N

    2018-04-03

    Mental health is an indicator of the emotional, psychological, and social well-being of an individual. It determines how an individual thinks, feels, and handles situations. Positive mental health helps one to work productively and realize one's full potential. Mental health is important at every stage of life, from childhood and adolescence through adulthood. Many factors contribute to mental health problems which lead to mental illness, like stress, social anxiety, depression, obsessive compulsive disorder, drug addiction, and personality disorders. It is becoming increasingly important to determine the onset of mental illness to maintain a proper life balance. The nature of machine learning algorithms and Artificial Intelligence (AI) can be fully harnessed for predicting the onset of mental illness. Such applications, when implemented in real time, will benefit society by serving as a monitoring tool for individuals with deviant behavior. This research work proposes to apply various machine learning algorithms such as support vector machines, decision trees, the naïve Bayes classifier, the K-nearest neighbor classifier, and logistic regression to identify the state of mental health in a target group. The responses obtained from the target group for the designed questionnaire were first subjected to unsupervised learning techniques. The labels obtained as a result of clustering were validated by computing the Mean Opinion Score. These cluster labels were then used to build classifiers to predict the mental health of an individual. Populations from various groups, such as high school students, college students, and working professionals, were considered as target groups. The research presents an analysis of applying the aforementioned machine learning algorithms to the target groups and also suggests directions for future work.

  4. Use of the AIC with the EM algorithm: A demonstration of a probability model selection technique

    Energy Technology Data Exchange (ETDEWEB)

    Glosup, J.G.; Axelrod M.C. [Lawrence Livermore National Lab., CA (United States)

    1994-11-15

    The problem of discriminating between two potential probability models, a Gaussian distribution and a mixture of Gaussian distributions, is considered. The focus of our interest is a case where the models are potentially non-nested and the parameters of the mixture model are estimated through the EM algorithm. The AIC, which is frequently used as a criterion for discriminating between non-nested models, is modified to work with the EM algorithm and is shown to provide a model selection tool for this situation. A particular problem involving an infinite mixture distribution known as Middleton's Class A model is used to demonstrate the effectiveness and limitations of this method.

  5. An improved algorithm to convert CAD model to MCNP geometry model based on STEP file

    International Nuclear Information System (INIS)

    Zhou, Qingguo; Yang, Jiaming; Wu, Jiong; Tian, Yanshan; Wang, Junqiong; Jiang, Hai; Li, Kuan-Ching

    2015-01-01

    Highlights:
    • Fully exploits common features of cells, making the processing efficient.
    • Accurately provides the cell position.
    • Flexible to add new parameters in the structure.
    • Applies the novel structure in INP file processing to conveniently evaluate cell location.

    Abstract: MCNP (Monte Carlo N-Particle Transport Code) is a general-purpose Monte Carlo N-Particle code that can be used for neutron, photon, electron, or coupled neutron/photon/electron transport. Its input file, the INP file, has a complicated form and is error-prone when describing geometric models. Due to this, a conversion algorithm that can convert a general geometric model to an MCNP model during MCNP-aided modeling is highly needed. In this paper, we revised and incorporated a number of improvements over our previous work (Yang et al., 2013), which was proposed and targeted after the STEP file and INP file were analyzed. Results of experiments show that the revised algorithm is more applicable and efficient than the previous work, with optimized extraction of the geometry and topology information of the STEP file, as well as improved production efficiency of the output INP file. This research is promising, and serves as a valuable reference for the majority of researchers involved with MCNP-related research.

  6. Improved Expectation Maximization Algorithm for Gaussian Mixed Model Using the Kernel Method

    Directory of Open Access Journals (Sweden)

    Mohd Izhan Mohd Yusoff

    2013-01-01

    Full Text Available Fraud activities have contributed to heavy losses suffered by telecommunication companies. In this paper, we attempt to use Gaussian mixed model, which is a probabilistic model normally used in speech recognition to identify fraud calls in the telecommunication industry. We look at several issues encountered when calculating the maximum likelihood estimates of the Gaussian mixed model using an Expectation Maximization algorithm. Firstly, we look at a mechanism for the determination of the initial number of Gaussian components and the choice of the initial values of the algorithm using the kernel method. We show via simulation that the technique improves the performance of the algorithm. Secondly, we developed a procedure for determining the order of the Gaussian mixed model using the log-likelihood function and the Akaike information criteria. Finally, for illustration, we apply the improved algorithm to real telecommunication data. The modified method will pave the way to introduce a comprehensive method for detecting fraud calls in future work.
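
    A minimal NumPy EM loop for a one-dimensional Gaussian mixture is sketched below on synthetic data; the paper's kernel-based initialization and AIC-based order selection are only gestured at in the comments, and the component count is fixed for brevity.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    # Synthetic 1-D data from two Gaussians (a stand-in for the call records in the paper).
    x = np.concatenate([rng.normal(0.0, 1.0, 300), rng.normal(5.0, 0.7, 200)])

    K = 2                                   # the paper chooses the order via the kernel
    w = np.full(K, 1.0 / K)                 # method and AIC; here it is fixed
    mu = np.quantile(x, [0.25, 0.75])       # simple data-driven initial means
    var = np.full(K, x.var())

    def normal_pdf(x, mu, var):
        return np.exp(-0.5 * (x - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)

    for _ in range(100):
        # E-step: responsibilities of each component for each point.
        dens = w * normal_pdf(x[:, None], mu, var)        # shape (n, K)
        r = dens / dens.sum(1, keepdims=True)
        # M-step: weighted maximum-likelihood updates.
        nk = r.sum(0)
        w = nk / len(x)
        mu = (r * x[:, None]).sum(0) / nk
        var = (r * (x[:, None] - mu) ** 2).sum(0) / nk

    print(np.round(mu, 2), np.round(np.sqrt(var), 2), np.round(w, 2))
    ```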

  7. Estimating model error covariances in nonlinear state-space models using Kalman smoothing and the expectation-maximisation algorithm

    KAUST Repository

    Dreano, Denis

    2017-04-05

    Specification and tuning of errors from dynamical models are important issues in data assimilation. In this work, we propose an iterative expectation-maximisation (EM) algorithm to estimate the model error covariances using classical extended and ensemble versions of the Kalman smoother. We show that, for additive model errors, the estimate of the error covariance converges. We also investigate other forms of model error, such as parametric or multiplicative errors. We show that additive Gaussian model error is able to compensate for non-additive sources of error in the algorithms we propose. We also demonstrate the limitations of the extended version of the algorithm and recommend the use of the more robust and flexible ensemble version. This article is a proof of concept of the methodology with the Lorenz-63 attractor. We developed an open-source Python library to enable future users to apply the algorithm to their own nonlinear dynamical models.

  8. State-space models - from the EM algorithm to a gradient approach

    DEFF Research Database (Denmark)

    Olsson, Rasmus Kongsgaard; Petersen, Kaare Brandt; Lehn-Schiøler, Tue

    2007-01-01

    Slow convergence is observed in the EM algorithm for linear state-space models. We propose to circumvent the problem by applying any off-the-shelf quasi-Newton-type optimizer, which operates on the gradient of the log-likelihood function. Such an algorithm is a practical alternative due to the fact that the exact gradient of the log-likelihood function can be computed by recycling components of the expectation-maximization (EM) algorithm. We demonstrate the efficiency of the proposed method in three relevant instances of the linear state-space model. In high signal-to-noise ratios, where EM is particularly...

  9. Filtering Based Recursive Least Squares Algorithm for Multi-Input Multioutput Hammerstein Models

    Directory of Open Access Journals (Sweden)

    Ziyun Wang

    2014-01-01

    Full Text Available This paper considers the parameter estimation problem for Hammerstein multi-input multioutput finite impulse response (FIR-MA systems. Filtered by the noise transfer function, the FIR-MA model is transformed into a controlled autoregressive model. The key-term variable separation principle is used to derive a data filtering based recursive least squares algorithm. The numerical examples confirm that the proposed algorithm can estimate parameters more accurately and has a higher computational efficiency compared with the recursive least squares algorithm.
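
    The generic recursive least squares core that this record builds on is sketched below on a toy linear regression; the noise-transfer-function filtering and key-term separation steps of the paper are not reproduced.

    ```python
    import numpy as np

    def rls_step(theta, P, phi, y, lam=1.0):
        """One recursive least squares update: theta <- theta + K * (y - phi' theta)."""
        K = P @ phi / (lam + phi @ P @ phi)        # gain vector
        theta = theta + K * (y - phi @ theta)      # correct by the prediction error
        P = (P - np.outer(K, phi @ P)) / lam       # covariance update
        return theta, P

    rng = np.random.default_rng(3)
    true_theta = np.array([1.5, -0.7, 2.0])
    theta, P = np.zeros(3), 1e3 * np.eye(3)
    for _ in range(500):
        phi = rng.normal(size=3)                   # regressor (filtered data in the paper)
        y = phi @ true_theta + 0.01 * rng.normal()
        theta, P = rls_step(theta, P, phi, y)

    print(np.round(theta, 3))                      # approaches [1.5, -0.7, 2.0]
    ```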

  10. Filtering Based Recursive Least Squares Algorithm for Multi-Input Multioutput Hammerstein Models

    OpenAIRE

    Wang, Ziyun; Wang, Yan; Ji, Zhicheng

    2014-01-01

    This paper considers the parameter estimation problem for Hammerstein multi-input multioutput finite impulse response (FIR-MA) systems. Filtered by the noise transfer function, the FIR-MA model is transformed into a controlled autoregressive model. The key-term variable separation principle is used to derive a data filtering based recursive least squares algorithm. The numerical examples confirm that the proposed algorithm can estimate parameters more accurately and has a higher computational...

  11. Genetic Algorithms for a Parameter Estimation of a Fermentation Process Model: A Comparison

    Directory of Open Access Journals (Sweden)

    Olympia Roeva

    2005-12-01

    Full Text Available In this paper the problem of a parameter estimation using genetic algorithms is examined. A case study considering the estimation of 6 parameters of a nonlinear dynamic model of E. coli fermentation is presented as a test problem. The parameter estimation problem is stated as a nonlinear programming problem subject to nonlinear differential-algebraic constraints. This problem is known to be frequently ill-conditioned and multimodal. Thus, traditional (gradient-based local optimization methods fail to arrive satisfied solutions. To overcome their limitations, the use of different genetic algorithms as stochastic global optimization methods is explored. These algorithms are proved to be very suitable for the optimization of highly non-linear problems with many variables. Genetic algorithms can guarantee global optimality and robustness. These facts make them advantageous in use for parameter identification of fermentation models. A comparison between simple, modified and multi-population genetic algorithms is presented. The best result is obtained using the modified genetic algorithm. The considered algorithms converged very closely to the cost value but the modified algorithm is in times faster than other two.

  12. Prediction models and control algorithms for predictive applications of setback temperature in cooling systems

    International Nuclear Information System (INIS)

    Moon, Jin Woo; Yoon, Younju; Jeon, Young-Hoon; Kim, Sooyoung

    2017-01-01

    Highlights:
    • An initial ANN model was developed for predicting the time to reach the setback temperature.
    • The initial model was optimized to produce accurate output.
    • The optimized model proved its prediction accuracy.
    • ANN-based algorithms were developed and their performance tested.
    • The ANN-based algorithms presented superior thermal comfort or energy efficiency.

    Abstract: In this study, a temperature control algorithm was developed to apply a setback temperature predictively for the cooling system of a residential building during periods occupied by residents. An artificial neural network (ANN) model was developed to determine the required time for raising the current indoor temperature to the setback temperature. This study involved three phases: development of the initial ANN-based prediction model, optimization and testing of the initial model, and development and testing of three control algorithms. The development and performance testing of the model and algorithms were conducted using TRNSYS and MATLAB. Through the development and optimization process, the final ANN model employed the indoor temperature and the temperature difference between the current and target setback temperatures as its two input neurons. The optimal number of hidden layers, number of neurons, learning rate, and momentum were determined to be 4, 9, 0.6, and 0.9, respectively. The tangent-sigmoid and pure-linear transfer functions were used in the hidden and output neurons, respectively. The ANN model used 100 training data sets with a sliding-window method for data management. The Levenberg-Marquardt training method was employed for model training. The optimized model had a prediction accuracy of 0.9097 root mean square error when compared with the simulated results. Employing the ANN model, the ANN-based algorithms maintained indoor temperatures better within the target ranges. Compared to the conventional algorithm, the ANN-based algorithms reduced the duration of time in which the indoor temperature

  13. Development and Pre-Clinical Evaluation of a Novel Prostate-Restricted Replication Competent Adenovirus-Ad-IU-1

    National Research Council Canada - National Science Library

    Gardner, Thomas A

    2005-01-01

    .... The goal of this research is to develop a novel therapeutic agent, Ad-IU-1, using PSES to control the replication of adenovirus and the expression of a therapeutic gene, herpes simplex thymidine kinase (TK...

  14. Development and Pre-Clinical Evaluation of a Novel Prostate-Restricted Replication Competent Adenovirus-AD-IU-1

    National Research Council Canada - National Science Library

    Gardner, Thomas A

    2006-01-01

    ... independent prostate cancers. The goal of this research is to develop a novel therapeutic agent, Ad-IU-1, using PSES to control the replication of adenovirus and the expression of a therapeutic gene, herpes simplex thymidine kinase (TK...

  15. Prevention of Vitamin D deficiency in infancy: daily 400 IU vitamin D is sufficient

    Directory of Open Access Journals (Sweden)

    Cizmecioglu Filiz M

    2011-06-01

    Full Text Available Summary. Aim/objective: Vitamin D deficiency and rickets in developing countries continue to be a major health problem. Additionally, the increase in cases of rickets in children of some ethnic groups in the United States and European countries has caused this issue to be revisited. Obviously, powerful strategies are necessary to prevent vitamin D deficiency nation-wide. In 2005, a nationwide prevention program for vitamin D deficiency was initiated, recommending 400 IU vitamin D per day. This study was designed to evaluate the efficacy of the prevention program. Methods: Eighty-five infants who were recalled as part of the national screening program for congenital hypothyroidism between February 2010 and August 2010 at Kocaeli University Children's Hospital were also evaluated in terms of their vitamin D status. All babies had been provided with free vitamin D (cholecalciferol) solution and recommended to receive 400 IU (3 drops) daily. Information regarding the age at start of supplementation, the dosage, and compliance was obtained from the mothers in face-to-face interviews. Serum 25-hydroxyvitamin D (25-OH-D), alkaline phosphatase (AP), and parathormone (PTH) levels were measured. Results: The mean age at which vitamin D3 supplementation began was 16.5 ± 20.7 (3-120) days. Ninety percent of cases (n = 76) were receiving 3 drops (400 IU) of vitamin D3 per day as recommended; 70% of cases (n = 59) were given vitamin D3 regularly, while the remainder had imperfect compliance. Among those children who were older than 12 months, only 20% continued vitamin D supplementation. No subject had clinical signs of rickets. The mean 25-OH-D level was 42.5 ± 25.8 (median: 38.3) ng/ml. Ten subjects (12%) had serum 25-OH-D levels lower than 20 ng/ml (6 between 15-20 ng/ml, 3 between 5-15 ng/ml, and only one ... Conclusions: 400 IU/day vitamin D seems adequate to prevent vitamin D deficiency. However, we believe that the program for preventing vitamin D deficiency in Turkey needs

  16. Algorithms for Hidden Markov Models Restricted to Occurrences of Regular Expressions

    DEFF Research Database (Denmark)

    Tataru, Paula; Sand, Andreas; Hobolth, Asger

    2013-01-01

    Hidden Markov Models (HMMs) are widely used probabilistic models, particularly for annotating sequential data with an underlying hidden structure. Patterns in the annotation are often more relevant to study than the hidden structure itself. A typical HMM analysis consists of annotating the observed data using a decoding algorithm and analyzing the annotation to study patterns of interest. For example, given an HMM modeling genes in DNA sequences, the focus is on occurrences of genes in the annotation. In this paper, we define a pattern through a regular expression and present a restriction of three classical algorithms to take the number of occurrences of the pattern in the hidden sequence into account. We present a new algorithm to compute the distribution of the number of pattern occurrences, and we extend the two most widely used existing decoding algorithms to employ information from...

  17. Genetic Algorithm Calibration of Probabilistic Cellular Automata for Modeling Mining Permit Activity

    Science.gov (United States)

    Louis, S.J.; Raines, G.L.

    2003-01-01

    We use a genetic algorithm to calibrate a spatially and temporally resolved cellular automaton to model mining activity on public land in Idaho and western Montana. The genetic algorithm searches through a space of transition rule parameters of a two-dimensional cellular automaton model to find rule parameters that fit observed mining activity data. Previous work by one of the authors on calibrating the cellular automaton took weeks; the genetic algorithm takes a day and produces rules leading to about the same (or better) fit to observed data. These preliminary results indicate that genetic algorithms are a viable tool for calibrating cellular automata for this application. Experience gained during the calibration of this cellular automaton suggests that mineral resource information is a critical factor in the quality of the results. With automated calibration, further refinements of how the mineral-resource information is provided to the cellular automaton will probably improve our model.
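
    As a rough illustration of this calibration loop (a toy probabilistic CA with a hypothetical two-parameter activation rule, not the authors' mining-activity model), a genetic algorithm can search rule-parameter space by scoring each candidate against observed cell states:

```python
import random

def simulate_ca(params, size=10, steps=5):
    """Toy probabilistic CA: a cell activates with a probability that
    grows with its active-neighbour count (hypothetical rule form)."""
    grid = [[0] * size for _ in range(size)]
    grid[size // 2][size // 2] = 1                      # seed cell
    for _ in range(steps):
        new = [row[:] for row in grid]
        for i in range(size):
            for j in range(size):
                k = sum(grid[(i + di) % size][(j + dj) % size]
                        for di in (-1, 0, 1) for dj in (-1, 0, 1)
                        if (di, dj) != (0, 0))
                if random.random() < min(1.0, params[0] + params[1] * k):
                    new[i][j] = 1
        grid = new
    return grid

def fitness(params, observed):
    """Negative cell-wise mismatch between simulated and observed maps."""
    sim = simulate_ca(params, size=len(observed))
    return -sum(s != o for srow, orow in zip(sim, observed)
                for s, o in zip(srow, orow))

def ga_calibrate(observed, pop_size=20, gens=30, sigma=0.01):
    pop = [[random.uniform(0, 0.1), random.uniform(0, 0.1)]
           for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=lambda p: fitness(p, observed), reverse=True)
        elite = pop[:pop_size // 2]                     # selection
        pop = elite + [[(a + b) / 2 + random.gauss(0, sigma)  # crossover
                        for a, b in zip(random.choice(elite),  # + mutation
                                        random.choice(elite))]
                       for _ in range(pop_size - len(elite))]
    return max(pop, key=lambda p: fitness(p, observed))

observed = simulate_ca([0.02, 0.05])        # synthetic "observed" data
print(ga_calibrate(observed))
```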

  18. Interpolation Algorithm and Mathematical Model in Automated Welding of Saddle-Shaped Weld

    Directory of Open Access Journals (Sweden)

    Lianghao Xue

    2018-01-01

    This paper presents a welding torch pose model and an interpolation algorithm for trajectory control of the saddle-shaped weld formed by the intersection of two pipes; the working principle, interpolation algorithm, welding experiment, and simulation results of the automatic welding system for the saddle-shaped weld are described. A variable-angle interpolation method is used to control the trajectory and pose of the welding torch, which guarantees a constant linear terminal velocity. The mathematical models of the trajectory and pose of the welding torch are established. Simulation and experiment have been carried out to verify the effectiveness of the proposed algorithm and mathematical model. The results demonstrate that the interpolation algorithm meets the interpolation requirements of the saddle-shaped weld and achieves good feed rate stability.
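
    The geometric core of the seam can be sketched as follows (a simplified setup assuming a branch pipe of radius r along the z-axis meeting a main pipe of radius R along the x-axis; the authors' full torch-pose model and variable-angle scheme are not reproduced):

```python
import numpy as np

def saddle_curve(theta, r, R):
    """Seam of a branch pipe (radius r, axis along z) meeting a main
    pipe (radius R, axis along x): the saddle-shaped weld curve."""
    x = r * np.cos(theta)
    y = r * np.sin(theta)
    z = np.sqrt(R**2 - y**2)              # point lies on the main pipe
    return np.stack([x, y, z], axis=1)

def constant_speed_waypoints(r, R, n_points=100, n_dense=10000):
    """Resample the seam at equal arc-length steps so that the torch
    tip moves with an (approximately) constant linear velocity."""
    t = np.linspace(0.0, 2 * np.pi, n_dense)
    pts = saddle_curve(t, r, R)
    seg = np.linalg.norm(np.diff(pts, axis=0), axis=1)
    s = np.concatenate([[0.0], np.cumsum(seg)])    # cumulative arc length
    s_target = np.linspace(0.0, s[-1], n_points)
    idx = np.searchsorted(s, s_target).clip(0, n_dense - 1)
    return pts[idx]

waypoints = constant_speed_waypoints(r=50.0, R=150.0)  # hypothetical radii, mm
print(waypoints[:3])
```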

  19. Programming Non-Trivial Algorithms in the Measurement Based Quantum Computation Model

    Energy Technology Data Exchange (ETDEWEB)

    Alsing, Paul [United States Air Force Research Laboratory, Wright-Patterson Air Force Base; Fanto, Michael [United States Air Force Research Laboratory, Wright-Patterson Air Force Base; Lott, Capt. Gordon [United States Air Force Research Laboratory, Wright-Patterson Air Force Base; Tison, Christoper C. [United States Air Force Research Laboratory, Wright-Patterson Air Force Base

    2014-01-01

    We provide a set of prescriptions for implementing a quantum circuit model algorithm as a measurement based quantum computing (MBQC) algorithm [1, 2] via a large cluster state. As a means of illustration we draw upon our numerical modeling experience to describe a large graph state capable of searching a logical 8-element list (a non-trivial version of Grover's algorithm [3] with feedforward). We develop several prescriptions based on analytic evaluation of cluster states and graph state equations which can be generalized into any circuit model operations. Such a resulting cluster state will be able to carry out the desired operation with appropriate measurements and feed-forward error correction. We also discuss the physical implementation and the analysis of the principal 3-qubit entangling gate (Toffoli) required for a non-trivial feedforward realization of an 8-element Grover search algorithm.

  20. Using an Improved Artificial Bee Colony Algorithm for Parameter Estimation of a Dynamic Grain Flow Model

    Directory of Open Access Journals (Sweden)

    He Wang

    2018-01-01

    An effective method is proposed to estimate the parameters of a dynamic grain flow model (DGFM). To this end, an improved artificial bee colony (IABC) algorithm is used to estimate the unknown parameters of the DGFM by minimizing a given objective function. A comparative study of the performance of the IABC algorithm and other ABC variants on several benchmark functions is carried out, and the results show a significant improvement in performance over the other ABC variants. The practical performance of the IABC is compared to that of nonlinear least squares (NLS), particle swarm optimization (PSO), and a genetic algorithm (GA). The results demonstrate that the IABC algorithm is more accurate and effective for the parameter estimation of the DGFM than the other algorithms.
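
    A compact sketch of the underlying idea (the plain employed-bee phase of ABC fitted to a hypothetical first-order response standing in for grain flow; the paper's specific improvements, and the onlooker and scout phases, are omitted):

```python
import math, random

def sse(params, xs, ys, model):
    """Sum of squared errors between model predictions and data."""
    return sum((model(x, params) - y) ** 2 for x, y in zip(xs, ys))

def abc_estimate(model, xs, ys, dim, lo, hi, n_bees=20, iters=300):
    """Basic artificial bee colony minimising the SSE objective."""
    foods = [[random.uniform(lo, hi) for _ in range(dim)]
             for _ in range(n_bees)]
    costs = [sse(f, xs, ys, model) for f in foods]
    for _ in range(iters):
        for i in range(n_bees):                      # employed-bee phase
            j = random.randrange(dim)
            k = random.choice([b for b in range(n_bees) if b != i])
            cand = foods[i][:]
            cand[j] += random.uniform(-1, 1) * (foods[i][j] - foods[k][j])
            c = sse(cand, xs, ys, model)
            if c < costs[i]:                         # greedy replacement
                foods[i], costs[i] = cand, c
    return min(zip(costs, foods))[1]

# hypothetical saturating-flow model q(t) = a * (1 - exp(-b t))
model = lambda t, p: p[0] * (1 - math.exp(-p[1] * t))
ts = [0.5 * i for i in range(20)]
qs = [model(t, [2.0, 0.8]) for t in ts]
print(abc_estimate(model, ts, qs, dim=2, lo=0.0, hi=5.0))
```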

  1. PM Synchronous Motor Dynamic Modeling with Genetic Algorithm ...

    African Journals Online (AJOL)

    Adel

    This paper proposes dynamic modeling simulation for ac Surface Permanent Magnet Synchronous Motor (SPMSM) with the aid of MATLAB – Simulink environment. The proposed model would be used in many applications such as automotive, mechatronics, green energy applications, and machine drives. The modeling ...

  2. Hybrid nested sampling algorithm for Bayesian model selection applied to inverse subsurface flow problems

    KAUST Repository

    Elsheikh, Ahmed H.

    2014-02-01

    A Hybrid Nested Sampling (HNS) algorithm is proposed for efficient Bayesian model calibration and prior model selection. The proposed algorithm combines the Nested Sampling (NS) algorithm, Hybrid Monte Carlo (HMC) sampling, and gradient estimation using the Stochastic Ensemble Method (SEM). NS is an efficient sampling algorithm that can be used for Bayesian calibration and for estimating the Bayesian evidence for prior model selection; it has the advantage of computational feasibility. Within the nested sampling algorithm, a constrained sampling step is performed. For this step, we utilize HMC to reduce the correlation between successive sampled states. HMC relies on the gradient of the logarithm of the posterior distribution, which we estimate using a stochastic ensemble method based on an ensemble of directional derivatives. SEM requires only forward model runs, so the simulator is used as a black box and no adjoint code is needed. The developed HNS algorithm is successfully applied to Bayesian calibration and prior model selection for several nonlinear subsurface flow problems. © 2013 Elsevier Inc.
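
    A skeleton of the nested sampling loop may help fix ideas (evidence accumulation with a naive rejection step for the likelihood-constrained draw; the paper replaces that step with HMC driven by SEM gradient estimates, and the final correction for the remaining live points is omitted here):

```python
import math, random

def nested_sampling(loglike, prior_sample, n_live=50, iters=200):
    """Skeleton nested sampling: repeatedly replace the worst live
    point with a prior draw constrained to higher likelihood, while
    accumulating the evidence Z."""
    live = [prior_sample() for _ in range(n_live)]
    logL = [loglike(x) for x in live]
    logZ, logX = -math.inf, 0.0         # log evidence, log prior volume
    for _ in range(iters):
        i = min(range(n_live), key=logL.__getitem__)   # worst point
        logw = logL[i] + logX + math.log(1 - math.exp(-1.0 / n_live))
        logZ = max(logZ, logw) + math.log1p(math.exp(-abs(logZ - logw)))
        logX -= 1.0 / n_live                           # volume shrinks
        while True:           # constrained draw: naive rejection here,
            x = prior_sample()    # HMC with SEM gradients in the paper
            if loglike(x) > logL[i]:
                break
        live[i], logL[i] = x, loglike(x)
    return logZ

# toy check: standard-normal likelihood, uniform prior on [-5, 5];
# the true log evidence is log(sqrt(2*pi)/10), roughly -1.38
print(nested_sampling(lambda x: -0.5 * x * x, lambda: random.uniform(-5, 5)))
```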

  3. Comparative Study on a Solving Model and Algorithm for a Flush Air Data Sensing System

    Directory of Open Access Journals (Sweden)

    Yanbin Liu

    2014-05-01

    With the development of high-performance aircraft, precise air data are necessary to complete challenging tasks such as flight maneuvering at large angles of attack and high speed. As a result, the flush air data sensing system (FADS) was developed to satisfy stricter control demands. In this paper, comparative studies on the solving models and algorithms for FADS are conducted. First, the basic principles of FADS are given to elucidate the nonlinear relations between the inputs and the outputs. Then, several different solving models and algorithms for FADS are provided to compute the air data, including the angle of attack, sideslip angle, dynamic pressure, and static pressure. Afterwards, the evaluation criteria for the resulting models and algorithms are discussed against real design demands. Furthermore, a simulation using these algorithms is performed to identify the properties of the distinct models and algorithms, such as measuring precision and real-time performance. The advantages of these models and algorithms under different flight conditions are also analyzed, and some suggestions on their engineering applications are proposed to guide future research.

  4. Optimizing the Forward Algorithm for Hidden Markov Model on IBM Roadrunner clusters

    Directory of Open Access Journals (Sweden)

    SOIMAN, S.-I.

    2015-05-01

    In this paper we present a parallel solution of the Forward algorithm for Hidden Markov Models. The Forward algorithm recursively computes the probability of being in a hidden state of the Markov model at a certain time. The whole process requires large computational resources for models with a large number of states and long observation sequences. Our solution for reducing the computational time is a multilevel parallelization of the Forward algorithm. Two types of cores, located on the same chip of the PowerXCell8i processor, were used in our implementation, one for each level of parallelization. This hybrid processor architecture allowed us to obtain a speedup factor over 40 relative to the sequential algorithm for a model with 24 states and 25 million observable symbols. Experimental results showed that the parallel Forward algorithm can evaluate the probability of an observation sequence on a hidden Markov model 40 times faster than the classic one does. Based on the performance obtained, we demonstrate the applicability of this parallel implementation of the Forward algorithm to complex problems such as large-vocabulary speech recognition.
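
    For reference, the sequential recursion being parallelised looks as follows (a plain single-threaded Forward algorithm; the paper's Cell-specific multilevel parallelisation is not shown):

```python
def forward(obs, pi, A, B):
    """P(observation sequence | HMM) via the Forward recursion.
    pi[i]: initial prob., A[i][j]: transition, B[i][o]: emission."""
    n = len(pi)
    alpha = [pi[i] * B[i][obs[0]] for i in range(n)]   # initialisation
    for o in obs[1:]:                                  # recursion step
        alpha = [B[j][o] * sum(alpha[i] * A[i][j] for i in range(n))
                 for j in range(n)]
    return sum(alpha)                                  # termination

# tiny 2-state example with hypothetical parameters
pi = [0.6, 0.4]
A = [[0.7, 0.3], [0.4, 0.6]]
B = [[0.5, 0.5], [0.1, 0.9]]
print(forward([0, 1, 1], pi, A, B))
```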

  5. A valence force field-Monte Carlo algorithm for quantum dot growth modeling

    DEFF Research Database (Denmark)

    Barettin, Daniele; Kadkhodazadeh, Shima; Pecchia, Alessandro

    2017-01-01

    We present a novel kinetic Monte Carlo version of the atomistic valence force field algorithm in order to model a self-assembled quantum dot growth process. We show that our atomistic model is both computationally favorable and captures more details compared to traditional kinetic Monte Carlo models...

  6. The Rasch Poisson Counts Model for Incomplete Data: An Application of the EM Algorithm.

    Science.gov (United States)

    Jansen, Margo G. H.

    1995-01-01

    The Rasch Poisson counts model is a latent trait model for the situation in which "K" tests are administered to "N" examinees and the test score is a count (repeated number of some event). A mixed model is presented that applies the EM algorithm and that can allow for missing data. (SLD)

  7. Development of Improved Algorithms and Multiscale Modeling Capability with SUNTANS

    Science.gov (United States)

    2015-09-30


  8. The PX-EM algorithm for fast stable fitting of Henderson's mixed model

    Directory of Open Access Journals (Sweden)

    Van Dyk David A

    2000-03-01

    This paper presents procedures for implementing the PX-EM algorithm of Liu, Rubin and Wu to compute REML estimates of variance-covariance components in Henderson's linear mixed models. The class of models considered encompasses several correlated random factors having the same vector length, e.g., as in random regression models for longitudinal data analysis and in sire-maternal grandsire models for genetic evaluation. Numerical examples are presented to illustrate the procedures. Much better results in terms of convergence characteristics (number of iterations and time required for convergence) are obtained for PX-EM relative to the basic EM algorithm in the random regression case.

  9. Turning Simulation into Estimation: Generalized Exchange Algorithms for Exponential Family Models.

    Directory of Open Access Journals (Sweden)

    Maarten Marsman

    The Single Variable Exchange algorithm is based on a simple idea: any model that can be simulated can be estimated by producing draws from the posterior distribution. We build on this simple idea by framing the Exchange algorithm as a mixture of Metropolis transition kernels and propose strategies that automatically select the more efficient transition kernels. In this manner we achieve significant improvements in convergence rate and autocorrelation of the Markov chain without relying on anything more than being able to simulate from the model. Our focus is on statistical models in the Exponential Family, and we use two simple models from educational measurement to illustrate the contribution.
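
    One update of the basic Exchange algorithm can be sketched as follows (hypothetical function names; log_q is an unnormalised log-likelihood and simulate draws exact data from the model, which is what lets the intractable normalising constants cancel):

```python
import math, random

def exchange_step(theta, x, log_q, log_prior, simulate, propose):
    """One Single Variable Exchange update for a model whose
    normalising constant is intractable. log_q(x, theta) is the
    *unnormalised* log-likelihood; simulate(theta) draws an exact
    sample from the model, which makes the constants cancel."""
    theta_p = propose(theta)          # Metropolis proposal
    y = simulate(theta_p)             # auxiliary exact draw
    log_a = (log_q(x, theta_p) - log_q(x, theta)       # data term
             + log_q(y, theta) - log_q(y, theta_p)     # auxiliary swap
             + log_prior(theta_p) - log_prior(theta))
    return theta_p if math.log(random.random()) < log_a else theta
```

    The paper's contribution, framing the Exchange algorithm as a mixture of such transition kernels and selecting among them, would sit one level above this single-kernel step.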

  10. Three dimensional fuzzy influence analysis of fitting algorithms on integrated chip topographic modeling

    International Nuclear Information System (INIS)

    Liang, Zhong Wei; Wang, Yi Jun; Ye, Bang Yan; Brauwer, Richard Kars

    2012-01-01

    In inspecting the detailed performance of surface precision modeling under different external parameter conditions, integrated chip surfaces should be evaluated and assessed during the topographic spatial modeling process. The choice of surface fitting algorithm exerts a considerable influence on the topographic mathematical features, and studying the influence mechanisms of different surface fitting algorithms on the integrated chip surface facilitates quantitative analysis under different external parameter conditions. By extracting coordinate information from selected physical control points with a precise spatial coordinate measuring apparatus, several typical surface fitting algorithms are used to construct micro topographic models from the obtained point cloud. After computing the newly proposed mathematical features on these surface models, we construct a fuzzy evaluation data sequence and present a new three-dimensional fuzzy quantitative evaluation method. Through this method, the variation tendencies of the topographic features can be clearly quantified, and the fuzzy influence relations among surface fitting algorithms, topographic spatial features, and external parameter conditions can be analyzed quantitatively and in detail. In addition, the quantitative analysis supports conclusions on the inherent influence mechanisms and internal mathematical relations between surface fitting algorithms, topographic spatial features, and their parameter conditions in surface micro modeling. This provides a new research approach for micro-surface reconstruction, facilitating and optimizing the inspection of surface precision modeling.

  11. An Evolutionary Algorithm for Multiobjective Fuzzy Portfolio Selection Models with Transaction Cost and Liquidity

    Directory of Open Access Journals (Sweden)

    Wei Yue

    2015-01-01

    The major issue for mean-variance-skewness models is estimation error, which causes corner solutions and low diversity in the portfolio. In this paper, a multiobjective fuzzy portfolio selection model with transaction cost and liquidity is proposed to maintain the diversity of the portfolio. In addition, we have designed a multiobjective evolutionary algorithm based on decomposition of the objective space to maintain the diversity of the obtained solutions. The algorithm is used to obtain a set of Pareto-optimal portfolios with good diversity and convergence. To demonstrate the effectiveness of the proposed model and algorithm, the performance of the proposed algorithm is compared with the classic MOEA/D and NSGA-II through numerical examples based on data from the Shanghai Stock Exchange. Simulation results show that our proposed algorithm obtains better diversity and a more evenly distributed Pareto front than the other two algorithms, and that the proposed model maintains the diversity of the portfolio quite well. The purpose of this paper is to deal with portfolio problems in the weighted possibilistic mean-variance-skewness (MVS) and possibilistic mean-variance-skewness-entropy (MVS-E) frameworks with transaction cost and liquidity, and to provide investors with a range of Pareto-optimal investment strategies that are as diversified as possible, rather than a single strategy at a time.

  12. Channel Parameter Estimation for Scatter Cluster Model Using Modified MUSIC Algorithm

    Directory of Open Access Journals (Sweden)

    Jinsheng Yang

    2012-01-01

    Recently, scatter cluster models, which precisely evaluate the performance of wireless communication systems, have been proposed in the literature. However, the conventional SAGE algorithm does not work for these scatter cluster-based models because it performs poorly when the transmit signals are highly correlated. In this paper, we estimate the time of arrival (TOA), the direction of arrival (DOA), and the Doppler frequency for the scatter cluster model by a modified multiple signal classification (MUSIC) algorithm. Using the space-time characteristics of the multiray channel, the proposed algorithm combines temporal filtering techniques and spatial smoothing techniques to isolate and estimate the incoming rays. The simulation results indicate that the proposed algorithm has lower complexity and is less time-consuming in dense multipath environments than the SAGE algorithm. Furthermore, the estimation performance improves with the number of receive array elements and the sample length. Thus, the channel parameter estimation problem for the scatter cluster model can be effectively addressed with the proposed modified MUSIC algorithm.
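
    The core of any MUSIC-type estimator is the noise-subspace projection, sketched below for a uniform linear array (a generic DOA-only illustration; the paper's temporal filtering and spatial smoothing preprocessing, and the joint TOA/DOA/Doppler search, are not shown):

```python
import numpy as np

def music_spectrum(R, n_sources, steering, grid):
    """MUSIC pseudospectrum: score each candidate angle by how nearly
    its steering vector is orthogonal to the noise subspace of R."""
    _, V = np.linalg.eigh(R)              # eigenvalues in ascending order
    En = V[:, : R.shape[0] - n_sources]   # noise-subspace eigenvectors
    P = En @ En.conj().T                  # projector onto noise subspace
    return np.array([1.0 / np.real(steering(g).conj() @ P @ steering(g))
                     for g in grid])

def ula_steering(theta, m=8):
    """Response of a half-wavelength-spaced uniform linear array."""
    return np.exp(-1j * np.pi * np.arange(m) * np.sin(theta))

# toy check: two sources at -20 and 30 degrees plus light noise
rng = np.random.default_rng(0)
A = np.stack([ula_steering(a) for a in np.radians([-20, 30])], axis=1)
X = A @ rng.standard_normal((2, 200)) + 0.1 * rng.standard_normal((8, 200))
R = X @ X.conj().T / 200                  # sample covariance matrix
grid = np.radians(np.linspace(-90, 90, 361))
spec = music_spectrum(R, 2, ula_steering, grid)
idx = [i for i in range(1, len(grid) - 1)
       if spec[i] > spec[i - 1] and spec[i] > spec[i + 1]]
print(sorted(np.degrees(grid[i])                 # two strongest peaks
             for i in sorted(idx, key=spec.__getitem__)[-2:]))
```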

  13. Convergence analysis of the alternating RGLS algorithm for the identification of the reduced complexity Volterra model.

    Science.gov (United States)

    Laamiri, Imen; Khouaja, Anis; Messaoud, Hassani

    2015-03-01

    In this paper we provide a convergence analysis of the alternating RGLS (Recursive Generalized Least Squares) algorithm used for the identification of the reduced-complexity Volterra model describing stochastic nonlinear systems. The reduced Volterra model used is the 3rd-order SVD-PARAFAC-Volterra model, obtained using the Singular Value Decomposition (SVD) and the Parallel Factor (PARAFAC) tensor decomposition of the quadratic and cubic kernels, respectively, of the classical Volterra model. The Alternating RGLS (ARGLS) algorithm consists of executing the classical RGLS algorithm in an alternating way. The ARGLS convergence is proved using the Ordinary Differential Equation (ODE) method. It is noted that the algorithm convergence cannot be ensured when the disturbance acting on the system to be identified has specific features. The ARGLS algorithm is tested in simulations on a numerical example satisfying the determined convergence conditions. To highlight the merits of the proposed algorithm, we compare it with the classical Alternating Recursive Least Squares (ARLS) algorithm presented in the literature. The comparison is based on a nonlinear satellite channel and a benchmark CSTR (Continuous Stirred Tank Reactor) system. Moreover, the efficiency of the proposed identification approach is demonstrated on an experimental Communicating Two Tank System (CTTS). Copyright © 2014 ISA. Published by Elsevier Ltd. All rights reserved.

  14. PRESS-based EFOR algorithm for the dynamic parametrical modeling of nonlinear MDOF systems

    Science.gov (United States)

    Liu, Haopeng; Zhu, Yunpeng; Luo, Zhong; Han, Qingkai

    2017-09-01

    In response to the identification problem concerning multi-degree of freedom (MDOF) nonlinear systems, this study presents the extended forward orthogonal regression (EFOR) based on predicted residual sums of squares (PRESS) to construct a nonlinear dynamic parametrical model. The proposed parametrical model is based on the non-linear autoregressive with exogenous inputs (NARX) model and aims to explicitly reveal the physical design parameters of the system. The PRESS-based EFOR algorithm is proposed to identify such a model for MDOF systems. By using the algorithm, we built a common-structured model based on the fundamental concept of evaluating its generalization capability through cross-validation. The resulting model aims to prevent over-fitting with poor generalization performance caused by the average error reduction ratio (AERR)-based EFOR algorithm. Then, a functional relationship is established between the coefficients of the terms and the design parameters of the unified model. Moreover, a 5-DOF nonlinear system is taken as a case to illustrate the modeling of the proposed algorithm. Finally, a dynamic parametrical model of a cantilever beam is constructed from experimental data. Results indicate that the dynamic parametrical model of nonlinear systems, which depends on the PRESS-based EFOR, can accurately predict the output response, thus providing a theoretical basis for the optimal design of modeling methods for MDOF nonlinear systems.

  15. Trans gene regulation in adaptive evolution: a genetic algorithm model.

    Science.gov (United States)

    Behera, N; Nanjundiah, V

    1997-09-21

    This is a continuation of earlier studies on the evolution of infinite populations of haploid genotypes within a genetic algorithm framework. We had previously explored the evolutionary consequences of the existence of indeterminate ("plastic") loci, where a plastic locus had a finite probability in each generation of functioning (being switched "on") or not functioning (being switched "off"). The relative probabilities of the two outcomes were assigned on a stochastic basis. The present paper examines what happens when the transition probabilities are biased by the presence of regulatory genes. We find that under certain conditions regulatory genes can improve the adaptation of the population and speed up the rate of evolution (on occasion at the cost of lowering the degree of adaptation). Also, the existence of regulatory loci potentiates selection in favour of plasticity. There is a synergistic effect of regulatory genes on plastic alleles: the frequency of such alleles increases when regulatory loci are present. Thus, phenotypic selection alone can be a potentiating factor in favour of better adaptation. Copyright 1997 Academic Press Limited.

  16. MIP Models and Hybrid Algorithms for Simultaneous Job Splitting and Scheduling on Unrelated Parallel Machines

    Science.gov (United States)

    Ozmutlu, H. Cenk

    2014-01-01

    We developed mixed integer programming (MIP) models and hybrid genetic-local search algorithms for the scheduling problem of unrelated parallel machines with job sequence- and machine-dependent setup times and with the job splitting property. The first contribution of this paper is to introduce novel algorithms which perform splitting and scheduling simultaneously with a variable number of subjobs. We proposed a simple chromosome structure constituted by random key numbers in the hybrid genetic-local search algorithm (GAspLA). Random key numbers are used frequently in genetic algorithms, but they create additional difficulty when hybrid factors in local search are implemented. We developed algorithms that accommodate the adaptation of local search results into the genetic algorithms with a minimum of relocation operations on the genes' random key numbers. This is the second contribution of the paper. The third contribution of this paper is three new MIP models which perform splitting and scheduling simultaneously. The fourth contribution of this paper is the implementation of the GAspLAMIP. This implementation lets us verify the optimality of GAspLA for the studied combinations. The proposed methods are tested on a set of problems taken from the literature and the results validate the effectiveness of the proposed algorithms. PMID:24977204

  17. MIP models and hybrid algorithms for simultaneous job splitting and scheduling on unrelated parallel machines.

    Science.gov (United States)

    Eroglu, Duygu Yilmaz; Ozmutlu, H Cenk

    2014-01-01

    We developed mixed integer programming (MIP) models and hybrid genetic-local search algorithms for the scheduling problem of unrelated parallel machines with job sequence- and machine-dependent setup times and with the job splitting property. The first contribution of this paper is to introduce novel algorithms which perform splitting and scheduling simultaneously with a variable number of subjobs. We proposed a simple chromosome structure constituted by random key numbers in the hybrid genetic-local search algorithm (GAspLA). Random key numbers are used frequently in genetic algorithms, but they create additional difficulty when hybrid factors in local search are implemented. We developed algorithms that accommodate the adaptation of local search results into the genetic algorithms with a minimum of relocation operations on the genes' random key numbers. This is the second contribution of the paper. The third contribution of this paper is three new MIP models which perform splitting and scheduling simultaneously. The fourth contribution of this paper is the implementation of the GAspLAMIP. This implementation lets us verify the optimality of GAspLA for the studied combinations. The proposed methods are tested on a set of problems taken from the literature and the results validate the effectiveness of the proposed algorithms.

  18. Multilevel Analysis of Structural Equation Models via the EM Algorithm.

    Science.gov (United States)

    Jo, See-Heyon

    The question of how to analyze unbalanced hierarchical data generated from structural equation models has been a common problem for researchers and analysts. Among difficulties plaguing statistical modeling are estimation bias due to measurement error and the estimation of the effects of the individual's hierarchical social milieu. This paper…

  19. Optimisation of Hidden Markov Model using Baum–Welch algorithm ...

    Indian Academy of Sciences (India)

    a new model λ̄ is obtained, which is more likely than model λ to produce the observation sequence O. This process of re-estimation is continued till no further improvement in the probability of the observation sequence is reached. 4. Results and discussion. HMMs have been developed for prediction of maximum and minimum temperatures in ...

  20. Application of Parallel Algorithms in an Air Pollution Model

    DEFF Research Database (Denmark)

    Georgiev, K.; Zlatev, Z.

    1999-01-01

    Proceedings of the NATO Advanced Research Workshop on Large Scale Computations in Air Pollution Modelling, Sofia, Bulgaria, 6-10 July 1998.

  1. Real-time slicing algorithm for Stereolithography (STL) CAD model applied in additive manufacturing industry

    Science.gov (United States)

    Adnan, F. A.; Romlay, F. R. M.; Shafiq, M.

    2018-04-01

    Owing to the advent of Industry 4.0, there is a need to further evaluate the processes applied in additive manufacturing, particularly the computational process of slicing. This paper evaluates a real-time slicing algorithm for slicing an STL-formatted computer-aided design (CAD) model. A line-plane intersection equation is applied to perform the slicing procedure at any given height. This algorithm has been found to provide better computational time regardless of the number of facets in the STL model. The performance of the algorithm is evaluated by comparing the computational times for different geometries.
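
    The per-facet computation behind such a slicer is straightforward (a minimal sketch of triangle/plane intersection at height z = h, without the contour-linking and STL parsing a full slicer needs):

```python
def slice_triangle(tri, h, eps=1e-9):
    """Intersect one STL facet (three 3-D vertices) with the plane
    z = h; returns the cut segment's (x, y) endpoints, or None."""
    pts = []
    for (x1, y1, z1), (x2, y2, z2) in zip(tri, tri[1:] + tri[:1]):
        if (z1 - h) * (z2 - h) < 0:            # edge crosses the plane
            t = (h - z1) / (z2 - z1)           # line-plane intersection
            pts.append((x1 + t * (x2 - x1), y1 + t * (y2 - y1)))
        elif abs(z1 - h) < eps:                # vertex lies on the plane
            pts.append((x1, y1))
    return tuple(pts[:2]) if len(pts) >= 2 else None

def slice_model(facets, h):
    """One slice: collect contour segments from all facets at height h."""
    return [s for s in (slice_triangle(t, h) for t in facets) if s]

# toy facet (a hypothetical triangle), sliced at z = 0.5
print(slice_model([[(0, 0, 0), (1, 0, 1), (0, 1, 1)]], 0.5))
```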

  2. System convergence in transport models: algorithms efficiency and output uncertainty

    DEFF Research Database (Denmark)

    Rich, Jeppe; Nielsen, Otto Anker

    2015-01-01

    The aim of this paper is to analyse convergence performance for the external loop and to illustrate how an improper linkage between the converging parts can lead to substantial uncertainty in the final output. Although this loop is crucial for the performance of large-scale transport models, it has not been analysed......-scale in the Danish National Transport Model (DNTM). It is revealed that system convergence requires that either demand or supply is without random noise, but not both. In that case, if the method of successive averages (MSA) is applied to the model output with random noise, it will converge effectively as the random effects are gradually dampened...... in the MSA process. In connection with the DNTM it is shown that MSA works well when applied to travel-time averaging, whereas trip averaging is generally infected by random noise resulting from the assignment model. The latter implies that the minimum uncertainty in the final model output is dictated...

  3. An efficient voice activity detection algorithm by combining statistical model and energy detection

    Science.gov (United States)

    Wu, Ji; Zhang, Xiao-Lei

    2011-12-01

    In this article, we present a new voice activity detection (VAD) algorithm that is based on statistical models and an empirical rule-based energy detection algorithm. Specifically, it takes two steps to separate speech segments from background noise. In the first step, the VAD detects possible speech endpoints efficiently using the empirical rule-based energy detection algorithm. However, the possible endpoints are not accurate enough when the signal-to-noise ratio is low. Therefore, in the second step, we propose a new Gaussian mixture model-based multiple-observation log-likelihood ratio algorithm to align the endpoints to their optimal positions. Several experiments are conducted to evaluate the proposed VAD on both accuracy and efficiency. The results show that it achieves better performance than the six referenced VADs in various noise scenarios.

  4. An efficient voice activity detection algorithm by combining statistical model and energy detection

    Directory of Open Access Journals (Sweden)

    Wu Ji

    2011-01-01

    In this article, we present a new voice activity detection (VAD) algorithm that is based on statistical models and an empirical rule-based energy detection algorithm. Specifically, it takes two steps to separate speech segments from background noise. In the first step, the VAD detects possible speech endpoints efficiently using the empirical rule-based energy detection algorithm. However, the possible endpoints are not accurate enough when the signal-to-noise ratio is low. Therefore, in the second step, we propose a new Gaussian mixture model-based multiple-observation log-likelihood ratio algorithm to align the endpoints to their optimal positions. Several experiments are conducted to evaluate the proposed VAD on both accuracy and efficiency. The results show that it achieves better performance than the six referenced VADs in various noise scenarios.
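
    The first (energy-based) pass can be sketched as follows (an illustrative threshold rule with hypothetical constants; the paper's second pass, a GMM-based multiple-observation log-likelihood ratio refinement of each endpoint, is not shown):

```python
import numpy as np

def energy_endpoints(signal, fs, frame_ms=20, factor=3.0):
    """First-pass endpoint detection: flag frames whose short-time
    energy exceeds a noise-floor-based threshold (empirical rule)."""
    n = int(fs * frame_ms / 1000)
    frames = signal[: len(signal) // n * n].reshape(-1, n)
    energy = (frames ** 2).mean(axis=1)
    noise_floor = np.percentile(energy, 10)   # assume >=10% is silence
    active = energy > factor * noise_floor
    # collapse the frame mask into (start, end) sample indices
    edges = np.flatnonzero(np.diff(np.r_[0, active.astype(int), 0]))
    return [(s * n, e * n) for s, e in zip(edges[::2], edges[1::2])]

# synthetic test: a 440 Hz tone between 0.3 s and 0.6 s in light noise
fs = 16000
t = np.arange(fs) / fs
sig = np.where((t > 0.3) & (t < 0.6),
               np.sin(2 * np.pi * 440 * t),
               0.01 * np.random.randn(fs))
print(energy_endpoints(sig, fs))   # roughly [(4800, 9600)]
```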

  5. Application of a single-objective, hybrid genetic algorithm approach to pharmacokinetic model building.

    Science.gov (United States)

    Sherer, Eric A; Sale, Mark E; Pollock, Bruce G; Belani, Chandra P; Egorin, Merrill J; Ivy, Percy S; Lieberman, Jeffrey A; Manuck, Stephen B; Marder, Stephen R; Muldoon, Matthew F; Scher, Howard I; Solit, David B; Bies, Robert R

    2012-08-01

    A limitation in traditional stepwise population pharmacokinetic model building is the difficulty in handling interactions between model components. To address this issue, a method was previously introduced which couples NONMEM parameter estimation and model fitness evaluation to a single-objective, hybrid genetic algorithm for global optimization of the model structure. In this study, the generalizability of this approach for pharmacokinetic model building is evaluated by comparing (1) correct and spurious covariate relationships in a simulated dataset resulting from automated stepwise covariate modeling, Lasso methods, and single-objective hybrid genetic algorithm approaches to covariate identification and (2) information criteria values, model structures, convergence, and model parameter values resulting from manual stepwise versus single-objective, hybrid genetic algorithm approaches to model building for seven compounds. Both manual stepwise and single-objective, hybrid genetic algorithm approaches to model building were applied, blinded to the results of the other approach, for selection of the compartment structure as well as inclusion and model form of inter-individual and inter-occasion variability, residual error, and covariates from a common set of model options. For the simulated dataset, stepwise covariate modeling identified three of four true covariates and two spurious covariates; Lasso identified two of four true covariates and no spurious covariates; and the single-objective, hybrid genetic algorithm identified three of four true covariates and one spurious covariate. For the clinical datasets, the Akaike information criterion was a median of 22.3 points lower (range: 470.5-point decrease to 0.1-point decrease) for the best single-objective hybrid genetic-algorithm candidate model versus the final manual stepwise model; the Akaike information criterion was lower by more than 10 points for four compounds and differed by less than 10 points for three compounds.

  6. Model Predictive Control Algorithms for Pen and Pump Insulin Administration

    DEFF Research Database (Denmark)

    Boiroux, Dimitri

    parameters are personalized using a priori available patient information. We consider an autoregressive integrated moving average with exogenous input (ARIMAX) model. We summarize the results of the overnight clinical studies conducted at Hvidovre Hospital. Based on these results, we propose improvements...... ARMAX model in which we estimate the parameters of the stochastic part using a Recursive Least Squares (RLS) method. We test the controller in a virtual clinic of 100 patients. This virtual clinic is based on the Hovorka model. We consider the case where only half of the bolus is administered...

  7. A Formal Verification Model for Performance Analysis of Reinforcement Learning Algorithms Applied t o Dynamic Networks

    OpenAIRE

    Shrirang Ambaji KULKARNI; Raghavendra G . RAO

    2017-01-01

    Routing data packets in a dynamic network is a difficult and important problem in computer networks. As the network is dynamic, it is subject to frequent topology changes and variable link costs due to congestion and bandwidth. Existing shortest-path algorithms fail to converge to better solutions under dynamic network conditions. Reinforcement learning algorithms possess better adaptation techniques in dynamic environments. In this paper we apply a model-based Q-Routing technique ...

  8. RB Particle Filter Time Synchronization Algorithm Based on the DPM Model

    OpenAIRE

    Guo, Chunsheng; Shen, Jia; Sun, Yao; Ying, Na

    2015-01-01

    Time synchronization is essential for node localization, target tracking, data fusion, and various other Wireless Sensor Network (WSN) applications. To improve the estimation accuracy of continuous clock offset and skew of mobile nodes in WSNs, we propose a novel time synchronization algorithm, the Rao-Blackwellised (RB) particle filter time synchronization algorithm based on the Dirichlet process mixture (DPM) model. In a state-space equation with a linear substructure, state variables are d...

  9. Hybrid Model Based on Genetic Algorithms and SVM Applied to Variable Selection within Fruit Juice Classification

    Science.gov (United States)

    Fernandez-Lozano, C.; Canto, C.; Gestal, M.; Andrade-Garda, J. M.; Rabuñal, J. R.; Dorado, J.; Pazos, A.

    2013-01-01

    Given the background of the use of Neural Networks in problems of apple juice classification, this paper aims at implementing a newly developed method in the field of machine learning: Support Vector Machines (SVM). Therefore, a hybrid model that combines genetic algorithms and support vector machines is suggested in such a way that, when using SVM as the fitness function of the Genetic Algorithm (GA), the most representative variables for a specific classification problem can be selected. PMID:24453933

  10. Model-based fault diagnosis techniques design schemes, algorithms, and tools

    CERN Document Server

    Ding, Steven

    2008-01-01

    The objective of this book is to introduce basic model-based FDI schemes, advanced analysis and design algorithms, and the needed mathematical and control theory tools at a level for graduate students and researchers as well as for engineers. This is a textbook with extensive examples and references. Most methods are given in the form of an algorithm that enables a direct implementation in a programme. Comparisons among different methods are included when possible.

  11. Belief Bisimulation for Hidden Markov Models Logical Characterisation and Decision Algorithm

    DEFF Research Database (Denmark)

    Jansen, David N.; Nielson, Flemming; Zhang, Lijun

    2012-01-01

    This paper establishes connections between logical equivalences and bisimulation relations for hidden Markov models (HMM). Both standard and belief state bisimulations are considered. We also present decision algorithms for the bisimilarities. For standard bisimilarity, an extension of the usual...... partition refinement algorithm is enough. Belief bisimilarity, being a relation on the continuous space of belief states, cannot be described directly. Instead, we show how to generate a linear equation system in time cubic in the number of states....

  12. Statistical equivalence of prediction models of the soil sorption coefficient obtained using different log P algorithms.

    Science.gov (United States)

    Olguin, Carlos José Maria; Sampaio, Silvio César; Dos Reis, Ralpho Rinaldo

    2017-10-01

    The soil sorption coefficient normalized to the organic carbon content (Koc) is a physicochemical parameter used in environmental risk assessments and in determining the final fate of chemicals released into the environment. Several models for predicting this parameter have been proposed based on the relationship between log Koc and log P. The difficulty and cost of obtaining experimental log P values led to the development of algorithms to calculate these values, some of which are free to use. However, quantitative structure-property relationship (QSPR) studies did not detail how or why a particular algorithm was chosen. In this study, we evaluated several free algorithms for calculating log P in the modeling of log Koc, using a broad and diverse set of compounds (n = 639) that included several chemical classes. In addition, we propose the adoption of a simple test to verify if there is statistical equivalence between models obtained using different data sets. Our results showed that the ALOGPs, KOWWIN and XLOGP3 algorithms generated the best models for modeling Koc, and these models are statistically equivalent. This finding shows that it is possible to use the different algorithms without compromising statistical quality and predictive capacity. Copyright © 2017 Elsevier Ltd. All rights reserved.
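
    A sketch of the kind of comparison involved (ordinary least squares for log Koc = a·log P + b plus a crude overlap-of-confidence-intervals check; the paper's actual equivalence test is not specified here, so this is only an illustration):

```python
import numpy as np

def fit_koc(logp, logkoc):
    """OLS fit of log Koc = a*log P + b; returns coefficients (a, b)
    and their standard errors."""
    X = np.column_stack([logp, np.ones_like(logp)])
    coef, rss, *_ = np.linalg.lstsq(X, logkoc, rcond=None)
    s2 = rss[0] / (len(logkoc) - 2)              # residual variance
    se = np.sqrt(s2 * np.diag(np.linalg.inv(X.T @ X)))
    return coef, se

def equivalent(fit1, fit2, z=1.96):
    """Hypothetical equivalence rule: do the coefficient confidence
    intervals from two log P algorithms overlap?"""
    (c1, s1), (c2, s2) = fit1, fit2
    return all(abs(a - b) < z * (sa + sb)
               for a, b, sa, sb in zip(c1, c2, s1, s2))

# synthetic demo: two "algorithms" producing slightly different log P
rng = np.random.default_rng(1)
logp = rng.uniform(0, 6, 200)
logkoc = 0.55 * logp + 0.9 + 0.2 * rng.standard_normal(200)
print(equivalent(fit_koc(logp, logkoc),
                 fit_koc(logp + 0.05 * rng.standard_normal(200), logkoc)))
```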

  13. Modelling and Quantitative Analysis of LTRACK–A Novel Mobility Management Algorithm

    Directory of Open Access Journals (Sweden)

    Benedek Kovács

    2006-01-01

    This paper discusses the improvements and parameter optimization issues of LTRACK, a recently proposed mobility management algorithm. Mathematical modelling of the algorithm and of the behavior of the Mobile Node (MN) is used to optimize the parameters of LTRACK. A numerical method is given to determine the optimal values of the parameters. Markov chains are used to model both the base algorithm and the so-called loop removal effect. An extended qualitative and quantitative analysis is carried out to compare LTRACK to existing handover mechanisms such as MIP, Hierarchical Mobile IP (HMIP), Dynamic Hierarchical Mobility Management Strategy (DHMIP), Telecommunication Enhanced Mobile IP (TeleMIP), Cellular IP (CIP) and HAWAII. LTRACK is sensitive to network topology and MN behavior, so MN movement modelling is also introduced and discussed for different topologies. The techniques presented here can be used to model not only the LTRACK algorithm but other algorithms too. Extensive discussion and calculations support the adequacy of our mathematical model in many cases. The model is valid on various network levels, scales vertically in the ISO-OSI layers, and also scales well with the number of network elements.

  14. Model Justified Search Algorithms for Scheduling Under Uncertainty

    National Research Council Canada - National Science Library

    Howe, Adele; Whitley, L. D

    2008-01-01

    .... We also identified plateaus as a significant barrier to superb performance of local search on scheduling and have studied several canonical discrete optimization problems to discover and model the nature of plateaus...

  15. An Introduction to Model Selection: Tools and Algorithms

    Directory of Open Access Journals (Sweden)

    Sébastien Hélie

    2006-03-01

    Model selection is a complicated matter in science, and psychology is no exception. In particular, the high variance in the object of study (i.e., humans) prevents the use of Popper's falsification principle (which is the norm in other sciences). Therefore, the desirability of quantitative psychological models must be assessed by measuring the capacity of the model to fit empirical data. In the present paper, an error measure (likelihood), as well as five methods to compare model fits (the likelihood ratio test, Akaike's information criterion, the Bayesian information criterion, bootstrapping, and cross-validation), are presented. The use of each method is illustrated by an example, and the advantages and weaknesses of each method are also discussed.
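
    The information criteria mentioned reduce to one-liners (a minimal sketch; the log-likelihood values below are hypothetical fitted values for two nested models on n = 100 observations):

```python
import math

def aic(loglik, k):
    """Akaike's information criterion: fit penalised by parameter count."""
    return 2 * k - 2 * loglik

def bic(loglik, k, n):
    """Bayesian information criterion: heavier penalty as n grows."""
    return k * math.log(n) - 2 * loglik

def lr_statistic(loglik_small, loglik_big):
    """Likelihood ratio statistic for nested models; compare against a
    chi-square with df = difference in number of parameters."""
    return 2 * (loglik_big - loglik_small)

# hypothetical fitted log-likelihoods for two nested models, n = 100
print(aic(-120.0, 3), aic(-118.5, 5))           # lower is better
print(bic(-120.0, 3, 100), bic(-118.5, 5, 100))
print(lr_statistic(-120.0, -118.5))             # vs chi-square, df = 2
```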

  16. ARCHITECTURES AND ALGORITHMS FOR COGNITIVE NETWORKS ENABLED BY QUALITATIVE MODELS

    DEFF Research Database (Denmark)

    Balamuralidhar, P.

    2013-01-01

    The complexity of communication networks is ever increasing, complicated further by their heterogeneity and dynamism. Traditional techniques face challenges in network performance management. Cognitive networking is an emerging paradigm for making networks more intelligent, thereby overcoming...... of the cognitive engine, which incorporates a context-space-based information structure into its knowledge model. I propose a set of guiding principles for a cognitive system to be autonomic and use them, with additional requirements, to build a detailed architecture for the cognitive engine. I define a context space...... structure integrating the various information structures required for the knowledge model. Using graphical models to represent and reason about the context space is the direction followed here. Specifically, I analyze the framework of qualitative models for their suitability to represent the dynamic...

  17. Applications of flocking algorithms to input modeling for agent movement

    OpenAIRE

    Singham, Dashi; Therkildsen, Meredith; Schruben, Lee

    2011-01-01

    Refereed Conference Paper The article of record as published can be found at http://dx.doi.org/10.1109/WSC.2011.6147953 Simulation flocking has been introduced as a method for generating simulation input from multivariate dependent time series for sensitivity and risk analysis. It can be applied to data for which a parametric model is not readily available or imposes too many restrictions on the possible inputs. This method uses techniques from agent-based modeling to generate ...

  18. Reasoning with probabilistic and deterministic graphical models exact algorithms

    CERN Document Server

    Dechter, Rina

    2013-01-01

    Graphical models (e.g., Bayesian and constraint networks, influence diagrams, and Markov decision processes) have become a central paradigm for knowledge representation and reasoning in both artificial intelligence and computer science in general. These models are used to perform many reasoning tasks, such as scheduling, planning and learning, diagnosis and prediction, design, hardware and software verification, and bioinformatics. These problems can be stated as the formal tasks of constraint satisfaction and satisfiability, combinatorial optimization, and probabilistic inference. It is well

  19. Dataflow-Driven Crowdsourcing: Relational Models and Algorithms

    OpenAIRE

    D. A. Ustalov

    2016-01-01

    Recently, microtask crowdsourcing has become a popular approach for addressing various data mining problems. Crowdsourcing workflows for approaching such problems are composed of several data processing stages which require consistent representation for making the work reproducible. This paper is devoted to the problem of reproducibility and formalization of the microtask crowdsourcing process. A computational model for microtask crowdsourcing based on an extended relational model and a dataf...

  20. RB Particle Filter Time Synchronization Algorithm Based on the DPM Model

    Directory of Open Access Journals (Sweden)

    Chunsheng Guo

    2015-09-01

    Time synchronization is essential for node localization, target tracking, data fusion, and various other Wireless Sensor Network (WSN) applications. To improve the estimation accuracy of continuous clock offset and skew of mobile nodes in WSNs, we propose a novel time synchronization algorithm, the Rao-Blackwellised (RB) particle filter time synchronization algorithm based on the Dirichlet process mixture (DPM) model. In a state-space equation with a linear substructure, state variables are divided into linear and non-linear variables by the RB particle filter algorithm. These two variables can be estimated using Kalman filter and particle filter, respectively, which improves the computational efficiency more so than if only the particle filter was used. In addition, the DPM model is used to describe the distribution of non-deterministic delays and to automatically adjust the number of Gaussian mixture model components based on the observational data. This improves the estimation accuracy of clock offset and skew, which allows achieving the time synchronization. The time synchronization performance of this algorithm is also validated by computer simulations and experimental measurements. The results show that the proposed algorithm has a higher time synchronization precision than traditional time synchronization algorithms.

  1. RB Particle Filter Time Synchronization Algorithm Based on the DPM Model.

    Science.gov (United States)

    Guo, Chunsheng; Shen, Jia; Sun, Yao; Ying, Na

    2015-09-03

    Time synchronization is essential for node localization, target tracking, data fusion, and various other Wireless Sensor Network (WSN) applications. To improve the estimation accuracy of continuous clock offset and skew of mobile nodes in WSNs, we propose a novel time synchronization algorithm, the Rao-Blackwellised (RB) particle filter time synchronization algorithm based on the Dirichlet process mixture (DPM) model. In a state-space equation with a linear substructure, state variables are divided into linear and non-linear variables by the RB particle filter algorithm. These two variables can be estimated using Kalman filter and particle filter, respectively, which improves the computational efficiency more so than if only the particle filter was used. In addition, the DPM model is used to describe the distribution of non-deterministic delays and to automatically adjust the number of Gaussian mixture model components based on the observational data. This improves the estimation accuracy of clock offset and skew, which allows achieving the time synchronization. The time synchronization performance of this algorithm is also validated by computer simulations and experimental measurements. The results show that the proposed algorithm has a higher time synchronization precision than traditional time synchronization algorithms.

  2. Computational Modeling of Teaching and Learning through Application of Evolutionary Algorithms

    Directory of Open Access Journals (Sweden)

    Richard Lamb

    2015-09-01

    Within the mind there are myriad ideas that make sense within the bounds of everyday experience but do not reflect how the world actually exists; this is particularly true in the domain of science. Classroom learning with teacher explanation is a bridge through which these naive understandings can be brought in line with scientific reality. The purpose of this paper is to examine how the application of a Multiobjective Evolutionary Algorithm (MOEA) can work in concert with an existing computational model to effectively model critical thinking in the science classroom. An evolutionary algorithm is an algorithm that iteratively optimizes machine-learning-based computational models. The research question is: does the application of an evolutionary algorithm provide a means to optimize the Student Task and Cognition Model (STAC-M), and does the optimized model sufficiently represent and predict teaching and learning outcomes in the science classroom? Within this computational study, the authors outline and simulate the effect of teaching on the ability of a "virtual" student to solve a Piagetian task. Using the Student Task and Cognition Model (STAC-M), a computational model of student cognitive processing in science class developed in 2013, the authors complete a computational experiment which examines the role of cognitive retraining on student learning. Comparison of the STAC-M and the STAC-M with inclusion of the Multiobjective Evolutionary Algorithm shows greater success in solving the Piagetian science tasks after cognitive retraining with the Multiobjective Evolutionary Algorithm. This illustrates the potential uses of cognitive and neuropsychological computational modeling in educational research. The authors also outline the limitations and assumptions of computational modeling.

  3. A three-dimensional imaging algorithm based on the radiation model of an electric dipole

    International Nuclear Information System (INIS)

    Tian Bo; Zhong Weijun; Tong Chuangming

    2011-01-01

    A three-dimensional imaging algorithm based on the radiation model of the electric dipole (DBP) is presented. Building on the principles of the back projection (BP) algorithm, the relationship between the near-field and far-field imaging models is analyzed on the basis of the scattering model. First, the far-field sampling data are transformed into near-field sampling data by applying the radiation theory of the dipole. Then the processed sampling data are projected onto the imaging region to obtain images of the targets. The capability of the new algorithm to detect targets is verified using the finite-difference time-domain (FDTD) method, and the coupling effect on imaging is analyzed. (authors)

  4. An API for Integrating Spatial Context Models with Spatial Reasoning Algorithms

    DEFF Research Database (Denmark)

    Kjærgaard, Mikkel Baun

    2006-01-01

    The integration of context-aware applications with spatial context models is often done using a common query language. However, algorithms that estimate and reason about spatial context information can benefit from a tighter integration. An object-oriented API makes such integration possible...... and can help reduce the complexity of algorithms, making them easier to maintain and develop. This paper proposes an object-oriented API for context models of the physical environment and extensions to a location modeling approach called geometric space trees for it to provide adequate support for location...... modeling. The utility of the API is evaluated in several real-world cases from an indoor location system, and spans several types of spatial reasoning algorithms....

  5. Optimization Model and Algorithm Design for Airline Fleet Planning in a Multiairline Competitive Environment

    Directory of Open Access Journals (Sweden)

    Yu Wang

    2015-01-01

    This paper presents a multiobjective mathematical programming model to optimize airline fleet size and structure with consideration of several critical factors that severely affect the fleet planning process. The main purpose of this paper is to reveal how multiairline competitive behaviors impact airline fleet size and structure by enhancing the existing route-based fleet planning model with consideration of the interaction between market share and flight frequency, and by applying the concept of equilibrium optimum to design a heuristic algorithm for solving the model. Through a case study and comparison, the heuristic algorithm is shown to be effective. By using the algorithm presented in this paper, the fleet operational profit is significantly increased compared with the use of the existing route-based model. Sensitivity analysis suggests that the fleet size and structure are more sensitive to increases in fare price than to increases in passenger demand.

  6. Parallel Genetic Algorithms for calibrating Cellular Automata models: Application to lava flows

    International Nuclear Information System (INIS)

    D'Ambrosio, D.; Spataro, W.; Di Gregorio, S.; Calabria Univ., Cosenza; Crisci, G.M.; Rongo, R.; Calabria Univ., Cosenza

    2005-01-01

    Cellular Automata are highly nonlinear dynamical systems which are suitable for simulating natural phenomena whose behaviour may be specified in terms of local interactions. The Cellular Automata model SCIARA, developed for the simulation of lava flows, demonstrated to be able to reproduce the behaviour of Etnean events. However, in order to apply the model for the prediction of future scenarios, a thorough calibration phase is required. This work presents the application of Genetic Algorithms, general-purpose search algorithms inspired by natural selection and genetics, for the parameter optimisation of the model SCIARA. Difficulties due to the elevated computational time suggested the adoption of a Master-Slave Parallel Genetic Algorithm for the calibration of the model with respect to the 2001 Mt. Etna eruption. Results demonstrated the usefulness of the approach, both in terms of computing time and quality of performed simulations.

  7. A Dynamic Traffic Signal Timing Model and its Algorithm for Junction of Urban Road

    DEFF Research Database (Denmark)

    Cai, Yanguang; Cai, Hao

    2012-01-01

    As an important part of Intelligent Transportation Systems, scientific traffic signal timing at junctions can improve the efficiency of urban transport. This paper presents a novel dynamic traffic signal timing model. According to the characteristics of the model, hybrid chaotic quantum......-time and dynamic signal control of the junction. To obtain the optimal solution of the model by the hybrid chaotic quantum evolutionary algorithm, the model is converted into an easily solvable form. To simplify calculation, we give expressions for the partial derivative and change rate of the objective function such that the implementation of the algorithm involves only function assignments and arithmetic operations, and thus avoids complex operations such as integration and differentiation. Simulation results show that the algorithm leaves fewer remaining vehicles than the Webster method, and has a higher convergence rate and convergence speed than quantum...

  8. Model and Algorithm for Substantiating Solutions for Organization of High-Rise Construction Project

    Science.gov (United States)

    Anisimov, Vladimir; Anisimov, Evgeniy; Chernysh, Anatoliy

    2018-03-01

    In this paper, models and an algorithm are developed for forming the optimal plan for organizing the material and logistical processes of a high-rise construction project and their financial support. The model is based on representing the optimization procedure as a nonlinear discrete programming problem, which consists in minimizing the execution time of a set of interrelated works by a limited number of partially interchangeable performers while limiting the total cost of performing the work. The proposed model and algorithm form the basis for creating specific organization management methodologies for high-rise construction projects.

  9. A Convex Optimization Model and Algorithm for Retinex

    Directory of Open Access Journals (Sweden)

    Qing-Nan Zhao

    2017-01-01

    Retinex is a theory that simulates and explains how the human visual system perceives colors under different illumination conditions. The main contribution of this paper is to put forward a new convex optimization model for Retinex. Different from existing methods, the main idea is to rewrite a multiplicative form such that the illumination variable and the reflection variable are decoupled in the spatial domain. The resulting objective function involves three terms: the Tikhonov regularization of the illumination component, the total variation regularization of the reciprocal of the reflection component, and the data-fitting term among the input image, the illumination component, and the reciprocal of the reflection component. We develop an alternating direction method of multipliers (ADMM) to solve the convex optimization model. Numerical experiments demonstrate the advantages of the proposed model, which can decompose an image into the illumination and reflection components.

  10. Quadratic adaptive algorithm for solving cardiac action potential models.

    Science.gov (United States)

    Chen, Min-Hung; Chen, Po-Yuan; Luo, Ching-Hsing

    2016-10-01

    An adaptive integration method is proposed for computing cardiac action potential models accurately and efficiently. Time steps are adaptively chosen by solving a quadratic formula involving the first and second derivatives of the membrane action potential. To improve the numerical accuracy, we devise an extremum-locator (el) function to predict the local extremum when approaching the peak amplitude of the action potential. In addition, the time step restriction (tsr) technique is designed to limit the increase in time steps, and thus prevent the membrane potential from changing abruptly. The performance of the proposed method is tested using the Luo-Rudy phase 1 (LR1), dynamic (LR2), and human O'Hara-Rudy dynamic (ORd) ventricular action potential models, and the Courtemanche atrial model incorporating a Markov sodium channel model. Numerical experiments demonstrate that the action potential generated using the proposed method is more accurate than that using the traditional Hybrid method, especially near the peak region. The traditional Hybrid method may choose large time steps near the peak region, and sometimes causes the action potential to become distorted. In contrast, the proposed new method chooses very fine time steps in the peak region, but large time steps in the smooth region, and the profiles are smoother and closer to the reference solution. In the test on the stiff Markov ionic channel model, the Hybrid method blows up if the allowable time step is set greater than 0.1 ms. In contrast, our method can adjust the time step size automatically, and is stable. Overall, the proposed method is more accurate than and as efficient as the traditional Hybrid method, especially for the human ORd model. The proposed method shows improvement for action potentials with a non-smooth morphology, and further investigation is needed to determine whether the method is helpful during propagation of the action potential. Copyright © 2016 Elsevier Ltd. All rights reserved.
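
    A hedged illustration of such a quadratic step rule: choose dt so that the second-order Taylor prediction of the membrane-potential change matches a tolerance dV, and cap step growth in the spirit of the tsr technique. The exact formula and safeguards in the paper may differ; all constants here are assumptions.

        import math

        def quadratic_step(dVdt, d2Vdt2, dV=0.1, dt_min=1e-4, dt_max=0.5,
                           prev_dt=None, growth=2.0):
            a, b = 0.5 * abs(d2Vdt2), abs(dVdt)
            if a < 1e-12:                        # nearly linear: first-order rule
                dt = dV / max(b, 1e-12)
            else:                                # solve a*dt^2 + b*dt = dV for dt > 0
                dt = (-b + math.sqrt(b * b + 4 * a * dV)) / (2 * a)
            if prev_dt is not None:              # time step restriction (tsr)
                dt = min(dt, growth * prev_dt)
            return min(max(dt, dt_min), dt_max)

        print(quadratic_step(dVdt=120.0, d2Vdt2=-3000.0))  # fine step near the peak
        print(quadratic_step(dVdt=0.5, d2Vdt2=0.01))       # coarse step in a smooth region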

  11. An analysis dictionary learning algorithm under a noisy data model with orthogonality constraint.

    Science.gov (United States)

    Zhang, Ye; Yu, Tenglong; Wang, Wenwu

    2014-01-01

    Two common problems are often encountered in analysis dictionary learning (ADL) algorithms. The first one is that the original clean signals for learning the dictionary are assumed to be known, which otherwise need to be estimated from noisy measurements. This, however, renders a computationally slow optimization process and potentially unreliable estimation (if the noise level is high), as represented by the Analysis K-SVD (AK-SVD) algorithm. The other problem is the trivial solution to the dictionary, for example, the null dictionary matrix that may be given by a dictionary learning algorithm, as discussed in the learning overcomplete sparsifying transform (LOST) algorithm. Here we propose a novel optimization model and an iterative algorithm to learn the analysis dictionary, where we directly employ the observed data to compute the approximate analysis sparse representation of the original signals (leading to a fast optimization procedure) and enforce an orthogonality constraint on the optimization criterion to avoid the trivial solutions. Experiments demonstrate the competitive performance of the proposed algorithm as compared with three baselines, namely, the AK-SVD, LOST, and NAAOLA algorithms.
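
    The abstract above highlights two ideas: computing the analysis sparse representation directly from the noisy observations, and keeping the dictionary away from the trivial null solution via an orthogonality constraint. The sketch below illustrates both under stated assumptions: hard thresholding as the sparse-coding step and a polar-decomposition (SVD) projection as the orthogonality step; the authors' actual update rules may differ in detail.

        import numpy as np

        def hard_threshold(Z, k):
            # Keep the k largest-magnitude entries in each column, zero the rest.
            out = np.zeros_like(Z)
            idx = np.argsort(-np.abs(Z), axis=0)[:k]
            cols = np.arange(Z.shape[1])
            out[idx, cols] = Z[idx, cols]
            return out

        def adl_step(Omega, Y, k):
            X = hard_threshold(Omega @ Y, k)      # sparse codes straight from noisy data
            Omega_new = X @ np.linalg.pinv(Y)     # least-squares dictionary update
            U, _, Vt = np.linalg.svd(Omega_new, full_matrices=False)
            return U @ Vt                         # nearest orthogonal matrix, never the null one

        rng = np.random.default_rng(0)
        Y = rng.normal(size=(16, 200))            # noisy observations
        Omega = np.linalg.qr(rng.normal(size=(16, 16)))[0]
        for _ in range(10):
            Omega = adl_step(Omega, Y, k=4)
        print(np.allclose(Omega.T @ Omega, np.eye(16)))   # True: orthogonality preserved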

  12. An Analysis Dictionary Learning Algorithm under a Noisy Data Model with Orthogonality Constraint

    Directory of Open Access Journals (Sweden)

    Ye Zhang

    2014-01-01

    Two common problems are often encountered in analysis dictionary learning (ADL) algorithms. The first one is that the original clean signals for learning the dictionary are assumed to be known, which otherwise need to be estimated from noisy measurements. This, however, renders a computationally slow optimization process and potentially unreliable estimation (if the noise level is high), as represented by the Analysis K-SVD (AK-SVD) algorithm. The other problem is the trivial solution to the dictionary, for example, the null dictionary matrix that may be given by a dictionary learning algorithm, as discussed in the learning overcomplete sparsifying transform (LOST) algorithm. Here we propose a novel optimization model and an iterative algorithm to learn the analysis dictionary, where we directly employ the observed data to compute the approximate analysis sparse representation of the original signals (leading to a fast optimization procedure) and enforce an orthogonality constraint on the optimization criterion to avoid the trivial solutions. Experiments demonstrate the competitive performance of the proposed algorithm as compared with three baselines, namely, the AK-SVD, LOST, and NAAOLA algorithms.

  13. How effective and efficient are multiobjective evolutionary algorithms at hydrologic model calibration?

    Directory of Open Access Journals (Sweden)

    Y. Tang

    2006-01-01

    This study provides a comprehensive assessment of the relative effectiveness of state-of-the-art evolutionary multiobjective optimization (EMO) tools in calibrating hydrologic models. The relative computational efficiency, accuracy, and ease-of-use of the following EMO algorithms are tested: the Epsilon Dominance Nondominated Sorted Genetic Algorithm-II (ε-NSGAII), the Multiobjective Shuffled Complex Evolution Metropolis algorithm (MOSCEM-UA), and the Strength Pareto Evolutionary Algorithm 2 (SPEA2). This study uses three test cases to compare the algorithms' performances: (1) a standardized test function suite from the computer science literature, (2) a benchmark hydrologic calibration test case for the Leaf River near Collins, Mississippi, and (3) a computationally intensive integrated surface-subsurface model application in the Shale Hills watershed in Pennsylvania. One challenge and contribution of this work is the development of a methodology for comprehensively comparing EMO algorithms that have different search operators and randomization techniques. Overall, SPEA2 attained competitive to superior results for most of the problems tested in this study. The primary strengths of the SPEA2 algorithm lie in its search reliability and its diversity preservation operator. The biggest challenge in maximizing the performance of SPEA2 lies in specifying an effective archive size without a priori knowledge of the Pareto set. In practice, this would require significant trial-and-error analysis, which is problematic for more complex, computationally intensive calibration applications. ε-NSGAII appears to be superior to MOSCEM-UA and competitive with SPEA2 for hydrologic model calibration. ε-NSGAII's primary strength lies in its ease-of-use due to its dynamic population sizing and archiving, which lead to rapid convergence to very high quality solutions with minimal user input. MOSCEM-UA is best suited for hydrologic model calibration applications that have small…

  14. Optimisation of Hidden Markov Model using Baum–Welch algorithm ...

    Indian Academy of Sciences (India)

    The present work is part of the development of a Hidden Markov Model (HMM) based avalanche forecasting system for the Pir-Panjal and Great Himalayan mountain ranges of the Himalaya. In this work, HMMs have been ...

  15. New advances in spatial network modelling: towards evolutionary algorithms

    NARCIS (Netherlands)

    Reggiani, A.; Nijkamp, P.; Sabella, E.

    2001-01-01

    This paper discusses analytical advances in evolutionary methods with a view towards their possible applications in the space-economy. For this purpose, we present a brief overview and illustration of models actually available in the spatial sciences which attempt to map the complex patterns of

  16. Algorithm for dealing with depressions in dynamic landscape evolution models

    NARCIS (Netherlands)

    Temme, A.J.A.M.; Schoorl, J.M.; Veldkamp, A.

    2006-01-01

    Depressions in landscapes function as buffers for water and sediment. A landscape with depressions has less runoff, less erosion and more sedimentation than a landscape without depressions. Sinks in digital elevation models (DEMs) can be existing features that correctly represent depressions in

  17. Design of Learning Model of Logic and Algorithms Based on APOS Theory

    Science.gov (United States)

    Hartati, Sulis Janu

    2014-01-01

    The research questions were "what are the characteristics of a learning model of logic and algorithms according to APOS theory" and "whether or not this learning model can improve students' learning outcomes". The research was conducted using exploration and a quantitative approach. Exploration was used in constructing a theory about the…

  18. Development and Evaluation of Model Algorithms to Account for Chemical Transformation in the Near-road Environment

    Science.gov (United States)

    We describe the development and evaluation of two new model algorithms for NOx chemistry in the R-LINE near-road dispersion model for traffic sources. With increased urbanization there is increased mobility, leading to a higher amount of traffic-related activity on a global scale. ...

  19. Estimation of Item Response Models Using the EM Algorithm for Finite Mixtures.

    Science.gov (United States)

    Woodruff, David J.; Hanson, Bradley A.

    This paper presents a detailed description of maximum likelihood parameter estimation for item response models using the general EM algorithm. The models are specified using a univariate discrete latent ability variable. When the latent ability variable is discrete, the distribution of the observed item responses is a finite mixture, and the EM…

  20. Comparing fire spread algorithms using equivalence testing and neutral landscape models

    Science.gov (United States)

    Brian R. Miranda; Brian R. Sturtevant; Jian Yang; Eric J. Gustafson

    2009-01-01

    We demonstrate a method to evaluate the degree to which a meta-model approximates spatial disturbance processes represented by a more detailed model across a range of landscape conditions, using neutral landscapes and equivalence testing. We illustrate this approach by comparing burn patterns produced by a relatively simple fire spread algorithm with those generated by...

  1. Algorithms for global total least squares modelling of finite multivariable time series

    NARCIS (Netherlands)

    Roorda, Berend

    1995-01-01

    In this paper we present several algorithms related to the global total least squares (GTLS) modelling of multivariable time series observed over a finite time interval. A GTLS model is a linear, time-invariant finite-dimensional system with a behaviour that has minimal Frobenius distance to a given

  2. Continuous time Boolean modeling for biological signaling: application of Gillespie algorithm.

    OpenAIRE

    Stoll, Gautier; Viara, Eric; Barillot, Emmanuel; Calzone, Laurence

    2012-01-01

    Mathematical modeling is used as a Systems Biology tool to answer biological questions, and more precisely, to validate a network that describes biological observations and predict the effect of perturbations. This article presents an algorithm for modeling biological networks in a discrete framework with continuous time. Background: There exist two major types of mathematical modeling approaches: (1) quantitative modeling, representing various chemical species concentrations by real...

  3. An Efficient Algorithm for Modelling Duration in Hidden Markov Models, with a Dramatic Application

    DEFF Research Database (Denmark)

    Hauberg, Søren; Sloth, Jakob

    2008-01-01

    For many years, the hidden Markov model (HMM) has been one of the most popular tools for analysing sequential data. One frequently used special case is the left-right model, in which the order of the hidden states is known. If knowledge of the duration of a state is available, it is not possible...... to represent it explicitly with an HMM. Methods for modelling duration with HMMs do exist (Rabiner in Proc. IEEE 77(2):257-286, [1989]), but they come at the price of increased computational complexity. Here we present an efficient and robust algorithm for modelling duration in HMMs, and this algorithm...

  4. Development of web-based reliability data analysis algorithm model and its application

    International Nuclear Information System (INIS)

    Hwang, Seok-Won; Oh, Ji-Yong; Moosung-Jae

    2010-01-01

    For this study, a database model of plant reliability was developed for the effective acquisition and management of plant-specific data that can be used in various applications of plant programs as well as in Probabilistic Safety Assessment (PSA). Through the development of a web-based reliability data analysis algorithm, this approach systematically gathers specific plant data such as component failure history, maintenance history, and shift diary. First, for the application of the developed algorithm, this study reestablished the raw data types, data deposition procedures and features of the Enterprise Resource Planning (ERP) system process. The component codes and system codes were standardized to make statistical analysis between different types of plants possible. This standardization contributes to the establishment of a flexible database model that allows the customization of reliability data for various applications depending on component types and systems. In addition, this approach makes it possible for users to perform trend analyses and data comparisons for the significant plant components and systems. The validation of the algorithm is performed through a comparison of the importance measure value (Fussell-Vesely) of the mathematical calculation and that of the algorithm application. The development of a reliability database algorithm is one of the best approaches for providing systematic management of plant-specific reliability data with transparency and continuity. This proposed algorithm reinforces the relationships between raw data and application results so that it can provide a comprehensive database that offers everything from basic plant-related data to final customized data.

  5. A formally verified algorithm for interactive consistency under a hybrid fault model

    Science.gov (United States)

    Lincoln, Patrick; Rushby, John

    1993-01-01

    Consistent distribution of single-source data to replicated computing channels is a fundamental problem in fault-tolerant system design. The 'Oral Messages' (OM) algorithm solves this problem of Interactive Consistency (Byzantine Agreement) assuming that all faults are worst-case. Thambidurai and Park introduced a 'hybrid' fault model that distinguishes three fault modes: asymmetric (Byzantine), symmetric, and benign; they also exhibited, along with an informal 'proof of correctness', a modified version of OM. Unfortunately, their algorithm is flawed. The discipline of mechanically checked formal verification eventually enabled us to develop a correct algorithm for Interactive Consistency under the hybrid fault model. This algorithm withstands $a$ asymmetric, $s$ symmetric, and $b$ benign faults simultaneously, using $m+1$ rounds, provided $n > 2a + 2s + b + m$ and $m \geq a$. We present this algorithm, discuss its subtle points, and describe its formal specification and verification in PVS. We argue that formal verification systems such as PVS are now sufficiently effective that their application to fault-tolerance algorithms should be considered routine.
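
    The resilience condition quoted above translates directly into code; a minimal checker under no further assumptions:

        def tolerates(n, m, a, s, b):
            """The hybrid-fault algorithm reaches agreement among n channels with
            a asymmetric, s symmetric and b benign faults, using m+1 rounds,
            provided n > 2a + 2s + b + m and m >= a."""
            return n > 2 * a + 2 * s + b + m and m >= a

        print(tolerates(n=6, m=1, a=1, s=1, b=0))   # True:  6 > 2 + 2 + 0 + 1
        print(tolerates(n=3, m=1, a=1, s=0, b=0))   # False: three channels cannot mask one Byzantine fault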

  6. DEVELOPMENT OF A HYBRID FUZZY GENETIC ALGORITHM MODEL FOR SOLVING TRANSPORTATION SCHEDULING PROBLEM

    Directory of Open Access Journals (Sweden)

    H.C.W. Lau

    2015-12-01

    There has been increasing public demand for passenger rail service in recent times, leading to a strong focus by railway management on the need for effective and efficient use of resources while managing increasing passenger requirements, service reliability and variability. While shortening passengers' waiting and travelling times is important for commuter satisfaction, lowering operational costs is equally important for railway management. Hence, effective and cost-optimised train scheduling based on dynamic passenger demand is one of the main issues for passenger railway management. Although the passenger railway scheduling problem has received attention in operations research in recent years, there is limited literature investigating practical approaches that capitalize on the merits of mathematical modeling and search algorithms for effective cost optimization. This paper develops a hybrid fuzzy-logic-based genetic algorithm model to solve the multi-objective passenger railway scheduling problem, aiming to optimize total operational costs at a satisfactory level of customer service. This hybrid approach integrates a genetic algorithm with a fuzzy logic approach, in which a fuzzy controller determines the crossover rate and mutation rate of the genetic algorithm during the optimization process. The numerical study demonstrates the improvement achieved by the proposed hybrid approach: the fuzzy genetic algorithm generates better results than the standard genetic algorithm and other traditional heuristic approaches, such as simulated annealing.

  7. Log-linear model based behavior selection method for artificial fish swarm algorithm.

    Science.gov (United States)

    Huang, Zhehuang; Chen, Yidong

    2015-01-01

    Artificial fish swarm algorithm (AFSA) is a population-based optimization technique inspired by the social behavior of fishes. In the past several years, AFSA has been successfully applied in many research and application areas. The behavior of the fishes has a crucial impact on the performance of AFSA, such as its global exploration ability and convergence speed, so how to construct and select the behaviors of fishes is an important task. To address these problems, an improved artificial fish swarm algorithm based on a log-linear model is proposed and implemented in this paper. There are three main contributions. First, we propose a new behavior selection algorithm based on a log-linear model, which enhances the decision-making ability of behavior selection. Second, an adaptive movement behavior based on adaptive weights is presented, which adjusts dynamically according to the diversity of the fishes. Finally, some new behaviors are defined and introduced into the artificial fish swarm algorithm for the first time, to improve its global optimization capability. Experiments on high-dimensional function optimization showed that the improved algorithm has more powerful global exploration ability and a reasonable convergence speed compared with the standard artificial fish swarm algorithm.
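
    A minimal sketch of a log-linear selection rule of the kind described above: each candidate behavior receives a score exp(w·f(state, b)) and is sampled with its normalized probability. The feature functions and weights here are illustrative placeholders, not the authors' exact choices.

        import math, random

        BEHAVIORS = ["prey", "swarm", "follow", "move_randomly"]

        def features(state, behavior):
            # Hypothetical features: population diversity, recent fitness gain,
            # and an indicator for the preying behavior.
            return [state["diversity"], state["gain"], 1.0 if behavior == "prey" else 0.0]

        def select_behavior(state, weights):
            scores = {b: math.exp(sum(w * f for w, f in zip(weights, features(state, b))))
                      for b in BEHAVIORS}
            z = sum(scores.values())
            r, acc = random.random() * z, 0.0
            for b, s in scores.items():
                acc += s
                if r <= acc:
                    return b
            return BEHAVIORS[-1]

        print(select_behavior({"diversity": 0.3, "gain": 1.2}, weights=[0.5, 1.0, 0.8]))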

  8. A mesh-free algorithm for ROF model

    Science.gov (United States)

    Khan, Mushtaq Ahmad; Chen, Wen; Ullah, Asmat; Fu, Zhuojia

    2017-12-01

    The total variation (TV) denoising method is a PDE-based technique that preserves edges well but exhibits an undesirable staircase effect in some cases, namely the translation of smooth regions (ramps) into piecewise constant regions (stairs). This paper introduces a novel mesh-free approach using TV (ROF model) regularization and radial basis functions (RBFs) for the numerical approximation of the TV-based model, to remove additive noise from the measurements. The approach is built on local collocation and the multiquadric radial basis function. These features enable the strategy not only to remove noise from images while preserving edges, but also to substantially reduce, in both real and artificial images, the staircase effect that makes images look blocky. Experimental results demonstrate that the proposed mesh-free approach is robust and performs well in terms of visual improvement and peak signal-to-noise ratio compared with recent partial differential equation (PDE)-based traditional methods.

  9. Gas Emission Prediction Model of Coal Mine Based on CSBP Algorithm

    Directory of Open Access Journals (Sweden)

    Xiong Yan

    2016-01-01

    In view of the nonlinear characteristics of gas emission in a coal working face, a prediction method is proposed based on a BP neural network optimized by the cuckoo search algorithm (CSBP). In the CSBP algorithm, cuckoo search is adopted to optimize the weight and threshold parameters of the BP network and obtain globally optimal solutions. Furthermore, the twelve main factors affecting gas emission in the coal working face are taken as the input vector of the CSBP algorithm and the gas emission as the output vector, and the prediction model of the BP neural network with optimal parameters is thus established. The results show that the CSBP algorithm has better generalization ability and higher prediction accuracy, and can be utilized effectively in the prediction of coal mine gas emission.
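
    A hedged sketch of the CSBP idea: cuckoo search proposes new weight vectors via Lévy flights (Mantegna's rule) and keeps improvements, here minimizing a stand-in quadratic loss in place of the BP network's training error; the constants and the loss function are illustrative assumptions.

        import math
        import numpy as np

        rng = np.random.default_rng(1)

        def loss(w):                      # stand-in for the BP network's training error
            return float(np.sum((w - 0.3) ** 2))

        def levy_step(dim, beta=1.5):     # Mantegna's algorithm for Levy-distributed steps
            sigma = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2) /
                     (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
            u = rng.normal(0, sigma, dim)
            v = rng.normal(0, 1, dim)
            return u / np.abs(v) ** (1 / beta)

        dim, n_nests, pa = 20, 15, 0.25   # weights+thresholds, nests, abandon fraction
        nests = rng.normal(0, 1, (n_nests, dim))
        best = min(nests, key=loss).copy()
        for _ in range(300):
            for i in range(n_nests):
                cand = nests[i] + 0.01 * levy_step(dim) * (nests[i] - best)
                if loss(cand) < loss(nests[i]):      # greedy replacement
                    nests[i] = cand
            # Abandon the worst fraction pa of nests and rebuild them randomly.
            worst = np.argsort([loss(n) for n in nests])[-int(pa * n_nests):]
            nests[worst] = rng.normal(0, 1, (len(worst), dim))
            best = min(nests, key=loss).copy()
        print("final loss:", round(loss(best), 4))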

  10. Modeling and Sensitivity Study of Consensus Algorithm-Based Distributed Hierarchical Control for DC Microgrids

    DEFF Research Database (Denmark)

    Meng, Lexuan; Dragicevic, Tomislav; Roldan Perez, Javier

    2016-01-01

    in the communication network, continuous-time methods can be inaccurate for this kind of dynamic study. Therefore, this paper aims at modeling a complete DC MG using a discrete-time approach in order to perform a sensitivity analysis taking into account the effects of the consensus algorithm. To this end......Distributed control methods based on consensus algorithms have become popular in recent years for microgrid (MG) systems. These kinds of algorithms can be applied to share information in order to coordinate multiple distributed generators within an MG. However, stability analysis becomes...... a challenging issue when these kinds of algorithms are used, since the dynamics of the electrical and the communication systems interact with each other. Moreover, the transmission rate and topology of the communication network also affect the system dynamics. Due to the discrete nature of the information exchange...
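
    A discrete-time consensus iteration of the kind analyzed above, reduced to its core: each converter updates its estimate of the average DC-bus quantity from its neighbours, x[k+1] = W x[k]. The ring topology, weights and measurements below are illustrative; the paper's full model also couples in the electrical dynamics and the communication characteristics.

        import numpy as np

        n = 4                                 # four converters in a communication ring
        W = np.zeros((n, n))
        for i in range(n):
            for j in (i, (i - 1) % n, (i + 1) % n):
                W[i, j] = 1.0 / 3.0           # doubly stochastic averaging weights

        x = np.array([398.0, 401.0, 400.5, 399.0])   # local DC-bus voltage estimates (V)
        for k in range(30):
            x = W @ x                         # one communication/update step
        print(np.round(x, 3))                 # all entries converge to the average, 399.625 V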

  11. Fitting Social Network Models Using Varying Truncation Stochastic Approximation MCMC Algorithm

    KAUST Repository

    Jin, Ick Hoon

    2013-10-01

    The exponential random graph model (ERGM) plays a major role in social network analysis. However, parameter estimation for the ERGM is a hard problem due to the intractability of its normalizing constant and the model degeneracy. The existing algorithms, such as Monte Carlo maximum likelihood estimation (MCMLE) and stochastic approximation, often fail for this problem in the presence of model degeneracy. In this article, we introduce the varying truncation stochastic approximation Markov chain Monte Carlo (SAMCMC) algorithm to tackle this problem. The varying truncation mechanism enables the algorithm to choose an appropriate starting point and an appropriate gain factor sequence, and thus to produce a reasonable parameter estimate for the ERGM even in the presence of model degeneracy. The numerical results indicate that the varying truncation SAMCMC algorithm can significantly outperform the MCMLE and stochastic approximation algorithms: for degenerate ERGMs, MCMLE and stochastic approximation often fail to produce any reasonable parameter estimates, while SAMCMC can do; for nondegenerate ERGMs, SAMCMC can work as well as or better than MCMLE and stochastic approximation. The data and source codes used for this article are available online as supplementary materials. © 2013 American Statistical Association, Institute of Mathematical Statistics, and Interface Foundation of North America.

  12. Dynamic gradient descent learning algorithms for enhanced empirical modeling of power plants

    International Nuclear Information System (INIS)

    Parlos, A.G.; Atiya, Amir; Chong, K.T.

    1991-01-01

    A newly developed dynamic gradient descent-based learning algorithm is used to train a recurrent multilayer perceptron network for use in the empirical modeling of power plants. The two main advantages of the proposed learning algorithm are its ability to retain past error gradient information for future use and the two forward passes associated with its implementation, instead of the one forward and one backward pass of the backpropagation algorithm. The latter advantage results in computational time savings because both passes can be performed simultaneously. The dynamic learning algorithm is used to train a hybrid feedforward/feedback neural network, a recurrent multilayer perceptron, which was previously found to exhibit good interpolation and extrapolation capabilities in modeling nonlinear dynamic systems. One of the drawbacks of the previously reported work, however, has been the long training times associated with accurate empirical models. The enhanced learning capabilities provided by the dynamic gradient descent-based learning algorithm are demonstrated by a case study of a steam power plant. The number of iterations required for accurate empirical modeling has been reduced from tens of thousands to hundreds, thus significantly expediting the learning process.

  13. A novel computer algorithm for modeling and treating mandibular fractures: A pilot study.

    Science.gov (United States)

    Rizzi, Christopher J; Ortlip, Timothy; Greywoode, Jewel D; Vakharia, Kavita T; Vakharia, Kalpesh T

    2017-02-01

    To describe a novel computer algorithm that can model mandibular fracture repair. To evaluate the algorithm as a tool for modeling mandibular fracture reduction and hardware selection. Retrospective pilot study combined with a cross-sectional survey. A computer algorithm utilizing Aquarius Net (TeraRecon, Inc, Foster City, CA) and Adobe Photoshop CS6 (Adobe Systems, Inc, San Jose, CA) was developed to model mandibular fracture repair. Ten different fracture patterns were selected from nine patients who had already undergone mandibular fracture repair. The preoperative computed tomography (CT) images were processed with the computer algorithm to create virtual images that matched the actual postoperative three-dimensional CT images. A survey comparing the true postoperative image with the virtual postoperative images was created and administered to otolaryngology resident and attending physicians. They were asked to rate on a scale from 0 to 10 (0 = completely different; 10 = identical) the similarity between the two images in terms of the fracture reduction and the fixation hardware. Ten mandible fracture cases were analyzed and processed. There were 15 survey respondents. The mean score for overall similarity between the images was 8.41 ± 0.91; the mean score for similarity of fracture reduction was 8.61 ± 0.98; and the mean score for hardware appearance was 8.27 ± 0.97. There were no significant differences between attending and resident responses, and no significant differences based on fracture location. This computer algorithm can accurately model mandibular fracture repair. Images created by the algorithm are highly similar to true postoperative images. The algorithm can potentially assist a surgeon planning mandibular fracture repair. Level of Evidence: 4. Laryngoscope 127:331-336, 2017. © 2016 The American Laryngological, Rhinological and Otological Society, Inc.

  14. Method and Excel VBA Algorithm for Modeling Master Recession Curve Using Trigonometry Approach.

    Science.gov (United States)

    Posavec, Kristijan; Giacopetti, Marco; Materazzi, Marco; Birk, Steffen

    2017-11-01

    A new method was developed and implemented as an Excel Visual Basic for Applications (VBA) algorithm utilizing trigonometry laws in an innovative way to overlap recession segments of time series and create master recession curves (MRCs). Based on a trigonometry approach, the algorithm horizontally translates succeeding recession segments of the time series, placing their vertex, that is, the highest recorded value of each recession segment, directly onto the appropriate connection line defined by the measurement points of the preceding recession segment. The new method and algorithm continue the development of methods and algorithms for the generation of MRCs, where the first published method was based on a multiple linear/nonlinear regression model approach (Posavec et al. 2006). The newly developed trigonometry-based method was tested on real case study examples and compared with the previously published multiple linear/nonlinear regression model-based method. The results show that in some cases, that is, for some time series, the trigonometry-based method creates narrower overlaps of the recession segments, resulting in higher coefficients of determination R², while in other cases the multiple linear/nonlinear regression model-based method remains superior. The Excel VBA algorithm for modeling MRCs using the trigonometry approach is implemented in a spreadsheet tool (MRCTools v3.0, written by and available from Kristijan Posavec, Zagreb, Croatia) containing the previously published VBA algorithms for MRC generation and separation. All algorithms within MRCTools v3.0 are open access and available free of charge, supporting the idea of running science on available, open, and free of charge software. © 2017, National Ground Water Association.
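
    A rough sketch of the segment-overlapping idea: each succeeding recession segment is shifted horizontally so that its vertex (its first, highest value) falls on the curve through the measurement points of the preceding segment. Here np.interp stands in for the paper's explicit trigonometric construction of that connection line.

        import numpy as np

        def build_mrc(segments):
            # segments: list of (t, h) arrays, each with strictly receding h,
            # ordered so that each vertex lies within the preceding segment's range.
            t_prev, h_prev = segments[0]
            parts = [(t_prev, h_prev)]
            for t, h in segments[1:]:
                # Time at which the preceding (already shifted) segment passes
                # through this segment's vertex value h[0]; arrays are reversed
                # because np.interp needs increasing abscissae.
                t_at_vertex = np.interp(h[0], h_prev[::-1], t_prev[::-1])
                t = t + (t_at_vertex - t[0])            # horizontal translation
                parts.append((t, h))
                t_prev, h_prev = t, h
            all_t = np.concatenate([p[0] for p in parts])
            all_h = np.concatenate([p[1] for p in parts])
            order = np.argsort(all_t)
            return all_t[order], all_h[order]

        seg1 = (np.arange(5.0), np.array([10.0, 8.0, 6.5, 5.5, 4.8]))
        seg2 = (np.arange(4.0), np.array([7.0, 5.9, 5.1, 4.5]))
        t, h = build_mrc([seg1, seg2])
        print(np.round(t, 2))
        print(h)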

  15. Stable reduced-order models of generalized dynamical systems using coordinate-transformed Arnoldi algorithms

    Energy Technology Data Exchange (ETDEWEB)

    Silveira, L.M.; Kamon, M.; Elfadel, I.; White, J. [Massachusetts Inst. of Technology, Cambridge, MA (United States)

    1996-12-31

    Model order reduction based on Krylov subspace iterative methods has recently emerged as a major tool for compressing the number of states in linear models used for simulating very large physical systems (VLSI circuits, electromagnetic interactions). There are currently two main methods for accomplishing such a compression: one is based on the nonsymmetric look-ahead Lanczos algorithm, which gives a numerically stable procedure for finding Padé approximations, while the other is based on a less well characterized Arnoldi algorithm. In this paper, we show that for certain classes of generalized state-space systems, the reduced-order models produced by a coordinate-transformed Arnoldi algorithm inherit the stability of the original system. Complete proofs of our results will be given in the final paper.

  16. Algorithms and Software for Predictive and Perceptual Modeling of Speech

    CERN Document Server

    Atti, Venkatraman

    2010-01-01

    From the early pulse-code-modulation-based coders to some of the recent multi-rate wideband speech coding standards, the area of speech coding has made several significant strides with the objective of attaining high quality speech at the lowest possible bit rate. This book presents some of the recent advances in linear prediction (LP)-based speech analysis that employ perceptual models for narrow- and wide-band speech coding. The LP analysis-synthesis framework has been successful for speech coding because it fits the source-system paradigm for speech synthesis well. Limitations associated with the…

  17. Energy demand forecasting in Iranian metal industry using linear and nonlinear models based on evolutionary algorithms

    International Nuclear Information System (INIS)

    Piltan, Mehdi; Shiri, Hiva; Ghaderi, S.F.

    2012-01-01

    Highlights: ► Investigating different fitness functions for evolutionary algorithms in energy forecasting. ► Energy forecasting of the Iranian metal industry by value added, energy prices, investment and number of employees. ► Using a real-coded instead of a binary-coded genetic algorithm decreases the energy forecasting error. - Abstract: Developing energy forecasting models is known as one of the most important steps in long-term planning. In order to achieve a sustainable energy supply toward economic development and social welfare, precise forecasting models must be applied. The application of artificial intelligence models for estimating complex economic and social functions has grown considerably in recent research. In this paper, energy consumption in the industrial sector, one of the critical sectors of energy consumption, has been investigated. Two linear and three nonlinear functions have been used to forecast and analyze energy in the Iranian metal industry, and Particle Swarm Optimization (PSO) and Genetic Algorithms (GAs) are applied to obtain the parameters of the models. The Real-Coded Genetic Algorithm (RCGA) has been developed based on real numbers, which is introduced as a new approach in the field of energy forecasting. In the proposed model, electricity consumption is considered a function of different variables such as the electricity tariff, manufacturing value added, prevailing fuel prices, the number of employees, investment in equipment and consumption in previous years. Mean Square Error (MSE), Root Mean Square Error (RMSE), Mean Absolute Deviation (MAD) and Mean Absolute Percent Error (MAPE) are the four functions used as fitness functions in the evolutionary algorithms; a sketch of all four follows below. The results show that the logarithmic nonlinear model using the PSO algorithm, with an error of 1.91 percent, gives the best answer. Furthermore, the prediction of electricity consumption in the industrial sector of Turkey and also the Turkish industrial sector…
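
    The four fitness functions named above, written out for a vector of observed values y and predictions yhat (the data here are illustrative):

        import numpy as np

        def mse(y, yhat):   return float(np.mean((y - yhat) ** 2))
        def rmse(y, yhat):  return float(np.sqrt(mse(y, yhat)))
        def mad(y, yhat):   return float(np.mean(np.abs(y - yhat)))
        def mape(y, yhat):  return float(np.mean(np.abs((y - yhat) / y)) * 100)

        y = np.array([100.0, 120.0, 150.0])       # observed consumption (illustrative)
        yhat = np.array([95.0, 125.0, 140.0])     # model prediction
        print(mse(y, yhat), rmse(y, yhat), mad(y, yhat), mape(y, yhat))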

  18. Implicit level set algorithms for modelling hydraulic fracture propagation.

    Science.gov (United States)

    Peirce, A

    2016-10-13

    Hydraulic fractures are tensile cracks that propagate in pre-stressed solid media due to the injection of a viscous fluid. Developing numerical schemes to model the propagation of these fractures is particularly challenging due to the degenerate, hypersingular nature of the coupled integro-partial differential equations. These equations typically involve a singular free boundary whose velocity can only be determined by evaluating a distinguished limit. This review paper describes a class of numerical schemes that have been developed to use the multiscale asymptotic behaviour typically encountered near the fracture boundary as multiple physical processes compete to determine the evolution of the fracture. The fundamental concepts of locating the free boundary using the tip asymptotics and imposing the tip asymptotic behaviour in a weak form are illustrated in two quite different formulations of the governing equations. These formulations are the displacement discontinuity boundary integral method and the extended finite-element method. Practical issues are also discussed, including new models for proppant transport able to capture 'tip screen-out'; efficient numerical schemes to solve the coupled nonlinear equations; and fast methods to solve resulting linear systems. Numerical examples are provided to illustrate the performance of the numerical schemes. We conclude the paper with open questions for further research. This article is part of the themed issue 'Energy and the subsurface'. © 2016 The Author(s).

  19. Uncertainty analysis of hydrological modeling in a tropical area using different algorithms

    Science.gov (United States)

    Rafiei Emam, Ammar; Kappas, Martin; Fassnacht, Steven; Linh, Nguyen Hoang Khanh

    2018-01-01

    Hydrological modeling outputs are subject to uncertainty resulting from different sources of error (e.g., errors in the input data, the model structure, and the model parameters), making the quantification of uncertainty in hydrological modeling imperative if the reliability of modeling results is to be improved. Uncertainty analysis must overcome difficulties in the calibration of hydrological models, which increase further in areas with data scarcity. The purpose of this study is to apply four uncertainty analysis algorithms to a semi-distributed hydrological model, quantifying different sources of uncertainty (especially parameter uncertainty) and evaluating their performance. In this study, the Soil and Water Assessment Tool (SWAT) eco-hydrological model was implemented for a watershed in the center of Vietnam. The sensitivity of the parameters was analyzed, and the model was calibrated. The uncertainty analysis for the hydrological model was conducted based on four algorithms: Generalized Likelihood Uncertainty Estimation (GLUE), Sequential Uncertainty Fitting (SUFI), the Parameter Solution method (ParaSol) and Particle Swarm Optimization (PSO). The performance of the algorithms was compared using the P-factor and R-factor, the coefficient of determination (R²), the Nash-Sutcliffe coefficient of efficiency (NSE) and Percent Bias (PBIAS). The results showed the high performance of SUFI and PSO, with P-factor > 0.83, R-factor < 0.91, NSE > 0.89, and PBIAS < 0.18, supporting model use for policy or management decisions.

  20. Efficient algorithms for multiscale modeling in porous media

    KAUST Repository

    Wheeler, Mary F.

    2010-09-26

    We describe multiscale mortar mixed finite element discretizations for second-order elliptic and nonlinear parabolic equations modeling Darcy flow in porous media. The continuity of flux is imposed via a mortar finite element space on a coarse grid scale, while the equations in the coarse elements (or subdomains) are discretized on a fine grid scale. We discuss the construction of multiscale mortar basis and extend this concept to nonlinear interface operators. We present a multiscale preconditioning strategy to minimize the computational cost associated with construction of the multiscale mortar basis. We also discuss the use of appropriate quadrature rules and approximation spaces to reduce the saddle point system to a cell-centered pressure scheme. In particular, we focus on multiscale mortar multipoint flux approximation method for general hexahedral grids and full tensor permeabilities. Numerical results are presented to verify the accuracy and efficiency of these approaches. © 2010 John Wiley & Sons, Ltd.

  1. Insolation models, data and algorithms. Annual report FY78

    Energy Technology Data Exchange (ETDEWEB)

    Hulstrom, R. L.

    1978-12-01

    The FY78 objectives, descriptions, and results of the insolation research tasks of the Solar Energy Research Institute's (SERI) Energy Resource Assessment Branch (ERAB) are presented. The tasks performed during FY78, the first year of operation for SERI/ERAB, addressed the resources of insolation ('sunshine') and wind. This report describes the insolation portion of the FY78 ERAB efforts, which resulted in operational computer models for thermal (broadband) and spectral insolation, a data base (SOLMET) for the U.S. geographical distribution of thermal insolation, preliminary research measurements of thermal insolation on tilted surfaces, and a complete design concept for advanced instrumentation to automatically measure the insolation on 37 tilted surfaces at various orientations.

  2. Calibration of parameters of water supply network model using genetic algorithm

    Science.gov (United States)

    Boczar, Tomasz; Adamikiewicz, Norbert; Stanisławski, Włodzimierz

    2017-10-01

    Computer simulation models of water supply networks are commonly applied in the water industry. As part of the research work whose results are presented in this paper, OFF-LINE and ON-LINE calibration of water supply network model parameters was carried out using two methods, and the methods were compared. The network skeleton was developed in the Epanet software. Two types of dependent variables were used in the optimization: the pressure at a node and the volume flow in a network section. The first calibration method applies the genetic algorithm of the built-in plugin "Epanet Calibrator". The second method uses the function ga, implemented in the MATLAB Genetic Algorithm and Direct Search toolbox. The applicability of these algorithms to optimizing the parameters of the developed water supply network model was examined in both the OFF-LINE and the ON-LINE calibration case. An analysis of the effectiveness of the considered algorithms for different values of their configuration parameters was performed. Based on the achieved results, it was found that applying the ga algorithm gives a higher correlation of the calibrated values with the empirical data.

  3. Efficient Parallel Implementation of Active Appearance Model Fitting Algorithm on GPU

    Directory of Open Access Journals (Sweden)

    Jinwei Wang

    2014-01-01

    The active appearance model (AAM) is one of the most powerful model-based object detection and tracking methods and has been widely used in various situations. However, the high-dimensional texture representation causes very time-consuming computations, which makes the AAM difficult to apply in real-time systems. The emergence of modern graphics processing units (GPUs), which feature a many-core, fine-grained parallel architecture, provides new and promising solutions to overcome this computational challenge. In this paper, we propose an efficient parallel implementation of the AAM fitting algorithm on GPUs. Our design idea is fine-grained parallelism, in which we distribute the texture data of the AAM, in pixels, to thousands of parallel GPU threads for processing, which makes the algorithm fit better into the GPU architecture. We implement our algorithm using the compute unified device architecture (CUDA) on Nvidia's GTX 650 GPU, which has the latest Kepler architecture. To compare the performance of our algorithm for different data sizes, we built sixteen face AAM models with textures of different dimensionality. The experimental results show that our parallel AAM fitting algorithm can achieve real-time performance for videos even on very high-dimensional textures.

  4. Parameter Estimation for Traffic Noise Models Using a Harmony Search Algorithm

    Directory of Open Access Journals (Sweden)

    Deok-Soon An

    2013-01-01

    A technique has been developed for predicting road traffic noise for environmental assessment, taking into account traffic volume as well as road surface conditions. The ASJ model (ASJ Prediction Model for Road Traffic Noise, 1999), which is based on the sound power level of the noise emitted by the interaction between the road surface and tires, employs regression models for two road surface types: dense-graded asphalt (DGA) and permeable asphalt (PA). However, these models are not applicable to other types of road surfaces. Accordingly, this paper introduces a parameter estimation procedure for ASJ-based noise prediction models utilizing a harmony search (HS) algorithm. Traffic noise measurement data for four different vehicle types were used in the algorithm to determine the regression parameters for several road surface types. The parameters of the traffic noise prediction models were evaluated using another measurement set, and good agreement was observed between the predicted and measured sound power levels.
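
    A minimal harmony-search loop of the kind applied above, minimizing the squared error between measured sound power levels and a placeholder regression L(v) = a + b·log10(v) in vehicle speed v; the true ASJ regression terms and all data below are illustrative assumptions.

        import math
        import random

        speeds = [40, 60, 80, 100]                  # vehicle speeds, km/h
        measured = [92.1, 96.8, 100.2, 102.9]       # sound power levels, dB (made up)

        def sse(params):
            a, b = params
            return sum((m - (a + b * math.log10(v))) ** 2
                       for v, m in zip(speeds, measured))

        HMS, HMCR, PAR, BW, ITERS = 10, 0.9, 0.3, 0.5, 5000
        bounds = [(0.0, 100.0), (0.0, 50.0)]
        memory = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(HMS)]
        for _ in range(ITERS):
            new = []
            for d, (lo, hi) in enumerate(bounds):
                if random.random() < HMCR:          # draw from harmony memory
                    x = random.choice(memory)[d]
                    if random.random() < PAR:       # pitch adjustment
                        x = min(max(x + random.uniform(-BW, BW), lo), hi)
                else:                               # random re-initialisation
                    x = random.uniform(lo, hi)
                new.append(x)
            worst = max(range(HMS), key=lambda i: sse(memory[i]))
            if sse(new) < sse(memory[worst]):       # replace the worst harmony
                memory[worst] = new
        best = min(memory, key=sse)
        print("a, b =", [round(p, 2) for p in best], " SSE =", round(sse(best), 3))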

  5. mMWeb--an online platform for employing multiple ecological niche modeling algorithms.

    Science.gov (United States)

    Qiao, Huijie; Lin, Congtian; Ji, Liqiang; Jiang, Zhigang

    2012-01-01

    Predicting the ecological niche and potential habitat distribution of a given organism is one of the central domains of ecological and biogeographical research. A wide variety of modeling techniques have been developed for this purpose. In order to implement these models, users must prepare a specific runtime environment for each model, learn how to use multiple model platforms, and prepare data in a different format each time. Additionally, model results are often difficult to interpret, and a standardized method for comparing model results across platforms does not exist. We developed a free and open source online platform, the multi-models web-based (mMWeb) platform, to address each of these problems, providing a novel environment in which the user can implement and compare multiple ecological niche model (ENM) algorithms. mMWeb combines 18 existing ENMs and their corresponding algorithms and provides a uniform procedure for modeling the potential habitat niche of a species via a common web browser. mMWeb uses the Java Native Interface (JNI) and the Java/R Interface to combine the different ENMs and executes multiple tasks in parallel on a supercomputer. The cross-platform, user-friendly interface of mMWeb simplifies the process of building ENMs, providing an accessible and efficient environment from which to explore and compare different model algorithms.

  6. mMWeb--an online platform for employing multiple ecological niche modeling algorithms.

    Directory of Open Access Journals (Sweden)

    Huijie Qiao

    BACKGROUND: Predicting the ecological niche and potential habitat distribution of a given organism is one of the central domains of ecological and biogeographical research. A wide variety of modeling techniques have been developed for this purpose. In order to implement these models, users must prepare a specific runtime environment for each model, learn how to use multiple model platforms, and prepare data in a different format each time. Additionally, model results are often difficult to interpret, and a standardized method for comparing model results across platforms does not exist. We developed a free and open source online platform, the multi-models web-based (mMWeb) platform, to address each of these problems, providing a novel environment in which the user can implement and compare multiple ecological niche model (ENM) algorithms. METHODOLOGY: mMWeb combines 18 existing ENMs and their corresponding algorithms and provides a uniform procedure for modeling the potential habitat niche of a species via a common web browser. mMWeb uses the Java Native Interface (JNI) and the Java/R Interface to combine the different ENMs and executes multiple tasks in parallel on a supercomputer. The cross-platform, user-friendly interface of mMWeb simplifies the process of building ENMs, providing an accessible and efficient environment from which to explore and compare different model algorithms.

  7. A model-based 3D phase unwrapping algorithm using Gegenbauer polynomials.

    Science.gov (United States)

    Langley, Jason; Zhao, Qun

    2009-09-07

    The application of a two-dimensional (2D) phase unwrapping algorithm to a three-dimensional (3D) phase map may result in an unwrapped phase map that is discontinuous in the direction normal to the unwrapped plane. This work investigates the problem of phase unwrapping for 3D phase maps. The phase map is modeled as a product of three one-dimensional Gegenbauer polynomials. The orthogonality of Gegenbauer polynomials and their derivatives on the interval [-1, 1] is exploited to calculate the expansion coefficients. The algorithm was implemented using two well-known families of Gegenbauer polynomials: Chebyshev polynomials of the first kind and Legendre polynomials. Both implementations of the phase unwrapping algorithm were tested on 3D datasets acquired from a magnetic resonance imaging (MRI) scanner. The first dataset was acquired from a homogeneous spherical phantom. The second dataset was acquired using the same spherical phantom, but magnetic field inhomogeneities were introduced by an external coil placed adjacent to the phantom, providing an additional burden for the phase unwrapping algorithm. Gaussian noise was then added to generate a low signal-to-noise-ratio dataset. The third dataset was acquired from the brain of a human volunteer. The results showed that the Chebyshev and the Legendre implementations of the phase unwrapping algorithm give similar results on the 3D datasets. Both implementations compare well to PRELUDE 3D, a 3D phase unwrapping package widely recognized in functional MRI.
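
    A sketch of the model class used above, under one simplification: the smooth phase volume is represented by a tensor product of 1D Chebyshev polynomials on [-1, 1]³, with the expansion coefficients recovered by least squares rather than by the orthogonality relations the paper exploits.

        import numpy as np
        from numpy.polynomial import chebyshev as C

        rng = np.random.default_rng(0)
        pts = rng.uniform(-1, 1, (500, 3))               # sample voxel coordinates
        x, y, z = pts.T
        phase = 3.0 * x + 2.0 * x * y - 1.5 * z ** 2     # smooth synthetic phase field

        deg = [3, 3, 3]                                  # per-axis polynomial degree
        V = C.chebvander3d(x, y, z, deg)                 # tensor-product design matrix
        coef, *_ = np.linalg.lstsq(V, phase, rcond=None)
        print("max fit error:", float(np.max(np.abs(V @ coef - phase))))  # ~1e-13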

  8. Application of artificial neural networks and genetic algorithms for crude fractional distillation process modeling

    OpenAIRE

    Pater, Lukasz

    2016-01-01

    This work presents the application of artificial neural networks, trained and structurally optimized by genetic algorithms, to the modeling of the crude distillation process at the PKN ORLEN S.A. refinery. Models for the main fractionator distillation column products were developed using historical data. The quality of the fractions was predicted based on several chosen process variables. The performance of the model was validated using test data. Neural networks used in combination with genetic algorith...

  9. New Flexible Models and Design Construction Algorithms for Mixtures and Binary Dependent Variables

    OpenAIRE

    Ruseckaite, Aiste

    2017-01-01

    This thesis discusses new mixture(-amount) models, choice models and the optimal design of experiments. Two chapters of the thesis relate to the so-called mixture, a product or service whose ingredients' proportions sum to one. The thesis begins by introducing mixture models in the choice context and develops new optimal design construction algorithms for choice experiments involving mixtures. Building further, varying the total amount of a mixture, and not only its i...

  10. Development of algorithm for depreciation costs allocation in dynamic input-output industrial enterprise model

    OpenAIRE

    Keller Alevtina; Vinogradova Tatyana

    2017-01-01

    The article considers the issue of allocating depreciation costs in the dynamic input-output model of an industrial enterprise. Accounting for depreciation costs in such a model improves the policy of fixed assets management. It is particularly relevant to develop an algorithm for the allocation of depreciation costs in the construction of a dynamic input-output model of an industrial enterprise, since such enterprises have a significant amount of fixed assets. Implementation of terms of the...

  11. Pharmacokinetics of a single oral dose of vitamin D3 (70,000 IU) in pregnant and non-pregnant women

    Directory of Open Access Journals (Sweden)

    Roth Daniel E

    2012-12-01

    Background: Improvements in antenatal vitamin D status may have maternal-infant health benefits. To inform the design of prenatal vitamin D3 trials, we conducted a pharmacokinetic study of single-dose vitamin D3 supplementation in women of reproductive age. Methods: A single oral vitamin D3 dose (70,000 IU) was administered to 34 non-pregnant and 27 pregnant women (27 to 30 weeks gestation) enrolled in Dhaka, Bangladesh (23°N). The primary pharmacokinetic outcome measure was the change in serum 25-hydroxyvitamin D concentration over time, estimated using model-independent pharmacokinetic parameters. Results: The baseline mean serum 25-hydroxyvitamin D concentration was 54 nmol/L (95% CI 47, 62) in non-pregnant participants and 39 nmol/L (95% CI 34, 45) in pregnant women. The mean peak rise in serum 25-hydroxyvitamin D concentration above baseline was similar in non-pregnant and pregnant women (28 nmol/L and 32 nmol/L, respectively). However, the rate of rise was slightly slower in pregnant women (i.e., lower 25-hydroxyvitamin D on day 2 and higher 25-hydroxyvitamin D on day 21 versus non-pregnant participants). Overall, the average 25-hydroxyvitamin D concentration was 19 nmol/L above baseline during the first month. Supplementation did not induce hypercalcemia, and there were no supplement-related adverse events. Conclusions: The response to a single 70,000 IU dose of vitamin D3 was similar in pregnant and non-pregnant women in Dhaka and consistent with previous studies in non-pregnant adults. These preliminary data support the further investigation of antenatal vitamin D3 regimens involving doses of ≤70,000 IU in regions where maternal-infant vitamin D deficiency is common. Trial registration: ClinicalTrials.gov (NCT00938600).

  12. Liver Segmentation Based on Snakes Model and Improved GrowCut Algorithm in Abdominal CT Image

    Directory of Open Access Journals (Sweden)

    Huiyan Jiang

    2013-01-01

    A novel method based on the Snakes model and the GrowCut algorithm is proposed to segment the liver region in abdominal CT images. First, following the traditional GrowCut method, a pretreatment process using the K-means algorithm is conducted to reduce the running time. Then, the segmentation result of our improved GrowCut approach is used as an initial contour for the subsequent precise segmentation based on the Snakes model. Finally, several experiments are carried out to demonstrate the performance of our proposed approach, and comparisons are made with the traditional GrowCut algorithm. Experimental results show that the improved approach not only has better robustness and precision but is also more efficient than the traditional GrowCut method.

  13. A Formal Verification Model for Performance Analysis of Reinforcement Learning Algorithms Applied to Dynamic Networks

    Directory of Open Access Journals (Sweden)

    Shrirang Ambaji KULKARNI

    2017-04-01

    Routing data packets in a dynamic network is a difficult and important problem in computer networks. As the network is dynamic, it is subject to frequent topology changes and to variable link costs due to congestion and bandwidth limits. Existing shortest path algorithms fail to converge to better solutions under dynamic network conditions. Reinforcement learning algorithms possess better adaptation techniques in dynamic environments. In this paper we apply a model-based Q-Routing technique for routing in a dynamic network. To analyze the correctness of the Q-Routing algorithm mathematically, we provide a proof and also implement a SPIN-based verification model. We also perform a simulation-based analysis of Q-Routing for the given metrics.
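
    For reference, the classic Q-routing update (Boyan and Littman) that model-based variants such as the one above build on: node x refines its estimate Q[x][y] of the delivery time to the destination via neighbour y using y's own best estimate. The topology and timings below are illustrative.

        import random

        neighbours = {0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2]}
        DEST, ALPHA = 3, 0.5
        # Q[x][y]: node x's estimated time to deliver a packet to DEST via neighbour y.
        Q = {x: {y: 10.0 for y in ns} for x, ns in neighbours.items()}

        def transit_time(x, y):
            return 1.0 + random.random()        # queueing + transmission delay (toy)

        for _ in range(2000):
            x = random.choice([0, 1, 2])        # packet injected at a non-destination node
            while x != DEST:
                y = min(Q[x], key=Q[x].get)     # greedy next hop
                t = transit_time(x, y)
                best_from_y = 0.0 if y == DEST else min(Q[y].values())
                Q[x][y] += ALPHA * (t + best_from_y - Q[x][y])
                x = y
        print({x: {y: round(v, 2) for y, v in q.items()} for x, q in Q.items()})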

  14. Modeling and sensitivity analysis of consensus algorithm based distributed hierarchical control for dc microgrids

    DEFF Research Database (Denmark)

    Meng, Lexuan; Dragicevic, Tomislav; Vasquez, Juan Carlos

    2015-01-01

    of dynamic study. The aim of this paper is to model the complete DC microgrid system in the z-domain and perform a sensitivity analysis for the complete system. A generalized modeling method is proposed, and the system dynamics under different control parameters, communication topologies and communication speeds......Distributed control methods for microgrid systems have become a popular topic in recent years. Distributed algorithms, such as consensus algorithms, can be applied for distributed information sharing. However, when using this kind of algorithm, stability analysis becomes a critical issue since...... the dynamics of the electrical and communication systems interact with each other. Apart from that, the communication characteristics also affect the dynamics of the system. Due to the discrete nature of information exchange in the communication network, Laplace-domain analysis is not accurate enough for this kind...

  15. A comparison of two estimation algorithms for Samejima's continuous IRT model.

    Science.gov (United States)

    Zopluoglu, Cengiz

    2013-03-01

    This study compares two algorithms, as implemented in two different computer programs, that have appeared in the literature for estimating the item parameters of Samejima's continuous response model (CRM) in a simulation environment. In addition to the simulation study, a real-data illustration is provided, and the CRM is used as a potential psychometric tool for analyzing measurement outcomes in the context of curriculum-based measurement (CBM) in the field of education. The results indicate that a simplified expectation-maximization (EM) algorithm is as effective and efficient as the traditional EM algorithm for estimating the CRM item parameters. The results also show promise for using this psychometric model to analyze CBM outcomes, although more research is needed before the CRM can be recommended as standard practice in the CBM context.

  16. LMI-Based Generation of Feedback Laws for a Robust Model Predictive Control Algorithm

    Science.gov (United States)

    Acikmese, Behcet; Carson, John M., III

    2007-01-01

    This technical note provides a mathematical proof of Corollary 1 from the paper 'A Nonlinear Model Predictive Control Algorithm with Proven Robustness and Resolvability' that appeared in the 2006 Proceedings of the American Control Conference. The proof was omitted from that publication for brevity. The paper was based on algorithms developed for the FY2005 R&TD (Research and Technology Development) project for Small-body Guidance, Navigation, and Control [2]. The framework established by the Corollary is a robustly stabilizing MPC (model predictive control) algorithm for uncertain nonlinear systems that guarantees the resolvability of the associated finite-horizon optimal control problem in a receding-horizon implementation. Additional details of the framework are available in the publication.

  17. A hand tracking algorithm with particle filter and improved GVF snake model

    Science.gov (United States)

    Sun, Yi-qi; Wu, Ai-guo; Dong, Na; Shao, Yi-zhe

    2017-07-01

    To address the problem that a particle filter alone cannot obtain accurate hand information, a hand tracking algorithm based on a particle filter combined with a skin-color-adaptive gradient vector flow (GVF) snake model is proposed. An adaptive GVF and a skin-color-adaptive external guidance force are introduced into the traditional GVF snake model, guiding the curve to converge quickly to the deep concave regions of the hand contour and capturing the complex hand contour accurately. The algorithm performs a real-time correction of the particle filter parameters, avoiding the particle drift phenomenon. Experimental results show that the proposed algorithm reduces the root mean square error of hand tracking by 53% and improves the accuracy of hand tracking against complex and moving backgrounds, even with a large range of occlusion.

  18. A Universal High-Performance Correlation Analysis Detection Model and Algorithm for Network Intrusion Detection System

    Directory of Open Access Journals (Sweden)

    Hongliang Zhu

    2017-01-01

    Full Text Available In the big data era, single detection techniques no longer meet the demands posed by complex network attacks and advanced persistent threats, and there is no uniform standard that lets different correlation analysis detection methods perform efficiently and accurately. In this paper, we put forward a universal correlation analysis detection model and algorithm by introducing the state transition diagram. Based on analyzing and comparing current correlation detection modes, we formalize the correlation patterns, propose a framework according to data packet timing and behavior qualities, and then design a new universal algorithm to implement the method. Finally, an experiment that sets up a lightweight intrusion detection system using the KDD1999 dataset shows that the correlation detection model and algorithm can improve performance and guarantee high detection rates.

  19. Comparison of the Noise Robustness of FVC Retrieval Algorithms Based on Linear Mixture Models

    Directory of Open Access Journals (Sweden)

    Hiroki Yoshioka

    2011-07-01

    Full Text Available The fraction of vegetation cover (FVC) is often estimated by unmixing a linear mixture model (LMM) to assess the horizontal spread of vegetation within a pixel based on a remotely sensed reflectance spectrum. The LMM-based algorithm produces results that can vary to a certain degree, depending on the model assumptions. For example, the robustness of the results depends on the presence of errors in the measured reflectance spectra. The objective of this study was to derive a factor that could be used to assess the robustness of LMM-based algorithms under a two-endmember assumption. The factor was derived from the analytical relationship between FVC values determined according to several previously described algorithms. The factor depended on the target spectra, endmember spectra, and choice of the spectral vegetation index. Numerical simulations were conducted to demonstrate the dependence and usefulness of the technique in terms of robustness against the measurement noise.
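
    Under the two-endmember assumption discussed above, the unmixing itself reduces to a one-parameter least-squares fit. The sketch below is a generic illustration with made-up endmember spectra, not the study's algorithms or data.

    ```python
    # Two-endmember linear unmixing: a pixel spectrum is modeled as
    # FVC * veg + (1 - FVC) * soil, and FVC is recovered by least squares.
    import numpy as np

    def fvc_two_endmember(pixel, veg, soil):
        """Least-squares FVC estimate from a measured reflectance spectrum."""
        d = veg - soil
        fvc = float(d @ (pixel - soil)) / float(d @ d)
        return min(max(fvc, 0.0), 1.0)           # clip to the physical range [0, 1]

    veg  = np.array([0.04, 0.08, 0.05, 0.45])    # hypothetical vegetation endmember
    soil = np.array([0.10, 0.14, 0.18, 0.25])    # hypothetical soil endmember
    pixel = 0.6 * veg + 0.4 * soil               # synthetic mixed pixel
    print(fvc_two_endmember(pixel, veg, soil))   # ~ 0.6
    ```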

  20. Parameter Identification of the 2-Chlorophenol Oxidation Model Using Improved Differential Search Algorithm

    Directory of Open Access Journals (Sweden)

    Guang-zhou Chen

    2015-01-01

    Full Text Available Parameter identification plays a crucial role in simulating and using a model. This paper first carries out a sensitivity analysis of the 2-chlorophenol oxidation model in supercritical water using the Monte Carlo method. Then, to address the nonlinearity of the model, two improved differential search (DS) algorithms are proposed to carry out the parameter identification of the model. One strategy is to adopt the Latin hypercube sampling method to replace the uniform distribution of the initial population; the other is to combine DS with the simplex method. The results of the sensitivity analysis reveal the sensitivity and the degree of identification difficulty for every model parameter. Furthermore, the posterior probability distribution of the parameters and the collaborative relationship between any two parameters can be obtained. To verify the effectiveness of the improved algorithms, the optimization performance of improved DS in kinetic parameter estimation is studied and compared with that of the basic DS algorithm, differential evolution, artificial bee colony optimization, and quantum-behaved particle swarm optimization. The experimental results demonstrate that the DS with the Latin hypercube sampling method does not present better performance, while the hybrid method has the advantages of strong global and local search ability and is more effective than the other algorithms.
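
    The first improvement strategy, replacing the uniform initial population with Latin hypercube sampling, can be sketched as follows; the bounds and population size are illustrative, not the paper's settings.

    ```python
    import numpy as np

    def latin_hypercube(n_pop, lower, upper, rng=None):
        """Stratified initial population: each dimension is split into n_pop
        strata and exactly one sample is drawn per stratum, in shuffled order."""
        rng = rng or np.random.default_rng()
        lower, upper = np.asarray(lower), np.asarray(upper)
        dim = lower.size
        # One shuffled permutation of stratum indices per dimension.
        strata = rng.permuted(np.tile(np.arange(n_pop), (dim, 1)), axis=1).T
        u = (strata + rng.random((n_pop, dim))) / n_pop
        return lower + u * (upper - lower)

    pop = latin_hypercube(20, lower=[0.0, 1e-3], upper=[1.0, 10.0])
    ```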

  1. Generalized linear model for mapping discrete trait loci implemented with LASSO algorithm.

    Directory of Open Access Journals (Sweden)

    Jun Xing

    Full Text Available The generalized estimating equation (GEE) algorithm under a heterogeneous residual variance model is an extension of the iteratively reweighted least squares (IRLS) method for continuous traits to discrete traits. In contrast to the mixture model-based expectation-maximization (EM) algorithm, the GEE algorithm can effectively detect quantitative trait loci (QTL), especially large-effect QTLs located in large marker intervals, at high computing speed. Based on a single-QTL model, however, the GEE algorithm has very limited statistical power to detect multiple QTLs because it ignores other linked QTLs. In this study, the fast least absolute shrinkage and selection operator (LASSO) is derived for the generalized linear model (GLM) with all possible link functions. Under a heterogeneous residual variance model, the LASSO for GLM is used to iteratively estimate the non-zero genetic effects of loci over the entire genome. The iteratively reweighted LASSO is thereby extended to mapping QTL for discrete traits, such as ordinal, binary, and Poisson traits. Simulated and real data analyses are conducted to demonstrate the efficiency of the proposed method in simultaneously identifying multiple QTLs for binary and Poisson traits as examples.

  2. Generalized linear model for mapping discrete trait loci implemented with LASSO algorithm.

    Science.gov (United States)

    Xing, Jun; Gao, Huijiang; Wu, Yang; Wu, Yani; Li, Hongwang; Yang, Runqing

    2014-01-01

    The generalized estimating equation (GEE) algorithm under a heterogeneous residual variance model is an extension of the iteratively reweighted least squares (IRLS) method for continuous traits to discrete traits. In contrast to the mixture model-based expectation-maximization (EM) algorithm, the GEE algorithm can effectively detect quantitative trait loci (QTL), especially large-effect QTLs located in large marker intervals, at high computing speed. Based on a single-QTL model, however, the GEE algorithm has very limited statistical power to detect multiple QTLs because it ignores other linked QTLs. In this study, the fast least absolute shrinkage and selection operator (LASSO) is derived for the generalized linear model (GLM) with all possible link functions. Under a heterogeneous residual variance model, the LASSO for GLM is used to iteratively estimate the non-zero genetic effects of loci over the entire genome. The iteratively reweighted LASSO is thereby extended to mapping QTL for discrete traits, such as ordinal, binary, and Poisson traits. Simulated and real data analyses are conducted to demonstrate the efficiency of the proposed method in simultaneously identifying multiple QTLs for binary and Poisson traits as examples.
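
    For readers unfamiliar with the LASSO machinery both records rely on, here is a minimal coordinate-descent sketch with soft-thresholding, the building block inside IRLS-style reweighted schemes. It is a generic illustration, not the paper's QTL-mapping implementation.

    ```python
    import numpy as np

    def soft_threshold(z, t):
        return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

    def lasso_cd(X, y, lam, n_iter=200):
        """Coordinate-descent LASSO for objective (1/2n)||y - Xb||^2 + lam*||b||_1."""
        n, p = X.shape
        beta = np.zeros(p)
        col_norm2 = (X ** 2).sum(axis=0)
        for _ in range(n_iter):
            for j in range(p):
                r = y - X @ beta + X[:, j] * beta[j]        # partial residual
                beta[j] = soft_threshold(X[:, j] @ r, n * lam) / col_norm2[j]
        return beta
    ```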

  3. Modelling Kara Sea phytoplankton primary production: Development and skill assessment of regional algorithms

    Science.gov (United States)

    Demidov, Andrey B.; Kopelevich, Oleg V.; Mosharov, Sergey A.; Sheberstov, Sergey V.; Vazyulya, Svetlana V.

    2017-07-01

    Empirical region-specific (RSM), depth-integrated (DIM) and depth-resolved (DRM) primary production models are developed based on data from the Kara Sea during the autumn (September-October 1993, 2007, 2011). The models are validated using field and satellite (MODIS-Aqua) observations. Our findings suggest that RSM algorithms perform better than non-region-specific algorithms (NRSM) in terms of regression analysis, root-mean-square difference (RMSD) and model efficiency. In general, the RSM and NRSM underestimate or overestimate the in situ water-column integrated primary production (IPP) by factors of 2 and 2.8, respectively. Additionally, our results suggest that the model skill of the RSM increases when the chlorophyll-specific carbon fixation rate, the efficiency of photosynthesis and the photosynthetically available radiation (PAR) are used as input variables. The parameterization of chlorophyll (chl a) vertical profiles is performed in Kara Sea waters with different trophic statuses. Model validation with field data suggests that the DIM and DRM algorithms perform equally well (RMSD of 0.29 and 0.31, respectively). No change in the performance of the DIM and DRM algorithms is observed (RMSD of 0.30 and 0.31, respectively) when satellite-derived chl a, PAR and the diffuse attenuation coefficient (Kd) are applied as input variables.

  4. A Regression Algorithm for Model Reduction of Large-Scale Multi-Dimensional Problems

    Science.gov (United States)

    Rasekh, Ehsan

    2011-11-01

    Model reduction is an approach for fast and cost-efficient modelling of large-scale systems governed by Ordinary Differential Equations (ODEs). Multi-dimensional model reduction has been suggested for reduction of the linear systems simultaneously with respect to frequency and any other parameter of interest. Multi-dimensional model reduction is also used to reduce the weakly nonlinear systems based on Volterra theory. Multiple dimensions degrade the efficiency of reduction by increasing the size of the projection matrix. In this paper a new methodology is proposed to efficiently build the reduced model based on regression analysis. A numerical example confirms the validity of the proposed regression algorithm for model reduction.

  5. Development of algorithm for depreciation costs allocation in dynamic input-output industrial enterprise model

    Directory of Open Access Journals (Sweden)

    Keller Alevtina

    2017-01-01

    Full Text Available The article considers the issue of allocating depreciation costs in the dynamic input-output model of an industrial enterprise. Accounting for depreciation costs in such a model improves the policy of fixed assets management. It is particularly relevant to develop an algorithm for the allocation of depreciation costs in the construction of a dynamic input-output model of an industrial enterprise, since such enterprises have a significant amount of fixed assets. An adequate algorithm makes it possible to evaluate the appropriateness of investments in fixed assets and to study the final financial results of an industrial enterprise as they depend on management decisions in depreciation policy. It should be noted that the model in question is always degenerate for the enterprise. This is caused by the presence of zero rows in the matrix of capital expenditures, corresponding to structural elements unable to generate fixed assets (part of the service units, households, and corporate consumers). The paper presents the algorithm for the allocation of depreciation costs for the model. This algorithm was developed by the authors and served as the basis for a flowchart for subsequent software implementation. The construction of such an algorithm and its use in dynamic input-output models of industrial enterprises is motivated by the internationally accepted effectiveness of input-output models for national and regional economic systems. This is why the solutions discussed in the article are of interest to economists of various industrial enterprises.

  6. Four wind speed multi-step forecasting models using extreme learning machines and signal decomposing algorithms

    International Nuclear Information System (INIS)

    Liu, Hui; Tian, Hong-qi; Li, Yan-fei

    2015-01-01

    Highlights: • A hybrid architecture is proposed for wind speed forecasting. • Four algorithms are used for the wind speed multi-scale decomposition. • Extreme learning machines are employed for the wind speed forecasting. • All the proposed hybrid models generate accurate results. - Abstract: Accurate wind speed forecasting is important to guarantee the safety of wind power utilization. In this paper, a new hybrid forecasting architecture is proposed to realize accurate wind speed forecasting. In this architecture, four different hybrid models are presented by combining four signal decomposing algorithms (Wavelet Decomposition, Wavelet Packet Decomposition, Empirical Mode Decomposition and Fast Ensemble Empirical Mode Decomposition) with Extreme Learning Machines. The originality of the study is to investigate how much these mainstream signal decomposing algorithms improve Extreme Learning Machines in multi-step wind speed forecasting. The results of two forecasting experiments indicate that: (1) the Extreme Learning Machine is suitable for wind speed forecasting; (2) by utilizing the decomposing algorithms, all the proposed hybrid models perform better than the single Extreme Learning Machine; (3) in the comparison of the decomposing algorithms within the proposed hybrid architecture, the Fast Ensemble Empirical Mode Decomposition performs best in the three-step forecasting results, while the Wavelet Packet Decomposition performs best in the one- and two-step forecasting results; at the same time, the Wavelet Packet Decomposition and the Fast Ensemble Empirical Mode Decomposition are better than the Wavelet Decomposition and the Empirical Mode Decomposition, respectively, at all prediction steps; and (4) the proposed algorithms are effective for accurate wind speed prediction.
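
    The Extreme Learning Machine at the core of each hybrid model is simple enough to sketch directly: a fixed random hidden layer followed by a closed-form least-squares solve for the output weights. The sizes, lag window and synthetic data below are illustrative, not the paper's setup.

    ```python
    import numpy as np

    class ELM:
        def __init__(self, n_in, n_hidden, rng=None):
            rng = rng or np.random.default_rng(0)
            self.W = rng.normal(size=(n_in, n_hidden))   # fixed random input weights
            self.b = rng.normal(size=n_hidden)           # fixed random biases
            self.beta = None                             # trainable output weights

        def _hidden(self, X):
            return np.tanh(X @ self.W + self.b)

        def fit(self, X, y):
            H = self._hidden(X)
            self.beta = np.linalg.pinv(H) @ y            # Moore-Penrose least squares
            return self

        def predict(self, X):
            return self._hidden(X) @ self.beta

    # Example: one-step-ahead forecasting from a lag window of past wind speeds.
    rng = np.random.default_rng(1)
    speeds = np.sin(np.linspace(0, 20, 500)) + 0.1 * rng.normal(size=500)
    lags = 8
    X = np.array([speeds[i:i + lags] for i in range(len(speeds) - lags)])
    y = speeds[lags:]
    model = ELM(lags, 64).fit(X[:400], y[:400])
    pred = model.predict(X[400:])
    ```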

  7. Heuristic algorithms for feature selection under Bayesian models with block-diagonal covariance structure.

    Science.gov (United States)

    Foroughi Pour, Ali; Dalton, Lori A

    2018-03-21

    Many bioinformatics studies aim to identify markers, or features, that can be used to discriminate between distinct groups. In problems where strong individual markers are not available, or where interactions between gene products are of primary interest, it may be necessary to consider combinations of features as a marker family. To this end, recent work proposes a hierarchical Bayesian framework for feature selection that places a prior on the set of features we wish to select and on the label-conditioned feature distribution. While an analytical posterior under Gaussian models with block covariance structures is available, the optimal feature selection algorithm for this model remains intractable, since it requires evaluating the posterior over the space of all possible covariance block structures and feature-block assignments. To address this computational barrier, in prior work we proposed a simple suboptimal algorithm, 2MNC-Robust, with robust performance across the space of block structures. Here, we present three new heuristic feature selection algorithms. The proposed algorithms outperform 2MNC-Robust and many other popular feature selection algorithms on synthetic data. In addition, enrichment analysis on real breast cancer, colon cancer, and Leukemia data indicates that they also output many of the genes and pathways linked to the cancers under study. Bayesian feature selection is a promising framework for small-sample high-dimensional data, in particular biomarker discovery applications. When applied to cancer data, these algorithms output many genes already shown to be involved in cancer as well as potentially new biomarkers. Furthermore, one of the proposed algorithms, SPM, outputs blocks of heavily correlated genes, which is particularly useful for studying gene interactions and gene networks.

  8. Simulation Modeling of Intelligent Control Algorithms for Constructing Autonomous Power Supply Systems with Improved Energy Efficiency

    Directory of Open Access Journals (Sweden)

    Gimazov Ruslan

    2018-01-01

    Full Text Available The paper considers the issue of powering autonomous robots with solar batteries. The low efficiency of modern solar batteries is a critical issue for the whole renewable energy industry. The urgency of improving the energy efficiency of solar batteries that supply robotic systems stems from the task of maximizing autonomous operation time. Several methods to improve the energy efficiency of solar batteries exist; the use of an MPPT charge controller is one of them. MPPT technology increases the power generated by a solar battery by 15 – 30%. The most common MPPT algorithm is the perturbation and observation algorithm. This algorithm has several disadvantages, such as power fluctuation and a fixed maximum power point tracking time. These problems can be solved by using a sufficiently accurate predictive and adaptive algorithm. In order to improve the efficiency of solar batteries, an autonomous power supply system was developed, which includes an intelligent MPPT charge controller with a fuzzy-logic-based perturbation and observation algorithm. To study the implementation of the fuzzy logic apparatus in the MPPT algorithm, we developed, in the Matlab/Simulink environment, a simulation model of the system comprising a solar battery, MPPT controller, accumulator and load. Results of the simulation established that the use of MPPT technology increased energy production by 23%; introducing the fuzzy logic algorithm into the MPPT controller greatly increased the speed of maximum power point tracking and neutralized the voltage fluctuations, which in turn reduced the power underproduction by 2%.
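
    For reference, a minimal sketch of the baseline perturb-and-observe loop (before the fuzzy enhancement) is given below; the sensor callbacks, step size and toy panel model are illustrative assumptions, not the paper's Simulink implementation.

    ```python
    def perturb_and_observe(read_panel, set_voltage, v0=17.0, step=0.2, n_steps=200):
        """Climb the P-V curve: keep perturbing in the direction that raised power."""
        v, direction, p_prev = v0, +1, 0.0
        for _ in range(n_steps):
            set_voltage(v)
            volt, amp = read_panel()      # assumed sensor callback returning (V, I)
            p = volt * amp
            if p < p_prev:                # power dropped: reverse the perturbation
                direction = -direction
            p_prev = p
            v += direction * step         # the fixed step causes the oscillation
                                          # that the fuzzy controller is meant to damp
        return v

    # Demo with a toy panel whose current falls off away from ~17.5 V.
    state = {"v": 17.0}
    def set_voltage(v): state["v"] = v
    def read_panel():
        v = state["v"]
        i = max(0.0, 5.0 - 0.08 * (v - 17.5) ** 2)   # toy I-V relationship
        return v, i
    print(perturb_and_observe(read_panel, set_voltage))
    ```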

  9. A simple algorithm to estimate genetic variance in an animal threshold model using Bayesian inference

    Directory of Open Access Journals (Sweden)

    Heringstad Bjørg

    2010-07-01

    Full Text Available Abstract. Background: In the genetic analysis of binary traits with one observation per animal, animal threshold models frequently give biased heritability estimates. In some cases, this problem can be circumvented by fitting sire or sire-dam models. However, these models are not appropriate in cases where individual records exist on parents. Therefore, the aim of our study was to develop a new Gibbs sampling algorithm for a proper estimation of genetic (co)variance components within an animal threshold model framework. Methods: In the proposed algorithm, individuals are classified as either "informative" or "non-informative" with respect to genetic (co)variance components. The "non-informative" individuals are characterized by their Mendelian sampling deviations (deviations from the mid-parent mean) being completely confounded with a single residual on the underlying liability scale. For threshold models, residual variance on the underlying scale is not identifiable. Hence, the variance of fully confounded Mendelian sampling deviations cannot be identified either, but can be inferred from the between-family variation. In the new algorithm, breeding values are sampled as in a standard animal model using the full relationship matrix, but genetic (co)variance components are inferred from the sampled breeding values and the relationships between "informative" individuals (usually parents) only. The latter is analogous to a sire-dam model (in cases with no individual records on the parents). Results: When applied to simulated data sets, the standard animal threshold model failed to produce useful results, since samples of genetic variance always drifted towards infinity, while the new algorithm produced proper parameter estimates essentially identical to the results from a sire-dam model (given that no individual records exist for the parents). Furthermore, the new algorithm showed much faster Markov chain mixing properties for genetic parameters (similar to

  10. A Novel OBDD-Based Reliability Evaluation Algorithm for Wireless Sensor Networks on the Multicast Model

    Directory of Open Access Journals (Sweden)

    Zongshuai Yan

    2015-01-01

    Full Text Available The two-terminal reliability calculation for wireless sensor networks (WSNs) is a #P-hard problem. The reliability calculation of WSNs on the multicast model produces an even worse combinatorial explosion of node states than the calculation on the unicast model, yet many real WSNs require the multicast model to deliver information. This research first provides a formal definition for the WSN on the multicast model. Next, a symbolic OBDD_Multicast algorithm is proposed to evaluate the reliability of WSNs on the multicast model. Furthermore, our OBDD_Multicast construction avoids the problem of invalid expansion, reducing the number of subnetworks by identifying the redundant paths of two adjacent nodes and s-t unconnected paths. Experiments show that OBDD_Multicast both reduces the complexity of the WSN reliability analysis and has a lower running time than Xing's OBDD (ordered binary decision diagram)-based algorithm.

  11. Meta Modelling of Submerged-Arc Welding Design based on Fuzzy Algorithm

    Science.gov (United States)

    Song, Chang-Yong; Park, Jonghwan; Goh, Dugab; Park, Woo-Chang; Lee, Chang-Ha; Kim, Mun Yong; Kang, Jinseo

    2017-12-01

    A fuzzy-algorithm-based meta-model is proposed for approximating submerged-arc weld design factors such as weld speed and weld output. An orthogonal array design based on the submerged-arc weld numerical analysis is applied in the proposed approach. A nonlinear finite element analysis is carried out to simulate the submerged-arc weld using thermo-mechanical and temperature-dependent material properties for general mild steel. The proposed fuzzy-algorithm-based meta-model is generated with triangular membership functions and fuzzy if-then rules, using training data obtained from the Taguchi orthogonal array design. The aim of the proposed approach is to develop a fuzzy meta-model that effectively approximates the optimized submerged-arc weld factors. To validate the meta-model, the results obtained from the fuzzy meta-model are compared to the best cases from the Taguchi orthogonal array.

  12. Development of an Algorithm for Automatic Analysis of the Impedance Spectrum Based on a Measurement Model

    Science.gov (United States)

    Kobayashi, Kiyoshi; Suzuki, Tohru S.

    2018-03-01

    A new algorithm for the automatic estimation of an equivalent circuit and the subsequent parameter optimization is developed by combining the data-mining concept with the complex least-squares method. In this algorithm, the program generates an initial equivalent-circuit model based on the sampled data and then attempts to optimize the parameters. The basic hypothesis is that the measured impedance spectrum can be reproduced by the sum of the partial impedance spectra presented by a resistor, an inductor, a resistor connected in parallel to a capacitor, and a resistor connected in parallel to an inductor. The adequacy of the model is determined by a simple artificial-intelligence function, which is applied to the output of the Levenberg-Marquardt module. By iterating model modifications, the program finds an adequate equivalent-circuit model without any user input of an equivalent-circuit model.
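
    The measurement-model hypothesis, that the spectrum is a series sum of partial impedances from R, L, R||C and R||L elements, can be sketched directly; the element table and example circuit below are illustrative, not the authors' code.

    ```python
    import numpy as np

    def z_r(w, R):     return np.full_like(w, R, dtype=complex)
    def z_l(w, L):     return 1j * w * L
    def z_rc(w, R, C): return R / (1.0 + 1j * w * R * C)            # resistor || capacitor
    def z_rl(w, R, L): return 1j * w * L * R / (R + 1j * w * L)     # resistor || inductor

    def model_spectrum(w, elements):
        """Total impedance as the series sum of the partial elements."""
        table = {"R": z_r, "L": z_l, "RC": z_rc, "RL": z_rl}
        return sum(table[kind](w, *params) for kind, params in elements)

    # Example circuit guess: ohmic resistance in series with two R||C arcs.
    w = np.logspace(-1, 5, 200) * 2 * np.pi
    Z = model_spectrum(w, [("R", (10.0,)), ("RC", (100.0, 1e-6)), ("RC", (50.0, 1e-3))])
    ```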

  13. Assessment of numerical optimization algorithms for the development of molecular models

    Science.gov (United States)

    Hülsmann, Marco; Vrabec, Jadran; Maaß, Astrid; Reith, Dirk

    2010-05-01

    In the pursuit of studying the parameterization problem of molecular models from a broad perspective, this paper focuses on an isolated aspect: it investigates by which algorithms parameters can best be optimized simultaneously against different types of target data (experimental or theoretical) over a range of temperatures with the lowest number of iteration steps. As an example, nitrogen is considered, where the intermolecular interactions are well described by the quadrupolar two-center Lennard-Jones model, which has four state-independent parameters. The target data comprise experimental values for saturated liquid density, enthalpy of vaporization, and vapor pressure. For the purpose of testing algorithms, molecular simulations are entirely replaced by fit functions of vapor-liquid equilibrium (VLE) properties from the literature, so that the diverse numerical optimization algorithms investigated, all state-of-the-art gradient-based methods with very good convergence properties, can be assessed efficiently. Additionally, artificial noise was superimposed onto the VLE fit results to evaluate the numerical optimization algorithms in a way that mimics the calculation of molecular simulation data. Large differences in the behavior of the individual optimization algorithms are found, and some are identified as capable of handling noisy function values.

  14. A Centerline Based Model Morphing Algorithm for Patient-Specific Finite Element Modelling of the Left Ventricle.

    Science.gov (United States)

    Behdadfar, S; Navarro, L; Sundnes, J; Maleckar, M; Ross, S; Odland, H H; Avril, S

    2017-09-20

    Automatic hexahedral model generation is a recurrent problem in computer vision and computational biomechanics. It can become a challenging problem when one wants to develop a patient-specific finite-element (FE) model of the left ventricle (LV), particularly when only low-resolution images are available. In the present study, a fast and efficient algorithm is presented and tested to address such a situation. A template FE hexahedral model was created for an LV geometry using a General Electric (GE) ultrasound (US) system. A system of centerlines was constructed for this LV mesh. The nodes located on the endocardial and epicardial surfaces are then projected from these centerlines onto the actual endocardial and epicardial surfaces reconstructed from a patient's US data. Finally, the position of the internal nodes is derived by finding the deformation with minimal elastic energy. This approach was applied to eight patients suffering from congestive heart disease. An FE analysis was performed on each of them to derive the stress induced in the LV tissue by diastolic blood pressure. Our model morphing algorithm was applied successfully, and the obtained meshes showed only marginal mismatches when compared to the corresponding US geometries. The diastolic FE analyses were successfully performed in seven patients to derive the distribution of principal stresses. The original model morphing algorithm is fast and robust, with low computational cost, and may be highly beneficial for future patient-specific reduced-order modelling of the LV, with potential application to other crucial organs.

  15. Optimizing simulated fertilizer additions using a genetic algorithm with a nutrient uptake model

    Science.gov (United States)

    Wendell P. Cropper; N.B. Comerford

    2005-01-01

    Intensive management of pine plantations in the southeastern coastal plain typically involves weed and pest control, and the addition of fertilizer to meet the high nutrient demand of rapidly growing pines. In this study we coupled a mechanistic nutrient uptake model (SSAND, soil supply and nutrient demand) with a genetic algorithm (GA) in order to estimate the minimum...

  16. The Statistical Analysis of General Processing Tree Models with the EM Algorithm.

    Science.gov (United States)

    Hu, Xiangen; Batchelder, William H.

    1994-01-01

    The statistical analysis of processing tree models is advanced by showing how parameter estimation and hypothesis testing, based on the likelihood functions, can be accomplished by adapting the expectation-maximization (EM) algorithm. The adaptation makes it easy to program a personal computer to accomplish the stages of statistical…

  17. A 3D Printing Model Watermarking Algorithm Based on 3D Slicing and Feature Points

    Directory of Open Access Journals (Sweden)

    Giao N. Pham

    2018-02-01

    Full Text Available With the increase of three-dimensional (3D) printing applications in many areas of life, a large amount of 3D printing data is copied, shared, and used several times without any permission from the original providers. Therefore, copyright protection and ownership identification for 3D printing data in communications or commercial transactions are practical issues. This paper presents a novel watermarking algorithm for 3D printing models based on embedding watermark data into the feature points of a 3D printing model. Feature points are determined and computed by the 3D slicing process along the Z axis of a 3D printing model. A watermark bit is embedded into a feature point of a 3D printing model by changing the vector length of the feature point in the OXY plane relative to a reference length. The x and y coordinates of the feature point are then changed according to the embedded vector length. Experimental results verified that the proposed algorithm is invisible and robust to geometric attacks, such as rotation, scaling, and translation. The proposed algorithm provides a better method than conventional works, and its accuracy is much higher than that of previous methods.
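
    A hedged sketch of the embedding idea follows: nudging a feature point's planar vector length relative to a reference length to carry one watermark bit. The parity-quantization scheme here is an illustrative assumption; the paper's actual encoding may differ.

    ```python
    import math

    def embed_bit(x, y, bit, ref_len, delta=0.01):
        """Quantize the (x, y) vector length to a multiple of delta*ref_len whose
        parity encodes the watermark bit. Assumes a nonzero vector length."""
        length = math.hypot(x, y)
        if length == 0.0:
            return x, y                   # degenerate point; skip embedding
        q = delta * ref_len
        k = round(length / q)
        if k % 2 != bit:                  # force parity to match the watermark bit
            k += 1
        scale = (k * q) / length
        return x * scale, y * scale       # z stays untouched; only OXY is modified

    def extract_bit(x, y, ref_len, delta=0.01):
        return round(math.hypot(x, y) / (delta * ref_len)) % 2
    ```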

  18. The design of control algorithm for automatic start-up model of HWRR

    International Nuclear Information System (INIS)

    Guo Wenqi

    1990-01-01

    The design of the control algorithm for the automatic start-up model of the HWRR (Heavy Water Research Reactor), the calculation of the μ value and the application of the digital compensator are described. Finally, the flow diagrams of the automatic start-up and digital compensator programs for the HWRR are given.

  19. Tracking Problem Solving by Multivariate Pattern Analysis and Hidden Markov Model Algorithms

    Science.gov (United States)

    Anderson, John R.

    2012-01-01

    Multivariate pattern analysis can be combined with Hidden Markov Model algorithms to track the second-by-second thinking as people solve complex problems. Two applications of this methodology are illustrated with a data set taken from children as they interacted with an intelligent tutoring system for algebra. The first "mind reading" application…

  20. Application of stochastic weighted algorithms to a multidimensional silica particle model

    Energy Technology Data Exchange (ETDEWEB)

    Menz, William J. [Department of Chemical Engineering and Biotechnology, University of Cambridge, New Museums Site, Pembroke Street, Cambridge CB2 3RA (United Kingdom); Patterson, Robert I.A.; Wagner, Wolfgang [Weierstrass Institute for Applied Analysis and Stochastics, Mohrenstrasse 39, Berlin 10117 (Germany); Kraft, Markus, E-mail: mk306@cam.ac.uk [Department of Chemical Engineering and Biotechnology, University of Cambridge, New Museums Site, Pembroke Street, Cambridge CB2 3RA (United Kingdom)

    2013-09-01

    Highlights: • Stochastic weighted algorithms (SWAs) are developed for a detailed silica model. • An implementation of SWAs with the transition kernel is presented. • The SWAs' solutions converge to the direct simulation algorithm's (DSA) solution. • The efficiency of SWAs is evaluated for this multidimensional particle model. • It is shown that SWAs can be used for coagulation problems in industrial systems. -- Abstract: This paper presents a detailed study of the numerical behaviour of stochastic weighted algorithms (SWAs) using the transition regime coagulation kernel and a multidimensional silica particle model. The implementation in the SWAs of the transition regime coagulation kernel and associated majorant rates is described. The silica particle model of Shekar et al. [S. Shekar, A.J. Smith, W.J. Menz, M. Sander, M. Kraft, A multidimensional population balance model to describe the aerosol synthesis of silica nanoparticles, Journal of Aerosol Science 44 (2012) 83–98] was used in conjunction with this coagulation kernel to study the convergence properties of SWAs with a multidimensional particle model. High-precision solutions were calculated with two SWAs and also with the established direct simulation algorithm. These solutions, which were generated using a large number of computational particles, showed close agreement. It was thus demonstrated that SWAs can be successfully used with complex coagulation kernels and high-dimensional particle models to simulate real-world systems.

  1. Decentralized Fuzzy P-hub Centre Problem: Extended Model and Genetic Algorithms

    Directory of Open Access Journals (Sweden)

    Sara Mousavinia

    2017-02-01

    Full Text Available This paper studies the uncapacitated P-hub center problem in a network under decentralized management, assuming time is a fuzzy variable. In this network, transport companies act independently, and each company makes its route choices according to its own criteria. In this model, time is represented by a triangular fuzzy number and used to calculate the fraction of users likely to choose hub routes instead of direct routes. To solve the problem, two genetic algorithms are proposed. The computational results, compared with LINGO, indicate that the proposed algorithm solves large-scale instances within promising computational time and outperforms LINGO in terms of solution quality.

  2. Algorithm-structured computer arrays and networks architectures and processes for images, percepts, models, information

    CERN Document Server

    Uhr, Leonard

    1984-01-01

    Computer Science and Applied Mathematics: Algorithm-Structured Computer Arrays and Networks: Architectures and Processes for Images, Percepts, Models, Information examines the parallel-array, pipeline, and other network multi-computers.This book describes and explores arrays and networks, those built, being designed, or proposed. The problems of developing higher-level languages for systems and designing algorithm, program, data flow, and computer structure are also discussed. This text likewise describes several sequences of successively more general attempts to combine the power of arrays wi

  3. Extraction of battery parameters of the equivalent circuit model using a multi-objective genetic algorithm

    Science.gov (United States)

    Brand, Jonathan; Zhang, Zheming; Agarwal, Ramesh K.

    2014-02-01

    A simple but reasonably accurate battery model is required for simulating the performance of electrical systems that employ a battery, for example an electric vehicle, as well as for investigating the battery's potential as an energy storage device. In this paper, a relatively simple equivalent-circuit-based model is employed for modeling the performance of a battery. A computer code utilizing a multi-objective genetic algorithm is developed for the purpose of extracting the battery performance parameters. The code is applied to several existing industrial batteries as well as to two recently proposed high-performance batteries which are currently at an early research and development stage. The results demonstrate that, with the optimally extracted performance parameters, the equivalent-circuit-based battery model can accurately predict the performance of various batteries of different sizes, capacities, and materials. Several test cases demonstrate that the multi-objective genetic algorithm can serve as a robust and reliable tool for extracting the battery performance parameters.
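
    A common member of the "relatively simple equivalent circuit" model class is the first-order Thevenin model sketched below; the parameter values are illustrative placeholders of the kind a multi-objective genetic algorithm would identify from measured voltage data.

    ```python
    import numpy as np

    def simulate_battery(current, dt, ocv=3.7, r0=0.05, r1=0.02, c1=2000.0):
        """Terminal voltage under a current-draw profile (positive = discharge).
        v1 is the voltage across the RC polarization branch; OCV is held constant
        here, i.e. state-of-charge dependence is ignored in this sketch."""
        v1, v_out = 0.0, []
        for i in current:
            v1 += dt * (-v1 / (r1 * c1) + i / c1)    # RC branch dynamics
            v_out.append(ocv - i * r0 - v1)          # OCV minus ohmic and RC drops
        return np.array(v_out)

    # Example: a 1 A discharge pulse followed by rest shows the relaxation transient.
    profile = np.concatenate([np.ones(300), np.zeros(300)])
    v = simulate_battery(profile, dt=1.0)
    ```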

  4. Synthetic Optimization Model and Algorithm for Railway Freight Center Station Location and Wagon Flow Organization Problem

    Directory of Open Access Journals (Sweden)

    Xing-cai Liu

    2014-01-01

    Full Text Available Railway freight center station location and wagon flow organization in railway transport are interconnected, and each is complicated in a large-scale rail network. In this paper, a two-stage method is proposed to optimize railway freight center station location and wagon flow organization together. The location model is presented with the objective of minimizing the operation cost and the fixed construction cost. Then, a second model of wagon flow organization is proposed to decide the optimal train services between different freight center stations, with the station locations being the output of the first model. A heuristic algorithm that combines tabu search (TS) with an adaptive clonal selection algorithm (ACSA) is proposed to solve the two models. The numerical results show that the proposed solution method is effective.

  5. The Bilevel Design Problem for Communication Networks on Trains: Model, Algorithm, and Verification

    Directory of Open Access Journals (Sweden)

    Yin Tian

    2014-01-01

    Full Text Available This paper proposes a novel method to solve the problem of train communication network design. First, we put forward a general description of the problem. Then, taking advantage of bilevel programming theory, we create the cost-reliability-delay (CRD) model, which consists of two parts: the physical topology part aims at obtaining the network with the maximum reliability under a cost constraint, while the logical topology part focuses on the communication paths yielding minimum delay based on the physical topology delivered from the upper level. We also suggest a method to solve the CRD model, which combines the genetic algorithm and the Floyd-Warshall algorithm. Finally, we use a practical example to verify the accuracy and the effectiveness of the CRD model and further apply the novel method to a train with six carriages.

  6. Transmission network expansion planning based on hybridization model of neural networks and harmony search algorithm

    Directory of Open Access Journals (Sweden)

    Mohammad Taghi Ameli

    2012-01-01

    Full Text Available Transmission Network Expansion Planning (TNEP) is a basic part of power network planning that determines where, when and how many new transmission lines should be added to the network. Thus, the TNEP is an optimization problem in which the expansion purposes are optimized. Artificial Intelligence (AI) tools such as Genetic Algorithms (GA), Simulated Annealing (SA), Tabu Search (TS) and Artificial Neural Networks (ANNs) are methods used for solving the TNEP problem. Today, by using hybridization models of AI tools, we can solve the TNEP problem for large-scale systems, which shows the effectiveness of utilizing such models. In this paper, a new approach based on the hybridization of Probabilistic Neural Networks (PNNs) and the Harmony Search Algorithm (HSA) was used to solve the TNEP problem. Finally, by considering the uncertain role of the load based on a scenario technique, this proposed model was tested on Garver's 6-bus network.

  7. Structural assessment of aerospace components using image processing algorithms and Finite Element models

    DEFF Research Database (Denmark)

    Stamatelos, Dimtrios; Kappatos, Vassilios

    2017-01-01

    Purpose – This paper presents the development of an advanced structural assessment approach for aerospace components (metallic and composite). This work focuses on developing an automatic image processing methodology, based on Non Destructive Testing (NDT) data and numerical models, for predicting the residual strength of these components. Design/methodology/approach – An image processing algorithm, based on the threshold method, has been developed to process and quantify the geometric characteristics of damages. Then, a parametric Finite Element (FE) model of the damaged component is developed based on the inputs acquired from the image processing algorithm. The analysis of the metallic structures employs the Extended FE Method (XFEM), while for the composite structures the Cohesive Zone Model (CZM) technique with Progressive Damage Modelling (PDM) is used. Findings – The numerical analyses...

  8. Nonequilibrium behaviors of the three-dimensional Heisenberg model in the Swendsen-Wang algorithm

    Science.gov (United States)

    Nonomura, Yoshihiko; Tomita, Yusuke

    2016-01-01

    Recently, it was shown [Y. Nonomura, J. Phys. Soc. Jpn. 83, 113001 (2014), 10.7566/JPSJ.83.113001] that the nonequilibrium critical relaxation of the two-dimensional (2D) Ising model from a perfectly ordered state in the Wolff algorithm is described by stretched-exponential decay, and a universal scaling scheme was found to connect nonequilibrium and equilibrium behaviors. In the present study we extend these findings to vector spin models, of which the 3D Heisenberg model is a typical example. To evaluate the critical temperature and critical exponents precisely using the above scaling scheme, we calculate the nonequilibrium ordering from the perfectly disordered state in the Swendsen-Wang algorithm, and we find that the critical ordering process is described by stretched-exponential growth with an exponent comparable to that of the 3D XY model. The critical exponents evaluated in the present study are consistent with those in previous studies.

  9. A discrete force allocation algorithm for modelling wind turbines in computational fluid dynamics

    DEFF Research Database (Denmark)

    Réthoré, Pierre-Elouan; Sørensen, Niels N.

    2012-01-01

    This paper describes an algorithm for allocating discrete forces in computational fluid dynamics (CFD). Discrete forces are useful in wind energy CFD. They are used as an approximation of the wind turbine blades' action on the wind (actuator disc/line), to model forests and to model turbulent inflows. Many CFD codes are designed with collocated variables layout. Although this approach has many attractive features, it can generate a numerical decoupling between the pressure and the velocities. This issue is addressed by the Rhie–Chow control volume momentum interpolation. However, ... applicable in other fields of CFD that use discrete body forces. Copyright © 2011 John Wiley & Sons, Ltd.

  10. [A Hyperspectral Imagery Anomaly Detection Algorithm Based on Gauss-Markov Model].

    Science.gov (United States)

    Gao, Kun; Liu, Ying; Wang, Li-jing; Zhu, Zhen-yu; Cheng, Hao-bo

    2015-10-01

    With the development of spectral imaging technology, hyperspectral anomaly detection is increasingly widely used in remote sensing imagery processing. The traditional RX anomaly detection algorithm neglects the spatial correlation of images. Besides, it does not effectively reduce the data dimension, which costs too much processing time and limits its validity on hyperspectral data. Hyperspectral images follow a Gauss-Markov Random Field (GMRF) in the spatial and spectral dimensions. The inverse of the covariance matrix can be calculated directly from the Gauss-Markov parameters, which avoids the huge computation over the hyperspectral data. This paper proposes an improved RX anomaly detection algorithm based on a three-dimensional GMRF. The hyperspectral imagery is modeled with the GMRF, and the GMRF parameters are estimated with the approximated maximum likelihood method. The detection operator is constructed from the GMRF parameter estimates. Each pixel under test is taken as the centre of a local optimization window, called the GMRF detection window. The abnormality degree is calculated with the mean vector and the inverse covariance matrix, both computed within the window. The image is detected pixel by pixel as the GMRF window moves. The traditional RX detection algorithm, a regional hypothesis detection algorithm based on the GMRF, and the proposed algorithm are simulated on AVIRIS hyperspectral data. Simulation results show that the proposed anomaly detection method improves detection efficiency and reduces the false alarm rate. We collected operation time statistics for the three algorithms in the same computing environment; the results show that the proposed algorithm improves the operation time by 45.2%, demonstrating good computational efficiency.
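
    As a baseline for comparison, the classical global RX detector that the GMRF-based method improves on amounts to a Mahalanobis-distance computation per pixel; the data shapes and regularization below are illustrative.

    ```python
    import numpy as np

    def rx_detector(cube):
        """cube: (rows, cols, bands) hyperspectral image; returns an anomaly map."""
        rows, cols, bands = cube.shape
        X = cube.reshape(-1, bands)
        mu = X.mean(axis=0)
        cov_inv = np.linalg.inv(np.cov(X, rowvar=False) + 1e-6 * np.eye(bands))
        d = X - mu
        scores = np.einsum("ij,jk,ik->i", d, cov_inv, d)   # Mahalanobis distances
        return scores.reshape(rows, cols)
    ```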

  11. Integral equation models for image restoration: high accuracy methods and fast algorithms

    International Nuclear Information System (INIS)

    Lu, Yao; Shen, Lixin; Xu, Yuesheng

    2010-01-01

    Discrete models are consistently used as practical models for image restoration. They are piecewise constant approximations of true physical (continuous) models, and hence, inevitably impose bottleneck model errors. We propose to work directly with continuous models for image restoration aiming at suppressing the model errors caused by the discrete models. A systematic study is conducted in this paper for the continuous out-of-focus image models which can be formulated as an integral equation of the first kind. The resulting integral equation is regularized by the Lavrentiev method and the Tikhonov method. We develop fast multiscale algorithms having high accuracy to solve the regularized integral equations of the second kind. Numerical experiments show that the methods based on the continuous model perform much better than those based on discrete models, in terms of PSNR values and visual quality of the reconstructed images

  12. New Multi-objective Uncertainty-based Algorithm for Water Resource Models' Calibration

    Science.gov (United States)

    Keshavarz, Kasra; Alizadeh, Hossein

    2017-04-01

    Water resource models are powerful tools to support the water management decision-making process and are developed to deal with a broad range of issues, including land use and climate change impact analysis, water allocation, systems design and operation, waste load control and allocation, etc. These models are divided into the two categories of simulation and optimization models, whose calibration has been addressed in the literature; substantial efforts in recent decades have led to two main categories of auto-calibration methods: uncertainty-based algorithms such as GLUE, MCMC and PEST, and optimization-based algorithms, including single-objective optimization such as SCE-UA and multi-objective optimization such as MOCOM-UA and MOSCEM-UA. Although algorithms that benefit from the capabilities of both types, such as SUFI-2, have been developed, this paper proposes a new auto-calibration algorithm which is capable of both finding optimal parameter values with respect to multiple objectives, like optimization-based algorithms, and providing interval estimations of parameters, like uncertainty-based algorithms. The algorithm is developed to improve the quality of SUFI-2 results. Based on a single objective, e.g. NSE or RMSE, SUFI-2 proposes a routine to find the best point and interval estimation of parameters and the corresponding prediction intervals (95PPU) of the time series of interest. To assess the goodness of calibration, final results are presented using two uncertainty measures: the p-factor, quantifying the percentage of observations covered by the 95PPU, and the r-factor, quantifying the degree of uncertainty; the analyst has to select the point and interval estimation of parameters which are non-dominated with respect to both uncertainty measures. Based on the described properties of SUFI-2, two important questions are raised, the answering of which is our research motivation: Given that in SUFI-2, final selection is based on the two measures or objectives and on the other

  13. Sparse representation, modeling and learning in visual recognition theory, algorithms and applications

    CERN Document Server

    Cheng, Hong

    2015-01-01

    This unique text/reference presents a comprehensive review of the state of the art in sparse representations, modeling and learning. The book examines both the theoretical foundations and details of algorithm implementation, highlighting the practical application of compressed sensing research in visual recognition and computer vision. Topics and features: provides a thorough introduction to the fundamentals of sparse representation, modeling and learning, and the application of these techniques in visual recognition; describes sparse recovery approaches, robust and efficient sparse represen

  14. Parameters Calculation of ZnO Surge Arrester Models by Genetic Algorithms

    Directory of Open Access Journals (Sweden)

    A. Bayadi

    2006-09-01

    Full Text Available This paper proposes a new technique based on the genetic algorithm to obtain the best possible set of parameter values for ZnO surge arrester models. The validity of the predicted parameters is then checked by comparing the predicted results with experimental results available in the literature. Using the ATP-EMTP package, an application of the arrester model to network system studies is presented and discussed.

  15. An Overview of the Automated Dispatch Controller Algorithms in the System Advisor Model (SAM)

    Energy Technology Data Exchange (ETDEWEB)

    DiOrio, Nicholas A [National Renewable Energy Lab. (NREL), Golden, CO (United States)

    2017-11-22

    Three automatic dispatch modes have been added to the battery model within the System Advisor Model. These controllers have been developed to perform peak shaving in an automated fashion, providing users with a way to see the benefit of reduced demand charges without manually programming a complicated dispatch control. A flexible input option allows more advanced interaction with the automated controller. This document describes the algorithms in detail and presents brief results on their use and limitations.

  16. A sonification algorithm for developing the off-roads models for driving simulators

    Science.gov (United States)

    Chiroiu, Veturia; Brişan, Cornel; Dumitriu, Dan; Munteanu, Ligia

    2018-01-01

    In this paper, a sonification algorithm for developing off-road models for driving simulators is proposed. The aim of this algorithm is to overcome the difficulty of identifying the heuristics best suited to a particular off-road profile built from measurements. The sonification algorithm is based on stochastic polynomial chaos analysis, suitable for solving equations with random input data. The fluctuations are generated by incomplete measurements, which lead to inhomogeneities in the cross-sectional curves of off-roads before and after deformation, unstable contact between the tire and the road, and an unrealistic distribution of contact and friction forces in the unknown contact domains. The approach is exercised on two particular problems, and the results compare favorably to existing analytical and numerical solutions. The sonification technique represents a useful multiscale analysis able to build a low-cost virtual reality environment with increased degrees of realism for driving simulators and higher user flexibility.

  17. An ensemble based nonlinear orthogonal matching pursuit algorithm for sparse history matching of reservoir models

    KAUST Repository

    Fsheikh, Ahmed H.

    2013-01-01

    A nonlinear orthogonal matching pursuit (NOMP) algorithm for sparse calibration of reservoir models is presented. Sparse calibration is a challenging problem, as the unknowns are both the non-zero components of the solution and their associated weights. NOMP is a greedy algorithm that discovers at each iteration the components of the basis functions most correlated with the residual. The discovered basis (aka support) is augmented across the nonlinear iterations. Once the basis functions are selected from the dictionary, the solution is obtained by applying Tikhonov regularization. The proposed algorithm relies on approximate gradient estimation using an iterative stochastic ensemble method (ISEM). ISEM utilizes an ensemble of directional derivatives to efficiently approximate gradients. In the current study, the search space is parameterized using an overcomplete dictionary of basis functions built using the K-SVD algorithm.
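
    The linear orthogonal matching pursuit that NOMP generalizes is compact enough to sketch; this generic version (without the Tikhonov step or ISEM gradient estimation) is an illustration, not the paper's algorithm.

    ```python
    import numpy as np

    def omp(D, y, n_nonzero):
        """Greedily pick the dictionary column most correlated with the residual,
        then refit all selected coefficients by least squares."""
        residual = y.copy()
        support, coef = [], None
        for _ in range(n_nonzero):
            corr = np.abs(D.T @ residual)
            corr[support] = 0.0                      # don't reselect chosen atoms
            support.append(int(np.argmax(corr)))
            coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
            residual = y - D[:, support] @ coef
        x = np.zeros(D.shape[1])
        x[support] = coef
        return x
    ```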

  18. Efficient Out of Core Sorting Algorithms for the Parallel Disks Model.

    Science.gov (United States)

    Kundeti, Vamsi; Rajasekaran, Sanguthevar

    2011-11-01

    In this paper we present efficient algorithms for sorting on the Parallel Disks Model (PDM). Numerous asymptotically optimal algorithms have been proposed in the literature. However, many of these merge-based algorithms have large underlying constants in their time bounds because they suffer from a lack of read parallelism on the PDM. The irregular consumption of the runs during the merge affects the read parallelism and contributes to the increased sorting time. In this paper we first introduce a novel idea called dirty sequence accumulation that improves the read parallelism. Second, we show analytically that this idea can reduce the number of parallel I/Os required to sort the input close to the lower bound of [Formula: see text]. We experimentally verify our dirty sequence idea against the standard R-way merge and show that it can significantly reduce the number of parallel I/Os needed to sort on the PDM.

  19. Real Time Optima Tracking Using Harvesting Models of the Genetic Algorithm

    Science.gov (United States)

    Baskaran, Subbiah; Noever, D.

    1999-01-01

    Tracking optima in real-time propulsion control, particularly for non-stationary optimization problems, is a challenging task. Several approaches have been put forward for such a study, including the numerical method called the genetic algorithm. In brief, this approach is built upon Darwinian-style competition between numerical alternatives displayed in the form of binary strings, or by analogy to 'pseudogenes'. Breeding of improved solutions is an often-cited parallel to natural selection in evolutionary or soft computing. In this report we present our results of applying a novel harvesting model of the genetic algorithm for tracking optima in propulsion engineering and in real-time control. We specialize the algorithm to mission profiling and planning optimizations, both to select reduced propulsion needs through trajectory planning and to explore time or fuel conservation strategies.

  20. Parametrisation of a Maxwell model for transient tyre forces by means of an extended firefly algorithm

    Directory of Open Access Journals (Sweden)

    Andreas Hackl

    2016-12-01

    Full Text Available Developing functions for advanced driver assistance systems requires very accurate tyre models, especially for the simulation of transient conditions. In the past, parametrisation of a given tyre model based on measurement data showed shortcomings, and the globally optimal solution obtained did not always appear plausible. In this article, an optimisation strategy is presented which is able to find plausible and physically feasible solutions by detecting many local solutions. The firefly algorithm mimics the natural behaviour of fireflies, which use a kind of flashing light to communicate with other members. An algorithm simulating the intensity of the light of a single firefly, diminishing with increasing distance, is implicitly able to detect local solutions on its way to the best solution in the search space. This implicit clustering feature is reinforced by an additional explicit clustering step, in which local solutions are stored and finally processed to obtain a large number of possible solutions. The enhanced firefly algorithm is first applied to the well-known Rastrigin functions and then to the tyre parametrisation problem. It is shown that the firefly algorithm is qualified to find a high number of optimisation solutions, which is required for a plausible parametrisation of the given tyre model.
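
    A minimal sketch of the standard firefly update that the extended algorithm builds on is shown below, exercised on the Rastrigin function mentioned in the abstract; the hyperparameters are illustrative defaults, not the article's tuned values.

    ```python
    import numpy as np

    def firefly_step(pop, fitness, beta0=1.0, gamma=1.0, alpha=0.1, rng=None):
        """One iteration over a population minimizing `fitness`: each firefly moves
        toward every brighter one with distance-damped attraction plus noise."""
        rng = rng or np.random.default_rng()
        new_pop = pop.copy()
        f = np.array([fitness(x) for x in pop])
        for i in range(len(pop)):
            for j in range(len(pop)):
                if f[j] < f[i]:                        # firefly j is "brighter"
                    r2 = np.sum((pop[i] - pop[j]) ** 2)
                    beta = beta0 * np.exp(-gamma * r2) # attractiveness decays with distance
                    new_pop[i] += (beta * (pop[j] - pop[i])
                                   + alpha * rng.normal(size=pop.shape[1]))
        return new_pop

    def rastrigin(x):
        return 10 * x.size + np.sum(x ** 2 - 10 * np.cos(2 * np.pi * x))

    pop = np.random.default_rng(0).uniform(-5.12, 5.12, size=(25, 2))
    for _ in range(100):
        pop = firefly_step(pop, rastrigin)
    ```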

  1. A novel vehicle tracking algorithm based on mean shift and active contour model in complex environment

    Science.gov (United States)

    Cai, Lei; Wang, Lin; Li, Bo; Zhang, Libao; Lv, Wen

    2017-06-01

    Vehicle tracking technology is currently one of the most active research topics in machine vision and an important part of intelligent transportation systems. In theory and in technology, however, it still faces many challenges, including real-time operation and robustness. In video surveillance, targets must be detected in real time and their positions calculated accurately in order to judge their motion. The contents of video sequence images and the target motion are complex, so the objects cannot be expressed by a unified mathematical model. Object tracking is defined as locating the moving target of interest in each frame of a video. Current tracking technology can achieve reliable results in simple environments for targets with easily identified characteristics. In more complex environments, however, it is easy to lose the target because of the mismatch between the target appearance and its dynamic model. Moreover, the target usually has a complex shape, but traditional tracking algorithms typically represent the tracking result by a simple geometric shape such as a rectangle or circle, so they cannot provide accurate information for subsequent higher-level applications. This paper combines a traditional object-tracking technique, the Mean-Shift algorithm, with an image segmentation technique, the Active-Contour model, to obtain object outlines during tracking and to handle topology changes automatically. Meanwhile, the outline information is used to aid the tracking algorithm and improve it.

  2. THE RECURRENT ALGORITHM FOR INTERFEROMETRIC SIGNALS PROCESSING BASED ON MULTI-CLOUD PREDICTION MODEL

    Directory of Open Access Journals (Sweden)

    I. P. Gurov

    2014-07-01

    Full Text Available The paper deals with a modification of the recurrent processing algorithm for discrete sequences of interferometric signal samples. The algorithm predicts the reference signal from a set ("cloud") of signal parameter vector values generated by the Monte Carlo method, compares the prediction with the measured signal value, and uses the residual to refine the signal parameter estimates at each discretization step. The proposed modified algorithm uses the concept of a multi-cloud prediction model: a set of normally distributed clouds is created, with expectation values selected on the basis of the criterion of minimum residual between predicted and observed values. Experimental testing of the proposed method applied to estimation of the fringe initial phase in phase-shifting interferometry has been conducted. The variance of the signal reconstructed from the estimated initial phase does not exceed 2% of the maximum signal value. It has been shown that applying the proposed algorithm makes it possible to avoid the 2π-ambiguity and ensure sustainable recovery of interference fringe phases of a complicated type without a priori information about the interference fringe phase distribution. Applied to the estimation of interferometric signal parameters, the proposed algorithm improves filter stability against random noise and relaxes the accuracy requirements on a priori filter parameter settings compared with the conventional (single-cloud) implementation of the sequential Monte Carlo method.
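    The predict/compare/refine cycle described here is the sequential Monte Carlo pattern. As a rough illustration, the following sketch runs a single-cloud bootstrap particle filter that tracks a drifting phase from noisy cosine observations; all signal constants are invented, and the paper's multi-cloud extension is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic interferometric-like signal: s_k = A*cos(phi_k) + noise, with a
# slowly drifting phase phi_k. Illustrative values, not from the paper.
K, A, sigma = 200, 1.0, 0.1
true_phi = np.cumsum(rng.normal(0.1, 0.01, K))
obs = A * np.cos(true_phi) + rng.normal(0, sigma, K)

# Single "cloud" of phase hypotheses (bootstrap particle filter).
N = 500
particles = rng.uniform(-0.5, 0.5, N)
for k in range(K):
    particles += rng.normal(0.1, 0.05, N)            # predict the phase step
    w = np.exp(-0.5 * ((obs[k] - A * np.cos(particles)) / sigma) ** 2)
    w /= w.sum()                                      # residual-based weights
    estimate = np.sum(w * particles)                  # posterior-mean estimate
    particles = particles[rng.choice(N, N, p=w)]      # resample the cloud

print("final phase error:", abs(estimate - true_phi[-1]))
```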

  3. Efficient Actor-Critic Algorithm with Hierarchical Model Learning and Planning

    Science.gov (United States)

    Fu, QiMing

    2016-01-01

    To improve the convergence rate and the sample efficiency, two efficient learning methods, AC-HMLP and RAC-HMLP (AC-HMLP with ℓ2-regularization), are proposed by combining the actor-critic algorithm with hierarchical model learning and planning. The hierarchical models, consisting of a local and a global model, which are learned at the same time as the value function and the policy, are approximated by local linear regression (LLR) and linear function approximation (LFA), respectively. Both the local model and the global model are applied to generate samples for planning; the former is used only if the state-prediction error does not surpass the threshold at each time step, while the latter is utilized at the end of each episode. The purpose of taking both models is to improve the sample efficiency and accelerate the convergence rate of the whole algorithm through fully utilizing local and global information. Experimentally, AC-HMLP and RAC-HMLP are compared with three representative algorithms on two Reinforcement Learning (RL) benchmark problems. The results demonstrate that they perform best in terms of convergence rate and sample efficiency. PMID:27795704

  4. Efficient Actor-Critic Algorithm with Hierarchical Model Learning and Planning

    Directory of Open Access Journals (Sweden)

    Shan Zhong

    2016-01-01

    Full Text Available To improve the convergence rate and the sample efficiency, two efficient learning methods, AC-HMLP and RAC-HMLP (AC-HMLP with ℓ2-regularization), are proposed by combining the actor-critic algorithm with hierarchical model learning and planning. The hierarchical models, consisting of a local and a global model, which are learned at the same time as the value function and the policy, are approximated by local linear regression (LLR) and linear function approximation (LFA), respectively. Both the local model and the global model are applied to generate samples for planning; the former is used only if the state-prediction error does not surpass the threshold at each time step, while the latter is utilized at the end of each episode. The purpose of taking both models is to improve the sample efficiency and accelerate the convergence rate of the whole algorithm through fully utilizing local and global information. Experimentally, AC-HMLP and RAC-HMLP are compared with three representative algorithms on two Reinforcement Learning (RL) benchmark problems. The results demonstrate that they perform best in terms of convergence rate and sample efficiency.
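    A much simpler relative of this scheme, tabular Dyna-Q, shows the same core idea of reusing a learned model to generate extra planning samples between real interactions. The sketch below is a generic illustration, not the AC-HMLP method itself (no LLR/LFA approximators and no hierarchical models).

```python
import random

# Tabular Dyna-Q on a toy 1-D corridor: states 0..9, reward 1 at the right
# end. The learned model generates n_plan extra planning updates per real
# step, which is the sample-efficiency idea the abstract describes.
N_STATES, ACTIONS = 10, (-1, +1)
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
model = {}                                   # (s, a) -> (reward, next state)
alpha, gamma, eps, n_plan = 0.1, 0.95, 0.1, 20

def env_step(s, a):
    s2 = min(max(s + a, 0), N_STATES - 1)
    return (1.0 if s2 == N_STATES - 1 else 0.0), s2

def greedy(s):
    qs = [Q[(s, a)] for a in ACTIONS]
    if qs[0] == qs[1]:                       # break ties randomly
        return random.choice(ACTIONS)
    return ACTIONS[qs.index(max(qs))]

random.seed(0)
for episode in range(50):
    s = 0
    while s != N_STATES - 1:
        a = random.choice(ACTIONS) if random.random() < eps else greedy(s)
        r, s2 = env_step(s, a)
        target = r + gamma * max(Q[(s2, b)] for b in ACTIONS)
        Q[(s, a)] += alpha * (target - Q[(s, a)])     # direct RL update
        model[(s, a)] = (r, s2)                        # learn the model
        for _ in range(n_plan):                        # planning from model
            ps, pa = random.choice(list(model))
            pr, ps2 = model[(ps, pa)]
            ptarget = pr + gamma * max(Q[(ps2, b)] for b in ACTIONS)
            Q[(ps, pa)] += alpha * (ptarget - Q[(ps, pa)])
        s = s2

print("greedy action in state 0 after training:", greedy(0))
```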

  5. Research on the time optimization model algorithm of Customer Collaborative Product Innovation

    Directory of Open Access Journals (Sweden)

    Guodong Yu

    2014-01-01

    Full Text Available Purpose: To improve the efficiency of information sharing among the innovation agents of customer collaborative product innovation and to shorten the product design cycle, an improved genetic annealing algorithm for time optimization is presented. Design/methodology/approach: Based on an analysis of the relationships between the design tasks, the paper models the tasks as a job-shop scheduling problem and proposes an improved genetic algorithm based on niche technology to solve it, yielding a better collaborative innovation design schedule and improved efficiency. Finally, through the collaborative innovation design of a certain type of mobile phone, the proposed model and method are verified to be correct and effective. Findings and Originality/value: An algorithm with clear advantages in search capability and optimization efficiency for customer collaborative product innovation is proposed. To address the defects of the traditional genetic annealing algorithm, a niche genetic annealing algorithm is presented. Firstly, it avoids the loss of effective genes in the early search stage and guarantees the diversity of solutions. Secondly, adaptive two-point crossover and swap mutation strategies are introduced to overcome the long solving process and premature convergence to local minima caused by fixed crossover and mutation probabilities. Thirdly, an elitism strategy is adopted so that loss of the optimal solution is effectively avoided and evolution is accelerated. Originality/value: The improved genetic simulated annealing algorithm overcomes defects such as the loss of effective genes early in the search. It helps shorten the calculation process and improve the accuracy of the convergence value. Moreover, it speeds up evolution and ensures the reliability of the optimal solution. Meanwhile, it has clear advantages in efficiency of

  6. Integrative multicellular biological modeling: a case study of 3D epidermal development using GPU algorithms

    Directory of Open Access Journals (Sweden)

    Christley Scott

    2010-08-01

    Full Text Available Abstract Background Simulation of sophisticated biological models requires considerable computational power. These models typically integrate together numerous biological phenomena such as spatially-explicit heterogeneous cells, cell-cell interactions, cell-environment interactions and intracellular gene networks. The recent advent of programming for graphical processing units (GPU opens up the possibility of developing more integrative, detailed and predictive biological models while at the same time decreasing the computational cost to simulate those models. Results We construct a 3D model of epidermal development and provide a set of GPU algorithms that executes significantly faster than sequential central processing unit (CPU code. We provide a parallel implementation of the subcellular element method for individual cells residing in a lattice-free spatial environment. Each cell in our epidermal model includes an internal gene network, which integrates cellular interaction of Notch signaling together with environmental interaction of basement membrane adhesion, to specify cellular state and behaviors such as growth and division. We take a pedagogical approach to describing how modeling methods are efficiently implemented on the GPU including memory layout of data structures and functional decomposition. We discuss various programmatic issues and provide a set of design guidelines for GPU programming that are instructive to avoid common pitfalls as well as to extract performance from the GPU architecture. Conclusions We demonstrate that GPU algorithms represent a significant technological advance for the simulation of complex biological models. We further demonstrate with our epidermal model that the integration of multiple complex modeling methods for heterogeneous multicellular biological processes is both feasible and computationally tractable using this new technology. We hope that the provided algorithms and source code will be a

  7. A combined model based on CEEMDAN and modified flower pollination algorithm for wind speed forecasting

    International Nuclear Information System (INIS)

    Zhang, Wenyu; Qu, Zongxi; Zhang, Kequan; Mao, Wenqian; Ma, Yining; Fan, Xu

    2017-01-01

    Highlights: • A CEEMDAN-CLSFPA combined model is proposed for short-term wind speed forecasting. • The CEEMDAN technique is used to decompose the original wind speed series. • A modified optimization algorithm, CLSFPA, is proposed to optimize the weights of the combined model. • The no negative constraint theory is applied to the combined model. • Robustness of the proposed model is validated on data sampled from four different wind farms. - Abstract: Wind energy, which is stochastic and intermittent by nature, has a significant influence on power system operation, power grid security and market economics. Precise and reliable wind speed prediction is vital for wind farm planning and operational planning for power grids. To improve wind speed forecasting accuracy, a large number of forecasting approaches have been proposed; however, these models typically do not account for the importance of data preprocessing and are limited by the use of individual models. In this paper, a novel combined model – combining complete ensemble empirical mode decomposition with adaptive noise (CEEMDAN), a flower pollination algorithm with chaotic local search (CLSFPA), five neural networks and no negative constraint theory (NNCT) – is proposed for short-term wind speed forecasting. First, the recent CEEMDAN technique is employed to divide the original wind speed data into a finite set of IMF components, and then a combined model, based on NNCT, is proposed for forecasting each decomposed signal. To improve the forecasting capacity of the combined model, a modified flower pollination algorithm (FPA) with chaotic local search (CLS) is proposed and employed to determine the optimal weight coefficients of the combined model, and the final prediction values are obtained by reconstructing the refined series. To evaluate the forecasting ability of the proposed combined model, 15-min wind speed data from four wind farms in the eastern coastal areas of China are used. The experimental results of

  8. Financial Time Series Modelling with Hybrid Model Based on Customized RBF Neural Network Combined With Genetic Algorithm

    Directory of Open Access Journals (Sweden)

    Lukas Falat

    2014-01-01

    Full Text Available In this paper, the authors apply a feed-forward artificial neural network (ANN) of RBF type to modelling and forecasting the future value of the USD/CAD time series. The authors test a customized version of the RBF network and add an evolutionary approach to it. They also combine the standard algorithm for adapting weights in the neural network with an unsupervised clustering algorithm called K-means. Finally, the authors suggest a new hybrid model as a combination of a standard ANN and a moving average for error modeling, which is used to enhance the outputs of the network using the error part of the original RBF. Using high-frequency data, they examine the ability to forecast exchange rate values for a horizon of one day. To determine the forecasting efficiency, the authors perform a comparative out-of-sample analysis of the suggested hybrid model against statistical models and the standard neural network.
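    The K-means-plus-RBF core of such a hybrid can be sketched compactly: cluster the inputs to place the radial basis centers, then solve for the output weights by linear least squares. The GA weight evolution and the moving-average error model from the paper are omitted, and the toy data below stand in for real USD/CAD quotes.

```python
import numpy as np

def fit_rbf(X, y, n_centers=10, seed=0):
    """RBF network: K-means centers + linear least-squares output weights."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), n_centers, replace=False)].copy()
    for _ in range(20):                        # plain K-means iterations
        labels = np.argmin(((X[:, None, :] - centers[None]) ** 2).sum(-1), axis=1)
        for c in range(n_centers):
            if np.any(labels == c):
                centers[c] = X[labels == c].mean(axis=0)
    width = np.mean(np.linalg.norm(X[:, None] - centers[None], axis=-1))
    Phi = np.exp(-np.linalg.norm(X[:, None] - centers[None], axis=-1) ** 2
                 / (2 * width ** 2))
    w, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    return centers, width, w

def predict(X, centers, width, w):
    Phi = np.exp(-np.linalg.norm(X[:, None] - centers[None], axis=-1) ** 2
                 / (2 * width ** 2))
    return Phi @ w

# Toy usage on a synthetic curve (not real exchange-rate data).
X = np.linspace(0, 1, 200)[:, None]
y = np.sin(6 * X[:, 0]) + 0.05 * np.random.default_rng(1).normal(size=200)
c, s, w = fit_rbf(X, y)
print("train RMSE:", np.sqrt(np.mean((predict(X, c, s, w) - y) ** 2)))
```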

  9. The production-distribution problem with order acceptance and package delivery: models and algorithm

    Directory of Open Access Journals (Sweden)

    Khalili Majid

    2016-01-01

    Full Text Available Production planning and distribution are among the most important decisions in the supply chain. Classically, in this problem, it is assumed that all orders have to be produced and delivered separately; in practice, however, an order may be rejected if the cost it brings to the supply chain exceeds its revenue. Moreover, orders can be delivered in a batch to reduce the related costs. This paper considers the production planning and distribution problem with order acceptance and package delivery to maximize profit. First, a new mathematical model based on mixed integer linear programming is developed. Using commercial optimization software, the model can optimally solve small and even medium-sized instances. For large instances, a solution method based on imperialist competitive algorithms is also proposed. The proposed model and algorithm are evaluated through numerical experiments.

  10. Optimizing models for production and inventory control using a genetic algorithm

    Directory of Open Access Journals (Sweden)

    Dragan S. Pamučar

    2012-01-01

    Full Text Available In order to make the Economic Production Quantity (EPQ) model more applicable to real-world production and inventory control problems, in this paper we expand the model by assuming that imperfect items of different product types, such as reworks, may be produced. In addition, we may have more than one product and supplier, along with warehouse space and budget limitations. We show that the model of the problem is a constrained non-linear integer program and propose a genetic algorithm to solve it. Moreover, a design of experiments is employed to calibrate the parameters of the algorithm for different problem sizes. In the end, a numerical example is presented to demonstrate the application of the proposed methodology.

  11. Thin-Sheet Inversion Modeling of Geomagnetic Deep Sounding Data Using MCMC Algorithm

    Directory of Open Access Journals (Sweden)

    Hendra Grandis

    2013-01-01

    Full Text Available The geomagnetic deep sounding (GDS) method is one of the electromagnetic (EM) methods in geophysics that allows the estimation of the subsurface electrical conductivity distribution. This paper presents inversion modeling of GDS data employing a Markov Chain Monte Carlo (MCMC) algorithm to evaluate the marginal posterior probability of the model parameters. We used a thin-sheet model to represent quasi-3D conductivity variations in the heterogeneous subsurface. The algorithm was applied to invert field GDS data from a zone spanning from the eastern margin of the Bohemian Massif to the West Carpathians in Europe. Conductivity anomalies obtained from this study confirm the well-known large-scale tectonic setting of the area.
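    The MCMC machinery itself is generic: a random-walk Metropolis-Hastings loop that samples the posterior of the model parameters. The sketch below uses an invented three-observation forward model in place of the (much heavier) thin-sheet EM forward calculation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy forward model d = G(m) with two parameters; purely illustrative.
def forward(m):
    return np.array([m[0] + m[1], m[0] - 2 * m[1], 0.5 * m[0] * m[1]])

m_true = np.array([1.5, 0.8])
data = forward(m_true) + rng.normal(0, 0.05, 3)

def log_post(m):
    if np.any(m < 0) or np.any(m > 5):       # uniform prior box [0, 5]^2
        return -np.inf
    r = data - forward(m)
    return -0.5 * np.sum((r / 0.05) ** 2)    # Gaussian data misfit

m = np.array([2.5, 2.5])
lp = log_post(m)
samples = []
for it in range(20000):
    prop = m + rng.normal(0, 0.1, 2)         # random-walk proposal
    lp_prop = log_post(prop)
    if np.log(rng.random()) < lp_prop - lp:  # Metropolis acceptance test
        m, lp = prop, lp_prop
    if it > 5000:                            # discard burn-in
        samples.append(m.copy())

samples = np.asarray(samples)
print("posterior mean:", samples.mean(axis=0), "true:", m_true)
```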

  12. Nested sampling algorithm for subsurface flow model selection, uncertainty quantification, and nonlinear calibration

    KAUST Repository

    Elsheikh, A. H.

    2013-12-01

    Calibration of subsurface flow models is an essential step for managing groundwater aquifers, designing contaminant remediation plans, and maximizing recovery from hydrocarbon reservoirs. We investigate an efficient sampling algorithm known as nested sampling (NS), which can simultaneously sample the posterior distribution for uncertainty quantification and estimate the Bayesian evidence for model selection. Model selection statistics, such as the Bayesian evidence, are needed to choose or assign different weights to models of different levels of complexity. In this work, we report the first successful application of nested sampling to the calibration of several nonlinear subsurface flow problems. The Bayesian evidence estimated by the NS algorithm is used to weight different parameterizations of the subsurface flow models (prior model selection). The results of the numerical evaluation implicitly enforced Occam's razor, where simpler models with fewer parameters are favored over complex models. The proper level of model complexity was automatically determined based on the information content of the calibration data and the data mismatch of the calibrated model.
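    A bare-bones nested sampling loop can be written directly from its definition: keep a set of live points drawn from the prior, repeatedly replace the worst-likelihood point with a new prior draw above its likelihood, and accumulate the evidence from the shrinking prior volume. The sketch below uses naive rejection sampling for the constrained draw (real implementations use MCMC or slice moves) and a toy Gaussian likelihood with a known analytic evidence.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy problem: 2-D unit Gaussian likelihood, uniform prior on [-5, 5]^2.
# Evidence Z = integral(L * prior) ~= 1/100, so log Z ~= -4.605.
def loglike(theta):
    return -0.5 * np.sum(theta ** 2) - np.log(2 * np.pi)

def sample_prior(n):
    return rng.uniform(-5, 5, (n, 2))

n_live, n_iter = 200, 1500
live = sample_prior(n_live)
live_ll = np.array([loglike(t) for t in live])
log_z = -np.inf
log_shell = np.log(1 - np.exp(-1.0 / n_live))  # log prior-volume shell width

for i in range(n_iter):
    worst = np.argmin(live_ll)
    ll_star = live_ll[worst]
    # Z += L_worst * (X_i - X_{i+1}), with X_i ~= exp(-i / n_live).
    log_z = np.logaddexp(log_z, ll_star + log_shell - i / n_live)
    while True:                                # rejection-sample the prior
        cand = sample_prior(1)[0]              # subject to L > L_worst
        if loglike(cand) > ll_star:
            break
    live[worst], live_ll[worst] = cand, loglike(cand)

# Remaining live-point contribution: X_final * mean live likelihood.
log_z = np.logaddexp(log_z, np.logaddexp.reduce(live_ll)
                     - np.log(n_live) - n_iter / n_live)
print("log-evidence estimate:", round(float(log_z), 3), "analytic: -4.605")
```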

  13. Implementation and testing of a simple data assimilation algorithm in the regional air pollution forecast model, DEOM

    DEFF Research Database (Denmark)

    Frydendall, Jan; Brandt, J.; Christensen, J. H.

    2009-01-01

    A simple data assimilation algorithm based on statistical interpolation has been developed and coupled to a long-range chemistry transport model, the Danish Eulerian Operational Model (DEOM), applied for air pollution forecasting at the National Environmental Research Institute (NERI), Denmark. […] In this paper, the algorithm and the results from experiments designed to find the optimal setup of the algorithm are described. The algorithm has been developed and optimized via eight different experiments, where the results from different model setups have been tested against measurements from the EMEP […] configuration of the data assimilation algorithm were found. The data assimilation algorithm will in the future be used in the operational THOR integrated air pollution forecast system, which includes the DEOM.

  14. A new model and simple algorithms for multi-label mumford-shah problems

    KAUST Repository

    Hong, Byungwoo

    2013-06-01

    In this work, we address the multi-label Mumford-Shah problem, i.e., the problem of jointly estimating a partitioning of the image domain and functions defined within the regions of the partition. We create algorithms that are efficient, robust to undesirable local minima, and easy to implement. Our algorithms are formulated by slightly modifying the underlying statistical model from which the multi-label Mumford-Shah functional is derived. The advantage of this statistical model is that its underlying variables, the labels and the functions, are less coupled than in the original formulation, and the labels can be computed from the functions with more global updates. The resulting algorithms can be tuned to the desired level of locality of the solution: from fully global updates to more local updates. We demonstrate our algorithm on two applications: joint multi-label segmentation and denoising, and joint multi-label motion segmentation and flow estimation. We compare to the state-of-the-art in multi-label Mumford-Shah problems and show that we achieve more promising results. © 2013 IEEE.

  15. A parallel domain decomposition algorithm for coastal ocean circulation models based on integer linear programming

    Science.gov (United States)

    Jordi, Antoni; Georgas, Nickitas; Blumberg, Alan

    2017-05-01

    This paper presents a new parallel domain decomposition algorithm based on integer linear programming (ILP), a mathematical optimization method. To minimize the computation time of coastal ocean circulation models, the ILP decomposition algorithm divides the global domain into local domains with balanced workloads according to the number of processors and avoids computations over as many land grid cells as possible. In addition, it maintains the use of logically rectangular local domains and achieves exactly the same results as traditional domain decomposition algorithms (such as Cartesian decomposition). However, the ILP decomposition algorithm may not converge to an exact solution for relatively large domains. To overcome this problem, we developed two ILP decomposition formulations. The first one (the complete formulation) has no additional restrictions, although it is impractical for large global domains. The second one (the feasible formulation) imposes local domains with the same dimensions and looks for the feasibility of such a decomposition, which allows much larger global domains. The parallel performance of both ILP formulations is compared to a base Cartesian decomposition by simulating two cases with the newly created parallel version of the Stevens Institute of Technology's Estuarine and Coastal Ocean Model (sECOM). Simulations with the ILP formulations always run faster than those with the base decomposition, and the complete formulation is better than the feasible one when it is applicable. In addition, parallel efficiency with the ILP decomposition may be greater than one.

  16. An Algorithm for Modified Times Series Analysis Method for Modeling and Prognosis of the River Water Quality

    Directory of Open Access Journals (Sweden)

    Petrov M.

    2007-12-01

    Full Text Available An algorithm and programs for modeling, analysis, and prognosis of river water quality have been developed, based on a modified method of time series analysis (TSA). The algorithm and programs are used for modeling and prognosis of the water quality of Bulgarian river ecosystems.

  17. Risk adjustment model of credit life insurance using a genetic algorithm

    Science.gov (United States)

    Saputra, A.; Sukono; Rusyaman, E.

    2018-03-01

    In managing the risk of credit life insurance, an insurance company should understand the character of the risks in order to predict future losses. Risk characteristics can be learned from a claim distribution model. There are two standard approaches to designing the distribution model of claims over the insurance period, i.e., the collective risk model and the individual risk model. In the collective risk model, the claim arising when a risk occurs is called an individual claim, and the accumulation of individual claims during an insurance period is called the aggregate claim. The aggregate claim model may be formed from the sizes and the number of the individual claims. The questions are how insurance risk can be measured with the premium model approach, and whether this approach is appropriate for estimating potential future losses. In order to solve this problem, a genetic algorithm with roulette wheel selection is used.
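    Roulette wheel selection itself is a few lines: each individual occupies a slice of a wheel proportional to its (non-negative) fitness, and a uniform spin picks the survivor. A minimal sketch:

```python
import random

def roulette_wheel(population, fitnesses):
    """Select one individual with probability proportional to fitness.

    Fitness values are assumed non-negative.
    """
    total = sum(fitnesses)
    pick = random.uniform(0, total)
    running = 0.0
    for individual, fit in zip(population, fitnesses):
        running += fit
        if running >= pick:
            return individual
    return population[-1]   # guard against floating-point round-off

# Example: fitter individuals are drawn more often (~10%, ~30%, ~60%).
pop, fit = ["a", "b", "c"], [1.0, 3.0, 6.0]
draws = [roulette_wheel(pop, fit) for _ in range(10000)]
print({x: draws.count(x) / len(draws) for x in pop})
```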

  18. Efficient parameterization of cardiac action potential models using a genetic algorithm.

    Science.gov (United States)

    Cairns, Darby I; Fenton, Flavio H; Cherry, E M

    2017-09-01

    Finding appropriate values for parameters in mathematical models of cardiac cells is a challenging task. Here, we show that it is possible to obtain good parameterizations in as little as 30-40 s when as many as 27 parameters are fit simultaneously using a genetic algorithm and two flexible phenomenological models of cardiac action potentials. We demonstrate how our implementation works by considering cases of "model recovery" in which we attempt to find parameter values that match model-derived action potential data from several cycle lengths. We assess performance by evaluating the parameter values obtained, action potentials at fit and non-fit cycle lengths, and bifurcation plots for fidelity to the truth as well as consistency across different runs of the algorithm. We also fit the models to action potentials recorded experimentally using microelectrodes and analyze performance. We find that our implementation can efficiently obtain model parameterizations that are in good agreement with the dynamics exhibited by the underlying systems that are included in the fitting process. However, the parameter values obtained in good parameterizations can exhibit a significant amount of variability, raising issues of parameter identifiability and sensitivity. Along similar lines, we also find that the two models differ in terms of the ease of obtaining parameterizations that reproduce model dynamics accurately, most likely reflecting different levels of parameter identifiability for the two models.

  19. Efficient parameterization of cardiac action potential models using a genetic algorithm

    Science.gov (United States)

    Cairns, Darby I.; Fenton, Flavio H.; Cherry, E. M.

    2017-09-01

    Finding appropriate values for parameters in mathematical models of cardiac cells is a challenging task. Here, we show that it is possible to obtain good parameterizations in as little as 30-40 s when as many as 27 parameters are fit simultaneously using a genetic algorithm and two flexible phenomenological models of cardiac action potentials. We demonstrate how our implementation works by considering cases of "model recovery" in which we attempt to find parameter values that match model-derived action potential data from several cycle lengths. We assess performance by evaluating the parameter values obtained, action potentials at fit and non-fit cycle lengths, and bifurcation plots for fidelity to the truth as well as consistency across different runs of the algorithm. We also fit the models to action potentials recorded experimentally using microelectrodes and analyze performance. We find that our implementation can efficiently obtain model parameterizations that are in good agreement with the dynamics exhibited by the underlying systems that are included in the fitting process. However, the parameter values obtained in good parameterizations can exhibit a significant amount of variability, raising issues of parameter identifiability and sensitivity. Along similar lines, we also find that the two models differ in terms of the ease of obtaining parameterizations that reproduce model dynamics accurately, most likely reflecting different levels of parameter identifiability for the two models.

  20. An Evaluation of Solution Algorithms and Numerical Approximation Methods for Modeling an Ion Exchange Process.

    Science.gov (United States)

    Bu, Sunyoung; Huang, Jingfang; Boyer, Treavor H; Miller, Cass T

    2010-07-01

    The focus of this work is on the modeling of an ion exchange process that occurs in drinking water treatment applications. The model formulation consists of a two-scale model in which a set of microscale diffusion equations representing ion exchange resin particles that vary in size and age are coupled through a boundary condition with a macroscopic ordinary differential equation (ODE), which represents the concentration of a species in a well-mixed reactor. We introduce a new age-averaged model (AAM) that averages all ion exchange particle ages for a given size particle to avoid the expensive Monte-Carlo simulation associated with previous modeling applications. We discuss two different numerical schemes to approximate both the original Monte Carlo algorithm and the new AAM for this two-scale problem. The first scheme is based on the finite element formulation in space coupled with an existing backward-difference-formula-based ODE solver in time. The second scheme uses an integral equation based Krylov deferred correction (KDC) method and a fast elliptic solver (FES) for the resulting elliptic equations. Numerical results are presented to validate the new AAM algorithm, which is also shown to be more computationally efficient than the original Monte Carlo algorithm. We also demonstrate that the higher order KDC scheme is more efficient than the traditional finite element solution approach and this advantage becomes increasingly important as the desired accuracy of the solution increases. We also discuss issues of smoothness, which affect the efficiency of the KDC-FES approach, and outline additional algorithmic changes that would further improve the efficiency of these developing methods for a wide range of applications.

  1. Modelling of Hydrothermal Unit Commitment Coordination Using Efficient Metaheuristic Algorithm: A Hybridized Approach

    Directory of Open Access Journals (Sweden)

    Suman Sutradhar

    2016-01-01

    Full Text Available In this paper, a novel hybridization of two efficient metaheuristic algorithms is proposed for energy system analysis and modelling of a hydrothermal power system in both single- and multiobjective environments. The scheduling of hydro and thermal power is modelled descriptively, including the handling of various practical nonlinear constraints. The main goal of the proposed modelling is to minimize the total production cost (a highly nonlinear and nonconvex problem) and emission while satisfying the hydro and thermal unit commitment limitations involved. The cascaded hydro reservoirs of the hydro subsystem and the intertemporal constraints on thermal units, along with the nonlinear, nonconvex, mixed-integer, mixed-binary objective function, make the search space highly complex. To solve such a complicated system, a hybridization of Gray Wolf Optimization and the Artificial Bee Colony algorithm, h-ABC/GWO, is used for better exploration and exploitation in the multidimensional search space. Two different test systems are used for modelling and analysis. Experimental results demonstrate the superior performance of the proposed algorithm compared to other recently reported ones in terms of convergence and solution quality.

  2. Sampling algorithms for validation of supervised learning models for Ising-like systems

    Science.gov (United States)

    Portman, Nataliya; Tamblyn, Isaac

    2017-12-01

    In this paper, we build and explore supervised learning models of ferromagnetic system behavior, using Monte-Carlo sampling of the spin configuration space generated by the 2D Ising model. Given the enormous size of the space of all possible Ising model realizations, the question arises as to how to choose a reasonable number of samples that will form physically meaningful and non-intersecting training and testing datasets. Here, we propose a sampling technique called "ID-MH" that uses the Metropolis-Hastings algorithm to create a Markov process across energy levels within a predefined configuration subspace. We show that application of this method retains phase transitions in both training and testing datasets and serves the purpose of validating a machine learning algorithm. For larger lattice dimensions, ID-MH is not feasible, as it requires knowledge of the complete configuration space. As such, we develop a new "block-ID" sampling strategy: it decomposes the given structure into square blocks with lattice dimension N ≤ 5 and uses ID-MH sampling of candidate blocks. A further comparison of the performance of commonly used machine learning methods such as random forests, decision trees, k-nearest neighbors and artificial neural networks shows that the PCA-based decision tree regressor is the most accurate predictor of magnetizations of the Ising model. For energies, however, the accuracy of prediction is not satisfactory, highlighting the need to consider more algorithmically complex methods (e.g., deep learning).
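    For reference, the generic Metropolis-Hastings sampler of the 2-D Ising configuration space, which the ID-MH scheme builds on, looks like the following sketch (this is the plain sampler, not the paper's energy-level steering or block decomposition):

```python
import numpy as np

rng = np.random.default_rng(0)

def metropolis_sweep(spins, beta):
    """One Metropolis-Hastings sweep of the 2-D Ising model."""
    n = spins.shape[0]
    for _ in range(n * n):
        i, j = rng.integers(n, size=2)
        # Energy change from flipping spin (i, j), periodic boundaries.
        nb = (spins[(i + 1) % n, j] + spins[(i - 1) % n, j]
              + spins[i, (j + 1) % n] + spins[i, (j - 1) % n])
        dE = 2 * spins[i, j] * nb
        if dE <= 0 or rng.random() < np.exp(-beta * dE):
            spins[i, j] *= -1
    return spins

n, beta = 16, 0.5   # beta above the critical ~0.4407: ordered phase
spins = rng.choice([-1, 1], size=(n, n))
for _ in range(500):
    metropolis_sweep(spins, beta)
print("magnetization per spin:", abs(spins.mean()))
```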

  3. Development of a 3D modeling algorithm for tunnel deformation monitoring based on terrestrial laser scanning

    Directory of Open Access Journals (Sweden)

    Xiongyao Xie

    2017-03-01

    Full Text Available Deformation monitoring is vital for tunnel engineering. Traditional monitoring techniques measure only a few data points, which is insufficient for understanding the deformation of the entire tunnel. Terrestrial Laser Scanning (TLS) is a newly developed technique that can collect thousands of data points in a few minutes, with promising applications to tunnel deformation monitoring. The raw point cloud collected from TLS cannot display tunnel deformation; therefore, a new 3D modeling algorithm was developed for this purpose. The 3D modeling algorithm includes modules for preprocessing the point cloud, extracting the tunnel axis, performing coordinate transformations, performing noise reduction and generating the 3D model. Measurement results from TLS were compared to the results of a total station and of numerical simulation, confirming the reliability of TLS for tunnel deformation monitoring. Finally, a case study of the Shanghai West Changjiang Road tunnel is introduced, where TLS was applied to measure shield tunnel deformation over multiple sections. Settlement, segment dislocation and cross-section convergence were measured and visualized using the proposed 3D modeling algorithm.

  4. The emission factor of volatile isoprenoids: caveats, model algorithms, response shapes and scaling

    Science.gov (United States)

    Niinemets, Ü.; Monson, R. K.; Arneth, A.; Ciccioli, P.; Kesselmeier, J.; Kuhn, U.; Noe, S. M.; Peñuelas, J.; Staudt, M.

    2010-02-01

    In models of plant volatile isoprenoid emissions, the instantaneous compound emission rate typically scales with the plant's emission capacity under specified environmental conditions, also defined as the emission factor, ES. In the most widely employed plant isoprenoid emission models, the algorithms developed by Guenther and colleagues (1991, 1993), instantaneous variation of the steady-state emission rate is described as the product of ES and light and temperature response functions. When these models are employed in the atmospheric chemistry modeling community, species-specific ES values and parameter values defining the instantaneous response curves are typically considered constant. In the current review, we argue that ES is largely a modeling concept, importantly depending on our understanding of which environmental factors affect isoprenoid emissions, and consequently in need of standardization during ES determination. In particular, there is now increasing consensus that variations in atmospheric CO2 concentration, in addition to variations in light and temperature, need to be included in the emission models. Furthermore, we demonstrate that for less volatile isoprenoids, mono- and sesquiterpenes, the emissions are often jointly controlled by compound synthesis and volatility, and because of these combined biochemical and physico-chemical properties, specification of ES as a constant value is incapable of describing instantaneous emissions under the sole assumptions of fluctuating light and temperature, as used in the standard algorithms. The definition of ES also varies depending on the degree of aggregation of ES values in different parameterization schemes (leaf- vs. canopy- or region-level, species vs. plant functional type level), and various aggregated ES schemes are not compatible across different integration models. The summarized information collectively emphasizes the need to update model algorithms by including missing environmental and

  5. The leaf-level emission factor of volatile isoprenoids: caveats, model algorithms, response shapes and scaling

    Science.gov (United States)

    Niinemets, Ü.; Monson, R. K.; Arneth, A.; Ciccioli, P.; Kesselmeier, J.; Kuhn, U.; Noe, S. M.; Peñuelas, J.; Staudt, M.

    2010-06-01

    In models of plant volatile isoprenoid emissions, the instantaneous compound emission rate typically scales with the plant's emission potential under specified environmental conditions, also called the emission factor, ES. In the most widely employed plant isoprenoid emission models, the algorithms developed by Guenther and colleagues (1991, 1993), instantaneous variation of the steady-state emission rate is described as the product of ES and light and temperature response functions. When these models are employed in the atmospheric chemistry modeling community, species-specific ES values and parameter values defining the instantaneous response curves are often taken as initially defined. In the current review, we argue that ES, as a characteristic used in the models, importantly depends on our understanding of which environmental factors affect isoprenoid emissions, and consequently needs standardization during experimental ES determinations. In particular, there is now increasing consensus that, in addition to variations in light and temperature, alterations in atmospheric and/or within-leaf CO2 concentrations may need to be included in the emission models. Furthermore, we demonstrate that for less volatile isoprenoids, mono- and sesquiterpenes, the emissions are often jointly controlled by compound synthesis and volatility. Because of these combined biochemical and physico-chemical drivers, specification of ES as a constant value is incapable of describing instantaneous emissions under the sole assumptions of fluctuating light and temperature as used in the standard algorithms. The definition of ES also varies depending on the degree of aggregation of ES values in different parameterization schemes (leaf- vs. canopy- or region-scale, species vs. plant functional type levels), and various aggregated ES schemes are not compatible across different integration models. The summarized information collectively emphasizes the need to update model algorithms by including

  6. Research on the Compression Algorithm of the Infrared Thermal Image Sequence Based on Differential Evolution and Double Exponential Decay Model

    Directory of Open Access Journals (Sweden)

    Jin-Yu Zhang

    2014-01-01

    Full Text Available This paper proposes a new thermal wave image sequence compression algorithm that combines a double exponential decay fitting model with the differential evolution algorithm. The study benchmarked the fitting compression results and precision of the proposed method against those of traditional methods via experiment; it investigated the fitting compression performance for long time series with the improved model, and validated the algorithm on practical thermal image sequence compression and reconstruction. The results show that the proposed algorithm is a fast and highly precise infrared image data processing method.
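    The core fit is easy to reproduce with an off-the-shelf differential evolution optimizer: for each pixel, the time sequence is replaced by the four parameters of a double exponential decay. The sketch below uses SciPy's differential_evolution on an invented cooling curve; the paper's tailored DE variant and image-sequence handling are not reproduced.

```python
import numpy as np
from scipy.optimize import differential_evolution

# Synthetic "pixel cooling curve": a double exponential decay plus noise.
# All constants are illustrative, not taken from the paper.
rng = np.random.default_rng(0)
t = np.linspace(0, 5, 120)
y = 3.0 * np.exp(-1.5 * t) + 1.0 * np.exp(-0.2 * t) + rng.normal(0, 0.02, t.size)

def sse(params):
    a1, b1, a2, b2 = params
    model = a1 * np.exp(-b1 * t) + a2 * np.exp(-b2 * t)
    return np.sum((y - model) ** 2)

# DE searches the 4 decay parameters globally; storing the fitted
# parameters per pixel instead of the full frame sequence is what yields
# the compression described above.
result = differential_evolution(sse, bounds=[(0, 10), (0, 5)] * 2, seed=1)
print("fitted (a1, b1, a2, b2):", result.x)
```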

  7. Research on the Compression Algorithm of the Infrared Thermal Image Sequence Based on Differential Evolution and Double Exponential Decay Model

    Science.gov (United States)

    Zhang, Jin-Yu; Meng, Xiang-Bing; Xu, Wei; Zhang, Wei; Zhang, Yong

    2014-01-01

    This paper proposes a new thermal wave image sequence compression algorithm that combines a double exponential decay fitting model with the differential evolution algorithm. The study benchmarked the fitting compression results and precision of the proposed method against those of traditional methods via experiment; it investigated the fitting compression performance for long time series with the improved model, and validated the algorithm on practical thermal image sequence compression and reconstruction. The results show that the proposed algorithm is a fast and highly precise infrared image data processing method. PMID:24696649

  8. The effect of monthly 50,000 IU or 100,000 IU vitamin D supplements on vitamin D status in premenopausal Middle Eastern women living in Auckland.

    Science.gov (United States)

    Mazahery, H; Stonehouse, W; von Hurst, P R

    2015-03-01

    Middle Eastern female immigrants are at an increased risk of vitamin D deficiency, and their response to prescribed vitamin D dosages may not be adequate and may be affected by other factors. The objectives were to determine vitamin D deficiency and its determinants in Middle Eastern women living in Auckland, New Zealand (Part-I), and to determine the serum 25-hydroxyvitamin D (serum-25(OH)D) response to two prescribed vitamin D dosages in this population (Part-II). Women aged ⩾20 years (n=43) participated in a cross-sectional pilot study during winter (Part-I). In Part-II, women aged 20-50 years (n=62) participated in a randomised, double-blind placebo-controlled trial consuming monthly either 50,000 IU or 100,000 IU vitamin D3 or placebo for 6 months (winter to summer). All women in Part-I and 60% of women in Part-II had serum-25(OH)D below 50 nmol/l, indicating that vitamin D deficiency is widespread in this population. Monthly 100,000 IU vitamin D for 6 months was more effective than 50,000 IU in achieving serum-25(OH)D ⩾75 nmol/l; however, a third of the women still did not achieve these levels.

  9. Mesh-morphing algorithms for specimen-specific finite element modeling.

    Science.gov (United States)

    Sigal, Ian A; Hardisty, Michael R; Whyne, Cari M

    2008-01-01

    Despite recent advances in software for meshing specimen-specific geometries, considerable effort is still often required to produce and analyze specimen-specific models suitable for biomechanical analysis through finite element modeling. We hypothesize that it is possible to obtain accurate models by adapting a pre-existing geometry to represent a target specimen using morphing techniques. Here we present two algorithms for morphing, automated wrapping (AW) and manual landmarks (ML), and demonstrate their use to prepare specimen-specific models of caudal rat vertebrae. We evaluate the algorithms by measuring the distance between target and morphed geometries and by comparing the response to axial loading simulated with finite element (FE) methods. First, a traditional reconstruction process based on microCT was used to obtain two natural specimen-specific FE models. Next, the two morphing algorithms were used to compute mappings from the surface of one model, the source, to the other, the target, and this mapping was used to morph the source mesh to produce a target mesh. The microCT images were then used to assign element-specific material properties. In AW, the mappings were obtained by wrapping the source and target surfaces with an auxiliary triangulated surface. In ML, landmarks were manually placed on corresponding locations on the surfaces of both source and target. Both morphing algorithms were successful in reproducing the shape of the target vertebra, with a median distance between natural and morphed models of 18.8 and 32.2 µm, respectively, for AW and ML. Whereas AW-morphing produced a surface more closely resembling that of the target, ML guaranteed correspondence of the landmark locations between source and target. Morphing preserved the quality of the mesh, producing models suitable for FE simulation. Moreover, there were only minor differences between natural and morphed models in predictions of deformation, strain and stress. We therefore conclude that

  10. Near infrared spectrometric technique for testing fruit quality: optimisation of regression models using genetic algorithms

    Science.gov (United States)

    Isingizwe Nturambirwe, J. Frédéric; Perold, Willem J.; Opara, Umezuruike L.

    2016-02-01

    Near infrared (NIR) spectroscopy has gained extensive use in quality evaluation. It is arguably one of the most advanced spectroscopic tools for the non-destructive quality testing of foodstuffs, from measurement to data analysis and interpretation. NIR spectral data are interpreted by means often involving multivariate statistical analysis, sometimes associated with optimisation techniques for model improvement. The objective of this research was to explore the extent to which genetic algorithms (GA) can be used to enhance model development for predicting fruit quality. Apple fruits were used, and NIR spectra in the range from 12000 to 4000 cm-1 were acquired on both bruised and healthy tissues, with different degrees of mechanical damage. GAs were used in combination with partial least squares (PLS) regression methods to develop bruise severity prediction models, which were compared to PLS models developed using the full NIR spectrum. A classification model was developed which clearly separated bruised from unbruised apple tissue. GAs helped improve prediction models by over 10% in comparison with full-spectrum-based models, as evaluated in terms of error of prediction (Root Mean Square Error of Cross-validation). PLS models to predict internal quality, such as sugar content and acidity, were developed and compared to versions optimized by the genetic algorithm. Overall, the results highlighted the potential of the GA method to improve the speed and accuracy of fruit quality prediction.
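    The GA-plus-PLS combination amounts to evolving boolean wavelength masks and scoring each mask by cross-validated PLS error. The following sketch does this on synthetic "spectra" (real NIR data are not available here) using scikit-learn's PLSRegression; the population size, rates and CV folds are arbitrary choices.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Synthetic "spectra": 60 samples x 100 wavelengths, only 10 informative.
X = rng.normal(size=(60, 100))
true_bands = rng.choice(100, 10, replace=False)
y = X[:, true_bands].sum(axis=1) + 0.1 * rng.normal(size=60)

def fitness(mask):
    if mask.sum() < 3:
        return -1e9
    pls = PLSRegression(n_components=2)
    # Negative cross-validated MSE; higher is better.
    return cross_val_score(pls, X[:, mask], y, cv=3,
                           scoring="neg_mean_squared_error").mean()

# Plain generational GA over boolean wavelength masks.
pop = rng.random((30, 100)) < 0.2
for gen in range(25):
    scores = np.array([fitness(m) for m in pop])
    pop = pop[np.argsort(scores)[::-1]]          # sort best-first
    children = []
    while len(children) < len(pop) - 5:          # elitism: keep top 5
        p1, p2 = pop[rng.integers(10)], pop[rng.integers(10)]
        cut = rng.integers(1, 99)                # one-point crossover
        child = np.concatenate([p1[:cut], p2[cut:]])
        child = child ^ (rng.random(100) < 0.01)  # bit-flip mutation
        children.append(child)
    pop = np.vstack([pop[:5], children])

best = pop[0]
print("selected bands:", np.flatnonzero(best))
print("recovered true bands:", sorted(set(np.flatnonzero(best)) & set(true_bands)))
```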

  11. Fast parallel algorithm for three-dimensional distance-driven model in iterative computed tomography reconstruction

    International Nuclear Information System (INIS)

    Chen Jian-Lin; Li Lei; Wang Lin-Yuan; Cai Ai-Long; Xi Xiao-Qi; Zhang Han-Ming; Li Jian-Xin; Yan Bin

    2015-01-01

    The projection matrix model is used to describe the physical relationship between the reconstructed object and the projections. Such a model has a strong influence on projection and backprojection, two vital operations in iterative computed tomographic reconstruction. The distance-driven model (DDM) is a state-of-the-art technology that simulates forward and back projections. This model has a low computational complexity and a relatively high spatial resolution; however, only a few methods exist for operating it in parallel with a matched projector/backprojector scheme. This study introduces a fast and parallelizable algorithm to improve the traditional DDM for computing the parallel projection and backprojection operations. Our proposed model has been implemented on a GPU (graphics processing unit) platform and has achieved satisfactory computational efficiency with no approximation. The runtime for the projection and backprojection operations with our model is approximately 4.5 s and 10.5 s per loop, respectively, with an image size of 256×256×256 and 360 projections of size 512×512. We compare several general algorithms that have been proposed for maximizing GPU efficiency by using unmatched projection/backprojection models in parallel computation. Imaging resolution is not sacrificed and remains accurate during computed tomographic reconstruction. (paper)

  12. Stochastic time-dependent vehicle routing problem: Mathematical models and ant colony algorithm

    Directory of Open Access Journals (Sweden)

    Zhengyu Duan

    2015-11-01

    Full Text Available This article addresses the stochastic time-dependent vehicle routing problem. Two mathematical models, named the robust optimal schedule time model and the minimum expected schedule time model, are proposed for the stochastic time-dependent vehicle routing problem; both can guarantee delivery within the customers' time windows. The robust optimal schedule time model requires only the variation range of link travel times, which can be conveniently derived from historical traffic data. In addition, the robust optimal schedule time model, based on the robust optimization method, can be converted into a time-dependent vehicle routing problem. Moreover, an ant colony optimization algorithm is designed to solve the stochastic time-dependent vehicle routing problem. Owing to improvements in the initial solution and the transition probability, the ant colony optimization algorithm converges well. Through computational instances and Monte Carlo simulation tests, the robust optimal schedule time model proves better than the minimum expected schedule time model in computational efficiency and in coping with travel time fluctuations. Therefore, the robust optimal schedule time model is applicable to real road networks.
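    To illustrate the metaheuristic itself, here is a plain ant colony optimization loop on a symmetric TSP, a stripped-down stand-in for the time-dependent VRP with time windows; the pheromone and heuristic exponents are textbook defaults.

```python
import numpy as np

rng = np.random.default_rng(0)

# Random symmetric TSP instance; eye() pads the diagonal to avoid 1/0
# (the diagonal is never used in transitions).
n = 15
pts = rng.random((n, 2))
dist = np.linalg.norm(pts[:, None] - pts[None], axis=-1) + np.eye(n)
tau = np.ones((n, n))                    # pheromone trails
alpha, beta, rho, n_ants = 1.0, 3.0, 0.5, 20

best_len, best_tour = np.inf, None
for it in range(100):
    tours = []
    for _ in range(n_ants):
        tour = [rng.integers(n)]
        unvisited = set(range(n)) - {tour[0]}
        while unvisited:
            i = tour[-1]
            cand = np.array(sorted(unvisited))
            p = tau[i, cand] ** alpha * (1.0 / dist[i, cand]) ** beta
            tour.append(rng.choice(cand, p=p / p.sum()))   # transition rule
            unvisited.discard(tour[-1])
        length = sum(dist[tour[k], tour[(k + 1) % n]] for k in range(n))
        tours.append((length, tour))
        if length < best_len:
            best_len, best_tour = length, tour
    tau *= 1 - rho                        # pheromone evaporation
    for length, tour in tours:            # deposit proportional to quality
        for k in range(n):
            a, b = tour[k], tour[(k + 1) % n]
            tau[a, b] += 1.0 / length
            tau[b, a] += 1.0 / length

print("best tour length:", round(float(best_len), 3))
```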

  13. Classification of Flying Insects with high performance using improved DTW algorithm based on hidden Markov model

    Directory of Open Access Journals (Sweden)

    S. Arif Abdul Rahuman

    Full Text Available ABSTRACT Insects play a significant role in human life. Insects pollinate major food crops consumed in the world, while insect pests consume and destroy major crops. Hence, to control diseases and pests, research is ongoing in the area of entomology using chemical, biological and mechanical approaches. Data relevant to flying insects often change over time, and classification of such data is a central issue; such time series mining tasks, along with classification, are critical nowadays. Most time series data mining algorithms use similarity search, and the time taken for similarity search is the bottleneck, producing inaccurate results and very poor performance. In this paper, a novel classification method based on the dynamic time warping (DTW) algorithm is proposed. The DTW algorithm is deterministic and is limited in modeling stochastic signals; it is improved here by implementing nonlinear median filtering (NMF). The recognition accuracy of conventional DTW algorithms is less than that of the hidden Markov model (HMM) under the same voice activity detection (VAD) and noise reduction, with running spectrum filtering (RSF) and dynamic range adjustment (DRA). NMF seeks the median distance for every reference time series, and recognition accuracy is much improved. In this research work, optical sensors are used to record the sound of insect flight, with invariance to interference from ambient sounds. The implementation of our tool includes two parts: an optical sensor to record the "sound" of insect flight, and software that leverages the sensor information to automatically detect and identify flying insects.
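    The baseline the record builds on is classic dynamic time warping. Below is a minimal O(nm) implementation and a comparison against the Euclidean distance on a time-warped pair of signals; the NMF modification from the paper is not reproduced.

```python
import numpy as np

def dtw_distance(a, b):
    """Classic dynamic-time-warping distance between two 1-D sequences."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j],      # insertion
                                 D[i, j - 1],      # deletion
                                 D[i - 1, j - 1])  # match
    return D[n, m]

# Two wingbeat-like envelopes: same shape, different time scales.
t = np.linspace(0, 1, 50)
a = np.sin(2 * np.pi * 5 * t)
b = np.sin(2 * np.pi * 5 * t ** 1.3)       # time-warped copy
print("DTW:", round(float(dtw_distance(a, b)), 3),
      "Euclidean:", round(float(np.linalg.norm(a - b)), 3))
```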

  14. A study on a new algorithm to optimize ball mill system based on modeling and GA

    International Nuclear Information System (INIS)

    Wang Heng; Jia Minping; Huang Peng; Chen Zuoliang

    2010-01-01

    To address the disadvantages of conventional optimization methods for ball mill pulverizing systems, a novel approach based on an RBF neural network and a genetic algorithm is proposed in the present paper. Firstly, experiments and the measurement of fill level based on vibration signals of the mill shell are introduced. Then, the main factors affecting the power consumption of the ball mill pulverizing system are analyzed, and the input variables of the RBF neural network are determined. The RBF neural network is used to map the complex non-linear relationship between electricity consumption and process parameters, and a non-linear model of power consumption is built. Finally, the model is optimized by a genetic algorithm, and the optimal working conditions of the ball mill pulverizing system are determined. The results demonstrate that the method is reliable and practical, and can clearly and effectively reduce electricity consumption.

  15. Modified Hyperspheres Algorithm to Trace Homotopy Curves of Nonlinear Circuits Composed by Piecewise Linear Modelled Devices

    Directory of Open Access Journals (Sweden)

    H. Vazquez-Leal

    2014-01-01

    Full Text Available We present a homotopy continuation method (HCM) for finding multiple operating points of nonlinear circuits composed of devices modelled using piecewise linear (PWL) representations. We propose an adaptation of the modified spheres path tracking algorithm to trace the homotopy trajectories of PWL circuits. In order to assess the benefits of this proposal, four nonlinear circuits composed of piecewise linear modelled devices are analysed to determine their multiple operating points. The results show that the HCM can find multiple solutions within a single homotopy trajectory. Furthermore, we take advantage of the fact that homotopy trajectories are PWL curves to replace the multidimensional interpolation and fine-tuning stages of the path tracking algorithm with a simple and highly accurate procedure based on the parametric straight-line equation.

  16. A robust model predictive control algorithm for uncertain nonlinear systems that guarantees resolvability

    Science.gov (United States)

    Acikmese, Ahmet Behcet; Carson, John M., III

    2006-01-01

    A robustly stabilizing MPC (model predictive control) algorithm for uncertain nonlinear systems is developed that guarantees resolvability. With resolvability, initial feasibility of the finite-horizon optimal control problem implies future feasibility in a receding-horizon framework. The control consists of two components: (i) a feed-forward part and (ii) a feedback part. Feed-forward control is obtained by online solution of a finite-horizon optimal control problem for the nominal system dynamics. The feedback control policy is designed off-line based on a bound on the uncertainty in the system model. The entire controller is shown to be robustly stabilizing with a region of attraction composed of the initial states for which the finite-horizon optimal control problem is feasible. The controller design for this algorithm is demonstrated on a class of systems with uncertain nonlinear terms that have norm-bounded derivatives and derivatives in polytopes. An illustrative numerical example is also provided.

  17. Small Body GN&C Research Report: A Robust Model Predictive Control Algorithm with Guaranteed Resolvability

    Science.gov (United States)

    Acikmese, Behcet A.; Carson, John M., III

    2005-01-01

    A robustly stabilizing MPC (model predictive control) algorithm for uncertain nonlinear systems is developed that guarantees the resolvability of the associated finite-horizon optimal control problem in a receding-horizon implementation. The control consists of two components: (i) a feed-forward part and (ii) a feedback part. Feed-forward control is obtained by online solution of a finite-horizon optimal control problem for the nominal system dynamics. The feedback control policy is designed off-line based on a bound on the uncertainty in the system model. The entire controller is shown to be robustly stabilizing with a region of attraction composed of the initial states for which the finite-horizon optimal control problem is feasible. The controller design for this algorithm is demonstrated on a class of systems with uncertain nonlinear terms that have norm-bounded derivatives and derivatives in polytopes. An illustrative numerical example is also provided.

  18. Path generation algorithm for UML graphic modeling of aerospace test software

    Science.gov (United States)

    Qu, MingCheng; Wu, XiangHu; Tao, YongChao; Chen, Chao

    2018-03-01

    Traditional aerospace software testing engineers rely on their own work experience and on communication with software developers to describe the software under test and to write test cases manually, which is time-consuming, inefficient and prone to loopholes. Using the high-reliability MBT tools developed by our company, a single modeling pass can automatically generate test case documents, efficiently and accurately. For a UML model to describe a process accurately, the reachable paths must be expressed; existing path generation algorithms are either too simple, unable to combine branch paths and loops into complete paths, or too cumbersome, generating convoluted paths that are meaningless and superfluous for aerospace software testing. Drawing on our aerospace engineering experience, we developed a tailored path generation algorithm for UML graphic modeling of aerospace test software.
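    Generically, path generation over a UML activity/control-flow graph can be sketched as bounded depth-first enumeration: branches are expanded combinatorially, and each node may be revisited a limited number of times so loops contribute a bounded set of traversals. This is an illustrative sketch, not the authors' algorithm; the graph below is a hypothetical diamond with a loop.

```python
from collections import defaultdict

def enumerate_paths(edges, start, end, max_visits=2):
    """Enumerate start-to-end paths in a control-flow graph.

    Each node may appear at most `max_visits` times per path, so loops
    contribute a bounded number of traversals instead of blowing up or
    being dropped entirely.
    """
    graph = defaultdict(list)
    for u, v in edges:
        graph[u].append(v)
    paths, stack = [], [(start, [start])]
    while stack:
        node, path = stack.pop()
        if node == end:
            paths.append(path)
            continue
        for nxt in graph[node]:
            if path.count(nxt) < max_visits:
                stack.append((nxt, path + [nxt]))
    return paths

# Hypothetical activity diagram: a diamond with a loop between B and C.
edges = [("start", "A"), ("A", "B"), ("A", "C"),
         ("B", "end"), ("C", "B"), ("C", "end"), ("B", "C")]
for p in enumerate_paths(edges, "start", "end"):
    print(" -> ".join(p))
```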

  19. PARALLEL ADAPTIVE MULTILEVEL SAMPLING ALGORITHMS FOR THE BAYESIAN ANALYSIS OF MATHEMATICAL MODELS

    KAUST Repository

    Prudencio, Ernesto

    2012-01-01

    In recent years, Bayesian model updating techniques based on measured data have been applied to many engineering and applied science problems. At the same time, parallel computational platforms are becoming increasingly more powerful and are being used more frequently by the engineering and scientific communities. Bayesian techniques usually require the evaluation of multi-dimensional integrals related to the posterior probability density function (PDF) of uncertain model parameters. The fact that such integrals cannot be computed analytically motivates the research of stochastic simulation methods for sampling posterior PDFs. One such algorithm is the adaptive multilevel stochastic simulation algorithm (AMSSA). In this paper we discuss the parallelization of AMSSA, formulating the necessary load balancing step as a binary integer programming problem. We present a variety of results showing the effectiveness of load balancing on the overall performance of AMSSA in a parallel computational environment.

  20. Caco-2 cell permeability modelling: a neural network coupled genetic algorithm approach

    Science.gov (United States)

    Di Fenza, Armida; Alagona, Giuliano; Ghio, Caterina; Leonardi, Riccardo; Giolitti, Alessandro; Madami, Andrea

    2007-04-01

    The ability to cross the intestinal cell membrane is a fundamental prerequisite for a drug compound. However, the experimental measurement of this property is a costly and highly time-consuming step of the drug development process, because the compound must first be synthesized. In silico modelling of intestinal absorption, which can be carried out at very early stages of drug design, is therefore an appealing alternative, based mainly on multivariate statistical analysis such as partial least squares (PLS) and neural networks (NN). Our implementation of neural network models for the prediction of intestinal absorption is based on correlating Caco-2 cell apparent permeability (P_app) values, as a measure of intestinal absorption, with the structures of two different data sets of drug candidates. Several molecular descriptors of the compounds were calculated and the optimal subsets were selected using a genetic algorithm; the method is therefore referred to as Genetic Algorithm-Neural Network (GA-NN). A methodology combining a genetic algorithm search with neural network analysis applied to the modelling of Caco-2 P_app has not been presented before, although the two procedures have been employed separately. Moreover, we provide new Caco-2 cell permeability measurements for more than two hundred compounds. Interestingly, the selected descriptors possess physico-chemical connotations in excellent accordance with the molecular properties known to be relevant to cellular membrane permeation: hydrophilicity, hydrogen-bonding propensity, hydrophobicity, and molecular size. The predictive ability of the models, although rather good for a preliminary study, is somewhat limited by the poor precision of the experimental Caco-2 measurements. Finally, the generalization ability of one model was checked on an external test set not derived from the data sets used to build the models.
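
    The GA-NN wrapper idea, selecting a descriptor subset by a genetic algorithm and scoring each subset by a cross-validated neural network fit, can be sketched as follows. Everything here is synthetic: random stand-in descriptors, a mock permeability target, and small GA settings chosen only to keep the example fast.

        import numpy as np
        from sklearn.model_selection import cross_val_score
        from sklearn.neural_network import MLPRegressor

        rng = np.random.default_rng(0)
        X = rng.normal(size=(120, 20))               # stand-in molecular descriptors
        y = X[:, 0] - 0.5 * X[:, 3] + 0.1 * rng.normal(size=120)  # mock log P_app

        def fitness(mask):
            """Cross-validated NN fit quality for one descriptor subset."""
            if not mask.any():
                return -np.inf
            nn = MLPRegressor(hidden_layer_sizes=(8,), max_iter=1000, random_state=0)
            return cross_val_score(nn, X[:, mask], y, cv=3).mean()

        pop = rng.random((10, 20)) < 0.3             # bitmask population
        for gen in range(6):
            scores = np.array([fitness(m) for m in pop])
            parents = pop[np.argsort(scores)[::-1][:5]]       # truncation selection
            children = []
            for _ in range(5):
                a, b = parents[rng.integers(5)], parents[rng.integers(5)]
                cut = rng.integers(1, 20)
                child = np.concatenate([a[:cut], b[cut:]])    # one-point crossover
                children.append(child ^ (rng.random(20) < 0.05))  # bit-flip mutation
            pop = np.vstack([parents, children])

        best = max(pop, key=fitness)
        print("selected descriptor indices:", np.where(best)[0])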

  1. Genetic algorithms used for PWRs refuel management automatic optimization: a new modelling

    International Nuclear Information System (INIS)

    Chapot, Jorge Luiz C.; Schirru, Roberto; Silva, Fernando Carvalho da

    1996-01-01

    A Genetic Algorithms-based system, linking the computer codes GENESIS 5.0 and ANC through the interface ALGER, has been developed aiming the PWRs fuel management optimization. An innovative codification, the Lists Model, has been incorporated to the genetic system, which avoids the use of variants of the standard crossover operator and generates only valid loading patterns in the core. The GENESIS/ALGER/ANC system has been successfully tested in an optimization study for Angra-1 second cycle. (author)

  2. A Model Predictive Algorithm for Active Control of Nonlinear Noise Processes

    Directory of Open Access Journals (Sweden)

    Qi-Zhi Zhang

    2005-01-01

    Full Text Available In this paper, an improved nonlinear Active Noise Control (ANC) system is achieved by introducing an appropriate secondary source. For an ANC system to be implemented successfully, the nonlinearity of the primary path and the time delay of the secondary path must be overcome. A nonlinear Model Predictive Control (MPC) strategy is introduced to deal with the time delay in the secondary path and the nonlinearity in the primary path of the ANC system. An overall online modeling technique is used for online estimation of the secondary and primary paths. The secondary path is estimated using an adaptive FIR filter, and the primary path is estimated using a Neural Network (NN). The two models are connected in parallel with the two paths. In this system, the mutual disturbance between the operation of the nonlinear ANC controller and the modeling of the secondary path can be greatly reduced. The coefficients of the adaptive FIR filter and the weight vector of the NN are adjusted online. Computer simulations are carried out to compare the proposed nonlinear MPC method with the nonlinear Filtered-x Least Mean Square (FXLMS) algorithm. The results show that the convergence speed of the proposed nonlinear MPC algorithm is faster than that of the nonlinear FXLMS algorithm. To test the robustness of the proposed nonlinear ANC system, sudden changes in the secondary and primary paths of the ANC system are considered. The results indicate that the proposed nonlinear ANC system can rapidly track sudden changes in the acoustic paths and keep the adaptive algorithm stable when the nonlinear ANC system is time-varying.
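
    The FXLMS baseline against which the proposed MPC is compared has a compact core. The sketch below is the standard linear filtered-x LMS update, not the paper's nonlinear controller; the reference signal, the two FIR paths, and the step size are all invented, and the secondary-path estimate is assumed exact.

        import numpy as np

        def fxlms(x, d, s, s_hat, L=16, mu=0.01):
            """Filtered-x LMS: adapt FIR controller w so that the secondary-path
            output cancels the primary disturbance d.  s is the true secondary
            path; s_hat is its FIR estimate (assumed exact here).  Requires
            L >= len(s_hat)."""
            w = np.zeros(L)
            y_buf = np.zeros(len(s))                 # recent anti-noise samples
            xf = np.zeros(len(x))                    # reference filtered by s_hat
            e = np.zeros(len(x))
            for n in range(L, len(x)):
                xv = x[n - L + 1:n + 1][::-1]
                y = w @ xv                           # controller output
                y_buf = np.roll(y_buf, 1)
                y_buf[0] = y
                e[n] = d[n] - s @ y_buf              # residual at error sensor
                xf[n] = s_hat @ x[n - len(s_hat) + 1:n + 1][::-1]
                w += mu * e[n] * xf[n - L + 1:n + 1][::-1]   # filtered-x step
            return e

        rng = np.random.default_rng(1)
        x = np.sin(0.2 * np.arange(4000)) + 0.05 * rng.normal(size=4000)
        d = np.roll(x, 5)                            # primary path: pure delay
        s = np.array([0.0, 0.8, 0.2])                # secondary path FIR
        e = fxlms(x, d, s, s_hat=s.copy())
        print("power before/after:", np.mean(d[:500] ** 2), np.mean(e[-500:] ** 2))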

  3. A Dantzig-Wolfe decomposition algorithm for linear economic model predictive control of dynamically decoupled subsystems

    DEFF Research Database (Denmark)

    Sokoler, Leo Emil; Standardi, Laura; Edlund, Kristian

    2014-01-01

    This paper presents a warm-started Dantzig–Wolfe decomposition algorithm tailored to economic model predictive control of dynamically decoupled subsystems. We formulate the constrained optimal control problem solved at each sampling instant as a linear program with state space constraints, input [...]. In the presence of process and measurement noise, such a regularization term is critical for achieving a well-behaved closed-loop performance.

  4. A model for predicting peritoneal dialysis patients’ survival, using data mining algorithms

    Directory of Open Access Journals (Sweden)

    Farzad Firouzi Jahantigh

    2018-01-01

    Conclusion: An accurate prediction model would be a useful way to evaluate the survival of patients on peritoneal dialysis, so that increased clinical scrutiny and timely intervention can be brought to bear. In this research, the multi-space mapped binary tree support vector machine algorithm achieved high precision in predicting the survival of continuous ambulatory peritoneal dialysis patients, considering multiple evaluation indices and different class distribution functions.

  5. PyRosetta: a script-based interface for implementing molecular modeling algorithms using Rosetta

    OpenAIRE

    Chaudhury, Sidhartha; Lyskov, Sergey; Gray, Jeffrey J.

    2010-01-01

    Summary: PyRosetta is a stand-alone Python-based implementation of the Rosetta molecular modeling package that allows users to write custom structure prediction and design algorithms using the major Rosetta sampling and scoring functions. PyRosetta contains Python bindings to libraries that define Rosetta functions including those for accessing and manipulating protein structure, calculating energies and running Monte Carlo-based simulations. PyRosetta can be used in two ways: (i) interactive...

  6. Decentralized Fuzzy P-hub Centre Problem: Extended Model and Genetic Algorithms

    OpenAIRE

    Sara Mousavinia; Majid Khalili; Mohammad Shafiee

    2017-01-01

    This paper studies the uncapacitated P-hub center problem in a network under decentralized management, assuming time is a fuzzy variable. In this network, transport companies act independently, each making its route choices according to its own criteria. In this model, time is represented by a triangular fuzzy number and is used to calculate the fraction of users likely to choose hub routes instead of direct routes. To solve the problem, two genetic algorithms are proposed. The computation...

  7. Modeling of temperatures by using the algorithm of queue burning movement in the UCG Process

    Directory of Open Access Journals (Sweden)

    Milan Durdán

    2015-10-01

    Full Text Available In this contribution, a system for the indirect measurement of temperatures in the underground coal gasification (UCG) process is proposed. A two-dimensional solution of the Fourier partial differential equation of heat conduction was used to calculate the temperature field in the real coal seam. An algorithm of queue burning movement was created for modeling the boundary conditions in the gasification channel. The indirect temperature measurement system was verified under laboratory conditions.

  8. Recurrent neural network-based modeling of gene regulatory network using elephant swarm water search algorithm.

    Science.gov (United States)

    Mandal, Sudip; Saha, Goutam; Pal, Rajat Kumar

    2017-08-01

    Correct inference of the genetic regulations inside a cell from biological databases such as time-series microarray data is one of the greatest challenges of the post-genomic era for biologists and researchers. The Recurrent Neural Network (RNN) is one of the most popular and simplest approaches for modeling the dynamics and inferring correct dependencies among genes. Inspired by the behavior of social elephants, we propose a new metaheuristic, the Elephant Swarm Water Search Algorithm (ESWSA), to infer Gene Regulatory Networks (GRNs). The algorithm is based on the water search strategy of intelligent and social elephants during drought, which relies on several types of communication. Initially, the algorithm is tested against benchmark small- and medium-scale artificial genetic networks, without and with different noise levels, and its efficiency is measured in terms of parametric error, minimum fitness value, execution time, accuracy of prediction of true regulations, and so on. Next, the proposed algorithm is tested against real gene expression data for the Escherichia coli SOS network, and the results are compared with other state-of-the-art optimization methods. The experimental results suggest that ESWSA is very efficient for the GRN inference problem and performs better than other methods in many respects.
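
    The RNN model being fitted is itself simple; what ESWSA supplies is the search over its parameters. Below is a minimal sketch of the standard RNN gene-network dynamics and the mean-squared-error fitness that any such metaheuristic would minimize; the parameter layout, time step, and demo data are assumptions of the example, and ESWSA itself is not reproduced.

        import numpy as np

        def rnn_grn_step(x, W, beta, tau, dt=0.1):
            """One Euler step of the RNN gene-network model:
            tau_i dx_i/dt = sigmoid(sum_j W_ij x_j + beta_i) - x_i."""
            act = 1.0 / (1.0 + np.exp(-(W @ x + beta)))
            return x + dt * (act - x) / tau

        def fitness(params, series, n, dt=0.1):
            """MSE between simulated and observed expression levels."""
            W = params[:n * n].reshape(n, n)
            beta = params[n * n:n * n + n]
            tau = np.abs(params[-n:]) + 1e-3         # keep time constants positive
            x, err = series[0].copy(), 0.0
            for target in series[1:]:
                x = rnn_grn_step(x, W, beta, tau, dt)
                err += np.mean((x - target) ** 2)
            return err / (len(series) - 1)

        params = np.concatenate([np.zeros(2 * 2 + 2), np.ones(2)])  # W, beta, tau
        series = [np.array([0.1, 0.9]), np.array([0.2, 0.8]), np.array([0.3, 0.7])]
        print("demo fitness:", fitness(params, series, n=2))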

  9. The knowledge instinct, cognitive algorithms, modeling of language and cultural evolution

    Science.gov (United States)

    Perlovsky, Leonid I.

    2008-04-01

    The talk discusses mechanisms of the mind and their engineering applications. Past attempts at designing "intelligent systems" encountered mathematical difficulties related to algorithmic complexity. The culprit turned out to be logic, which in one way or another was used not only in logic-rule systems, but also in statistical, neural, and fuzzy systems. Algorithmic complexity is related to Gödel's theory, a most fundamental mathematical result. These difficulties were overcome by replacing logic with a dynamic process "from vague to crisp," dynamic logic. It leads to algorithms that overcome combinatorial complexity, resulting in orders-of-magnitude improvements in classical problems of detection, tracking, fusion, and prediction in noise. I present engineering applications to pattern recognition, detection, tracking, fusion, financial prediction, and Internet search engines. The mathematical and engineering efficiency of dynamic logic can also be understood as a cognitive algorithm, which describes a fundamental property of the mind, the knowledge instinct responsible for all our higher cognitive functions: concepts, perception, cognition, instincts, imagination, intuition, and emotions, including emotions of the beautiful. I present our latest results in modeling the evolution of languages and cultures, their interactions in these processes, and the role of music in cultural evolution. Experimental data that support the theory are presented. Future directions are outlined.

  10. Discovering link communities in complex networks by an integer programming model and a genetic algorithm.

    Directory of Open Access Journals (Sweden)

    Zhenping Li

    Full Text Available Identification of communities in complex networks is an important topic and issue in many fields such as sociology, biology, and computer science. Communities are often defined as groups of related nodes or links that correspond to functional subunits in the corresponding complex systems. While most conventional approaches have focused on discovering communities of nodes, some recent studies start partitioning links to find overlapping communities straightforwardly. In this paper, we propose a new quantity function for link community identification in complex networks. Based on this quantity function we formulate the link community partition problem into an integer programming model which allows us to partition a complex network into overlapping communities. We further propose a genetic algorithm for link community detection which can partition a network into overlapping communities without knowing the number of communities. We test our model and algorithm on both artificial networks and real-world networks. The results demonstrate that the model and algorithm are efficient in detecting overlapping community structure in complex networks.
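
    One cheap way to see link communities in action, as a stand-in for the integer-programming and GA machinery of the paper, is to cluster the line graph of a network, in which each link becomes a node, and then map the link groups back to node sets, which may overlap. The sketch below does this with networkx's built-in modularity clustering on the karate club graph; it illustrates the link-partitioning idea, not the paper's quantity function.

        import networkx as nx
        from networkx.algorithms.community import greedy_modularity_communities

        G = nx.karate_club_graph()
        L = nx.line_graph(G)                       # nodes of L are edges of G
        link_groups = greedy_modularity_communities(L)

        node_communities = []
        for group in link_groups:
            members = set()
            for u, v in group:                     # each line-graph node is an edge
                members.update((u, v))
            node_communities.append(members)

        overlap = node_communities[0] & node_communities[1]
        print(len(node_communities), "link communities; overlap of first two:", overlap)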

  11. Advanced Emergency Braking Control Based on a Nonlinear Model Predictive Algorithm for Intelligent Vehicles

    Directory of Open Access Journals (Sweden)

    Ronghui Zhang

    2017-05-01

    Full Text Available Focusing on safety and comfort, with the overall aim of comprehensively improving a vision-based intelligent vehicle, a novel Advanced Emergency Braking System (AEBS) is proposed based on a nonlinear model predictive algorithm. Considering the nonlinearities of vehicle dynamics, a vision-based longitudinal vehicle dynamics model is established. On account of the nonlinear coupling among the driver, the surroundings, and the vehicle itself, a hierarchical control structure is proposed to decouple and coordinate the system. To avoid or reduce the collision risk between the intelligent vehicle and obstacles, a coordinated cost function covering tracking safety, comfort, and fuel economy is formulated. Based on terminal constraints for stable tracking, a multi-objective optimization controller is proposed using the theory of nonlinear model predictive control. To track the control target quickly and precisely in finite time, an electronic brake controller for the AEBS is designed based on Nonsingular Fast Terminal Sliding Mode (NFTSM) control theory. To validate the performance and advantages of the proposed algorithm, simulations are carried out. According to the simulation results, the proposed algorithm offers better integrated performance in reducing collision risk and improving the driving comfort and fuel economy of the smart car than the existing single AEBS.

  12. Optimizing ion channel models using a parallel genetic algorithm on graphical processors.

    Science.gov (United States)

    Ben-Shalom, Roy; Aviv, Amit; Razon, Benjamin; Korngreen, Alon

    2012-01-01

    We have recently shown that we can semi-automatically constrain models of voltage-gated ion channels by combining a stochastic search algorithm with ionic currents measured using multiple voltage-clamp protocols. Although numerically successful, this approach is highly demanding computationally, with optimization on a high performance Linux cluster typically lasting several days. To solve this computational bottleneck we converted our optimization algorithm for work on a graphical processing unit (GPU) using NVIDIA's CUDA. Parallelizing the process on a Fermi graphic computing engine from NVIDIA increased the speed ∼180 times over an application running on an 80 node Linux cluster, considerably reducing simulation times. This application allows users to optimize models for ion channel kinetics on a single, inexpensive, desktop "super computer," greatly reducing the time and cost of building models relevant to neuronal physiology. We also demonstrate that the point of algorithm parallelization is crucial to its performance. We substantially reduced computing time by solving the ODEs (Ordinary Differential Equations) so as to massively reduce memory transfers to and from the GPU. This approach may be applied to speed up other data intensive applications requiring iterative solutions of ODEs. Copyright © 2012 Elsevier B.V. All rights reserved.

  13. Algorithmic modeling of the irrelevant sound effect (ISE) by the hearing sensation fluctuation strength.

    Science.gov (United States)

    Schlittmeier, Sabine J; Weissgerber, Tobias; Kerber, Stefan; Fastl, Hugo; Hellbrück, Jürgen

    2012-01-01

    Background sounds, such as narration, music with prominent staccato passages, and office noise impair verbal short-term memory even when these sounds are irrelevant. This irrelevant sound effect (ISE) is evoked by so-called changing-state sounds that are characterized by a distinct temporal structure with varying successive auditory-perceptive tokens. However, because of the absence of an appropriate psychoacoustically based instrumental measure, the disturbing impact of a given speech or nonspeech sound could not be predicted until now, but necessitated behavioral testing. Our database for parametric modeling of the ISE included approximately 40 background sounds (e.g., speech, music, tone sequences, office noise, traffic noise) and corresponding performance data collected from 70 behavioral measurements of verbal short-term memory. The hearing sensation fluctuation strength, which describes the percept of fluctuations when listening to slowly modulated sounds, was chosen to model the ISE. For the background sounds, the algorithm estimated the behavioral performance data in 63 of 70 cases within the interquartile ranges. In particular, all real-world sounds were modeled adequately, whereas the algorithm overestimated the (non-)disturbance impact of synthetic steady-state sounds constituted by a repeated vowel or tone. Implications of the algorithm's strengths and prediction errors are discussed.

  14. A Novel Algorithm for Intrusion Detection Based on RASL Model Checking

    Directory of Open Access Journals (Sweden)

    Weijun Zhu

    2013-01-01

    Full Text Available The interval temporal logic (ITL) model checking (MC) technique enhances the power of intrusion detection systems (IDSs) to detect concurrent attacks, owing to the strong expressive power of ITL. However, an ITL formula has difficulty describing the time constraints between different actions in the same attack. To address this problem, we formalize a novel real-time interval temporal logic, real-time attack signature logic (RASL). Based on this new logic, we put forward a RASL model checking algorithm. Furthermore, we use RASL formulas to describe attack signatures and employ discrete timed automata to create an audit log. As a result, the RASL model checking algorithm can automatically verify whether the automata satisfy the formulas, that is, whether the audit log coincides with the attack signatures. Simulation experiments show that the new approach effectively enhances the detection power of MC-based intrusion detection methods for a number of telnet attacks, ptrace attacks, and sixteen other types of attacks. The experiments also indicate that the new algorithm can find several types of real-time attacks, whereas existing MC-based intrusion detection approaches cannot.

  15. Discovering link communities in complex networks by an integer programming model and a genetic algorithm.

    Science.gov (United States)

    Li, Zhenping; Zhang, Xiang-Sun; Wang, Rui-Sheng; Liu, Hongwei; Zhang, Shihua

    2013-01-01

    Identification of communities in complex networks is an important topic and issue in many fields such as sociology, biology, and computer science. Communities are often defined as groups of related nodes or links that correspond to functional subunits in the corresponding complex systems. While most conventional approaches have focused on discovering communities of nodes, some recent studies start partitioning links to find overlapping communities straightforwardly. In this paper, we propose a new quantity function for link community identification in complex networks. Based on this quantity function we formulate the link community partition problem into an integer programming model which allows us to partition a complex network into overlapping communities. We further propose a genetic algorithm for link community detection which can partition a network into overlapping communities without knowing the number of communities. We test our model and algorithm on both artificial networks and real-world networks. The results demonstrate that the model and algorithm are efficient in detecting overlapping community structure in complex networks.

  16. Application of Harmony Search algorithm to the solution of groundwater management models

    Science.gov (United States)

    Tamer Ayvaz, M.

    2009-06-01

    This study proposes a groundwater resources management model in which the solution is obtained through a combined simulation-optimization model. MODFLOW, a modular three-dimensional finite difference groundwater flow model, is used as the simulation model. It is combined with a Harmony Search (HS) optimization algorithm, which is inspired by the musical process of searching for a perfect state of harmony. The performance of the proposed HS-based management model is tested on three separate groundwater management problems: (i) maximization of total pumping from an aquifer (steady state); (ii) minimization of the total pumping cost to satisfy a given demand (steady state); and (iii) minimization of the pumping cost to satisfy a given demand over multiple management periods (transient). The sensitivity of the HS algorithm is evaluated by a sensitivity analysis that determines the impact of the solution parameters on convergence behavior. The results show that HS yields solutions nearly the same as or better than those of previous solution methods and may be used to solve management problems in groundwater modeling.
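
    The HS improvisation loop itself is short. The sketch below shows the generic algorithm, with memory consideration, pitch adjustment, and random selection, on a toy quadratic objective standing in for the MODFLOW-based simulation-optimization coupling; all parameter values (hms, hmcr, par, bw) are conventional defaults, not the paper's.

        import numpy as np

        def harmony_search(f, bounds, hms=20, hmcr=0.9, par=0.3, bw=0.05, iters=2000):
            """Minimal Harmony Search: each new 'harmony' mixes memory
            consideration (prob. hmcr), pitch adjustment (prob. par), and
            random selection, mirroring the musical improvisation analogy."""
            rng = np.random.default_rng(0)
            lo, hi = bounds[:, 0], bounds[:, 1]
            dim = len(lo)
            hm = lo + rng.random((hms, dim)) * (hi - lo)      # harmony memory
            cost = np.array([f(h) for h in hm])
            for _ in range(iters):
                new = np.empty(dim)
                for j in range(dim):
                    if rng.random() < hmcr:                   # memory consideration
                        new[j] = hm[rng.integers(hms), j]
                        if rng.random() < par:                # pitch adjustment
                            new[j] += bw * (hi[j] - lo[j]) * rng.uniform(-1, 1)
                    else:                                     # random selection
                        new[j] = lo[j] + rng.random() * (hi[j] - lo[j])
                new = np.clip(new, lo, hi)
                worst = np.argmax(cost)
                c = f(new)
                if c < cost[worst]:                           # replace worst harmony
                    hm[worst], cost[worst] = new, c
            return hm[np.argmin(cost)], cost.min()

        # Toy objective standing in for the pumping-cost simulation loop.
        best, val = harmony_search(lambda v: np.sum((v - 0.3) ** 2),
                                   np.array([[-1.0, 1.0]] * 5))
        print(best, val)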

  17. Emulation of a Kalman Filter algorithm on a diffusive flood wave propagation model

    Science.gov (United States)

    Pannekoucke, O.; Ricci, S. M.; Ninove, F.; Thual, O.

    2011-12-01

    River stream flow forecasting is a critical issue for the safety of people and infrastructure, the operation of power plants, and water resources management. The benefit of data assimilation for free-surface flow simulation and flood forecasting has already been demonstrated, as it is applied to optimize model parameters and to improve the simulated water level and discharge state [1]. The correction of the hydraulic state with a Kalman Filter algorithm implies the propagation of the background error covariance matrix B by the dynamics of the model. This step requires the formulation and integration in time of the tangent linear approximation of the model, which is generally tedious and costly. The aim of this study is to describe the evolution of the background error covariance function when the Kalman Filter algorithm is applied to a 1-D diffusive flood wave propagation model. For this simplified model, the formulation of the tangent linear model as well as the propagation of B is affordable, as opposed to an operational hydraulics model solving the shallow water equations. Starting from Gaussian background covariance functions, it was first shown that the diffusive flood wave propagation model increases the correlation length and that the propagated covariance function can be approximated by a Gaussian. Working with a steady observation network, it was then demonstrated that the analysis and propagation steps of the Kalman Filter modify the covariance function at the observation point. The resulting covariance function at the observation point is inhomogeneous, with a shorter correlation length downstream of the observation point than upstream. The diagnosed correlation lengths [2] were used to build a parametrized covariance matrix using a diffusion operator with an inhomogeneous diffusion coefficient [3]. This approach led to the formulation of a parametrized background error covariance matrix where the evolution of the covariance function with the Kalman

  18. Inventory and control in Material Balance Area in the Boris Kidric Institute (MBA IU-B) - Report for 1976

    International Nuclear Information System (INIS)

    Martinc, R.

    1977-11-01

    This report relates to fulfilling the obligations arising from the Non-Proliferation Treaty in the field of nuclear material inventory in the Material Balance Area (MBA) of the Institute of Nuclear Sciences Boris Kidric, Vinca (IU-B). The report covers the activities completed during 1976, but also documents experience in the accounting and control of nuclear materials in IU-B from 1974 to 1976. It reviews the routine operations and procedures within the accounting and operational documentation of the facilities containing controlled nuclear material, as well as the verification activities of the IAEA, Vienna. Research results related to accounting procedures are enclosed; they were used to prepare the relevant documentation and reports for the IAEA (obligatory under the implementation of the NPT). The ratio of effective to indicated RA reactor power was analyzed as a function of the fuel utilization regime. A computer code was written for calculating RA reactor fuel burnup. Work was initiated on applying a nondestructive method, based on gamma spectrometry, for determining the relative quantity of U-235 in 80% enriched fuel elements. The set of methods and procedures for accounting and control of nuclear materials in IU-B (as part of the national system for accounting and control of nuclear materials) was treated in the context of the possibilities and needs for establishing relevant regulations in the Boris Kidric Institute.

  19. Global identifiability of linear compartmental models--a computer algebra algorithm.

    Science.gov (United States)

    Audoly, S; D'Angiò, L; Saccomani, M P; Cobelli, C

    1998-01-01

    A priori global identifiability deals with the uniqueness of the solution for the unknown parameters of a model and is thus a prerequisite for parameter estimation in biological dynamic models. Global identifiability is, however, difficult to test, since it requires solving a system of algebraic nonlinear equations which increases both in degree of nonlinearity and in number of terms and unknowns with increasing model order. In this paper, a computer algebra tool, GLOBI (GLOBal Identifiability), is presented, which combines the topological transfer function method with the Buchberger algorithm to test global identifiability of linear compartmental models. GLOBI allows for the automatic testing of a priori global identifiability of general-structure compartmental models from general multi-input multi-output experiments. Examples of the use of GLOBI to analyze the a priori global identifiability of some complex biological compartmental models are provided.

  20. Continuous time boolean modeling for biological signaling: application of Gillespie algorithm

    Directory of Open Access Journals (Sweden)

    Stoll Gautier

    2012-08-01

    Full Text Available Abstract Mathematical modeling is used as a Systems Biology tool to answer biological questions, and more precisely, to validate a network that describes biological observations and to predict the effect of perturbations. This article presents an algorithm for modeling biological networks in a discrete framework with continuous time. Background There exist two major types of mathematical modeling approaches: (1) quantitative modeling, representing the concentrations of the various chemical species by real numbers, based mainly on differential equations and the chemical kinetics formalism; (2) qualitative modeling, representing chemical species concentrations or activities by a finite set of discrete values. Both approaches answer particular (and often different) biological questions. The qualitative approach permits a simple and less detailed description of the biological system, efficiently describes stable-state identification, but is inconvenient for describing the transient kinetics leading to these states; in this context, time is represented by discrete steps. Quantitative modeling, on the other hand, can describe the dynamical behavior of biological processes more accurately, as it follows the evolution of the concentrations or activities of chemical species as a function of time, but it requires a substantial amount of parameter information that is difficult to find in the literature. Results Here, we propose a modeling framework based on a qualitative approach that is intrinsically continuous in time. The algorithm presented in this article fills the gap between qualitative and quantitative modeling. It is based on a continuous-time Markov process applied to a Boolean state space. In order to describe the temporal evolution of the biological process we wish to model, we explicitly specify the transition rates for each node. For that purpose, we built a language that can be seen as a generalization of Boolean equations. Mathematically, this approach can be
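
    The essence of the framework, Boolean states evolving in continuous time with per-node transition rates, fits in a few lines. The following sketch applies the Gillespie recipe to a hypothetical two-gene toggle switch; the logic, rates, and horizon are invented for illustration, and the paper's transition-rate language is not reproduced.

        import numpy as np

        def boolean_gillespie(x0, rates, t_end, rng=None):
            """Continuous-time Boolean simulation: each node whose Boolean
            target differs from its current state flips with rate rates[i];
            waiting times and the flipping node follow the Gillespie recipe."""
            if rng is None:
                rng = np.random.default_rng(0)

            def targets(x):
                # Hypothetical toggle switch: each gene is ON iff the other is OFF.
                return np.array([not x[1], not x[0]])

            x, t = np.array(x0, dtype=bool), 0.0
            traj = [(0.0, tuple(x))]
            while t < t_end:
                want = targets(x)
                props = np.where(want != x, rates, 0.0)   # propensity per node
                total = props.sum()
                if total == 0:                            # stable state reached
                    break
                t += rng.exponential(1.0 / total)         # Gillespie waiting time
                i = rng.choice(len(x), p=props / total)   # which node switches
                x[i] = want[i]
                traj.append((t, tuple(x)))
            return traj

        print(boolean_gillespie([True, True], rates=np.array([1.0, 0.5]), t_end=10.0))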

  1. DISTING: A web application for fast algorithmic computation of alternative indistinguishable linear compartmental models.

    Science.gov (United States)

    Davidson, Natalie R; Godfrey, Keith R; Alquaddoomi, Faisal; Nola, David; DiStefano, Joseph J

    2017-05-01

    We describe and illustrate use of DISTING, a novel web application for computing alternative structurally identifiable linear compartmental models that are input-output indistinguishable from a postulated linear compartmental model. Several computer packages are available for analysing the structural identifiability of such models, but DISTING is the first to be made available for assessing indistinguishability. The computational algorithms embedded in DISTING are based on advanced versions of established geometric and algebraic properties of linear compartmental models, embedded in a user-friendly graphic model user interface. Novel computational tools greatly speed up the overall procedure. These include algorithms for Jacobian matrix reduction, submatrix rank reduction, and parallelization of candidate rank computations in symbolic matrix analysis. The application of DISTING to three postulated models with respectively two, three and four compartments is given. The 2-compartment example is used to illustrate the indistinguishability problem; the original (unidentifiable) model is found to have two structurally identifiable models that are indistinguishable from it. The 3-compartment example has three structurally identifiable indistinguishable models. It is found from DISTING that the four-compartment example has five structurally identifiable models indistinguishable from the original postulated model. This example shows that care is needed when dealing with models that have two or more compartments which are neither perturbed nor observed, because the numbering of these compartments may be arbitrary. DISTING is universally and freely available via the Internet. It is easy to use and circumvents tedious and complicated algebraic analysis previously done by hand. Copyright © 2017 Elsevier B.V. All rights reserved.

  2. An Adaptive Agent-Based Model of Homing Pigeons: A Genetic Algorithm Approach

    Directory of Open Access Journals (Sweden)

    Francis Oloo

    2017-01-01

    Full Text Available Conventionally, agent-based modelling approaches start from a conceptual model capturing the theoretical understanding of the system of interest, and simulation outcomes are used "at the end" to validate that understanding. In today's data-rich era, there are suggestions that models should be data-driven. Data-driven workflows are common in mathematical models, but their application to agent-based models is still in its infancy. Integration of real-time sensor data into modelling workflows opens up the possibility of comparing simulations against real data during the model run. Calibration and validation procedures thus become automated processes that are executed iteratively during the simulation. We hypothesize that incorporating real-time sensor data into agent-based models improves their predictive ability; in particular, that such integration results in increasingly well-calibrated model parameters and rule sets. In this contribution, we explore this question by implementing a flocking model that evolves in real time. Specifically, we use a genetic algorithm approach to simulate representative parameters describing the flight routes of homing pigeons. The navigation parameters of the pigeons are simulated and dynamically evaluated against emulated GPS sensor data streams and optimised based on the fitness of candidate parameters. As a result, the model was able to accurately simulate the relative turn angles and step distances of homing pigeons. Further, the optimised parameters could replicate loops, which are common patterns in the flight tracks of homing pigeons. Finally, the use of genetic algorithms in this study allowed for simultaneous data-driven optimization and sensitivity analysis.

  3. Solution algorithm of dwell time in slope-based figuring model

    Science.gov (United States)

    Li, Yong; Zhou, Lin

    2017-10-01

    The surface slope profile is commonly used to evaluate the X-ray reflective optics used in synchrotron radiation beamlines; moreover, the measurement result delivered by instruments for such optics is usually the surface slope profile rather than the surface height profile. To avoid conversion error, a slope-based figuring model is introduced in place of the conventional height-based model. However, the pulse iteration method, which can quickly obtain the dwell-time solution of the traditional height-based figuring model, does not apply to the slope-based figuring model, because the slope removal function takes both positive and negative values and has a complex asymmetric structure. To overcome this problem, we established an optimal mathematical model for the dwell-time solution by introducing upper and lower limits on the dwell time and a time-gradient constraint. We then used a constrained least-squares algorithm to solve for the dwell time in the slope-based figuring model. To validate the proposed algorithm, simulations and experiments were conducted. A flat mirror with an effective aperture of 80 mm was polished on an ion beam machine. After three iterations of polishing, the surface slope profile error of the workpiece converged from RMS 5.65 μrad to RMS 1.12 μrad.
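
    The core computation can be sketched as a bounded linear least-squares problem: the slope-error profile is (approximately) the convolution of the slope removal function with the dwell time. The kernel, grid, and bounds below are invented, and the paper's additional time-gradient constraint is omitted for brevity.

        import numpy as np
        from scipy.optimize import lsq_linear

        n = 200
        xs = np.linspace(-1, 1, 21)
        kernel = -xs * np.exp(-xs ** 2 / 0.1)        # hypothetical +/- removal function
        A = np.zeros((n, n))
        for i in range(n):                           # build the convolution matrix
            for k, c in enumerate(kernel):
                j = i + k - len(kernel) // 2
                if 0 <= j < n:
                    A[i, j] = c
        s = np.sin(np.linspace(0, 4 * np.pi, n)) * 1e-6   # target slope error (rad)
        res = lsq_linear(A, s, bounds=(0.0, 5.0))         # dwell time in [0, 5] s
        print("residual slope RMS:", np.sqrt(np.mean((A @ res.x - s) ** 2)))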

  4. Emulation of an ensemble Kalman filter algorithm on a flood wave propagation model

    Science.gov (United States)

    Barthélémy, S.; Ricci, S.; Pannekoucke, O.; Thual, O.; Malaterre, P. O.

    2013-06-01

    This study describes the emulation of an Ensemble Kalman Filter (EnKF) algorithm on a 1-D flood wave propagation model. The model is forced at the upstream boundary with a random variable with Gaussian statistics and a time correlation function of Gaussian shape. In the case without assimilation, this allows for the analytical study of the covariance functions of the propagated signal anomaly, validated numerically with an ensemble method. In the case with assimilation at one observation point, where synthetic observations are generated by adding an error to a true state, the dynamics of the background error covariance functions is not straightforward, and a numerical approach using an EnKF algorithm is preferred. First, these numerical experiments show that both the background error variance and the correlation length scale are reduced at the observation point; this reduction is propagated downstream by the dynamics of the model. Then, it is shown that applying a Best Linear Unbiased Estimator (BLUE) algorithm with the background error covariance matrix converged from the EnKF provides the same results as the EnKF at a cheaper computational cost, thus allowing the use of data assimilation in the context of real-time flood forecasting. Moreover, it was demonstrated that the reduction of the background error correlation length scale and variance at the observation point depends on the observation error statistics. This feature is quantified by abacuses built from linear regressions over a limited set of EnKF experiments. These abacuses, which describe the background error variance and correlation length scale in the neighborhood of the observation point, combined with analytical expressions that describe them away from the observation point, provide parametrized models for the variance and the correlation length scale. Using this
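
    The analysis step at the heart of such an EnKF emulation is compact. The sketch below is a generic stochastic EnKF update with perturbed observations, not the study's flood-wave code; the state size, ensemble size, and observation error are invented. Note how the ensemble variance at the observed cell shrinks, which is the effect the study quantifies.

        import numpy as np

        def enkf_analysis(X, y, H, r, rng=np.random.default_rng(0)):
            """Stochastic EnKF analysis with perturbed observations.
            X: (n_state, n_ens) forecast ensemble; y: observations;
            H: observation operator; r: observation error variance."""
            n, m = X.shape
            Xp = X - X.mean(axis=1, keepdims=True)        # ensemble anomalies
            S = H @ Xp                                    # anomalies in obs space
            C = S @ S.T / (m - 1) + r * np.eye(len(y))    # innovation covariance
            K = (Xp @ S.T / (m - 1)) @ np.linalg.inv(C)   # Kalman gain
            Y = y[:, None] + np.sqrt(r) * rng.normal(size=(len(y), m))
            return X + K @ (Y - H @ X)                    # updated ensemble

        # One observation point on a 50-cell water-level state (toy numbers).
        rng = np.random.default_rng(1)
        X = 1.0 + 0.1 * rng.normal(size=(50, 32))
        H = np.zeros((1, 50))
        H[0, 25] = 1.0
        Xa = enkf_analysis(X, y=np.array([1.05]), H=H, r=0.01 ** 2)
        print("variance at obs point:", X[25].var(), "->", Xa[25].var())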

  5. A structural dynamic factor model for the effects of monetary policy estimated by the EM algorithm

    DEFF Research Database (Denmark)

    Bork, Lasse

    This paper applies the maximum-likelihood-based EM algorithm to a large-dimensional factor analysis of US monetary policy. Specifically, economy-wide effects of shocks to the US federal funds rate are estimated in a structural dynamic factor model in which 100+ US macroeconomic and financial time series [...] as opposed to the orthogonal factors resulting from the popular principal component approach to structural factor models. Correlated factors are economically more sensible and important for a richer monetary policy transmission mechanism. Secondly, I consider both static factor loadings as well as dynamic...

  6. A PISO-like algorithm to simulate superfluid helium flow with the two-fluid model

    CERN Document Server

    Soulaine, Cyprien; Allain, Hervé; Baudouy, Bertrand; Van Weelderen, Rob

    2015-01-01

    This paper presents a segregated algorithm to solve numerically the superfluid helium (He II) equations using the two-fluid model. In order to validate the resulting code and illustrate its potential, different simulations have been performed. First, the flow through a capillary filled with He II with a heated area on one side is simulated and results are compared to analytical solutions in both Landau and Gorter–Mellink flow regimes. Then, transient heat transfer of a forced flow of He II is investigated. Finally, some two-dimensional simulations in a porous medium model are carried out.

  7. Parameter Estimation of a Plucked String Synthesis Model Using a Genetic Algorithm with Perceptual Fitness Calculation

    Directory of Open Access Journals (Sweden)

    Riionheimo Janne

    2003-01-01

    Full Text Available We describe a technique for estimating the control parameters of a plucked string synthesis model using a genetic algorithm. The model has been used intensively for sound synthesis of various string instruments, but fine-tuning its parameters has previously been carried out with a semiautomatic method that requires hand adjustment guided by human listening. This paper describes an automated method for extracting the parameters from recorded tones. The calculation of the fitness function utilizes knowledge of the properties of human hearing.

  8. Genetic Algorithms for Agent-Based Infrastructure Interdependency Modeling and Analysis

    Energy Technology Data Exchange (ETDEWEB)

    May Permann

    2007-03-01

    Today’s society relies greatly upon an array of complex national and international infrastructure networks such as transportation, electric power, telecommunication, and financial networks. This paper describes initial research combining agent-based infrastructure modeling software and genetic algorithms (GAs) to help optimize infrastructure protection and restoration decisions. This research proposes to apply GAs to the problem of infrastructure modeling and analysis in order to determine the optimum assets to restore or protect from attack or other disaster. This research is just commencing and therefore the focus of this paper is the integration of a GA optimization method with a simulation through the simulation’s agents.

  9. Incorporating a Wheeled Vehicle Model in a New Monocular Visual Odometry Algorithm for Dynamic Outdoor Environments

    Science.gov (United States)

    Jiang, Yanhua; Xiong, Guangming; Chen, Huiyan; Lee, Dah-Jye

    2014-01-01

    This paper presents a monocular visual odometry algorithm that incorporates a wheeled vehicle model for ground vehicles. The main innovation of this algorithm is to use the single-track bicycle model to interpret the relationship between the yaw rate and the side slip angle, the two most important parameters describing the motion of a wheeled vehicle. Additionally, the pitch angle is also considered, since the planar-motion hypothesis often fails due to the dynamic characteristics of wheel suspensions and tires in real-world environments. Linearization is used to calculate a closed-form solution of the motion parameters that works as a hypothesis generator in a RAndom SAmple Consensus (RANSAC) scheme, reducing the complexity of solving equations involving trigonometric functions. All inliers found are used to refine the winning solution by minimizing the reprojection error. Finally, the algorithm is applied to real-time on-board visual localization applications. Its performance is evaluated by comparison against state-of-the-art monocular visual odometry methods using both synthetic data and publicly available datasets over several kilometers in dynamic outdoor environments. PMID:25256109
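
    The hypothesize-and-verify structure described above is generic RANSAC with the closed-form vehicle-model solution as the hypothesis generator. The sketch below shows that structure on a deliberately trivial one-parameter motion model; the data, threshold, and least-squares refinement are stand-ins for the paper's vehicle model and reprojection-error minimization.

        import numpy as np

        def ransac(data, solve, residual, n_min, thresh, iters=500,
                   rng=np.random.default_rng(0)):
            """Generic RANSAC loop: `solve` plays the role of the closed-form
            hypothesis generator; the final re-solve on all inliers stands in
            for the refinement stage."""
            best_inliers = np.zeros(len(data), bool)
            for _ in range(iters):
                sample = data[rng.choice(len(data), n_min, replace=False)]
                model = solve(sample)
                inliers = residual(model, data) < thresh
                if inliers.sum() > best_inliers.sum():
                    best_inliers = inliers
            return solve(data[best_inliers]), best_inliers

        # Toy 1-D motion "model": y = a*x, contaminated with gross outliers.
        rng = np.random.default_rng(2)
        x = rng.uniform(-1, 1, 200)
        y = 2.0 * x + 0.01 * rng.normal(size=200)
        y[:40] += rng.uniform(-3, 3, 40)
        data = np.column_stack([x, y])
        solve = lambda d: (d[:, 0] @ d[:, 1]) / (d[:, 0] @ d[:, 0])
        residual = lambda a, d: np.abs(d[:, 1] - a * d[:, 0])
        a_hat, inl = ransac(data, solve, residual, n_min=2, thresh=0.05)
        print(f"slope {a_hat:.3f} with {inl.sum()} inliers")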

  10. Image-Guided Voronoi Aesthetic Patterns with an Uncertainty Algorithm Based on Cloud Model

    Directory of Open Access Journals (Sweden)

    Tao Wu

    2016-01-01

    Full Text Available Tessellation-based art is an important technique for the computer-aided generation of aesthetic patterns, and the Voronoi diagram plays a key role in the preprocessing, whose uncertainty mechanism is still a challenge. However, existing techniques handle the uncertainty incompletely and unevenly, and the corresponding algorithms are not efficient enough for users to obtain results in real time. For a reference image, a Voronoi aesthetic pattern generation algorithm with uncertainty based on the cloud model is proposed, including uncertain line representation using an extended cloud model and Voronoi polygon approximation filling with uncertainty. Seven groups of experiments with different parameters and various experimental analyses are conducted. Compared with related algorithms, the proposed technique performs better in running time, and its time complexity is approximately linear in the size of the input image. The experimental results show that it can produce visual effects similar to frayed or cracked soil and has three advantages: uncertainty, simplicity, and efficiency. The proposal can be a powerful alternative to traditional methods and has prospective applications in digital entertainment, home decoration, clothing design, and various other fields.

  11. Incorporating a wheeled vehicle model in a new monocular visual odometry algorithm for dynamic outdoor environments.

    Science.gov (United States)

    Jiang, Yanhua; Xiong, Guangming; Chen, Huiyan; Lee, Dah-Jye

    2014-09-01

    This paper presents a monocular visual odometry algorithm that incorporates a wheeled vehicle model for ground vehicles. The main innovation of this algorithm is to use the single-track bicycle model to interpret the relationship between the yaw rate and the side slip angle, the two most important parameters describing the motion of a wheeled vehicle. Additionally, the pitch angle is also considered, since the planar-motion hypothesis often fails due to the dynamic characteristics of wheel suspensions and tires in real-world environments. Linearization is used to calculate a closed-form solution of the motion parameters that works as a hypothesis generator in a RAndom SAmple Consensus (RANSAC) scheme, reducing the complexity of solving equations involving trigonometric functions. All inliers found are used to refine the winning solution by minimizing the reprojection error. Finally, the algorithm is applied to real-time on-board visual localization applications. Its performance is evaluated by comparison against state-of-the-art monocular visual odometry methods using both synthetic data and publicly available datasets over several kilometers in dynamic outdoor environments.

  12. Incorporating a Wheeled Vehicle Model in a New Monocular Visual Odometry Algorithm for Dynamic Outdoor Environments

    Directory of Open Access Journals (Sweden)

    Yanhua Jiang

    2014-09-01

    Full Text Available This paper presents a monocular visual odometry algorithm that incorporates a wheeled vehicle model for ground vehicles. The main innovation of this algorithm is to use the single-track bicycle model to interpret the relationship between the yaw rate and the side slip angle, the two most important parameters describing the motion of a wheeled vehicle. Additionally, the pitch angle is also considered, since the planar-motion hypothesis often fails due to the dynamic characteristics of wheel suspensions and tires in real-world environments. Linearization is used to calculate a closed-form solution of the motion parameters that works as a hypothesis generator in a RAndom SAmple Consensus (RANSAC) scheme, reducing the complexity of solving equations involving trigonometric functions. All inliers found are used to refine the winning solution by minimizing the reprojection error. Finally, the algorithm is applied to real-time on-board visual localization applications. Its performance is evaluated by comparison against state-of-the-art monocular visual odometry methods using both synthetic data and publicly available datasets over several kilometers in dynamic outdoor environments.

  13. PyRosetta: a script-based interface for implementing molecular modeling algorithms using Rosetta.

    Science.gov (United States)

    Chaudhury, Sidhartha; Lyskov, Sergey; Gray, Jeffrey J

    2010-03-01

    PyRosetta is a stand-alone Python-based implementation of the Rosetta molecular modeling package that allows users to write custom structure prediction and design algorithms using the major Rosetta sampling and scoring functions. PyRosetta contains Python bindings to libraries that define Rosetta functions including those for accessing and manipulating protein structure, calculating energies and running Monte Carlo-based simulations. PyRosetta can be used in two ways: (i) interactively, using iPython and (ii) script-based, using Python scripting. Interactive mode contains a number of help features and is ideal for beginners while script-mode is best suited for algorithm development. PyRosetta has similar computational performance to Rosetta, can be easily scaled up for cluster applications and has been implemented for algorithms demonstrating protein docking, protein folding, loop modeling and design. PyRosetta is a stand-alone package available at http://www.pyrosetta.org under the Rosetta license which is free for academic and non-profit users. A tutorial, user's manual and sample scripts demonstrating usage are also available on the web site.

  14. An Algorithm for Modelling the Impact of the Judicial Conflict-Resolution Process on Construction Investment

    Directory of Open Access Journals (Sweden)

    Andrej Bugajev

    2018-01-01

    Full Text Available In this article, the modelling of the judicial conflict-resolution process is considered from a construction investor's point of view. Such modelling is important for improving risk management for construction investors and supports sustainable city development by informing the rules regulating the construction process. This raises the problem of evaluating different decisions and selecting the optimal one, followed by extraction of the resulting distribution. First, an example of such a process is analysed and schematically represented. Then it is formalised as a graph, described in the form of a decision graph with cycles. We use some natural properties of the problem and provide an algorithm to convert this graph into a tree. We then propose an algorithm to evaluate profits for the different scenarios, with time estimated by integrating an average daily-costs function. Afterwards, the optimisation problem is solved and the optimal investor strategy is obtained; this allows one to extract the construction project profit distribution, which can be used for further analysis by standard risk- and other information-evaluation techniques. The overall algorithmic complexity is analysed, a computational experiment is performed, and conclusions are formulated.

  15. Efficacy and tolerability of a high loading dose (25,000 IU weekly) vitamin D3 supplementation in obese children with vitamin D insufficiency/deficiency

    NARCIS (Netherlands)

    Radhakishun, Nalini N E; van Vliet, Mariska; Poland, Dennis C W; Weijer, Olivier; Beijnen, Jos H; Brandjes, Dees P M; Diamant, Michaela; von Rosenstiel, Ines A

    2014-01-01

    BACKGROUND: The recommended dose of vitamin D supplementation of 400 IU/day might be inadequate to treat obese children with vitamin D insufficiency. Therefore, we tested the efficacy and tolerability of a high loading dose vitamin D3 supplementation of 25,000 IU weekly in multiethnic obese

  16. An Evolutionary Search Algorithm for Covariate Models in Population Pharmacokinetic Analysis.

    Science.gov (United States)

    Yamashita, Fumiyoshi; Fujita, Atsuto; Sasa, Yukako; Higuchi, Yuriko; Tsuda, Masahiro; Hashida, Mitsuru

    2017-09-01

    Building a covariate model is a crucial task in population pharmacokinetics. This study develops a novel method for automated covariate modeling based on gene expression programming (GEP), which not only enables covariate selection, but also the construction of nonpolynomial relationships between pharmacokinetic parameters and covariates. To apply GEP to the extended nonlinear least squares analysis, the parameter consolidation and initial parameter value estimation algorithms were further developed and implemented. The entire program was coded in Java. The performance of the developed covariate model was evaluated for the population pharmacokinetic data of tobramycin. In comparison with the established covariate model, goodness-of-fit of the measured data was greatly improved by using only 2 additional adjustable parameters. Ten test runs yielded the same solution. In conclusion, the systematic exploration method is a potentially powerful tool for prescreening covariate models in population pharmacokinetic analysis. Copyright © 2017 American Pharmacists Association®. Published by Elsevier Inc. All rights reserved.

  17. Parameter identification of ZnO surge arrester models based on genetic algorithms

    Energy Technology Data Exchange (ETDEWEB)

    Bayadi, Abdelhafid [Laboratoire d' Automatique de Setif, Departement d' Electrotechnique, Faculte des Sciences de l' Ingenieur, Universite Ferhat ABBAS de Setif, Route de Bejaia Setif 19000 (Algeria)

    2008-07-15

    The correct and adequate modelling of ZnO surge arrester characteristics is very important for insulation coordination studies and system reliability. In this context, many researchers have devoted considerable effort to the development of surge arrester models that reproduce the dynamic characteristics observed when arresters are subjected to fast-front impulse currents. The difficulty with these models lies essentially in the calculation and adjustment of their parameters. This paper proposes a new technique based on a genetic algorithm to obtain the best possible set of parameter values for ZnO surge arrester models. The validity of the predicted parameters is then checked by comparing the predicted results with experimental results available in the literature. Using the ATP-EMTP package, an application of the arrester model to network system studies is presented and discussed. (author)

  18. Development of Mathematical Models for Investigating Maximal Power Point Tracking Algorithms

    Directory of Open Access Journals (Sweden)

    Dominykas Vasarevičius

    2012-04-01

    Full Text Available Solar cells generate maximum power only when the load is optimized according to insolation and module temperature; this function is performed by MPPT systems. While developing an MPPT, it is useful to create a mathematical model that allows the simulation of the different weather conditions affecting solar modules. Solar insolation, cloud cover imitation, and solar cell models have been created in the Matlab/Simulink environment. Comparing the simulated solar insolation on a cloudy day with measurements made using a pyranometer shows that the model generates signal changes governed by laws similar to those of a real-life signal. The model can generate solar insolation values in real time, which is useful for predicting the amount of electrical energy produced from solar power. The model can also operate on a stored signal, so that different MPPT algorithms can be compared. Article in Lithuanian
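
    A typical algorithm such an environment would exercise is perturb-and-observe MPPT. The sketch below is the textbook version, with a hypothetical single-peak P-V curve standing in for the Simulink solar-module model; the step size and starting voltage are arbitrary.

        import numpy as np

        def perturb_and_observe(pv_power, v0=17.0, dv=0.2, steps=100):
            """Classic perturb-and-observe MPPT: keep stepping the operating
            voltage in the direction that last increased power."""
            v, p_prev, direction = v0, pv_power(v0), 1.0
            history = []
            for _ in range(steps):
                v += direction * dv
                p = pv_power(v)
                if p < p_prev:               # power dropped: reverse perturbation
                    direction = -direction
                p_prev = p
                history.append((v, p))
            return history

        # Hypothetical single-peak P-V curve standing in for the module model.
        pv = lambda v: max(0.0, v * (5.0 - 0.05 * np.exp(0.25 * (v - 20.0))))
        trace = perturb_and_observe(pv)
        print("settled near V = %.2f, P = %.2f" % trace[-1])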

  19. Fuzzy rule base design using tabu search algorithm for nonlinear system modeling.

    Science.gov (United States)

    Bagis, Aytekin

    2008-01-01

    This paper presents an approach to fuzzy rule base design using a tabu search algorithm (TSA) for nonlinear system modeling. The TSA is used to evolve both the structure and the parameters of the fuzzy rule base. The use of the TSA, in conjunction with a systematic neighbourhood structure for determining the fuzzy rule base parameters, leads to a significant improvement in the performance of the model. To demonstrate the effectiveness of the presented method, several numerical examples from the literature are examined. The results obtained with the identified fuzzy rule bases are compared with those of other modeling approaches in the literature. The simulation results indicate that the TSA-based method provides a very effective procedure for fuzzy rule base design in the modeling of nonlinear or complex systems.
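
    The systematic-neighbourhood idea can be sketched in a few lines: move to the best non-tabu neighbour and keep a short memory of reversed moves. The objective, step size, and tenure below are invented, and the paper's actual fuzzy-rule-base encoding is not reproduced.

        import numpy as np

        def tabu_search(f, x0, step=0.1, tenure=4, iters=300):
            """Minimal tabu search over a continuous parameter vector."""
            x, best, tabu = x0.copy(), (f(x0), x0.copy()), []
            for _ in range(iters):
                # Systematic neighbourhood: +/- step along each coordinate.
                moves = [(j, s) for j in range(len(x)) for s in (-step, step)]
                candidates = [(f(x + s * np.eye(len(x))[j]), j, s)
                              for j, s in moves if (j, s) not in tabu]
                cost, j, s = min(candidates)         # best non-tabu neighbour
                x = x + s * np.eye(len(x))[j]
                tabu.append((j, -s))                 # forbid immediately undoing it
                if len(tabu) > tenure:
                    tabu.pop(0)
                if cost < best[0]:
                    best = (cost, x.copy())
            return best

        obj = lambda p: np.sum((p - 0.5) ** 2) + 0.1 * np.sin(8 * p).sum()
        cost, params = tabu_search(obj, np.zeros(4))
        print(cost, params)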

  20. Estimation of Contextual Effects through Nonlinear Multilevel Latent Variable Modeling with a Metropolis-Hastings Robbins-Monro Algorithm

    Science.gov (United States)

    Yang, Ji Seung; Cai, Li

    2014-01-01

    The main purpose of this study is to improve estimation efficiency in obtaining maximum marginal likelihood estimates of contextual effects in the framework of nonlinear multilevel latent variable model by adopting the Metropolis-Hastings Robbins-Monro algorithm (MH-RM). Results indicate that the MH-RM algorithm can produce estimates and standard…

  1. Algorithm development and verification of UASCM for multi-dimension and multi-group neutron kinetics model

    International Nuclear Information System (INIS)

    Si, S.

    2012-01-01

    The Universal Algorithm of Stiffness Confinement Method (UASCM) for neutron kinetics model of multi-dimensional and multi-group transport equations or diffusion equations has been developed. The numerical experiments based on transport theory code MGSNM and diffusion theory code MGNEM have demonstrated that the algorithm has sufficient accuracy and stability. (authors)

  2. Community Detection Algorithm Combining Stochastic Block Model and Attribute Data Clustering

    Science.gov (United States)

    Kataoka, Shun; Kobayashi, Takuto; Yasuda, Muneki; Tanaka, Kazuyuki

    2016-11-01

    We propose a new algorithm to detect the community structure in a network that utilizes both the network structure and vertex attribute data. Suppose we have the network structure together with the vertex attribute data, that is, the information assigned to each vertex associated with the community to which it belongs. The problem addressed in this paper is the detection of the community structure from the information in both the network structure and the vertex attribute data. Our approach is based on a Bayesian model of the posterior probability distribution of the community labels. The detection of the community structure is achieved using belief propagation and an EM algorithm. We numerically verified the performance of our method on computer-generated networks and real-world networks.

  3. Dynamic connectivity algorithms for Monte Carlo simulations of the random-cluster model

    Science.gov (United States)

    Metin Elçi, Eren; Weigel, Martin

    2014-05-01

    We review Sweeny's algorithm for Monte Carlo simulations of the random cluster model. Straightforward implementations suffer from the problem of computational critical slowing down, where the computational effort per edge operation scales with a power of the system size. By using a tailored dynamic connectivity algorithm we are able to perform all operations with a poly-logarithmic computational effort. This approach is shown to be efficient in keeping online connectivity information and is of use for a number of applications also beyond cluster-update simulations, for instance in monitoring droplet shape transitions. As the handling of the relevant data structures is non-trivial, we provide a Python module with a full implementation for future reference.
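The core query in Sweeny's single-bond update is whether an edge's endpoints stay connected when the edge is removed. A naive Python baseline using BFS (the O(N)-per-query approach that the tailored dynamic-connectivity structure replaces) might look like this; the adjacency layout is illustrative:

```python
from collections import deque

def connected(adj, u, v):
    """BFS check whether u and v are connected (naive baseline).

    Sweeny's single-edge update needs exactly this query: inserting or
    deleting an edge changes the cluster number k by 1 iff its endpoints
    are otherwise (dis)connected. The paper replaces this O(N) scan with
    a poly-logarithmic dynamic-connectivity structure.
    """
    if u == v:
        return True
    seen, queue = {u}, deque([u])
    while queue:
        x = queue.popleft()
        for y in adj[x]:
            if y == v:
                return True
            if y not in seen:
                seen.add(y)
                queue.append(y)
    return False

# adj maps vertex -> set of neighbours under the current bond configuration.
adj = {0: {1}, 1: {0, 2}, 2: {1}, 3: set()}
print(connected(adj, 0, 2), connected(adj, 0, 3))  # True False
```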

  4. Projection pursuit water quality evaluation model based on chicken swarm algorithm

    Science.gov (United States)

    Hu, Zhe

    2018-03-01

    In view of the uncertainty and ambiguity of each index in water quality evaluation, and to resolve the incompatibility among the evaluation results of individual water quality indexes, a projection pursuit model based on the chicken swarm algorithm is proposed. A projection index function that reflects the water quality condition is constructed; the chicken swarm algorithm (CSA) is introduced to optimize this function and find its best projection direction, and the resulting best projection values are used to evaluate water quality. Comparison with other methods shows that the approach is reasonable and feasible, providing a decision-making basis for water pollution control in the basin.

  5. Dynamic connectivity algorithms for Monte Carlo simulations of the random-cluster model

    International Nuclear Information System (INIS)

    Elçi, Eren Metin; Weigel, Martin

    2014-01-01

    We review Sweeny's algorithm for Monte Carlo simulations of the random cluster model. Straightforward implementations suffer from the problem of computational critical slowing down, where the computational effort per edge operation scales with a power of the system size. By using a tailored dynamic connectivity algorithm we are able to perform all operations with a poly-logarithmic computational effort. This approach is shown to be efficient in keeping online connectivity information and is of use for a number of applications also beyond cluster-update simulations, for instance in monitoring droplet shape transitions. As the handling of the relevant data structures is non-trivial, we provide a Python module with a full implementation for future reference.

  6. Model-independent nonlinear control algorithm with application to a liquid bridge experiment

    International Nuclear Information System (INIS)

    Petrov, V.; Haaning, A.; Muehlner, K.A.; Van Hook, S.J.; Swinney, H.L.

    1998-01-01

    We present a control method for high-dimensional nonlinear dynamical systems that can target remote unstable states without a priori knowledge of the underlying dynamical equations. The algorithm constructs a high-dimensional look-up table based on the system's responses to a sequence of random perturbations. The method is demonstrated by stabilizing unstable flow of a liquid bridge surface-tension-driven convection experiment that models the float zone refining process. Control of the dynamics is achieved by heating or cooling two thermoelectric Peltier devices placed in the vicinity of the liquid bridge surface. The algorithm routines along with several example programs written in the MATLAB language can be found at ftp://ftp.mathworks.com/pub/contrib/v5/control/nlcontrol. copyright 1998 The American Physical Society

  7. New Algorithms for Computing the Time-to-Collision in Freeway Traffic Simulation Models

    Directory of Open Access Journals (Sweden)

    Jia Hou

    2014-01-01

    Full Text Available Ways to estimate the time-to-collision are explored. In the context of traffic simulation models, classical lane-based notions of vehicle location are relaxed and new, fast, and efficient algorithms are examined. With trajectory conflicts being the main focus, computational procedures are explored which use a two-dimensional coordinate system to track the vehicle trajectories and assess conflicts. Vector-based kinematic variables are used to support the calculations. Algorithms based on boxes, circles, and ellipses are considered. Their performance is evaluated in terms of computational complexity and solution time. The results suggest that these procedures are both effective and efficient, and that a combined computation process is particularly effective.
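A circle-based time-to-collision computation reduces to solving a quadratic in the relative kinematics. The following Python sketch, with hypothetical names, illustrates the idea:

```python
import math

def ttc_circles(p1, v1, r1, p2, v2, r2):
    """Earliest time t >= 0 at which two circular vehicle footprints touch.

    Solves |dp + dv*t| = r1 + r2 for the relative motion; returns None if
    the trajectories never conflict.
    """
    dp = (p2[0] - p1[0], p2[1] - p1[1])
    dv = (v2[0] - v1[0], v2[1] - v1[1])
    r = r1 + r2
    a = dv[0] ** 2 + dv[1] ** 2
    b = 2.0 * (dp[0] * dv[0] + dp[1] * dv[1])
    c = dp[0] ** 2 + dp[1] ** 2 - r * r
    if c <= 0.0:
        return 0.0                      # already overlapping
    if a == 0.0:
        return None                     # same velocity, gap never closes
    disc = b * b - 4.0 * a * c
    if disc < 0.0:
        return None                     # closest approach still separated
    t = (-b - math.sqrt(disc)) / (2.0 * a)
    return t if t >= 0.0 else None

# Two vehicles on crossing paths:
print(ttc_circles((0, 0), (10, 0), 1.0, (30, -15), (0, 5), 1.0))
```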

  8. Model and algorithm for bi-fuel vehicle routing problem to reduce GHG emissions.

    Science.gov (United States)

    Abdoli, Behroz; MirHassani, Seyed Ali; Hooshmand, Farnaz

    2017-09-01

    Because of the harmful effects of greenhouse gas (GHG) emitted by petroleum-based fuels, the adoption of alternative green fuels such as biodiesel and compressed natural gas (CNG) is an inevitable trend in the transportation sector. However, the transition to alternative fuel vehicle (AFV) fleets is not easy and, particularly at the beginning of the transition period, drivers may be forced to travel long distances to reach alternative fueling stations (AFSs). In this paper, the utilization of bi-fuel vehicles is proposed as an operational approach. We present a mathematical model to address vehicle routing problem (VRP) with bi-fuel vehicles and show that the utilization of bi-fuel vehicles can lead to a significant reduction in GHG emissions. Moreover, a simulated annealing algorithm is adopted to solve large instances of this problem. The performance of the proposed algorithm is evaluated on some random instances.
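The abstract does not give the algorithm's parameters, but a generic simulated annealing skeleton of the kind adopted for large instances looks roughly as follows; the neighbour move and cost function are illustrative stand-ins for the paper's bi-fuel routing operators:

```python
import math
import random

def simulated_annealing(initial_route, cost, t0=100.0, alpha=0.995,
                        t_min=1e-3, seed=42):
    """Generic SA skeleton for a routing problem (illustrative only).

    `cost` would encode distance plus a GHG-emission penalty and the
    bi-fuel refueling constraints in the paper's setting; here it is an
    arbitrary callable on a permutation of customers.
    """
    rng = random.Random(seed)
    current = list(initial_route)
    best = list(current)
    t = t0
    while t > t_min:
        # 2-opt style neighbour: reverse a random segment of the route.
        i, j = sorted(rng.sample(range(len(current)), 2))
        candidate = current[:i] + current[i:j + 1][::-1] + current[j + 1:]
        delta = cost(candidate) - cost(current)
        if delta < 0 or rng.random() < math.exp(-delta / t):
            current = candidate
            if cost(current) < cost(best):
                best = list(current)
        t *= alpha
    return best

# Toy usage: minimize total absolute jumps between customer ids.
route = list(range(10))
random.shuffle(route)
print(simulated_annealing(route, lambda r: sum(abs(a - b) for a, b in zip(r, r[1:]))))
```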

  9. Thickness determination in textile material design: dynamic modeling and numerical algorithms

    International Nuclear Information System (INIS)

    Xu, Dinghua; Ge, Meibao

    2012-01-01

    Textile material design is of paramount importance in the study of functional clothing design. It is therefore important to determine the dynamic heat and moisture transfer characteristics in the human body–clothing–environment system, which directly determine the heat–moisture comfort level of the human body. Based on a model of dynamic heat and moisture transfer with condensation in porous fabric at low temperature, this paper presents a new inverse problem of textile thickness determination (IPTTD). Adopting the idea of the least-squares method, we formulate the IPTTD into a function minimization problem. By means of the finite-difference method, quasi-solution method and direct search method for one-dimensional minimization problems, we construct iterative algorithms of the approximated solution for the IPTTD. Numerical simulation results validate the formulation of the IPTTD and demonstrate the effectiveness of the proposed numerical algorithms. (paper)
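Since the inverse problem is reduced to a one-dimensional minimization solved by direct search, a golden-section search is one concrete instance of such a method; the sketch below uses a hypothetical objective standing in for the expensive forward heat-and-moisture solve:

```python
import math

def golden_section_minimize(f, a, b, tol=1e-6):
    """Direct 1D search to minimize a least-squares defect J(L) over the
    fabric thickness L; f stands in for the forward simulation followed
    by comparison with the target comfort data.
    """
    phi = (math.sqrt(5.0) - 1.0) / 2.0
    c, d = b - phi * (b - a), a + phi * (b - a)
    while abs(b - a) > tol:
        if f(c) < f(d):
            b, d = d, c
            c = b - phi * (b - a)
        else:
            a, c = c, d
            d = a + phi * (b - a)
    return 0.5 * (a + b)

# Toy objective with a minimum at L = 3.2 mm:
print(golden_section_minimize(lambda L: (L - 3.2) ** 2 + 0.1, 0.5, 10.0))
```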

  10. RNA secondary structure prediction with pseudoknots: Contribution of algorithm versus energy model.

    Science.gov (United States)

    Jabbari, Hosna; Wark, Ian; Montemagno, Carlo

    2018-01-01

    RNA is a biopolymer with various applications inside the cell and in biotechnology. The structure of an RNA molecule largely determines its function and is essential to guide nanostructure design. Since experimental structure determination is time-consuming and expensive, accurate computational prediction of RNA structure is of great importance. Prediction of RNA secondary structure is simpler than that of tertiary structure and provides information about the tertiary structure; it has therefore received considerable attention in past decades. Numerous methods with different folding approaches have been developed for RNA secondary structure prediction. While methods for predicting RNA pseudoknot-free structures (structures with no crossing base pairs) have greatly improved in accuracy, methods for predicting RNA pseudoknotted secondary structures (structures with crossing base pairs) still have room for improvement. A long-standing question for improving the prediction accuracy of RNA pseudoknotted secondary structure is whether to focus on the prediction algorithm or on the underlying energy model, as there is a trade-off between the computational cost of the prediction algorithm and the generality of the method. The aim of this work is to argue that, when comparing different methods for RNA pseudoknotted structure prediction, the combination of algorithm and energy model should be considered, and a method should not be judged superior or inferior to others unless they use the same scoring model. We demonstrate that while the folding approach is important in structure prediction, it is not the only important factor in the prediction accuracy of a given method, as the underlying energy model is of equally great value. We therefore encourage researchers to pay particular attention when comparing methods with different energy models.

  11. ALGORITHM OF PREPARATION OF THE TRAINING SAMPLE USING 3D-FACE MODELING

    Directory of Open Access Journals (Sweden)

    D. I. Samal

    2016-01-01

    Full Text Available An algorithm for preparing and sampling training data for a multiclass support vector machine (SVM) classifier is provided. The described approach is based on modeling possible changes in the facial features of the person to be recognized. Additional factors such as shooting perspective, lighting conditions, and tilt angles were introduced to improve identification results. These synthetically generated changes affect classifier learning by expanding the range of possible variations of the initial image, so that a classifier trained on the extended sample recognizes unknown objects better. The key parameters chosen for modeling are age, emotional expression, head turns, various lighting conditions, noise, and combinations of these parameters. The third-party software 'FaceGen', which models up to 150 parameters and is available as a free demo version, is used for 3D modeling. The SVM classifier was chosen to test the impact of the introduced modifications of the training sample. The preparation and preliminary processing of images comprises the following steps: detection and localization of the face in the image; assessment of rotation and inclination angles; extension of the pixel brightness range and histogram equalization to smooth the brightness and contrast characteristics of the processed images; scaling of the localized and processed face region; creation of a feature vector for the scaled and processed face image by principal component analysis (NIPALS algorithm); and training of the multiclass SVM classifier. The provided algorithm for expanding the training sample is oriented toward practical use and employs 3D models to extend the processed range of 2D photographs of faces, which positively affects identification results in a face recognition system. This approach allows to compensate

  12. Genetic Algorithm-Based Model Order Reduction of Aeroservoelastic Systems with Consistent States

    Science.gov (United States)

    Zhu, Jin; Wang, Yi; Pant, Kapil; Suh, Peter M.; Brenner, Martin J.

    2017-01-01

    This paper presents a model order reduction framework to construct linear parameter-varying reduced-order models of flexible aircraft for aeroservoelasticity analysis and control synthesis in broad two-dimensional flight parameter space. Genetic algorithms are used to automatically determine physical states for reduction and to generate reduced-order models at grid points within parameter space while minimizing the trial-and-error process. In addition, balanced truncation for unstable systems is used in conjunction with the congruence transformation technique to achieve locally optimal realization and weak fulfillment of state consistency across the entire parameter space. Therefore, aeroservoelasticity reduced-order models at any flight condition can be obtained simply through model interpolation. The methodology is applied to the pitch-plant model of the X-56A Multi-Use Technology Testbed currently being tested at NASA Armstrong Flight Research Center for flutter suppression and gust load alleviation. The present studies indicate that the reduced-order model with more than 12× reduction in the number of states relative to the original model is able to accurately predict system response among all input-output channels. The genetic-algorithm-guided approach exceeds manual and empirical state selection in terms of efficiency and accuracy. The interpolated aeroservoelasticity reduced order models exhibit smooth pole transition and continuously varying gains along a set of prescribed flight conditions, which verifies consistent state representation obtained by congruence transformation. The present model order reduction framework can be used by control engineers for robust aeroservoelasticity controller synthesis and novel vehicle design.

  13. A genetic-algorithm-aided stochastic optimization model for regional air quality management under uncertainty.

    Science.gov (United States)

    Qin, Xiaosheng; Huang, Guohe; Liu, Lei

    2010-01-01

    A genetic-algorithm-aided stochastic optimization (GASO) model was developed in this study for supporting regional air quality management under uncertainty. The model incorporated genetic algorithm (GA) and Monte Carlo simulation techniques into a general stochastic chance-constrained programming (CCP) framework and allowed uncertainties in simulation and optimization model parameters to be considered explicitly in the design of least-cost strategies. GA was used to seek the optimal solution of the management model by progressively evaluating the performances of individual solutions. Monte Carlo simulation was used to check the feasibility of each solution. A management problem in terms of regional air pollution control was studied to demonstrate the applicability of the proposed method. Results of the case study indicated the proposed model could effectively communicate uncertainties into the optimization process and generate solutions that contained a spectrum of potential air pollutant treatment options with risk and cost information. Decision alternatives could be obtained by analyzing tradeoffs between the overall pollutant treatment cost and the system-failure risk due to inherent uncertainties.
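The Monte Carlo feasibility check inside such a GA loop can be sketched as follows; the constraint function and the uncertainty distribution are hypothetical placeholders, not the paper's air-quality model:

```python
import numpy as np

def chance_feasible(solution, g, sampler, alpha=0.95, n=2000, seed=0):
    """Monte Carlo feasibility check for a chance constraint
    P[g(x, xi) <= 0] >= alpha, of the kind evaluated per GA candidate.
    """
    rng = np.random.default_rng(seed)
    ok = sum(g(solution, sampler(rng)) <= 0.0 for _ in range(n))
    return ok / n >= alpha

# Toy constraint: an emission level x times an uncertain transfer factor
# must stay below an air-quality standard of 10 units.
feasible = chance_feasible(
    solution=3.5,
    g=lambda x, xi: x * xi - 10.0,
    sampler=lambda rng: rng.normal(2.0, 0.3),
)
print(feasible)
```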

  14. Forecasting of ozone level in time series using MLP model with a novel hybrid training algorithm

    Science.gov (United States)

    Wang, Dong; Lu, Wei-Zhen

    Given the impact of tropospheric ozone (O3) on human health and plant life, forecasting its daily maximum level is of great importance in Hong Kong as well as in other metropolises around the world. This paper proposes a multi-layer perceptron (MLP) model with a novel hybrid training method to perform the forecasting task. The training method synergistically couples a stochastic particle swarm optimization (PSO) algorithm with a deterministic Levenberg-Marquardt (LM) algorithm, aiming to exploit the advantages of both. The performance of this hybrid model is further compared with that of MLP models trained individually by each of the two training methods. Based on data collected from two typical monitoring sites with different O3 formation and transportation mechanisms, the simulation results show that the hybrid model is more robust and efficient than the other two models, not only producing good results during non-episodes but also providing better consistency with the original data during episodes.

  15. Choosing processor array configuration by performance modeling for a highly parallel linear algebra algorithm

    International Nuclear Information System (INIS)

    Littlefield, R.J.; Maschhoff, K.J.

    1991-04-01

    Many linear algebra algorithms utilize an array of processors across which matrices are distributed. Given a particular matrix size and a maximum number of processors, what configuration of processors, i.e., what size and shape array, will execute the fastest? The answer to this question depends on tradeoffs between load balancing, communication startup and transfer costs, and computational overhead. In this paper we analyze in detail one algorithm: the blocked factored Jacobi method for solving dense eigensystems. A performance model is developed to predict execution time as a function of the processor array and matrix sizes, plus the basic computation and communication speeds of the underlying computer system. In experiments on a large hypercube (up to 512 processors), this model has been found to be highly accurate (mean error ∼ 2%) over a wide range of matrix sizes (10 x 10 through 200 x 200) and processor counts (1 to 512). The model reveals, and direct experiment confirms, that the tradeoffs mentioned above can be surprisingly complex and counterintuitive. We propose decision procedures based directly on the performance model to choose configurations for fastest execution. The model-based decision procedures are compared to a heuristic strategy and shown to be significantly better. 7 refs., 8 figs., 1 tab
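A minimal sketch of such a model-based decision procedure, assuming a toy execution-time model with placeholder machine constants rather than the paper's fitted values:

```python
def predicted_time(n, rows, cols, flop_rate=1e8, latency=1e-4, bandwidth=1e7):
    """Toy execution-time model T(config) for an n x n matrix on a
    rows x cols processor grid: a compute term plus row/column broadcast
    costs. The constants are placeholders, not the paper's measurements.
    """
    p = rows * cols
    compute = (2.0 * n ** 3 / p) / flop_rate          # flops per processor
    msgs = rows + cols                                 # broadcast stages
    words = n * n / p                                  # data per processor
    comm = msgs * (latency + words / bandwidth)
    return compute + comm

def best_configuration(n, max_procs):
    """Enumerate feasible grid shapes and pick the fastest predicted one."""
    configs = [(r, c) for r in range(1, max_procs + 1)
               for c in range(1, max_procs // r + 1)]
    return min(configs, key=lambda rc: predicted_time(n, *rc))

print(best_configuration(200, 512))
```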

  16. Solving inverse problem for Markov chain model of customer lifetime value using flower pollination algorithm

    Science.gov (United States)

    Al-Ma'shumah, Fathimah; Permana, Dony; Sidarto, Kuntjoro Adji

    2015-12-01

    Customer Lifetime Value (CLV) is an important and useful concept in marketing. One of its benefits is to help a company budget marketing expenditure for customer acquisition and customer retention. Many mathematical models have been introduced to calculate CLV under customer retention/migration classification schemes. A fairly new class of these models, described in this paper, uses Markov Chain Models (MCM). This class has the major advantage of being flexible enough to be adapted to several different cases and classification schemes. In these models, the probabilities of customer retention and acquisition play an important role. Following Pfeifer and Carraway (2000), the final CLV formula obtained from an MCM usually contains a nonlinear form of the transition probability matrix, and this nonlinearity makes the inverse problem of CLV difficult to solve. This paper aims to solve this inverse problem, yielding approximate transition probabilities for the customers, by applying the Flower Pollination Algorithm, a metaheuristic optimization algorithm developed by Yang (2013). The main use of the obtained transition probabilities is to set goals for marketing teams in keeping the relative frequencies of customer acquisition and customer retention.
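For the forward direction, a Pfeifer-Carraway-style CLV of a Markov chain model can be computed directly, which makes clear why the inverse problem in the transition probabilities is nonlinear; a Python sketch with illustrative numbers:

```python
import numpy as np

def clv(P, margins, rate, horizon=None):
    """Expected customer lifetime value per starting state for a Markov
    chain with transition matrix P, per-period margins m, and discount
    rate, following the Pfeifer-Carraway formulation cited above.

    Infinite horizon: V = (I - P/(1+rate))^{-1} m.
    """
    P = np.asarray(P, dtype=float)
    m = np.asarray(margins, dtype=float)
    D = P / (1.0 + rate)
    if horizon is None:
        return np.linalg.solve(np.eye(len(m)) - D, m)
    V = np.zeros_like(m)
    for _ in range(horizon + 1):          # sum_{t=0}^{horizon} D^t m
        V = m + D @ V
    return V

# Two states: active customer and lapsed customer (illustrative values).
P = [[0.7, 0.3],     # retained with prob. 0.7, lapses otherwise
     [0.1, 0.9]]     # re-acquired with prob. 0.1
print(clv(P, margins=[50.0, -5.0], rate=0.1))
```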

  17. Modeling of Energy Demand in the Greenhouse Using PSO-GA Hybrid Algorithms

    Directory of Open Access Journals (Sweden)

    Jiaoliao Chen

    2015-01-01

    Full Text Available Modeling energy demand in agricultural greenhouses is important for maintaining an optimal indoor environment for plant growth and for reducing energy consumption. This paper deals with parameter identification for a physical model of greenhouse energy demand using a hybrid particle swarm optimization and genetic algorithm technique (HPSO-GA). HPSO-GA is developed to estimate the indistinct internal parameters of the greenhouse energy model, which is built on a thermal balance. Experiments were conducted to measure environment and energy parameters in a cooled greenhouse with a surface-water-source heat pump system located in mid-east China. System identification experiments identify model parameters such as thermal inertias and heat transfer constants using HPSO-GA. The performance of HPSO-GA on parameter estimation is better than that of GA and PSO: the algorithm improves estimation accuracy while speeding up convergence and avoiding premature convergence. The system identification results show that HPSO-GA is reliable for solving parameter estimation problems when modeling greenhouse energy demand.

  18. Passenger route choice model and algorithm in the urban rail transit network

    Directory of Open Access Journals (Sweden)

    Ke Qiao

    2013-03-01

    Full Text Available Purpose: There are several routes between some OD pairs in an urban rail transit network. To allocate fares, operators use models to estimate which route passengers choose, but there are errors between estimated and actual choices. The aim of this study is to analyze passenger route choice behavior in detail based on passenger classification and to improve the models so that the results better match actual situations. Design/methodology/approach: In this paper, passengers are divided into familiar and unfamiliar types. Integrated travel impedance functions are first established for each type; a multi-route distribution model is then used to obtain initial route assignment results, and a ratio correction method corrects the results taking into account transfer times, crowding, and demand for seats. Finally, a case study of the Beijing rail transit network is presented. Findings: The numerical example shows that passenger classification is logical and that the model and algorithm are effective; the final route choice results are more comprehensive and realistic. Originality/value: The paper offers an improved model and algorithm, based on passenger classification, for passenger route choice in urban rail transit networks.
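A multinomial-logit split is a standard way to realize the initial multi-route distribution step; the following sketch shows the mechanics and is not the paper's exact formulation:

```python
import math

def route_shares(impedances, theta=1.0):
    """Multinomial-logit split of passengers across candidate routes.

    Lower integrated impedance -> larger share; theta controls how
    deterministic the choice is. Illustrative only.
    """
    utils = [math.exp(-theta * z) for z in impedances]
    total = sum(utils)
    return [u / total for u in utils]

# Three routes between an OD pair, with integrated impedances
# (in-vehicle time plus transfer and crowding penalties, in minutes):
print(route_shares([34.0, 37.5, 41.0], theta=0.25))
```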

  19. A new algorithm for least-cost path analysis by correcting digital elevation models of natural landscapes

    Science.gov (United States)

    Baek, Jieun; Choi, Yosoon

    2017-04-01

    Most algorithms for least-cost path analysis calculate the slope gradient between the source cell and adjacent cells to reflect terrain-slope weights in the travel costs. However, these algorithms cannot analyze the least-cost path between two cells when obstacle cells with very high or low terrain elevation exist between the source and target cells. This study presents a new algorithm for least-cost path analysis that corrects digital elevation models of natural landscapes to find paths satisfying a maximum or minimum slope-gradient constraint. The new algorithm calculates the slope gradient between the center cell and non-adjacent cells using the concept of extended move-sets. If the algorithm finds paths between the center cell and non-adjacent cells that satisfy the slope constraint, the terrain elevation of obstacle cells lying between the two cells is corrected in the digital elevation model. After calculating the cumulative travel costs to the destination, weighted by the difference between the original and corrected elevations, the algorithm determines the least-cost path. Applying the proposed algorithm to synthetic and real-world data sets demonstrates that it provides more accurate least-cost paths than conventional algorithms implemented in commercial GIS software such as ArcGIS.

  20. Using genetic algorithm and TOPSIS for Xinanjiang model calibration with a single procedure

    Science.gov (United States)

    Cheng, Chun-Tian; Zhao, Ming-Yan; Chau, K. W.; Wu, Xin-Yu

    2006-01-01

    Genetic Algorithm (GA) is globally oriented in searching and thus useful in optimizing multiobjective problems, especially where the objective functions are ill-defined. Conceptual rainfall-runoff models, which aim at predicting streamflow from knowledge of precipitation over a catchment, have become a basic tool for flood forecasting. The parameter calibration of a conceptual model usually involves multiple criteria for judging performance against observed data, yet it is often difficult to derive all objective functions for the parameter calibration problem. Thus, a new method for the multiple-criteria parameter calibration problem, which combines GA with TOPSIS (technique for order performance by similarity to ideal solution) for the Xinanjiang model, is presented. This study is a direct further development of the authors' previous research (Cheng, C.T., Ou, C.P., Chau, K.W., 2002. Combining a fuzzy optimal model with a genetic algorithm to solve multi-objective rainfall-runoff model calibration. Journal of Hydrology, 268, 72-86), whose main disadvantage is that the procedure is split into two parts, making it difficult to grasp the overall best behavior of the model during calibration. The current method integrates the two parts of Xinanjiang rainfall-runoff model calibration, simplifying the calibration and validation procedures and demonstrating the intrinsic behavior of the observed data more completely. Comparison with the two-step procedure shows that the current methodology gives similar results, is likewise feasible and robust, but is simpler and easier to apply in practice.
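The TOPSIS step that ranks candidate parameter sets by closeness to the ideal solution can be sketched in a few lines of Python; the criteria, weights, and scores below are illustrative:

```python
import numpy as np

def topsis(scores, weights, benefit):
    """Rank GA candidate parameter sets by TOPSIS closeness.

    scores:  (n_alternatives, n_criteria) performance matrix
    weights: criterion weights summing to 1
    benefit: True where larger is better (e.g. a fit statistic),
             False where smaller is better (e.g. volume error)
    """
    X = np.asarray(scores, dtype=float)
    norm = X / np.linalg.norm(X, axis=0)          # vector normalization
    V = norm * np.asarray(weights)
    ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
    anti = np.where(benefit, V.min(axis=0), V.max(axis=0))
    d_best = np.linalg.norm(V - ideal, axis=1)
    d_worst = np.linalg.norm(V - anti, axis=1)
    return d_worst / (d_best + d_worst)           # closeness in [0, 1]

# Three calibrated parameter sets scored on two criteria:
closeness = topsis([[0.85, 0.12], [0.88, 0.20], [0.80, 0.08]],
                   weights=[0.6, 0.4], benefit=[True, False])
print(closeness.argsort()[::-1])                  # best candidate first
```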

  1. Genetic Algorithms for Optimization of Machine-learning Models and their Applications in Bioinformatics

    KAUST Repository

    Magana-Mora, Arturo

    2017-04-29

    Machine-learning (ML) techniques have been widely applied to solve different problems in biology. However, biological data are large and complex, which often results in extremely intricate ML models. Frequently, these models may have poor performance or may be computationally unfeasible. This study presents a set of novel computational methods and focuses on the application of genetic algorithms (GAs) for the simplification and optimization of ML models and their applications to biological problems. The dissertation addresses the following three challenges. The first is to develop a generalizable classification methodology able to systematically derive competitive models despite the complexity and nature of the data. Although several algorithms for the induction of classification models have been proposed, the algorithms are data dependent. Consequently, we developed OmniGA, a novel and generalizable framework that uses different classification models in a tree-like decision structure, along with a parallel GA for the optimization of the OmniGA structure. Results show that OmniGA consistently outperformed existing commonly used classification models. The second challenge is the prediction of translation initiation sites (TIS) in plant genomic DNA. We performed a statistical analysis of the genomic DNA and proposed a new set of discriminant features for this problem. We developed a wrapper method based on GAs for selecting an optimal feature subset, which, in conjunction with a classification model, produced the most accurate framework for the recognition of TIS in plants. Results demonstrate that despite the evolutionary distance between different plants, our approach successfully identified conserved genomic elements that may serve as the starting point for the development of a generic model for prediction of TIS in eukaryotic organisms. Finally, the third challenge is the accurate prediction of polyadenylation signals in human genomic DNA. To achieve

  2. An Optimization Model and Modified Harmony Search Algorithm for Microgrid Planning with ESS

    Directory of Open Access Journals (Sweden)

    Yang Jiao

    2017-01-01

    Full Text Available To address problems such as the high cost of microgrids (MGs), the balance between supply and demand, and the stability of system operation, an optimized MG planning model incorporating an energy storage system (ESS) is proposed and solved with a harmony search algorithm (HSA). First, the conventional MG planning optimization model is constructed and the constraint conditions are defined: the supply-demand balance and reserve requirements. Second, an ESS is integrated into the optimal MG planning model; the model with an ESS can identify parameters such as the optimal power, optimal capacity, and optimal installation year. Third, the convergence speed and robustness of the algorithm are improved. A case study comprising three different cases concludes the paper. The results show that the modified HSA (MHSA) can effectively improve the stability and economy of MG operation with an ESS.

  3. A genetic algorithm-based job scheduling model for big data analytics.

    Science.gov (United States)

    Lu, Qinghua; Li, Shanshan; Zhang, Weishan; Zhang, Lei

    Big data analytics (BDA) applications are a new category of software applications that process large amounts of data using scalable parallel processing infrastructure to obtain hidden value. Hadoop is the most mature open-source big data analytics framework, which implements the MapReduce programming model to process big data with MapReduce jobs. Big data analytics jobs are often continuous and not mutually separated. The existing work mainly focuses on executing jobs in sequence, which are often inefficient and consume high energy. In this paper, we propose a genetic algorithm-based job scheduling model for big data analytics applications to improve the efficiency of big data analytics. To implement the job scheduling model, we leverage an estimation module to predict the performance of clusters when executing analytics jobs. We have evaluated the proposed job scheduling model in terms of feasibility and accuracy.

  4. Single Channel Quantum Color Image Encryption Algorithm Based on HSI Model and Quantum Fourier Transform

    Science.gov (United States)

    Gong, Li-Hua; He, Xiang-Tao; Tan, Ru-Chao; Zhou, Zhi-Hong

    2018-01-01

    In order to obtain high-quality color images, it is important to keep the hue component unchanged while emphasizing the intensity or saturation component. As a common color model, the Hue-Saturation-Intensity (HSI) model is widely used in image processing. A new single-channel quantum color image encryption algorithm based on the HSI model and the quantum Fourier transform (QFT) is investigated, in which the color components of the original image are converted to HSI and a logistic map is employed to diffuse the relationships between pixels in the color components. Subsequently, the quantum Fourier transform is exploited to complete the encryption. The cipher-text is a combination of a gray image and a phase matrix. Simulations and theoretical analyses demonstrate that the proposed single-channel quantum color image encryption scheme based on the HSI model and quantum Fourier transform is secure and effective.
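The logistic-map diffusion stage, taken on its own, can be sketched classically in Python; the HSI conversion and QFT stages are omitted, and the key parameters are illustrative:

```python
import numpy as np

def logistic_keystream(x0, r, n, burn_in=100):
    """Chaotic logistic-map sequence x_{k+1} = r x_k (1 - x_k), quantized
    to bytes, usable to diffuse the pixel values of a colour component.
    """
    x = x0
    out = np.empty(n, dtype=np.uint8)
    for _ in range(burn_in):              # discard the transient
        x = r * x * (1.0 - x)
    for k in range(n):
        x = r * x * (1.0 - x)
        out[k] = int(x * 256) % 256
    return out

pixels = np.arange(16, dtype=np.uint8)    # stand-in intensity component
key = logistic_keystream(x0=0.3701, r=3.99, n=pixels.size)
cipher = pixels ^ key                     # XOR diffusion
print((cipher ^ key == pixels).all())     # decryption recovers the image
```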

  5. Application of BP Neural Network Algorithm in Traditional Hydrological Model for Flood Forecasting

    Directory of Open Access Journals (Sweden)

    Jianjin Wang

    2017-01-01

    Full Text Available Flooding causes tremendous damage every year; more accurate forecasting may significantly mitigate the damage and loss caused by flood disasters. Current hydrological models are either purely knowledge-based or data-driven. A combination of a data-driven method (artificial neural networks, in this paper) and a knowledge-based method (a traditional hydrological model) may boost simulation accuracy. In this study, we propose a new back-propagation (BP) neural network algorithm and apply it in the semi-distributed Xinanjiang (XAJ) model. The improved hydrological model is capable of updating the flow forecasting error without losing lead time. The proposed method was tested in a real case study for both single-period corrections and real-time corrections. The results reveal that the proposed method can significantly increase the accuracy of flood forecasting and indicate that its global correction effect is superior to that of the second-order autoregressive correction method in real-time correction.
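One simple way to realize such an error-updating scheme, sketched here with scikit-learn rather than the paper's own BP implementation, is to train a small network to predict the next forecast error from recent ones and add that prediction to the raw forecast:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

def fit_error_corrector(errors, lag=3):
    """Train a small BP network to predict the next forecast error from
    the `lag` most recent errors, so the hydrological forecast can be
    corrected without losing lead time. Illustrative stand-in only.
    """
    X = np.array([errors[i:i + lag] for i in range(len(errors) - lag)])
    y = np.array(errors[lag:])
    net = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0)
    net.fit(X, y)
    return net

# Synthetic, persistent error series from past forecast periods:
rng = np.random.default_rng(1)
e = list(rng.normal(0, 1, 4))
for _ in range(200):
    e.append(0.8 * e[-1] + rng.normal(0, 0.2))

net = fit_error_corrector(e)
corrected = 123.0 + net.predict([e[-3:]])[0]   # raw forecast + predicted error
print(round(corrected, 2))
```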

  6. Comparative evaluation of fuzzy logic and genetic algorithms models for portfolio optimization

    Directory of Open Access Journals (Sweden)

    Heidar Masoumi Soureh

    2017-03-01

    Full Text Available Selecting optimal methods with appropriate speed and precision for planning and decision-making has always been a challenge for investors and managers. One of their most important concerns is investment planning and optimization: acquiring desirable wealth under controlled risk with the best return. This paper proposes a model based on Markowitz theory, subject to the aforementioned limitations, to support effective portfolio selection decisions. The model is then solved with fuzzy logic and with genetic algorithms to optimize portfolios of selected active companies listed on the Tehran Stock Exchange over the period 2012-2016, and the results of the two models are discussed. The results show that the two models differ functionally in portfolio optimization and in their tools, that they can complement each other, and that these differences affect their selection.

  7. Supplier selection based on a neural network model using genetic algorithm.

    Science.gov (United States)

    Golmohammadi, Davood; Creese, Robert C; Valian, Haleh; Kolassa, John

    2009-09-01

    In this paper, a decision-making model was developed to select suppliers using neural networks (NNs). This model used historical supplier performance data for selection of vendor suppliers. Input and output were designed in a unique manner for training purposes. The managers' judgments about suppliers were simulated by using a pairwise comparisons matrix for output estimation in the NN. To obtain the benefit of a search technique for model structure and training, genetic algorithm (GA) was applied for the initial weights and architecture of the network. The suppliers' database information (input) can be updated over time to change the suppliers' score estimation based on their performance. The case study illustrated shows how the model can be applied for suppliers' selection.

  8. Exploration of a physiologically-inspired hearing-aid algorithm using a computer model mimicking impaired hearing.

    Science.gov (United States)

    Jürgens, Tim; Clark, Nicholas R; Lecluyse, Wendy; Meddis, Ray

    2016-01-01

    To use a computer model of impaired hearing to explore the effects of a physiologically-inspired hearing-aid algorithm on a range of psychoacoustic measures. A computer model of a hypothetical impaired listener's hearing was constructed by adjusting parameters of a computer model of normal hearing. Absolute thresholds, estimates of compression, and frequency selectivity (summarized to a hearing profile) were assessed using this model with and without pre-processing the stimuli by a hearing-aid algorithm. The influence of different settings of the algorithm on the impaired profile was investigated. To validate the model predictions, the effect of the algorithm on hearing profiles of human impaired listeners was measured. A computer model simulating impaired hearing (total absence of basilar membrane compression) was used, and three hearing-impaired listeners participated. The hearing profiles of the model and the listeners showed substantial changes when the test stimuli were pre-processed by the hearing-aid algorithm. These changes consisted of lower absolute thresholds, steeper temporal masking curves, and sharper psychophysical tuning curves. The hearing-aid algorithm affected the impaired hearing profile of the model to approximate a normal hearing profile. Qualitatively similar results were found with the impaired listeners' hearing profiles.

  9. Modeling Self-Healing of Concrete Using Hybrid Genetic Algorithm-Artificial Neural Network.

    Science.gov (United States)

    Ramadan Suleiman, Ahmed; Nehdi, Moncef L

    2017-02-07

    This paper presents an approach to predicting the intrinsic self-healing of concrete using a hybrid genetic algorithm-artificial neural network (GA-ANN). A genetic algorithm was implemented in the network as a stochastic tool for optimizing the initial weights and biases. This approach can assist the network in achieving a global optimum and avoid the possibility of the network getting trapped at local optima. The proposed model was trained and validated using a database specially built from various experimental studies retrieved from the open literature. The model inputs include the cement content, water-to-cement ratio (w/c), type and dosage of supplementary cementitious materials, bio-healing materials, and both expansive and crystalline additives. Self-healing, indicated by means of crack width, is the model output. The results showed that the proposed GA-ANN model is capable of capturing the complex effects of various self-healing agents (e.g., biochemical material, silica-based additive, expansive and crystalline components) on the self-healing performance of cement-based materials.

  10. A Linked Simulation-Optimization (LSO) Model for Conjunctive Irrigation Management using Clonal Selection Algorithm

    Science.gov (United States)

    Islam, Sirajul; Talukdar, Bipul

    2016-09-01

    A Linked Simulation-Optimization (LSO) model based on a Clonal Selection Algorithm (CSA) was formulated for application in conjunctive irrigation management. A series of measures was considered for reducing the computational burden associated with the LSO approach. Certain modifications were made to the formulated CSA so as to decrease the number of function evaluations. In addition, a simple problem-specific code for a two-dimensional groundwater flow simulation model was developed. The flow model was further simplified by a novel area-reduction approach, in order to save computational time in simulation. The LSO model was applied in the irrigation command of the Pagladiya Dam Project in Assam, India. To evaluate the performance of the CSA, a Genetic Algorithm (GA) was used as a comparison base. The results from the CSA compared well with those from the GA. In fact, owing to the modifications made to it, the CSA was found to consume less computational time than the GA while converging to the optimal solution.

  11. Pressure Model of Control Valve Based on LS-SVM with the Fruit Fly Algorithm

    Directory of Open Access Journals (Sweden)

    Huang Aiqin

    2014-07-01

    Full Text Available The control valve is an essential terminal control component that is hard to model by traditional methodologies because of its complexity and nonlinearity. This paper proposes a new method for modeling the upstream pressure of a control valve using the least squares support vector machine (LS-SVM), which has been successfully used to identify nonlinear systems. To improve the modeling performance, the fruit fly optimization algorithm (FOA) is used to optimize two critical parameters of the LS-SVM. As an example, a set of actual production data from a chlorine control system in a salt chemistry plant is applied. The validity of the FOA-optimized LS-SVM modeling method is verified by comparing the predicted results with the actual data, with an MSE of 2.474 × 10−3. Moreover, it is demonstrated that the initial position of the FOA does not affect its optimization ability. For comparison, simulation experiments based on the PSO algorithm and the grid search method are also carried out. The results show that LS-SVM based on FOA achieves equal prediction accuracy; however, in terms of calculation time, FOA has a significant advantage and is more suitable for online prediction.

  12. Modeling the cooling performance of vortex tube using a genetic algorithm-based artificial neural network

    Directory of Open Access Journals (Sweden)

    Pouraria Hassan

    2016-01-01

    Full Text Available In this study, artificial neural networks (ANNs) have been used to model the effects of four important parameters, namely the length-to-diameter ratio (L/D), the ratio of the cold outlet diameter to the tube diameter (d/D), the inlet pressure (P), and the cold mass fraction (Y), on the cooling performance of a counter-flow vortex tube. In this approach, experimental data have been used to train and validate the neural network model in MATLAB. A genetic algorithm (GA) has also been used to find the optimal network architecture. In this model, the temperature drop at the cold outlet is taken as the cooling performance of the vortex tube. Based on experimental data, the cooling performance has been predicted from the four inlet parameters (L/D, d/D, P, Y). The results indicate that the genetic-algorithm-based artificial neural network model is capable of predicting the cooling performance of the vortex tube over a wide operating range and with satisfactory precision.

  13. Development of Predictive QSAR Models of 4-Thiazolidinones Antitrypanosomal Activity using Modern Machine Learning Algorithms.

    Science.gov (United States)

    Kryshchyshyn, Anna; Devinyak, Oleg; Kaminskyy, Danylo; Grellier, Philippe; Lesyk, Roman

    2017-11-14

    This paper presents novel QSAR models for the prediction of antitrypanosomal activity among thiazolidines and related heterocycles. The performance of four machine learning algorithms (Random Forest regression, Stochastic gradient boosting, Multivariate adaptive regression splines, and Gaussian processes regression) has been studied in order to reach better levels of predictivity. The results for Random Forest and Gaussian processes regression are comparable and outperform the other studied methods. Preliminary descriptor selection with the Boruta method improved the outcome of the machine learning methods. The two novel QSAR models developed with the Random Forest and Gaussian processes regression algorithms have good predictive ability, as proved by external evaluation on the test set, with corresponding Q²ext = 0.812 and Q²ext = 0.830. The obtained models can be used for in silico screening of virtual libraries in the same chemical domain in order to find new antitrypanosomal agents. Thorough analysis of descriptor influence in the QSAR models and interpretation of their chemical meaning highlight a number of structure-activity relationships. The presence of phenyl rings with electron-withdrawing atoms or groups in the para-position, an increased number of aromatic rings, high branching with short chains, high HOMO energy, and the introduction of a 1-substituted 2-indolyl fragment into the molecular structure have been recognized as prerequisites for trypanocidal activity. © 2017 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.
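A sketch of the Random Forest variant with external-set validation, using synthetic stand-in descriptors and activities rather than the paper's dataset; Q²ext is computed as 1 - PRESS/SS against the training-set mean:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def q2_ext(y_train, y_test, y_pred):
    """External predictivity: 1 - PRESS / sum((y_test - mean(y_train))^2)."""
    press = np.sum((y_test - y_pred) ** 2)
    ss = np.sum((y_test - np.mean(y_train)) ** 2)
    return 1.0 - press / ss

# Hypothetical descriptor matrices for training and external test sets:
rng = np.random.default_rng(0)
X_tr, X_te = rng.normal(size=(80, 10)), rng.normal(size=(20, 10))
w = rng.normal(size=10)
y_tr = X_tr @ w + rng.normal(0, 0.1, 80)    # stand-in activity values
y_te = X_te @ w + rng.normal(0, 0.1, 20)

model = RandomForestRegressor(n_estimators=500, random_state=0).fit(X_tr, y_tr)
print(round(q2_ext(y_tr, y_te, model.predict(X_te)), 3))
print(model.feature_importances_.argsort()[::-1][:3])  # top descriptors
```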

  14. Enhancements to AERMOD's building downwash algorithms based on wind-tunnel and Embedded-LES modeling

    Science.gov (United States)

    Monbureau, E. M.; Heist, D. K.; Perry, S. G.; Brouwer, L. H.; Foroutan, H.; Tang, W.

    2018-04-01

    Knowing the fate of effluent from an industrial stack is important for assessing its impact on human health. AERMOD is one of several Gaussian plume models containing algorithms to evaluate the effect of buildings on the movement of the effluent from a stack. The goal of this study is to improve AERMOD's ability to accurately model important and complex building downwash scenarios by incorporating knowledge gained from a recently completed series of wind tunnel studies and complementary large eddy simulations of flow and dispersion around simple structures for a variety of building dimensions, stack locations, stack heights, and wind angles. This study presents three modifications to the building downwash algorithm in AERMOD that improve the physical basis and internal consistency of the model, and one modification to AERMOD's building pre-processor to better represent elongated buildings in oblique winds. These modifications are demonstrated to improve the ability of AERMOD to model observed ground-level concentrations in the vicinity of a building for the variety of conditions examined in the wind tunnel and numerical studies.

  15. Trust-region based return mapping algorithm for implicit integration of elastic-plastic constitutive models

    Energy Technology Data Exchange (ETDEWEB)

    Lester, Brian [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Scherzinger, William [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2017-01-19

    Here, a new method for the solution of the non-linear equations forming the core of constitutive model integration is proposed. Specifically, the trust-region method that has been developed in the numerical optimization community is successfully modified for use in implicit integration of elastic-plastic models. Although attention here is restricted to these rate-independent formulations, the proposed approach holds substantial promise for adoption with models incorporating complex physics, multiple inelastic mechanisms, and/or multiphysics. As a first step, the non-quadratic Hosford yield surface is used as a representative case to investigate computationally challenging constitutive models. The theory and implementation are presented, discussed, and compared to other common integration schemes. Multiple boundary value problems are studied and used to verify the proposed algorithm and demonstrate the capabilities of this approach over more common methodologies. Robustness and speed are then investigated and compared to existing algorithms. Through these efforts, it is shown that the utilization of a trust-region approach leads to superior performance versus a traditional closest-point projection Newton-Raphson method and comparable speed and robustness to a line search augmented scheme.
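For orientation, the scalar core of a conventional return mapping (the closest-point projection that the trust-region approach is designed to robustify) can be written, for von Mises plasticity with linear isotropic hardening, as a one-unknown Newton solve. This sketch is illustrative only and far simpler than the non-quadratic Hosford case studied in the paper:

```python
def radial_return(q_trial, G, sigma_y0, H, eps_p, tol=1e-10, max_iter=50):
    """Newton solve of the scalar return-mapping equation
        f(dg) = q_trial - 3 G dg - (sigma_y0 + H (eps_p + dg)) = 0
    for the plastic multiplier increment dg (von Mises, linear hardening).
    """
    dg = 0.0
    for _ in range(max_iter):
        f = q_trial - 3.0 * G * dg - (sigma_y0 + H * (eps_p + dg))
        if abs(f) < tol:
            return dg
        dg -= f / (-3.0 * G - H)          # f'(dg) is constant here
    raise RuntimeError("return mapping did not converge")

# Elastic trial stress beyond the yield surface (placeholder moduli, MPa):
print(radial_return(q_trial=350.0, G=30000.0, sigma_y0=250.0,
                    H=1.0e3, eps_p=0.0))
```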

  16. Trust-region based return mapping algorithm for implicit integration of elastic-plastic constitutive models

    Energy Technology Data Exchange (ETDEWEB)

    Lester, Brian T. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Scherzinger, William M. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2017-01-19

    A new method for the solution of the non-linear equations forming the core of constitutive model integration is proposed. Specifically, the trust-region method that has been developed in the numerical optimization community is successfully modified for use in implicit integration of elastic-plastic models. Although attention here is restricted to these rate-independent formulations, the proposed approach holds substantial promise for adoption with models incorporating complex physics, multiple inelastic mechanisms, and/or multiphysics. As a first step, the non-quadratic Hosford yield surface is used as a representative case to investigate computationally challenging constitutive models. The theory and implementation are presented, discussed, and compared to other common integration schemes. Multiple boundary value problems are studied and used to verify the proposed algorithm and demonstrate the capabilities of this approach over more common methodologies. Robustness and speed are then investigated and compared to existing algorithms. Through these efforts, it is shown that the utilization of a trust-region approach leads to superior performance versus a traditional closest-point projection Newton-Raphson method and comparable speed and robustness to a line search augmented scheme.

  17. Evaluation of Aerosol Optical Depth and Aerosol Models from VIIRS Retrieval Algorithms over North China Plain

    Directory of Open Access Journals (Sweden)

    Jun Zhu

    2017-05-01

    Full Text Available The first Visible Infrared Imaging Radiometer Suite (VIIRS) was launched on the Suomi National Polar-orbiting Partnership (S-NPP) satellite in late 2011. Similar to the Moderate Resolution Imaging Spectroradiometer (MODIS), VIIRS observes top-of-atmosphere spectral reflectance and is potentially suitable for retrieval of the aerosol optical depth (AOD). The VIIRS Environmental Data Record data (VIIRS_EDR) is produced operationally by NOAA and is based on the MODIS atmospheric correction algorithm. The "MODIS-like" VIIRS data (VIIRS_ML) are being produced experimentally at NASA from a version of the "dark-target" algorithm that is applied to MODIS. In this study, the AOD and aerosol model types from these two VIIRS retrieval algorithms over the North China Plain (NCP) are evaluated using ground-based CE318 Sunphotometer (CE318) measurements during 2 May 2012-31 March 2014 at three sites. These sites represent three different surface types: urban (Beijing), suburban (XiangHe), and rural (Xinglong). Firstly, we evaluate the retrieved spectral AOD. For the three sites, VIIRS_EDR AOD at 550 nm shows a positive mean bias (MB) of 0.04-0.06 and a correlation of 0.83-0.86, with the largest MB (0.10-0.15) observed in Beijing. In contrast, VIIRS_ML AOD at 550 nm has an overall higher positive MB of 0.13-0.14 and a higher correlation (0.93-0.94) with CE318 AOD. Secondly, we evaluate the aerosol model types assumed by each algorithm, as well as the aerosol optical properties used in the AOD retrievals. The aerosol model used in the VIIRS_EDR algorithm shows that dust and clean urban models were the dominant model types during the evaluation period. The overall accuracy rate of the aerosol model used in VIIRS_ML over the three NCP sites (0.48) is higher than that of VIIRS_EDR (0.27). The differences in Single Scattering Albedo (SSA) at 670 nm between VIIRS_ML and CE318 are mostly less than 0.015, but high seasonal differences are found especially over the Xinglong

  18. Effective application of improved profit-mining algorithm for the interday trading model.

    Science.gov (United States)

    Hsieh, Yu-Lung; Yang, Don-Lin; Wu, Jungpin

    2014-01-01

    Many real world applications of association rule mining from large databases help users make better decisions. However, they do not work well in financial markets at this time. In addition to a high profit, an investor also looks for a low risk trading with a better rate of winning. The traditional approach of using minimum confidence and support thresholds needs to be changed. Based on an interday model of trading, we proposed effective profit-mining algorithms which provide investors with profit rules including information about profit, risk, and winning rate. Since profit-mining in the financial market is still in its infant stage, it is important to detail the inner working of mining algorithms and illustrate the best way to apply them. In this paper we go into details of our improved profit-mining algorithm and showcase effective applications with experiments using real world trading data. The results show that our approach is practical and effective with good performance for various datasets.

  19. Effective Application of Improved Profit-Mining Algorithm for the Interday Trading Model

    Directory of Open Access Journals (Sweden)

    Yu-Lung Hsieh

    2014-01-01

    Full Text Available Many real world applications of association rule mining from large databases help users make better decisions. However, they do not work well in financial markets at this time. In addition to a high profit, an investor also looks for a low risk trading with a better rate of winning. The traditional approach of using minimum confidence and support thresholds needs to be changed. Based on an interday model of trading, we proposed effective profit-mining algorithms which provide investors with profit rules including information about profit, risk, and winning rate. Since profit-mining in the financial market is still in its infant stage, it is important to detail the inner working of mining algorithms and illustrate the best way to apply them. In this paper we go into details of our improved profit-mining algorithm and showcase effective applications with experiments using real world trading data. The results show that our approach is practical and effective with good performance for various datasets.

  20. Availability Allocation of Networked Systems Using Markov Model and Heuristics Algorithm

    Directory of Open Access Journals (Sweden)

    Ruiying Li

    2014-01-01

    Full Text Available It is common practice to allocate the system availability goal to reliability and maintainability goals of components in the early design phase. However, networked system availability is difficult to allocate due to complex topology and multiple down states. To solve these problems, a practical availability allocation method is proposed. Network reliability algebraic methods are used to derive the availability expression for the networked topology at the system level, and a Markov model is introduced to determine availability at the component level. A heuristic algorithm is proposed to obtain the reliability and maintainability allocation values of components. The principles of the AGREE reliability allocation method, proposed by the Advisory Group on Reliability of Electronic Equipment, and of the failure-rate-based maintainability allocation method persist in our allocation method. A series system is used to verify the new algorithm, and the result shows that the allocation based on the heuristic algorithm is quite accurate compared to the traditional one. Moreover, our case study of a Signaling System No. 7 network shows that the proposed allocation method is quite efficient for networked systems.
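As a baseline for the series-system verification case, an equal-apportionment allocation can be sketched as follows; the availability relation A = MTBF/(MTBF + MTTR) does the conversion between goals, and all numbers are illustrative:

```python
def allocate_series(A_goal, n, mttr_hours):
    """Equal-apportionment baseline for a series system: each of the n
    components gets A_i = A_goal**(1/n), and the required MTBF follows
    from A = MTBF / (MTBF + MTTR). The paper's heuristic refines such a
    starting point for general networked topologies.
    """
    A_i = A_goal ** (1.0 / n)
    mtbf = [A_i * r / (1.0 - A_i) for r in mttr_hours]
    return A_i, mtbf

A_i, mtbf = allocate_series(A_goal=0.999, n=3, mttr_hours=[4.0, 8.0, 2.0])
print(f"per-component availability goal: {A_i:.6f}")
print("required MTBF (h):", [round(m, 1) for m in mtbf])
```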

  1. Development of response models for the Earth Radiation Budget Experiment (ERBE) sensors. Part 4: Preliminary nonscanner models and count conversion algorithms

    Science.gov (United States)

    Halyo, Nesim; Choi, Sang H.

    1987-01-01

    Two count conversion algorithms and the associated dynamic sensor model for the M/WFOV nonscanner radiometers are defined. The sensor model provides and updates the constants necessary for the conversion algorithms, though the frequency with which these updates were needed was uncertain. This analysis therefore develops mathematical models for the conversion of irradiance at the sensor field of view (FOV) limiter into data counts, derives from this model two algorithms for the conversion of data counts to irradiance at the sensor FOV aperture and develops measurement models which account for a specific target source together with a sensor. The resulting algorithms are of the gain/offset and Kalman filter types. The gain/offset algorithm was chosen since it provided sufficient accuracy using simpler computations.
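The gain/offset algorithm amounts to inverting a linear count model; a minimal sketch with placeholder calibration constants (in ERBE the sensor model supplies and updates these constants over time):

```python
def counts_to_irradiance(counts, gain, offset):
    """Gain/offset count conversion: invert C = gain*E + offset to recover
    the irradiance E at the FOV aperture. Constants are placeholders.
    """
    return (counts - offset) / gain

# Placeholder calibration constants and a telemetry sample:
gain, offset = 2.35, 512.0          # counts per (W m^-2), dark counts
print(counts_to_irradiance(1680.0, gain, offset))  # ~497 W m^-2
```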

  2. The effect of different log P algorithms on the modeling of the soil sorption coefficient of nonionic pesticides.

    Science.gov (United States)

    dos Reis, Ralpho Rinaldo; Sampaio, Silvio César; de Melo, Eduardo Borges

    2013-10-01

    Collecting data on the effects of pesticides on the environment is a slow and costly process. Therefore, significant efforts have been focused on the development of models that predict physical, chemical or biological properties of environmental interest. The soil sorption coefficient normalized to the organic carbon content (Koc) is a key parameter that is used in environmental risk assessments. Thus, several log Koc prediction models that use the hydrophobic parameter log P as a descriptor have been reported in the literature. Often, algorithms are used to calculate the value of log P due to the lack of experimental values for this property. Despite the availability of various algorithms, previous studies fail to describe the procedure used to select the appropriate algorithm. In this study, models that correlate log Koc with log P were developed for a heterogeneous group of nonionic pesticides using different freeware algorithms. The statistical qualities and predictive power of all of the models were evaluated. Thus, this study was conducted to assess the effect of the log P algorithm choice on log Koc modeling. The results clearly demonstrate that the lack of a selection criterion may result in inappropriate prediction models. Seven algorithms were tested, of which only two (ALOGPS and KOWWIN) produced good results. A sensible choice may result in simple models with statistical qualities and predictive power values that are comparable to those of more complex models. Therefore, the selection of the appropriate log P algorithm for modeling log Koc cannot be arbitrary but must be based on the chemical structure of compounds and the characteristics of the available algorithms. Copyright © 2013 Elsevier Ltd. All rights reserved.
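The underlying model class is a simple linear regression of log Koc on log P; a sketch with hypothetical data points (a real study would take log P values from, e.g., ALOGPS or KOWWIN and measured log Koc):

```python
import numpy as np

# Hypothetical (log P, log Koc) pairs for nonionic pesticides:
logP = np.array([1.2, 2.0, 2.7, 3.1, 3.8, 4.5])
logKoc = np.array([1.5, 1.9, 2.4, 2.6, 3.1, 3.6])

slope, intercept = np.polyfit(logP, logKoc, 1)   # log Koc = a*log P + b
pred = slope * logP + intercept
r2 = 1 - np.sum((logKoc - pred) ** 2) / np.sum((logKoc - logKoc.mean()) ** 2)
print(f"log Koc = {slope:.2f} log P + {intercept:.2f}, r^2 = {r2:.3f}")
```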

  3. Solar Flare Prediction Model with Three Machine-learning Algorithms using Ultraviolet Brightening and Vector Magnetograms

    International Nuclear Information System (INIS)

    Nishizuka, N.; Kubo, Y.; Den, M.; Watari, S.; Ishii, M.; Sugiura, K.

    2017-01-01

    We developed a flare prediction model using machine learning, which is optimized to predict the maximum class of flares occurring in the following 24 hr. Machine learning is used to devise algorithms that can learn from and make decisions on a huge amount of data. We used solar observation data during the period 2010–2015, such as vector magnetograms, ultraviolet (UV) emission, and soft X-ray emission taken by the Solar Dynamics Observatory and the Geostationary Operational Environmental Satellite. We detected active regions (ARs) from the full-disk magnetogram, from which ∼60 features were extracted with their time differentials, including magnetic neutral lines, the current helicity, the UV brightening, and the flare history. After standardizing the feature database, we fully shuffled and randomly separated it into two for training and testing. To investigate which algorithm is best for flare prediction, we compared three machine-learning algorithms: the support vector machine, k-nearest neighbors (k-NN), and extremely randomized trees. The prediction score, the true skill statistic, was higher than 0.9 with a fully shuffled data set, which is higher than that for human forecasts. It was found that k-NN has the highest performance among the three algorithms. The ranking of the feature importance showed that previous flare activity is most effective, followed by the length of magnetic neutral lines, the unsigned magnetic flux, the area of UV brightening, and the time differentials of features over 24 hr, all of which are strongly correlated with the flux emergence dynamics in an AR.
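    As a sketch of the evaluation pipeline only (synthetic stand-in features; the real model uses ∼60 physical AR features), a k-NN classifier can be trained on a shuffled split and scored with the true skill statistic:

        import numpy as np
        from sklearn.model_selection import train_test_split
        from sklearn.neighbors import KNeighborsClassifier

        rng = np.random.default_rng(0)
        X = rng.normal(size=(1000, 60))                      # stand-in feature database
        y = (X[:, 0] + 0.5 * rng.normal(size=1000) > 1.0).astype(int)  # synthetic labels

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, shuffle=True, random_state=0)
        pred = KNeighborsClassifier(n_neighbors=5).fit(X_tr, y_tr).predict(X_te)

        # True skill statistic: TSS = TP/(TP+FN) - FP/(FP+TN).
        tp = np.sum((pred == 1) & (y_te == 1)); fn = np.sum((pred == 0) & (y_te == 1))
        fp = np.sum((pred == 1) & (y_te == 0)); tn = np.sum((pred == 0) & (y_te == 0))
        print("TSS =", tp / (tp + fn) - fp / (fp + tn))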

  4. Solar Flare Prediction Model with Three Machine-learning Algorithms using Ultraviolet Brightening and Vector Magnetograms

    Science.gov (United States)

    Nishizuka, N.; Sugiura, K.; Kubo, Y.; Den, M.; Watari, S.; Ishii, M.

    2017-02-01

    We developed a flare prediction model using machine learning, which is optimized to predict the maximum class of flares occurring in the following 24 hr. Machine learning is used to devise algorithms that can learn from and make decisions on a huge amount of data. We used solar observation data during the period 2010-2015, such as vector magnetograms, ultraviolet (UV) emission, and soft X-ray emission taken by the Solar Dynamics Observatory and the Geostationary Operational Environmental Satellite. We detected active regions (ARs) from the full-disk magnetogram, from which ∼60 features were extracted with their time differentials, including magnetic neutral lines, the current helicity, the UV brightening, and the flare history. After standardizing the feature database, we fully shuffled and randomly separated it into two for training and testing. To investigate which algorithm is best for flare prediction, we compared three machine-learning algorithms: the support vector machine, k-nearest neighbors (k-NN), and extremely randomized trees. The prediction score, the true skill statistic, was higher than 0.9 with a fully shuffled data set, which is higher than that for human forecasts. It was found that k-NN has the highest performance among the three algorithms. The ranking of the feature importance showed that previous flare activity is most effective, followed by the length of magnetic neutral lines, the unsigned magnetic flux, the area of UV brightening, and the time differentials of features over 24 hr, all of which are strongly correlated with the flux emergence dynamics in an AR.

  5. Solar Flare Prediction Model with Three Machine-learning Algorithms using Ultraviolet Brightening and Vector Magnetograms

    Energy Technology Data Exchange (ETDEWEB)

    Nishizuka, N.; Kubo, Y.; Den, M.; Watari, S.; Ishii, M. [Applied Electromagnetic Research Institute, National Institute of Information and Communications Technology, 4-2-1, Nukui-Kitamachi, Koganei, Tokyo 184-8795 (Japan); Sugiura, K., E-mail: nishizuka.naoto@nict.go.jp [Advanced Speech Translation Research and Development Promotion Center, National Institute of Information and Communications Technology (Japan)

    2017-02-01

    We developed a flare prediction model using machine learning, which is optimized to predict the maximum class of flares occurring in the following 24 hr. Machine learning is used to devise algorithms that can learn from and make decisions on a huge amount of data. We used solar observation data during the period 2010–2015, such as vector magnetograms, ultraviolet (UV) emission, and soft X-ray emission taken by the Solar Dynamics Observatory and the Geostationary Operational Environmental Satellite. We detected active regions (ARs) from the full-disk magnetogram, from which ∼60 features were extracted with their time differentials, including magnetic neutral lines, the current helicity, the UV brightening, and the flare history. After standardizing the feature database, we fully shuffled and randomly separated it into two for training and testing. To investigate which algorithm is best for flare prediction, we compared three machine-learning algorithms: the support vector machine, k-nearest neighbors (k-NN), and extremely randomized trees. The prediction score, the true skill statistic, was higher than 0.9 with a fully shuffled data set, which is higher than that for human forecasts. It was found that k-NN has the highest performance among the three algorithms. The ranking of the feature importance showed that previous flare activity is most effective, followed by the length of magnetic neutral lines, the unsigned magnetic flux, the area of UV brightening, and the time differentials of features over 24 hr, all of which are strongly correlated with the flux emergence dynamics in an AR.

  6. A new free-surface stabilization algorithm for geodynamical modelling: Theory and numerical tests

    Science.gov (United States)

    Andrés-Martínez, Miguel; Morgan, Jason P.; Pérez-Gussinyé, Marta; Rüpke, Lars

    2015-09-01

    The surface of the solid Earth is effectively stress free in its subaerial portions, and hydrostatic beneath the oceans. Unfortunately, this type of boundary condition is difficult to treat computationally, and for computational convenience, numerical models have often used simpler approximations that do not involve a normal stress-loaded, shear-stress free top surface that is free to move. Viscous flow models with a computational free surface typically confront stability problems when the time step is bigger than the viscous relaxation time. The small time step required for stability makes such calculations expensive, which has motivated strategies that mitigate the stability problem by making larger (at least ∼10 Kyr) time steps stable and accurate. Here we present a new free-surface stabilization algorithm for finite element codes which solves the stability problem by adding to the Stokes formulation an intrinsic penalization term equivalent to a portion of the future load at the surface nodes. Our algorithm is straightforward to implement and can be used with both Eulerian and Lagrangian grids. It includes α and β parameters to control the vertical and the horizontal slope-dependent penalization terms, respectively, and uses Uzawa-like iterations to solve the resulting system at a cost comparable to a formulation without a stress-free surface. Four tests were carried out in order to study the accuracy and the stability of the algorithm: (1) a decaying first-order sinusoidal topography test, (2) a decaying high-order sinusoidal topography test, (3) a Rayleigh-Taylor instability test, and (4) a steep-slope test. For these tests, we investigate which α and β parameters give the best results in terms of both accuracy and stability. We also compare the accuracy and the stability of our algorithm with a similar implicit approach recently developed by Kaus et al. (2010). We find that our algorithm is slightly more accurate and stable for steep slopes, and also conclude that, for longer time steps, the optimal
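    The core of such stabilization schemes can be stated compactly. As a sketch (our paraphrase of the FSSA-type boundary term of Kaus et al. (2010), not the exact α/β form of this paper), a fraction θ of the load the surface would feel after the time step is moved into the system matrix, penalizing fast vertical motion of the surface nodes:

        % Stabilization term added to the stiffness matrix for free-surface
        % velocity degrees of freedom (sketch; notation assumed): theta in [0,1],
        % Delta t the time step, Delta rho the density jump across the surface.
        K^{\mathrm{fs}}_{ij} = \theta\,\Delta t\,\Delta\rho\,g
            \int_{\Gamma_{\mathrm{fs}}} N_i\, n_z\, N_j\; \mathrm{d}\Gamma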

  7. Models and algorithm of optimization launch and deployment of virtual network functions in the virtual data center

    Science.gov (United States)

    Bolodurina, I. P.; Parfenov, D. I.

    2017-10-01

    The goal of our investigation is to optimize network operation in a virtual data center. The advantage of modern infrastructure virtualization lies in the possibility of using software-defined networks. However, existing algorithmic optimization solutions do not take into account the specific features of working with multiple classes of virtual network functions. The current paper describes models characterizing the basic structures of the objects of a virtual data center, including: a level distribution model of the software-defined infrastructure of the virtual data center, a generalized model of a virtual network function, and a neural network model for the identification of virtual network functions. We also developed an efficient algorithm for the containerization of virtual network functions in the virtual data center, and we propose an efficient algorithm for placing virtual network functions. In our investigation we also generalize the well-known heuristic and deterministic Karmarkar-Karp algorithms.
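    The Karmarkar-Karp differencing heuristic that the authors generalize can be sketched in a few lines as two-way balancing of abstract loads (the mapping to virtual network functions and hosts is the paper's contribution and is not shown here):

        import heapq

        def karmarkar_karp(loads):
            # Largest-differencing method: repeatedly commit the two largest
            # remaining loads to opposite sides, leaving only their difference.
            heap = [-x for x in loads]
            heapq.heapify(heap)
            while len(heap) > 1:
                a = -heapq.heappop(heap)
                b = -heapq.heappop(heap)
                heapq.heappush(heap, -(a - b))
            return -heap[0]  # final imbalance between the two sides

        print(karmarkar_karp([8, 7, 6, 5, 4]))
        # -> 2; the heuristic is not always optimal (the optimum here is 0: {8,7} vs {6,5,4})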

  8. Multi-sources model and control algorithm of an energy management system for light electric vehicles

    International Nuclear Information System (INIS)

    Hannan, M.A.; Azidin, F.A.; Mohamed, A.

    2012-01-01

    Highlights: ► An energy management system (EMS) is developed for a scooter under normal and heavy power load conditions. ► The battery, FC, SC, EMS, DC machine and vehicle dynamics are modeled and designed for the system. ► State-based logic control algorithms provide an efficient and feasible multi-source EMS for light electric vehicles. ► Vehicle’s speed and power are closely matched with the ECE-47 driving cycle under normal and heavy load conditions. ► Energy source changeover occurred at 50% of the battery state-of-charge level under heavy load conditions. - Abstract: This paper presents the multi-source energy models and rule-based feedback control algorithm of an energy management system (EMS) for light electric vehicles (LEVs), i.e., scooters. The multiple energy sources, namely a battery, a fuel cell (FC) and a super-capacitor (SC), together with the EMS and power controller, the DC machine and the vehicle dynamics, are designed and modeled using MATLAB/SIMULINK. The developed control strategies continuously support the EMS of the multiple sources of energy for a scooter under normal and heavy power load conditions. The performance of the proposed system is analyzed and compared with that of the ECE-47 test drive cycle in terms of vehicle speed and load power. The results show that the designed vehicle’s speed and load power closely match those of the ECE-47 test driving cycle under normal and heavy load conditions. This study’s results suggest that the proposed control algorithm provides an efficient and feasible EMS for LEVs.
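    A state-based changeover rule of the kind described can be sketched as follows; the thresholds, load levels and source names are illustrative assumptions, and only the 50% state-of-charge changeover is taken from the reported results:

        def select_sources(load_w, soc_batt, fc_available):
            # Hypothetical rule-based source selection for a multi-source EMS.
            if load_w > 1500:                  # heavy load: supercapacitor assists peaks
                if soc_batt > 0.5:
                    return ("battery", "supercapacitor")
                return ("fuel_cell", "supercapacitor") if fc_available else ("battery",)
            if soc_batt > 0.5:                 # normal load
                return ("battery",)
            return ("fuel_cell",) if fc_available else ("battery",)

        print(select_sources(load_w=2000, soc_batt=0.4, fc_available=True))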

  9. On models of the genetic code generated by binary dichotomic algorithms.

    Science.gov (United States)

    Gumbel, Markus; Fimmel, Elena; Danielli, Alberto; Strüngmann, Lutz

    2015-02-01

    In this paper we introduce the concept of a BDA-generated model of the genetic code, which is based on binary dichotomic algorithms (BDAs). A BDA partitions the set of 64 codons into two disjoint classes of size 32 each and provides a generalization of known partitions like the Rumer dichotomy. We investigate what partitions can be generated when a set of different BDAs is applied sequentially to the set of codons. The search revealed that these models are able to generate code tables with very different numbers of classes, ranging from 2 to 64. We have analyzed whether there are models that map the codons to their amino acids. A perfect matching is not possible. However, we present models that describe the standard genetic code with only few errors. There are also models that map all 64 codons uniquely to 64 classes, showing that BDAs can be used to identify codons precisely. This could serve as a basis for further mathematical analysis using coding theory, for example. The hypothesis that BDAs might reflect a molecular mechanism taking place in the decoding center of the ribosome is discussed. The scan demonstrated that binary dichotomic partitions are able to model different aspects of the genetic code very well. The search was performed with our tool Beady-A. This software is freely available at http://mi.informatik.hs-mannheim.de/beady-a. It requires a JVM version 6 or higher.
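    A generic BDA is easy to state in code. In the sketch below (question sets chosen arbitrarily for illustration, not taken from the paper), one question about the base at a first position decides which second question is asked, and the answer assigns each of the 64 codons to one of two classes of size 32:

        from itertools import product

        CODONS = ["".join(c) for c in product("UCAG", repeat=3)]

        def bda(codon, i1, q1, i2, q2a, q2b):
            # Binary dichotomic algorithm: the answer at position i1 selects
            # which question (q2a or q2b) is asked at position i2.
            if codon[i1] in q1:
                return int(codon[i2] in q2a)
            return int(codon[i2] in q2b)

        # One illustrative BDA; two-base question sets always yield a 32/32 split.
        classes = {c: bda(c, 0, "AG", 1, "UC", "AG") for c in CODONS}
        print([sum(1 for v in classes.values() if v == k) for k in (0, 1)])  # [32, 32]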

  10. Thermodynamically Consistent Algorithms for the Solution of Phase-Field Models

    KAUST Repository

    Vignal, Philippe

    2016-02-11

    Phase-field models are emerging as a promising strategy to simulate interfacial phenomena. Rather than tracking interfaces explicitly as done in sharp interface descriptions, these models use a diffuse order parameter to monitor interfaces implicitly. This implicit description, as well as solid physical and mathematical footings, allows phase-field models to overcome problems found by predecessors. Nonetheless, the method has significant drawbacks. The phase-field framework relies on the solution of high-order, nonlinear partial differential equations. Solving these equations entails a considerable computational cost, so finding efficient strategies to handle them is important. Also, standard discretization strategies can often lead to incorrect solutions. This happens because, for numerical solutions to phase-field equations to be valid, physical conditions such as mass conservation and free energy monotonicity need to be guaranteed. In this work, we focus on the development of thermodynamically consistent algorithms for the time integration of phase-field models. The first part of this thesis focuses on an energy-stable numerical strategy developed for the phase-field crystal equation, a model put forward to study microstructure evolution. The algorithm developed conserves mass, guarantees energy stability, and is second-order accurate in time. The second part of the thesis presents two numerical schemes that generalize the literature regarding energy-stable methods for conserved and non-conserved phase-field models. The time discretization strategies can conserve mass if needed, are energy stable, and are second-order accurate in time. We also develop an adaptive time-stepping strategy, which can be applied to any second-order accurate scheme. This time-adaptive strategy relies on a backward approximation to give an accurate error estimator. The spatial discretization, in both parts, relies on a mixed finite element formulation and isogeometric analysis. The codes are
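    The adaptive time-stepping idea can be sketched independently of the phase-field details: a lower-order (backward) approximation of the update provides the error estimate that accepts or rejects a second-order step. Here step2 and step1 are user-supplied integrators and the API is hypothetical:

        def adaptive_step(u, t, dt, step2, step1, tol=1e-4, safety=0.9):
            # Accept the second-order update u2 when it agrees with the
            # first-order (backward) estimate u1 to within tol; otherwise
            # shrink dt and retry.  The exponent 1/2 matches a first-order
            # error estimator.
            while True:
                u2 = step2(u, t, dt)
                u1 = step1(u, t, dt)
                err = max(abs(a - b) for a, b in zip(u2, u1))
                if err <= tol:
                    dt_next = dt * min(2.0, safety * (tol / max(err, 1e-30)) ** 0.5)
                    return u2, t + dt, dt_next
                dt *= safety * (tol / err) ** 0.5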

  11. SU-F-BRD-09: A Random Walk Model Algorithm for Proton Dose Calculation

    International Nuclear Information System (INIS)

    Yao, W; Farr, J

    2015-01-01

    Purpose: To develop a random walk model algorithm for calculating proton dose with a balanced computation burden and accuracy. Methods: The random walk (RW) model is sometimes referred to as a density Monte Carlo (MC) simulation. In MC proton dose calculation, the use of a Gaussian angular distribution of protons due to multiple Coulomb scatter (MCS) is convenient, but in RW the use of a Gaussian angular distribution requires extremely large computation and memory. Thus, our RW model adopts a spatial distribution derived from the angular one to accelerate the computation and to decrease the memory usage. From the physics and from comparison with MC simulations, we have determined and analytically expressed the critical variables affecting the dose accuracy in our RW model. Results: Besides variables such as MCS, stopping power, and the energy spectrum after energy absorption, which have been extensively discussed in the literature, the following variables were found to be critical in our RW model: (1) the inverse square law, which can significantly reduce the computation burden and memory; (2) the non-Gaussian spatial distribution after MCS; and (3) the mean direction of scatters at each voxel. In comparison to MC results, taken as reference, for a water phantom irradiated by mono-energetic proton beams from 75 MeV to 221.28 MeV, the gamma test pass rate was 100% for the 2%/2mm/10% criterion. For a highly heterogeneous phantom consisting of water embedded with a 10 cm cortical bone and a 10 cm lung in the Bragg peak region of the proton beam, the gamma test pass rate was greater than 98% for the 3%/3mm/10% criterion. Conclusion: We have determined the key variables in our RW model for proton dose calculation. Compared with commercial pencil beam algorithms, our RW model much improves the dose accuracy in heterogeneous regions, and it is about 10 times faster than MC simulations.

  12. Nonlinear inversion of resistivity sounding data for 1-D earth models using the Neighbourhood Algorithm

    Science.gov (United States)

    Ojo, A. O.; Xie, Jun; Olorunfemi, M. O.

    2018-01-01

    To reduce ambiguity related to nonlinearities in the resistivity model-data relationships, an efficient direct-search scheme employing the Neighbourhood Algorithm (NA) was implemented to solve the 1-D resistivity problem. In addition to finding a range of best-fit models which are more likely to be global minimums, this method investigates the entire multi-dimensional model space and provides additional information about the posterior model covariance matrix, marginal probability density function and an ensemble of acceptable models. This provides new insights into how well the model parameters are constrained and makes assessing trade-offs between them possible, thus avoiding some common interpretation pitfalls. The efficacy of the newly developed program is tested by inverting both synthetic (noisy and noise-free) data and field data from other authors employing different inversion methods so as to provide a good base for comparative performance. In all cases, the inverted model parameters were in good agreement with the true and recovered model parameters from other methods and remarkably correlate with the available borehole litho-log and known geology for the field dataset. The NA method has proven to be useful when a good starting model is not available, and the reduced number of unknowns in the 1-D resistivity inverse problem makes it an attractive alternative to linearized methods. Hence, it is concluded that the newly developed program offers an excellent complementary tool for the global inversion of the layered resistivity structure.
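    A much-simplified direct-search loop in the spirit of the NA is sketched below. The real algorithm resamples uniformly inside the Voronoi cells of the current best models; a Gaussian neighbourhood is substituted here, so this is only an illustration of the ensemble idea, not the published method:

        import numpy as np

        def na_like_search(misfit, bounds, n_best=5, n_resample=20, n_iter=30):
            # Keep an ensemble, resample around the current best models, repeat.
            rng = np.random.default_rng(1)
            lo, hi = np.array(bounds, dtype=float).T
            models = rng.uniform(lo, hi, size=(50, len(bounds)))
            for _ in range(n_iter):
                order = np.argsort([misfit(m) for m in models])
                best = models[order[:n_best]]
                new = np.concatenate([np.clip(
                    b + rng.normal(scale=0.05 * (hi - lo), size=(n_resample, len(b))),
                    lo, hi) for b in best])
                models = np.concatenate([best, new])
            return models[np.argmin([misfit(m) for m in models])]

        # Toy three-parameter "layer" vector recovered from a quadratic misfit.
        target = np.array([100.0, 10.0, 500.0])
        print(na_like_search(lambda m: float(np.sum((m - target) ** 2)),
                             bounds=[(1, 1000)] * 3))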

  13. Bilevel Traffic Evacuation Model and Algorithm Design for Large-Scale Activities

    Directory of Open Access Journals (Sweden)

    Danwen Bao

    2017-01-01

    Full Text Available This paper establishes a bilevel planning model with one master and multiple slaves to solve traffic evacuation problems. The minimum evacuation network saturation and shortest evacuation time are used as the objective functions for the upper- and lower-level models, respectively. The optimizing conditions of this model are also analyzed. An improved particle swarm optimization (PSO method is proposed by introducing an electromagnetism-like mechanism to solve the bilevel model and enhance its convergence efficiency. A case study is carried out using the Nanjing Olympic Sports Center. The results indicate that, for large-scale activities, the average evacuation time of the classic model is shorter but the road saturation distribution is more uneven. Thus, the overall evacuation efficiency of the network is not high. For induced emergencies, the evacuation time of the bilevel planning model is shortened. When the audience arrival rate is increased from 50% to 100%, the evacuation time is shortened by 22% to 35%, indicating that the optimization effect of the bilevel planning model is more effective compared to the classic model. Therefore, the model and algorithm presented in this paper can provide a theoretical basis for the traffic-induced evacuation decision making of large-scale activities.
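    A plain particle swarm loop, on top of which the paper's electromagnetism-like mechanism would act, looks as follows (a generic sketch, not the authors' implementation):

        import numpy as np

        def pso(f, dim, n=30, iters=200, w=0.7, c1=1.5, c2=1.5, lo=-5.0, hi=5.0):
            # Velocities are pulled toward each particle's best position and
            # the global best; inertia w balances exploration and exploitation.
            rng = np.random.default_rng(0)
            x = rng.uniform(lo, hi, (n, dim)); v = np.zeros((n, dim))
            pbest, pval = x.copy(), np.array([f(p) for p in x])
            g = pbest[np.argmin(pval)]
            for _ in range(iters):
                r1, r2 = rng.random((n, dim)), rng.random((n, dim))
                v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
                x = np.clip(x + v, lo, hi)
                fx = np.array([f(p) for p in x])
                better = fx < pval
                pbest[better], pval[better] = x[better], fx[better]
                g = pbest[np.argmin(pval)]
            return g, pval.min()

        print(pso(lambda p: np.sum(p ** 2), dim=4))  # converges toward the origin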

  14. Application of random number generators in genetic algorithms to improve rainfall-runoff modelling

    Science.gov (United States)

    Chlumecký, Martin; Buchtele, Josef; Richta, Karel

    2017-10-01

    The efficient calibration of rainfall-runoff models is a difficult issue, even for experienced hydrologists. Therefore, fast and high-quality model calibration is a valuable improvement. This paper describes a novel methodology and software for the optimisation of rainfall-runoff modelling using a genetic algorithm (GA) with a newly prepared concept of a hydrological random number generator (HRNG), which is the core of the optimisation. The GA estimates model parameters using evolutionary principles, which requires a quality random number generator. The new HRNG generates random numbers based on hydrological information, and it provides better numbers than pure software generators. The GA enhances the model calibration very well, and the goal is to optimise the calibration of the model with a minimum of user interaction. This article focuses on improving the internal structure of the GA, which is shielded from the user. The results that we obtained indicate that the HRNG provides a stable trend in the output quality of the model, despite various configurations of the GA. In contrast to previous research, the HRNG speeds up the calibration of the model and offers an improvement of rainfall-runoff modelling.
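    The paper's key design point, keeping the random source pluggable so that an HRNG can replace a pure software generator, can be sketched like this (the GA operators and the toy fitness are illustrative, and the HRNG itself is not reproduced):

        import random

        def calibrate(fitness, n_params, rng, pop=40, gens=100, mut=0.1):
            # Minimise `fitness`; `rng` is any object with the random.Random
            # interface, so a hydrology-informed generator can be swapped in.
            population = [[rng.random() for _ in range(n_params)] for _ in range(pop)]
            for _ in range(gens):
                parents = sorted(population, key=fitness)[:pop // 2]  # truncation selection
                children = []
                while len(children) < pop - len(parents):
                    a, b = rng.sample(parents, 2)
                    cut = rng.randrange(1, n_params)
                    child = a[:cut] + b[cut:]                 # one-point crossover
                    if rng.random() < mut:                    # per-child mutation
                        child[rng.randrange(n_params)] = rng.random()
                    children.append(child)
                population = parents + children
            return min(population, key=fitness)

        # Toy calibration target: all parameters equal to 0.5.
        print(calibrate(lambda p: sum((x - 0.5) ** 2 for x in p), 4, random.Random(42)))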

  15. Application of random number generators in genetic algorithms to improve rainfall-runoff modelling

    Czech Academy of Sciences Publication Activity Database

    Chlumecký, M.; Buchtele, Josef; Richta, K.

    2017-01-01

    Vol. 553, October (2017), pp. 350-355. ISSN 0022-1694. Institutional support: RVO:67985874. Keywords: genetic algorithm * optimisation * rainfall-runoff modeling * random generator. Subject RIV: DA - Hydrology; Limnology. OECD field: Hydrology. Impact factor: 3.483, year: 2016. https://ac.els-cdn.com/S0022169417305516/1-s2.0-S0022169417305516-main.pdf?_tid=fa1bad8a-bd6a-11e7-8567-00000aab0f27&acdnat=1509365462_a1335d3d997e9eab19e23b1eee977705

  16. Generalized random walk algorithm for the numerical modeling of complex diffusion processes

    CERN Document Server

    Vamos, C; Vereecken, H

    2003-01-01

    A generalized form of the random walk algorithm to simulate diffusion processes is introduced. Unlike the usual approach, at a given time all the particles at a grid node are simultaneously scattered using the Bernoulli repartition. This procedure saves memory and computing time, and no restrictions are imposed on the maximum number of particles to be used in simulations. We prove that for simple diffusion the method generalizes the finite difference scheme and gives the same precision for a large enough number of particles. As an example, simulations of diffusion in a random velocity field are performed and the main features of the stochastic mathematical model are numerically tested.
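    The essential trick, scattering all the particles at a grid node with a single set of binomial draws instead of walking them one by one, can be sketched for 1-D diffusion (parameters are illustrative and boundaries are simply clamped):

        import numpy as np

        _rng = np.random.default_rng(0)

        def grw_step(n, p_stay=0.5):
            # Global random walk: the n[i] particles at node i are scattered at
            # once via the Bernoulli repartition, so the cost scales with the
            # number of nodes, not the number of particles.
            out = np.zeros_like(n)
            for i, count in enumerate(n):
                stay = _rng.binomial(count, p_stay)
                left = _rng.binomial(count - stay, 0.5)   # split the movers 50/50
                out[i] += stay
                out[max(i - 1, 0)] += left
                out[min(i + 1, len(n) - 1)] += count - stay - left
            return out

        n = np.zeros(101, dtype=np.int64); n[50] = 10 ** 6  # a million particles
        for _ in range(100):
            n = grw_step(n)
        print(n[45:56])  # the histogram spreads like the expected Gaussian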

  17. Optimizing bi-objective, multi-echelon supply chain model using particle swarm intelligence algorithm

    Science.gov (United States)

    Sathish Kumar, V. R.; Anbuudayasankar, S. P.; Rameshkumar, K.

    2018-02-01

    In the current globalized scenario, business organizations are increasingly dependent on cost-effective supply chains to enhance profitability and better handle competition. Demand uncertainty is an important factor in the success or failure of a supply chain. An efficient supply chain limits the stock held at all echelons to the extent of avoiding a stock-out situation. In this paper, a three-echelon supply chain model consisting of a supplier, a manufacturing plant and a market is developed, and it is optimized using a particle swarm intelligence algorithm.

  18. 11th Biennial Conference on Emerging Mathematical Methods, Models and Algorithms for Science and Technology

    CERN Document Server

    Manchanda, Pammy; Bhardwaj, Rashmi

    2015-01-01

    The present volume contains the invited talks of the 11th biennial conference on “Emerging Mathematical Methods, Models and Algorithms for Science and Technology”. The main message of the book is that mathematics has great potential to analyse and understand the challenging problems of nanotechnology, biotechnology, medical science, the oil industry and financial technology. The book highlights all the features and main themes discussed in the conference. All contributing authors are eminent academicians, scientists, researchers and scholars in their respective fields, hailing from around the world.

  19. A Meshless Algorithm to Model Field Evaporation in Atom Probe Tomography.

    Science.gov (United States)

    Rolland, Nicolas; Vurpillot, François; Duguay, Sébastien; Blavette, Didier

    2015-12-01

    An alternative approach for simulating the field evaporation process in atom probe tomography is presented. The model uses the electrostatic Robin's equation to directly calculate charge distribution over the tip apex conducting surface, without the need for a supporting mesh. The partial ionization state of the surface atoms is at the core of the method. Indeed, each surface atom is considered as a point charge, which is representative of its evaporation probability. The computational efficiency is ensured by an adapted version of the Barnes-Hut N-body problem algorithm. Standard desorption maps for cubic structures are presented in order to demonstrate the effectiveness of the method.

  20. A Taxonomy for Modeling Flexibility and a Computationally Efficient Algorithm for Dispatch in Smart Grids

    DEFF Research Database (Denmark)

    Petersen, Mette Højgaard; Edlund, Kristian; Hansen, Lars Henrik

    2013-01-01

    The word flexibility is central to the Smart Grid literature, but a formal definition of flexibility is still pending. This paper presents a taxonomy for flexibility modeling, denoted Buckets, Batteries and Bakeries. We consider a direct control Virtual Power Plant (VPP), which is given the task of servicing a portfolio of flexible consumers by use of a fluctuating power supply. Based on the developed taxonomy we first prove that no causal optimal dispatch strategies exist for the considered problem. We then present two heuristic algorithms for solving the balancing task: Predictive Balancing