WorldWideScience

Sample records for model iu algorithms

  1. Multiagent scheduling models and algorithms

    CERN Document Server

    Agnetis, Alessandro; Gawiejnowicz, Stanisław; Pacciarelli, Dario; Soukhal, Ameur

    2014-01-01

    This book presents multi-agent scheduling models in which subsets of jobs sharing the same resources are evaluated by different criteria. It discusses complexity results, approximation schemes, heuristics and exact algorithms.

  2. Algorithmic Issues in Modeling Motion

    DEFF Research Database (Denmark)

    Agarwal, P. K; Guibas, L. J; Edelsbrunner, H.

    2003-01-01

    This article is a survey of research areas in which motion plays a pivotal role. The aim of the article is to review current approaches to modeling motion together with related data structures and algorithms, and to summarize the challenges that lie ahead in producing a more unified theory...

  3. Direct Model Checking Matrix Algorithm

    Institute of Scientific and Technical Information of China (English)

    Zhi-Hong Tao; Hans Kleine Büning; Li-Fu Wang

    2006-01-01

During the last decade, Model Checking has proven its efficacy and power in circuit design, network protocol analysis and bug hunting. Recent research on automatic verification has shown that no single model-checking technique has the edge over all others in all application areas, so it is very difficult to determine which technique is the most suitable for a given model. It is thus sensible to apply different techniques to the same model. However, this is a very tedious and time-consuming task, because each algorithm uses its own description language. Applying Model Checking to software design and verification has also proved very difficult. Software architectures (SA) are engineering artifacts that provide high-level and abstract descriptions of complex software systems. In this paper a Direct Model Checking (DMC) method based on Kripke structures and a matrix algorithm is provided. Combined and integrated with domain-specific software architecture description languages (ADLs), DMC can be used for computing consistency and other critical properties.

  4. Complex fluids modeling and algorithms

    CERN Document Server

    Saramito, Pierre

    2016-01-01

    This book presents a comprehensive overview of the modeling of complex fluids, including many common substances, such as toothpaste, hair gel, mayonnaise, liquid foam, cement and blood, which cannot be described by Navier-Stokes equations. It also offers an up-to-date mathematical and numerical analysis of the corresponding equations, as well as several practical numerical algorithms and software solutions for the approximation of the solutions. It discusses industrial (molten plastics, forming process), geophysical (mud flows, volcanic lava, glaciers and snow avalanches), and biological (blood flows, tissues) modeling applications. This book is a valuable resource for undergraduate students and researchers in applied mathematics, mechanical engineering and physics.

  5. Comparison of 5 IU and 10 IU tuberculin test results in patients on chronic dialysis

    Directory of Open Access Journals (Sweden)

    H Tayebi Khosroshahi

    2012-01-01

Immunocompromised patients, such as those with end-stage kidney failure undergoing hemodialysis (HD), are at increased risk of developing tuberculosis (TB). For this reason, routine TB screening of HD patients with the tuberculin test has been recommended. The Centers for Disease Control and Prevention (CDC) has recommended that patients with chronic renal failure undergo annual skin testing for TB with tuberculin [purified protein derivative (PPD)], with an induration of ≥10 mm at 48 h depicting a positive reaction. The aim of this study was to compare the results of two different doses of PPD in dialysis patients. This descriptive and comparative multicenter study was performed on 255 patients on chronic dialysis in Tabriz, Iran. These patients had not had a PPD test within the preceding year. Patients were divided into two groups randomly, and the conventional or double-dose tuberculin test was performed using the Mantoux technique with 5 IU (group 1) or 10 IU (group 2) of PPD. Results were interpreted 48-72 h after injection. Patients were divided into those with induration of less than 10 mm and those with ≥10 mm. Mean age was 44.6 ± 15 years (M/F = 1.5/1). The mean duration on dialysis was 39 ± 7 months. There was no significant difference in age, gender, duration on dialysis, causes of chronic kidney disease, erythrocyte sedimentation rate, C-reactive protein or serum albumin between the two groups. The mean induration was 4.6 mm and 7.7 mm in groups 1 and 2, respectively. Induration ≥10 mm was found in 19.6% and 25.5% of groups 1 and 2, respectively, a significant difference (P <0.05). In conclusion, because of the high frequency of TB in dialysis patients, an annual tuberculin test may be recommended. Our study showed that the double-dose tuberculin test may be a better substitute for the conventional test in dialysis patients.

  6. Modeling and Engineering Algorithms for Mobile Data

    DEFF Research Database (Denmark)

    Blunck, Henrik; Hinrichs, Klaus; Sondern, Joëlle

    2006-01-01

    In this paper, we present an object-oriented approach to modeling mobile data and algorithms operating on such data. Our model is general enough to capture any kind of continuous motion while at the same time allowing for encompassing algorithms optimized for specific types of motion. Such motion...

  7. LCD motion blur: modeling, analysis, and algorithm.

    Science.gov (United States)

    Chan, Stanley H; Nguyen, Truong Q

    2011-08-01

Liquid crystal display (LCD) devices are well known for their slow responses due to the physical limitations of liquid crystals. Therefore, fast moving objects in a scene are often perceived as blurred. This effect is known as LCD motion blur. In order to reduce LCD motion blur, an accurate LCD model and an efficient deblurring algorithm are needed. However, existing LCD motion blur models are insufficient to reflect the limitations of the human eye-tracking system. Also, the spatiotemporal equivalence in LCD motion blur models has not been proven directly in the discrete 2-D spatial domain, although it is widely used. There are three main contributions of this paper: modeling, analysis, and algorithm. First, a comprehensive LCD motion blur model is presented, in which human eye-tracking limits are taken into consideration. Second, a complete analysis of spatiotemporal equivalence is provided and verified using real video sequences. Third, an LCD motion blur reduction algorithm is proposed. The proposed algorithm solves an ℓ1-norm regularized least-squares minimization problem using a subgradient projection method. Numerical results show that the proposed algorithm gives higher peak SNR, lower temporal error, and lower spatial error than motion-compensated inverse filtering and the Lucy-Richardson deconvolution algorithm, which are two state-of-the-art LCD deblurring algorithms.
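The deblurring step described in the abstract reduces to an ℓ1-regularized least-squares problem solved by subgradient iteration. Below is a minimal sketch of such a solver in NumPy; the operator `A`, the step size, and all parameter values are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def obj(A, b, x, lam):
    """Objective: 0.5*||A x - b||^2 + lam*||x||_1."""
    return 0.5 * np.sum((A @ x - b) ** 2) + lam * np.abs(x).sum()

def l1_least_squares(A, b, lam=0.1, step=1e-3, iters=2000):
    """Subgradient descent on the l1-regularized least-squares objective,
    keeping the best iterate seen (subgradient methods are not monotone)."""
    x = np.zeros(A.shape[1])
    best, best_f = x.copy(), obj(A, b, x, lam)
    for _ in range(iters):
        g = A.T @ (A @ x - b) + lam * np.sign(x)  # a subgradient at x
        x = x - step * g
        f = obj(A, b, x, lam)
        if f < best_f:
            best, best_f = x.copy(), f
    return best
```

In practice one would use a decaying step size or a proximal method for faster convergence; the fixed step here keeps the sketch short.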

  8. A Topological Model for Parallel Algorithm Design

    Science.gov (United States)

    1991-09-01

    AFIT dissertation (AFIT/DS/ENG/91-02), "A Topological Model for Parallel Algorithm Design," by Jeffrey A. Simmers, Captain, USAF. Approved for public release; distribution unlimited. [Abstract unrecoverable from the scanned report front matter.]

  9. Carbon export algorithm advancements in models

    Science.gov (United States)

    Çağlar Yumruktepe, Veli; Salihoğlu, Barış

    2015-04-01

The rate at which anthropogenic CO2 is absorbed by the oceans remains a critical question under investigation by climate researchers. Construction of a complete carbon budget requires better understanding of air-sea exchanges and of the processes controlling the vertical and horizontal transport of carbon in the ocean, particularly the biological carbon pump. Improved parameterization of carbon sequestration within ecosystem models is vital to better understand and predict changes in the global carbon cycle. Due to the complexity of the processes controlling particle aggregation, sinking and decomposition, existing ecosystem models necessarily parameterize carbon sequestration using simple algorithms. Development of improved algorithms describing carbon export and sequestration, suitable for inclusion in numerical models, is ongoing work. Unique algorithms used in state-of-the-art ecosystem models, together with new experimental results obtained from mesocosm experiments and open-ocean observations, have been inserted into a common 1D pelagic ecosystem model for testing purposes. The model was implemented at the time-series stations in the North Atlantic (BATS, PAP and ESTOC) and evaluated against carbon export datasets. The targeted topics were plankton functional types, grazing and vertical movement of zooplankton, and the remineralization, aggregation and ballasting dynamics of organic matter. Ultimately it is intended to feed the improved algorithms to the 3D modelling community for inclusion in coupled numerical models.

  10. Adaptive Genetic Algorithm Model for Intrusion Detection

    Directory of Open Access Journals (Sweden)

    K. S. Anil Kumar

    2012-09-01

Intrusion detection systems are intelligent systems designed to identify and prevent the misuse of computer networks and systems. Various approaches to intrusion detection are currently in use, but they are relatively ineffective. Thus, emerging network security systems need to be part of the live system, which is possible only by embedding knowledge into the network. The proposed Adaptive Genetic Algorithm Model IDS comprises K-means clustering, genetic algorithm and neural network techniques. The technique is tested using a multitude of background knowledge sets from the DARPA network traffic datasets.
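The clustering stage of such a hybrid IDS can be illustrated with plain k-means over feature vectors of traffic records. The sketch below is a generic NumPy k-means with a deterministic initialization chosen for the example; it is not the paper's implementation, and the data in the usage are synthetic.

```python
import numpy as np

def kmeans(X, k, iters=50):
    """Plain k-means: alternate nearest-center assignment and center update."""
    # deterministic init: pick k points spread evenly across the dataset
    centers = X[np.linspace(0, len(X) - 1, k).astype(int)].astype(float)
    for _ in range(iters):
        # assign each point to its nearest center (squared Euclidean)
        labels = np.argmin(((X[:, None, :] - centers) ** 2).sum(axis=2), axis=1)
        # move each center to the mean of its assigned points
        centers = np.array([X[labels == j].mean(axis=0) for j in range(k)])
    return labels, centers
```

In an IDS setting the resulting clusters would then be labeled (e.g. normal vs. attack-like) and refined by the GA and neural-network stages.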

  11. Graphical model construction based on evolutionary algorithms

    Institute of Scientific and Technical Information of China (English)

    Youlong YANG; Yan WU; Sanyang LIU

    2006-01-01

Using Bayesian networks to model promising solutions from the current population of an evolutionary algorithm can make the search for the optimum efficient and intelligent. However, constructing a Bayesian network that fits a given dataset is an NP-hard problem that consumes massive computational resources. This paper develops a methodology for constructing a graphical model based on the Bayesian Dirichlet metric. Our approach is derived from a set of propositions and theorems obtained by studying the local metric relationships of networks matching the dataset. This paper presents an algorithm to construct a tree model from a set of potential solutions using the above approach. This method is important not only for evolutionary algorithms based on graphical models, but also for machine learning and data mining. The experimental results show that the exact theoretical results and the approximations match very well.
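The paper's Bayesian Dirichlet metric is not reproduced here, but the general idea of building a tree-structured graphical model from data can be illustrated with the classic Chow-Liu construction: score variable pairs by empirical mutual information and keep a maximum-weight spanning tree. A sketch, assuming binary variables:

```python
import numpy as np
from itertools import combinations

def mutual_information(x, y):
    """Empirical mutual information (nats) between two binary columns."""
    mi = 0.0
    for a in (0, 1):
        for b in (0, 1):
            pxy = np.mean((x == a) & (y == b))
            px, py = np.mean(x == a), np.mean(y == b)
            if pxy > 0:
                mi += pxy * np.log(pxy / (px * py))
    return mi

def chow_liu_tree(data):
    """Edges of a maximum-weight spanning tree over MI scores (Kruskal)."""
    n_vars = data.shape[1]
    edges = sorted(((mutual_information(data[:, i], data[:, j]), i, j)
                    for i, j in combinations(range(n_vars), 2)), reverse=True)
    parent = list(range(n_vars))          # union-find forest
    def find(u):
        while parent[u] != u:
            parent[u] = parent[parent[u]]
            u = parent[u]
        return u
    tree = []
    for _, i, j in edges:
        ri, rj = find(i), find(j)
        if ri != rj:                      # keep edge if it joins two components
            parent[ri] = rj
            tree.append((i, j))
    return tree
```

The Chow-Liu tree maximizes the data likelihood among tree-structured models, which is the same flavor of score-and-search the abstract describes for the BD metric.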

  12. Model Checking Algorithms for CTMDPs

    DEFF Research Database (Denmark)

    Buchholz, Peter; Hahn, Ernst Moritz; Hermanns, Holger

    2011-01-01

Continuous Stochastic Logic (CSL) can be interpreted over continuous-time Markov decision processes (CTMDPs) to specify quantitative properties of stochastic systems that allow some external control. Model checking CSL formulae over CTMDPs then requires the computation of optimal control strategies...

  14. Models and Algorithm for Stochastic Network Designs

    Institute of Scientific and Technical Information of China (English)

    Anthony Chen; Juyoung Kim; Seungjae Lee; Jaisung Choi

    2009-01-01

The network design problem (NDP) is one of the most difficult and challenging problems in transportation. Traditional NDP models are often posed as a deterministic bilevel program assuming that all relevant inputs are known with certainty. This paper presents three stochastic models for designing transportation networks with demand uncertainty. These three stochastic NDP models were formulated as the expected value model, chance-constrained model, and dependent-chance model in a bilevel programming framework using different criteria to hedge against demand uncertainty. Solution procedures based on the traffic assignment algorithm, genetic algorithm, and Monte Carlo simulations were developed to solve these stochastic NDP models. The nonlinear and nonconvex nature of the bilevel program was handled by the genetic algorithm and traffic assignment algorithm, whereas the stochastic nature was addressed through simulations. Numerical experiments were conducted to evaluate the applicability of the stochastic NDP models and the solution procedure. Results from the three experiments show that the solution procedures are quite robust to different parameter settings.

  15. Fuzzy audit risk modeling algorithm

    Directory of Open Access Journals (Sweden)

    Zohreh Hajihaa

    2011-07-01

Fuzzy logic has created suitable mathematics for making decisions in uncertain environments, including professional judgments. One such situation is the assessment of auditee risks. During recent years, risk-based audit (RBA) has been regarded as one of the main tools to fight fraud. The main issue in RBA is to determine the overall audit risk an auditor accepts, which impacts the efficiency of the audit. The primary objective of this research is to redesign the audit risk model (ARM) proposed by auditing standards. The proposed model uses fuzzy inference systems (FIS) based on the judgments of audit experts. The implementation of the proposed fuzzy technique uses triangular fuzzy numbers to express the inputs, and the Mamdani method with center-of-gravity defuzzification. The proposed model uses three FISs for audit, inherent and control risks, with five levels of linguistic variables for the outputs. The FISs include 25, 25 and 81 if-then rules, respectively, and Iranian audit experts confirmed all the rules.
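The pipeline the abstract describes (triangular fuzzy inputs, Mamdani inference, center-of-gravity defuzzification) can be sketched with a toy one-input FIS. The variable names, universes and rules below are invented for illustration and are not the paper's actual rule base.

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

def mamdani_risk(x):
    """Toy one-input Mamdani FIS: map a 0-10 evidence score to a risk in [0, 1]."""
    # fuzzify the crisp input against three antecedent sets
    low, med, high = tri(x, -5, 0, 5), tri(x, 0, 5, 10), tri(x, 5, 10, 15)
    z = np.linspace(0.0, 1.0, 501)            # discretized output universe
    # three rules (low->low risk, etc.): min-implication, max-aggregation
    agg = np.maximum.reduce([
        np.minimum(low,  tri(z, -0.5, 0.0, 0.5)),
        np.minimum(med,  tri(z,  0.0, 0.5, 1.0)),
        np.minimum(high, tri(z,  0.5, 1.0, 1.5)),
    ])
    # center-of-gravity (centroid) defuzzification
    return float(np.sum(z * agg) / np.sum(agg))
```

A full ARM implementation would combine three such FISs (inherent, control and detection/audit risk) with larger rule bases.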

  16. Dynamic exponents for Potts model cluster algorithms

    Science.gov (United States)

    Coddington, Paul D.; Baillie, Clive F.

    We have studied the Swendsen-Wang and Wolff cluster update algorithms for the Ising model in 2, 3 and 4 dimensions. The data indicate simple relations between the specific heat and the Wolff autocorrelations, and between the magnetization and the Swendsen-Wang autocorrelations. This implies that the dynamic critical exponents are related to the static exponents of the Ising model. We also investigate the possibility of similar relationships for the Q-state Potts model.
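The Wolff update studied above grows a single cluster of aligned spins, adding each neighboring aligned spin with bond probability 1 − exp(−2β), and then flips the whole cluster. A minimal sketch for the 2D Ising model (lattice size and parameters in the usage are illustrative):

```python
import numpy as np

def wolff_update(spins, beta, rng):
    """One Wolff cluster flip on a 2D Ising lattice with periodic boundaries."""
    L = spins.shape[0]
    p_add = 1.0 - np.exp(-2.0 * beta)          # bond-activation probability
    i, j = rng.integers(L), rng.integers(L)    # random seed site
    seed_spin = spins[i, j]
    cluster = {(i, j)}
    stack = [(i, j)]
    while stack:                               # grow the cluster depth-first
        x, y = stack.pop()
        for nx, ny in ((x+1) % L, y), ((x-1) % L, y), (x, (y+1) % L), (x, (y-1) % L):
            if (nx, ny) not in cluster and spins[nx, ny] == seed_spin \
               and rng.random() < p_add:
                cluster.add((nx, ny))
                stack.append((nx, ny))
    for x, y in cluster:                       # flip the whole cluster
        spins[x, y] *= -1
    return len(cluster)
```

Measuring cluster sizes and autocorrelation times of such updates across temperatures is how the dynamic exponents in the abstract are estimated.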

  17. A Generic Design Model for Evolutionary Algorithms

    Institute of Scientific and Technical Information of China (English)

    He Feng; Kang Li-shan; Chen Yu-ping

    2003-01-01

A generic design model for evolutionary algorithms is proposed in this paper. The model, described in detail in UML, focuses on the key concepts and mechanisms in evolutionary algorithms. The model not only achieves separation of concerns and encapsulation of implementations by classification and abstraction of those concepts, it also has a flexible architecture due to the application of design patterns. As a result, the model is reusable, extendible, easy to understand, easy to use, and easy to test. A large number of experiments applying the model to many different problems adequately illustrate the generality and effectiveness of the model.

  18. Model based development of engine control algorithms

    NARCIS (Netherlands)

    Dekker, H.J.; Sturm, W.L.

    1996-01-01

    Model based development of engine control systems has several advantages. The development time and costs are strongly reduced because much of the development and optimization work is carried out by simulating both engine and control system. After optimizing the control algorithm it can be executed b

  19. Worm algorithm for the CP(N−1) model

    Directory of Open Access Journals (Sweden)

    Tobias Rindlisbacher

    2017-05-01

The CP(N−1) model in 2D is an interesting toy model for 4D QCD, as it possesses confinement, asymptotic freedom and a non-trivial vacuum structure. Due to the lower dimensionality and the absence of fermions, the computational cost for simulating 2D CP(N−1) on the lattice is much lower than that for simulating 4D QCD. However, to our knowledge, no efficient algorithm for simulating the lattice CP(N−1) model for N>2 has been tested so far which also works at finite density. To this end we propose a new type of worm algorithm which is appropriate to simulate the lattice CP(N−1) model in a dual, flux-variables based representation, in which the introduction of a chemical potential does not give rise to any complications. In addition to the usual worm moves, where a defect is just moved from one lattice site to the next, our algorithm additionally allows for worm-type moves in the internal variable space of single links, which accelerates the Monte Carlo evolution. We use our algorithm to compare the two popular CP(N−1) lattice actions and exhibit marked differences in their approach to the continuum limit.

  20. Algorithms and Models for the Web Graph

    NARCIS (Netherlands)

    Gleich, David F.; Komjathy, Julia; Litvak, Nelly

    2015-01-01

    This volume contains the papers presented at WAW2015, the 12th Workshop on Algorithms and Models for the Web-Graph held during December 10–11, 2015, in Eindhoven. There were 24 submissions. Each submission was reviewed by at least one, and on average two, Program Committee members. The committee dec

  2. Optimization in engineering models and algorithms

    CERN Document Server

    Sioshansi, Ramteen

    2017-01-01

    This textbook covers the fundamentals of optimization, including linear, mixed-integer linear, nonlinear, and dynamic optimization techniques, with a clear engineering focus. It carefully describes classical optimization models and algorithms using an engineering problem-solving perspective, and emphasizes modeling issues using many real-world examples related to a variety of application areas. Providing an appropriate blend of practical applications and optimization theory makes the text useful to both practitioners and students, and gives the reader a good sense of the power of optimization and the potential difficulties in applying optimization to modeling real-world systems. The book is intended for undergraduate and graduate-level teaching in industrial engineering and other engineering specialties. It is also of use to industry practitioners, due to the inclusion of real-world applications, opening the door to advanced courses on both modeling and algorithm development within the industrial engineering ...

  3. Weekly Fleet Assignment Model and Algorithm

    Institute of Scientific and Technical Information of China (English)

    ZHU Xing-hui; ZHU Jin-fu; GONG Zai-wu

    2007-01-01

A 0-1 integer programming model for weekly fleet assignment was put forward based on a linear network and weekly flight scheduling in China. In this model, the objective function is to maximize the total profit of fleet assignment, subject to the constraints of coverage, aircraft flow balance, fleet size, aircraft availability, aircraft usage, flight restriction, aircraft seat capacity, and stopover. Then a branch-and-bound algorithm based on special ordered sets was applied to solve the model. At last, a real-world case study on an airline with 5 fleets, 48 aircraft and 1,786 flight legs indicated that the profit increase was $1,591,276 per week and the running time was no more than 4 min, which shows that the model and algorithm are fairly good for domestic airlines.

  4. Computational Granular Dynamics Models and Algorithms

    CERN Document Server

    Pöschel, Thorsten

    2005-01-01

    Computer simulations not only belong to the most important methods for the theoretical investigation of granular materials, but also provide the tools that have enabled much of the expanding research by physicists and engineers. The present book is intended to serve as an introduction to the application of numerical methods to systems of granular particles. Accordingly, emphasis is placed on a general understanding of the subject rather than on the presentation of the latest advances in numerical algorithms. Although a basic knowledge of C++ is needed for the understanding of the numerical methods and algorithms in the book, it avoids usage of elegant but complicated algorithms to remain accessible for those who prefer to use a different programming language. While the book focuses more on models than on the physics of granular material, many applications to real systems are presented.

  5. Efficient Algorithms for Parsing the DOP Model

    CERN Document Server

    Goodman, J

    1996-01-01

    Excellent results have been reported for Data-Oriented Parsing (DOP) of natural language texts (Bod, 1993). Unfortunately, existing algorithms are both computationally intensive and difficult to implement. Previous algorithms are expensive due to two factors: the exponential number of rules that must be generated and the use of a Monte Carlo parsing algorithm. In this paper we solve the first problem by a novel reduction of the DOP model to a small, equivalent probabilistic context-free grammar. We solve the second problem by a novel deterministic parsing strategy that maximizes the expected number of correct constituents, rather than the probability of a correct parse tree. Using the optimizations, experiments yield a 97% crossing brackets rate and 88% zero crossing brackets rate. This differs significantly from the results reported by Bod, and is comparable to results from a duplication of Pereira and Schabes's (1992) experiment on the same data. We show that Bod's results are at least partially due to an e...

  6. Markov chains models, algorithms and applications

    CERN Document Server

    Ching, Wai-Ki; Ng, Michael K; Siu, Tak-Kuen

    2013-01-01

This new edition of Markov Chains: Models, Algorithms and Applications has been completely reformatted as a text, complete with end-of-chapter exercises, a new focus on management science, new applications of the models, and new examples with applications in financial risk management and modeling of financial data. This book consists of eight chapters. Chapter 1 gives a brief introduction to the classical theory on both discrete and continuous time Markov chains. The relationship between Markov chains of finite states and matrix theory will also be highlighted. Some classical iterative methods
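The matrix-theory connection for finite-state chains can be illustrated by computing a stationary distribution with power iteration, one of the classical iterative methods such a text covers. A minimal sketch (the transition matrix in the usage is a made-up two-state example):

```python
import numpy as np

def stationary_distribution(P, tol=1e-12, max_iter=10_000):
    """Stationary distribution of a row-stochastic matrix P by power iteration.

    Repeatedly applies pi <- pi P from the uniform distribution until the
    update changes by less than tol in L1 norm. Assumes the chain is ergodic.
    """
    n = P.shape[0]
    pi = np.full(n, 1.0 / n)
    for _ in range(max_iter):
        nxt = pi @ P
        if np.abs(nxt - pi).sum() < tol:
            return nxt
        pi = nxt
    return pi
```

For a two-state chain with P = [[0.9, 0.1], [0.5, 0.5]], solving pi = pi P by hand gives pi = (5/6, 1/6), which the iteration reproduces.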

  7. Genetic Algorithm Based Microscale Vehicle Emissions Modelling

    Directory of Open Access Journals (Sweden)

    Sicong Zhu

    2015-01-01

There is a need to match emission estimation accuracy with the outputs of transport models. The overall error rate in long-term traffic forecasts resulting from strategic transport models is likely to be significant. Microsimulation models, whilst high-resolution in nature, may have similar measurement errors if they use the outputs of strategic models to obtain traffic demand predictions. At the micro level, this paper discusses the limitations of existing emissions estimation approaches. Emission models for predicting emission pollutants other than CO2 are proposed. A genetic algorithm approach is adopted to select the predicting variables for the black-box model. The approach is capable of solving combinatorial optimization problems. Overall, the emission prediction results reveal that the proposed new models outperform conventional equations in terms of accuracy and robustness.

  8. Fast Algorithms for Model-Based Diagnosis

    Science.gov (United States)

    Fijany, Amir; Barrett, Anthony; Vatan, Farrokh; Mackey, Ryan

    2005-01-01

    Two improved new methods for automated diagnosis of complex engineering systems involve the use of novel algorithms that are more efficient than prior algorithms used for the same purpose. Both the recently developed algorithms and the prior algorithms in question are instances of model-based diagnosis, which is based on exploring the logical inconsistency between an observation and a description of a system to be diagnosed. As engineering systems grow more complex and increasingly autonomous in their functions, the need for automated diagnosis increases concomitantly. In model-based diagnosis, the function of each component and the interconnections among all the components of the system to be diagnosed (for example, see figure) are represented as a logical system, called the system description (SD). Hence, the expected behavior of the system is the set of logical consequences of the SD. Faulty components lead to inconsistency between the observed behaviors of the system and the SD. The task of finding the faulty components (diagnosis) reduces to finding the components, the abnormalities of which could explain all the inconsistencies. Of course, the meaningful solution should be a minimal set of faulty components (called a minimal diagnosis), because the trivial solution, in which all components are assumed to be faulty, always explains all inconsistencies. Although the prior algorithms in question implement powerful methods of diagnosis, they are not practical because they essentially require exhaustive searches among all possible combinations of faulty components and therefore entail the amounts of computation that grow exponentially with the number of components of the system.
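In Reiter's classical formulation of model-based diagnosis, the minimal diagnoses described above are exactly the minimal hitting sets of the conflict sets. A brute-force sketch, practical only for small systems (the component names and conflict sets in the usage are invented; the paper's contribution is precisely avoiding this exhaustive search):

```python
from itertools import combinations

def minimal_diagnoses(components, conflicts):
    """All minimal hitting sets of the given conflict sets (brute force).

    A diagnosis must intersect every conflict set; minimality means no
    proper subset of it is also a diagnosis. Enumerating candidates by
    increasing size makes the minimality check a simple subset test.
    """
    diagnoses = []
    for size in range(len(components) + 1):
        for cand in combinations(components, size):
            s = set(cand)
            if all(s & c for c in conflicts):                 # hits every conflict
                if not any(set(d) <= s for d in diagnoses):   # no smaller diagnosis inside
                    diagnoses.append(cand)
    return diagnoses
```

For components {A, B, C} with conflicts {A, B} and {B, C}, the minimal diagnoses are {B} and {A, C}: either B alone is faulty, or both A and C are.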

  9. Load-balancing algorithms for climate models

    Energy Technology Data Exchange (ETDEWEB)

    Foster, I.T.; Toonen, B.R.

    1994-06-01

Implementations of climate models on scalable parallel computer systems can suffer from load imbalances due to temporal and spatial variations in the amount of computation required for physical parameterizations such as solar radiation and convective adjustment. We have developed specialized techniques for correcting such imbalances. These techniques are incorporated in a general-purpose, programmable load-balancing library that allows the mapping of computation to processors to be specified as a series of maps generated by a programmer-supplied load-balancing module. The communication required to move from one map to another is performed automatically by the library, without programmer intervention. In this paper, we describe the load-balancing problem and the techniques that we have developed to solve it. We also describe specific load-balancing algorithms that we have developed for PCCM2, a scalable parallel implementation of the Community Climate Model, and present experimental results that demonstrate the effectiveness of these algorithms on parallel computers.

  10. Load-balancing algorithms for climate models

    Science.gov (United States)

    Foster, I. T.; Toonen, B. R.

    Implementations of climate models on scalable parallel computer systems can suffer from load imbalances due to temporal and spatial variations in the amount of computation required for physical parameterizations such as solar radiation and convective adjustment. We have developed specialized techniques for correcting such imbalances. These techniques are incorporated in a general-purpose, programmable load-balancing library that allows the mapping of computation to processors to be specified as a series of maps generated by a programmer-supplied load-balancing module. The communication required to move from one map to another is performed automatically by the library, without programmer intervention. In this paper, we describe the load-balancing problem and the techniques that we have developed to solve it. We also describe specific load-balancing algorithms that we have developed for PCCM2, a scalable parallel implementation of the community climate model, and present experimental results that demonstrate the effectiveness of these algorithms on parallel computers.
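One simple way to realize the programmer-supplied map such a library expects is a greedy longest-processing-time (LPT) heuristic: assign the most expensive work units first, each to the currently least-loaded processor. This generic sketch is not the PCCM2 library's actual algorithm:

```python
import heapq

def lpt_map(costs, n_procs):
    """Map task costs to processors: heaviest task first onto the least-loaded proc.

    Returns {task_index: processor_index}. Runs in O(n log n) using a
    min-heap of (load, processor) pairs.
    """
    heap = [(0.0, p) for p in range(n_procs)]
    heapq.heapify(heap)
    assignment = {}
    for task in sorted(range(len(costs)), key=lambda t: -costs[t]):
        load, p = heapq.heappop(heap)          # least-loaded processor
        assignment[task] = p
        heapq.heappush(heap, (load + costs[task], p))
    return assignment
```

In the climate-model setting the "tasks" would be grid columns with measured per-timestep costs, and the map would be recomputed whenever the measured imbalance exceeds a threshold.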

  11. Synaptic dynamics: linear model and adaptation algorithm.

    Science.gov (United States)

    Yousefi, Ali; Dibazar, Alireza A; Berger, Theodore W

    2014-08-01

In this research, temporal processing in brain neural circuitries is addressed by a dynamic model of synaptic connections in which the synapse model accounts for both pre- and post-synaptic processes determining its temporal dynamics and strength. Neurons, which are excited by the post-synaptic potentials of hundreds of synapses, build the computational engine capable of processing dynamic neural stimuli. Temporal dynamics in neural models with dynamic synapses are analyzed, and learning algorithms for synaptic adaptation of neural networks with hundreds of synaptic connections are proposed. The paper starts by introducing a linear approximate model for the temporal dynamics of synaptic transmission. The proposed linear model substantially simplifies the analysis and training of spiking neural networks. Furthermore, it is capable of replicating the synaptic response of the non-linear facilitation-depression model with an accuracy better than 92.5%. In the second part of the paper, a supervised spike-in-spike-out learning rule for synaptic adaptation in dynamic synapse neural networks (DSNN) is proposed. The proposed learning rule is a biologically plausible process, and it is capable of simultaneously adjusting both pre- and post-synaptic components of individual synapses. The last section of the paper starts by presenting a rigorous analysis of the learning algorithm in a system identification task with hundreds of synaptic connections, which confirms the learning algorithm's accuracy, repeatability and scalability. The DSNN is utilized to predict the spiking activity of cortical neurons and in pattern recognition tasks. The DSNN model is demonstrated to be a generative model capable of producing different cortical neuron spiking patterns and CA1 pyramidal neuron recordings. A single-layer DSNN classifier on a benchmark pattern recognition task outperforms a 2-Layer Neural Network and GMM classifiers while having fewer free parameters and
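The flavor of a linear synaptic model can be conveyed by a first-order leaky integrator driven by a spike train. The weight and time constant below are illustrative, and this is only a generic sketch, not the paper's facilitation-depression approximation:

```python
import numpy as np

def linear_synapse(spikes, w=1.0, tau=20.0, dt=1.0):
    """Post-synaptic potential of a linear synapse: leaky integration of spikes.

    Each spike adds w to the state, which then decays exponentially with
    time constant tau (time steps of size dt). This is the discrete-time
    solution of tau * dv/dt = -v + tau * w * s(t).
    """
    decay = np.exp(-dt / tau)
    psp = np.zeros(len(spikes))
    state = 0.0
    for t, s in enumerate(spikes):
        state = state * decay + w * s
        psp[t] = state
    return psp
```

Because the model is linear, the response to any spike train is the superposition of single-spike responses, which is what makes analysis and supervised training of such networks tractable.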

  12. Evolutionary algorithms in genetic regulatory networks model

    CERN Document Server

    Raza, Khalid

    2012-01-01

Genetic Regulatory Networks (GRNs) play a vital role in the understanding of complex biological processes. Modeling GRNs is significantly important in order to reveal fundamental cellular processes, examine gene functions and understand their complex relationships. Understanding the interactions between genes makes it possible to develop better methods for drug discovery and disease diagnosis, since many diseases are characterized by abnormal behaviour of genes. In this paper we review various evolutionary algorithm-based approaches for modeling GRNs and discuss various opportunities and challenges.

  13. Sparse modeling theory, algorithms, and applications

    CERN Document Server

    Rish, Irina

    2014-01-01

    "A comprehensive, clear, and well-articulated book on sparse modeling. This book will stand as a prime reference to the research community for many years to come." -Ricardo Vilalta, Department of Computer Science, University of Houston. "This book provides a modern introduction to sparse methods for machine learning and signal processing, with a comprehensive treatment of both theory and algorithms. Sparse Modeling is an ideal book for a first-year graduate course." -Francis Bach, INRIA - École Normale Supérieure, Paris

  14. Multiscale modeling for classification of SAR imagery using hybrid EM algorithm and genetic algorithm

    Institute of Scientific and Technical Information of China (English)

    Xianbin Wen; Hua Zhang; Jianguang Zhang; Xu Jiao; Lei Wang

    2009-01-01

    A novel method that hybridizes the genetic algorithm (GA) and the expectation maximization (EM) algorithm for the classification of synthetic aperture radar (SAR) imagery is proposed, based on the finite Gaussian mixture model (GMM) and the multiscale autoregressive (MAR) model. This algorithm is capable of improving the global optimality and consistency of the classification performance. Experiments on SAR images show that the proposed algorithm significantly outperforms the standard EM method in classification accuracy.
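
    The E- and M-steps at the core of the hybrid method above can be sketched for a one-dimensional two-component mixture (a toy stand-in: the GA-driven parameter search and the multiscale MAR features of the paper are not reproduced, and all data and starting values below are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic 1-D data from two Gaussians (a stand-in for SAR pixel statistics)
x = np.concatenate([rng.normal(-2, 0.5, 300), rng.normal(3, 1.0, 300)])

# Initial guesses (in the paper a GA helps search this space; here they are fixed)
w = np.array([0.5, 0.5])      # mixture weights
mu = np.array([-1.0, 1.0])    # component means
var = np.array([1.0, 1.0])    # component variances

for _ in range(50):
    # E-step: posterior responsibility of each component for each sample
    pdf = np.exp(-(x[:, None] - mu) ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)
    r = w * pdf
    r /= r.sum(axis=1, keepdims=True)
    # M-step: re-estimate weights, means and variances from responsibilities
    n = r.sum(axis=0)
    w = n / len(x)
    mu = (r * x[:, None]).sum(axis=0) / n
    var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / n
```

    Plain EM of this kind is sensitive to initialisation, which is precisely the weakness a GA-based global search is meant to address.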

  15. A new efficient Cluster Algorithm for the Ising Model

    CERN Document Server

    Nyffeler, M; Wiese, U J; Nyfeler, Matthias; Pepe, Michele; Wiese, Uwe-Jens

    2005-01-01

    Using D-theory we construct a new efficient cluster algorithm for the Ising model. The construction is very different from the standard Swendsen-Wang algorithm and related to worm algorithms. With the new algorithm we have measured the correlation function with high precision over a surprisingly large number of orders of magnitude.
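
    For readers unfamiliar with cluster updates, a minimal Wolff-style sketch for the standard 2D Ising model looks as follows (this is the classic algorithm the abstract contrasts with, not the D-theory construction itself; lattice size and temperature are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(1)
L, T = 16, 2.0                      # lattice size, temperature (J = k_B = 1)
spins = rng.choice([-1, 1], size=(L, L))
p_add = 1.0 - np.exp(-2.0 / T)      # bond-activation probability

def wolff_step(s):
    """Grow one cluster from a random seed site and flip it."""
    i, j = rng.integers(L, size=2)
    seed = s[i, j]
    stack, cluster = [(i, j)], {(i, j)}
    while stack:
        a, b = stack.pop()
        for da, db in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            n = ((a + da) % L, (b + db) % L)   # periodic boundaries
            if n not in cluster and s[n] == seed and rng.random() < p_add:
                cluster.add(n)
                stack.append(n)
    for site in cluster:
        s[site] = -seed
    return len(cluster)

for _ in range(200):
    wolff_step(spins)
m = abs(spins.mean())
```

    A single cluster flip can change a macroscopic fraction of the spins at once, which is why cluster algorithms beat local updates near criticality.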

  16. Link mining models, algorithms, and applications

    CERN Document Server

    Yu, Philip S; Faloutsos, Christos

    2010-01-01

    This book presents in-depth surveys and systematic discussions on models, algorithms and applications for link mining. Link mining is an important field of data mining. Traditional data mining focuses on 'flat' data in which each data object is represented as a fixed-length attribute vector. However, many real-world data sets are much richer in structure, involving objects of multiple types that are related to each other. Hence, link mining has recently become an emerging field of data mining, which has a high impact in various important applications such as text mining and social network analysis…

  17. Genetic Algorithms Principles Towards Hidden Markov Model

    Directory of Open Access Journals (Sweden)

    Nabil M. Hewahi

    2011-10-01

    Full Text Available In this paper we propose a general approach based on Genetic Algorithms (GAs) to evolve Hidden Markov Models (HMMs). The problem arises because, when experts assign probability values for an HMM, they use only limited inputs; the assigned probability values might not be accurate in other cases related to the same domain. We introduce an approach based on GAs to find suitable probability values for the HMM so that it is correct in more cases than those used to assign the probability values.
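
    The idea of searching HMM probabilities with a GA can be sketched as follows: a population of transition matrices is evolved to maximize the forward-algorithm likelihood of an observation sequence (everything here, from the toy sequence to the mutation scheme, is an illustrative assumption rather than the paper's actual encoding):

```python
import numpy as np

rng = np.random.default_rng(7)
obs = np.array([0, 1, 1, 0, 1, 1, 1, 0, 0, 1])   # toy observation sequence
B = np.array([[0.8, 0.2],                        # fixed emission matrix
              [0.3, 0.7]])
pi = np.array([0.5, 0.5])                        # initial state distribution

def likelihood(A):
    """Forward algorithm for a 2-state HMM with transition matrix A."""
    alpha = pi * B[:, obs[0]]
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
    return alpha.sum()

def mutate(A):
    """Perturb a transition matrix and re-normalize its rows."""
    A = np.abs(A + rng.normal(0, 0.05, A.shape))
    return A / A.sum(axis=1, keepdims=True)

# Elitist GA over transition matrices
pop = [mutate(np.full((2, 2), 0.5)) for _ in range(20)]
for _ in range(100):
    children = [mutate(A) for A in pop]
    pop = sorted(pop + children, key=likelihood, reverse=True)[:20]
best = pop[0]
```

    In practice the chromosome would also encode emission and initial probabilities; only the transition matrix is evolved here to keep the sketch short.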

  18. Models and Algorithms for Tracking Target with Coordinated Turn Motion

    Directory of Open Access Journals (Sweden)

    Xianghui Yuan

    2014-01-01

    Full Text Available Tracking a target with coordinated turn (CT) motion is highly dependent on the models and algorithms. First, the widely used models are compared in this paper: the coordinated turn (CT) model with known turn rate, the augmented coordinated turn (ACT) model with Cartesian velocity, the ACT model with polar velocity, the CT model using a kinematic constraint, and the maneuver-centered circular motion model. Then, in the single-model tracking framework, the tracking algorithms for the last four models are compared and suggestions on the choice of models for different practical target tracking problems are given. Finally, in the multiple-model (MM) framework, an algorithm based on the expectation maximization (EM) algorithm is derived, including both the batch form and the recursive form. Compared with the widely used interacting multiple model (IMM) algorithm, the EM algorithm shows its effectiveness.
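
    The first of the compared models, the CT model with known turn rate, has a closed-form state transition matrix. The sketch below (state ordering and sampling values are our choices, not the paper's) propagates a state with it and illustrates that the model preserves speed, as a coordinated turn should:

```python
import numpy as np

def ct_transition(omega, T):
    """Transition matrix of the CT model with known turn rate omega,
    for the state vector [x, vx, y, vy] and sampling interval T."""
    s, c = np.sin(omega * T), np.cos(omega * T)
    return np.array([
        [1, s / omega,       0, -(1 - c) / omega],
        [0, c,               0, -s],
        [0, (1 - c) / omega, 1, s / omega],
        [0, s,               0, c],
    ])

# Propagate a state turning at 0.1 rad/s: the speed stays constant
F = ct_transition(0.1, 1.0)
x = np.array([0.0, 10.0, 0.0, 0.0])
speeds = []
for _ in range(100):
    x = F @ x
    speeds.append(np.hypot(x[1], x[3]))
```

    The velocity sub-block is a pure rotation, which is what makes the turn "coordinated": heading changes while speed does not.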

  19. A Multiple Model Approach to Modeling Based on LPF Algorithm

    Institute of Scientific and Technical Information of China (English)

    2001-01-01

    Input-output data fitting methods are often used for modeling nonlinear systems of unknown structure. Based on model-on-demand tactics, a multiple-model approach to modeling nonlinear systems is presented. The basic idea is to find, in vast historical system input-output data sets, the data sets matching the current working point, and then to develop a local model using the Local Polynomial Fitting (LPF) algorithm. As the working point changes, multiple local models are built, which realize exact modeling of the global system. Comparison with other methods in simulation shows that the approach gives simple, effective and reliable estimation.
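
    The local model construction described above can be illustrated with a minimal local linear fit: historical data near the current working point are weighted by a kernel, and a weighted least-squares line is fitted there (the kernel, bandwidth and toy system below are illustrative assumptions, not the paper's exact LPF formulation):

```python
import numpy as np

rng = np.random.default_rng(2)
# Historical input-output data from an "unknown" nonlinear system
u = rng.uniform(-2, 2, 400)
y = np.sin(u) + 0.05 * rng.normal(size=400)

def lpf_predict(u_query, bandwidth=0.3):
    """Local linear fit around the current working point u_query."""
    w = np.exp(-0.5 * ((u - u_query) / bandwidth) ** 2)   # Gaussian kernel weights
    X = np.column_stack([np.ones_like(u), u - u_query])   # local design matrix
    W = np.diag(w)
    beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)      # weighted least squares
    return beta[0]                                        # local intercept = prediction

pred = lpf_predict(1.0)
```

    Each new working point triggers a fresh local fit, which is the "model-on-demand" behaviour the abstract refers to.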

  20. Nonlinear model predictive control theory and algorithms

    CERN Document Server

    Grüne, Lars

    2017-01-01

    This book offers readers a thorough and rigorous introduction to nonlinear model predictive control (NMPC) for discrete-time and sampled-data systems. NMPC schemes with and without stabilizing terminal constraints are detailed, and intuitive examples illustrate the performance of different NMPC variants. NMPC is interpreted as an approximation of infinite-horizon optimal control so that important properties like closed-loop stability, inverse optimality and suboptimality can be derived in a uniform manner. These results are complemented by discussions of feasibility and robustness. An introduction to nonlinear optimal control algorithms yields essential insights into how the nonlinear optimization routine—the core of any nonlinear model predictive controller—works. Accompanying software in MATLAB® and C++ (downloadable from extras.springer.com/), together with an explanatory appendix in the book itself, enables readers to perform computer experiments exploring the possibilities and limitations of NMPC. T...

  1. Warehouse Optimization Model Based on Genetic Algorithm

    Directory of Open Access Journals (Sweden)

    Guofeng Qin

    2013-01-01

    Full Text Available This paper takes the Bao Steel logistics automated warehouse system as an example. The premise is to keep the center of gravity of the shelf below half of the shelf height; as a result, the time cost of getting or putting goods on the shelf is reduced, and the distance between goods of the same kind is also reduced. A multiobjective optimization model is constructed and solved with a genetic algorithm, yielding a local optimal solution. Before optimization, the average time cost of getting or putting goods is 4.52996 s, and the average distance between goods of the same kind is 2.35318 m. After optimization, the average time cost is 4.28859 s, and the average distance is 1.97366 m. From this analysis we can conclude that the model can improve the efficiency of cargo storage.
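
    The flavour of such a genetic optimization can be conveyed with a toy slotting problem: frequently picked goods should sit on low shelf levels, and a permutation-encoded GA searches the assignment (the data, encoding and operators below are invented for illustration and are far simpler than the paper's multiobjective model):

```python
import numpy as np

rng = np.random.default_rng(3)
freq = np.array([9, 1, 7, 2, 8, 3, 6, 4])   # picking frequency of 8 goods
height = np.arange(1, 9)                    # shelf levels (1 = lowest)

def cost(perm):
    # Access time grows with the level a good is stored on
    return float(np.dot(freq[perm], height))

# GA over permutations: tournament selection + swap mutation + elitism
pop = [rng.permutation(8) for _ in range(30)]
for _ in range(200):
    new = []
    for _ in range(30):
        a, b = rng.integers(30, size=2)
        parent = pop[a] if cost(pop[a]) < cost(pop[b]) else pop[b]
        child = parent.copy()
        i, j = rng.integers(8, size=2)
        child[i], child[j] = child[j], child[i]   # swap mutation
        new.append(child)
    pop = sorted(new + pop, key=cost)[:30]        # elitist survivor selection
best = pop[0]
```

    By the rearrangement inequality the optimum places goods in descending frequency order from the lowest level up; the GA converges toward that assignment.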

  2. Adaptive Numerical Algorithms in Space Weather Modeling

    Science.gov (United States)

    Toth, Gabor; vanderHolst, Bart; Sokolov, Igor V.; DeZeeuw, Darren; Gombosi, Tamas I.; Fang, Fang; Manchester, Ward B.; Meng, Xing; Nakib, Dalal; Powell, Kenneth G.; Stout, Quentin F.; Glocer, Alex; Ma, Ying-Juan; Opher, Merav

    2010-01-01

    Space weather describes the various processes in the Sun-Earth system that present danger to human health and technology. The goal of space weather forecasting is to provide an opportunity to mitigate these negative effects. Physics-based space weather modeling is characterized by disparate temporal and spatial scales as well as by different physics in different domains. A multi-physics system can be modeled by a software framework comprising several components. Each component corresponds to a physics domain, and each component is represented by one or more numerical models. The publicly available Space Weather Modeling Framework (SWMF) can execute and couple together several components distributed over a parallel machine in a flexible and efficient manner. The framework also allows resolving disparate spatial and temporal scales with independent spatial and temporal discretizations in the various models. Several of the computationally most expensive domains of the framework are modeled by the Block-Adaptive Tree Solar wind Roe Upwind Scheme (BATS-R-US) code that can solve various forms of the magnetohydrodynamics (MHD) equations, including Hall, semi-relativistic, multi-species and multi-fluid MHD, anisotropic pressure, radiative transport and heat conduction. Modeling disparate scales within BATS-R-US is achieved by a block-adaptive mesh both in Cartesian and generalized coordinates. Most recently we have created a new core for BATS-R-US: the Block-Adaptive Tree Library (BATL) that provides a general toolkit for creating, load balancing and message passing in a 1, 2 or 3 dimensional block-adaptive grid. We describe the algorithms of BATL and demonstrate its efficiency and scaling properties for various problems. BATS-R-US uses several time-integration schemes to address multiple time-scales: explicit time stepping with fixed or local time steps, partially steady-state evolution, point-implicit, semi-implicit, explicit/implicit, and fully implicit numerical

  3. Dynamical behavior of the Niedermayer algorithm applied to Potts models

    OpenAIRE

    Girardi, D.; Penna, T. J. P.; Branco, N. S.

    2012-01-01

    In this work we make a numerical study of the dynamic universality class of the Niedermayer algorithm applied to the two-dimensional Potts model with 2, 3, and 4 states. This algorithm updates clusters of spins and has a free parameter, $E_0$, which controls the size of these clusters, such that $E_0=1$ is the Metropolis algorithm and $E_0=0$ regains the Wolff algorithm, for the Potts model. For $-1

  4. A genetic algorithm for solving supply chain network design model

    Science.gov (United States)

    Firoozi, Z.; Ismail, N.; Ariafar, S. H.; Tang, S. H.; Ariffin, M. K. M. A.

    2013-09-01

    Network design is by nature costly, and optimization models play a significant role in reducing the unnecessary cost components of a distribution network. This study proposes a genetic algorithm to solve a distribution network design model. The structure of the chromosome in the proposed algorithm is defined in a novel way that, in addition to producing feasible solutions, also reduces the computational complexity of the algorithm. Computational results are presented to show the algorithm's performance.

  5. Genetic Algorithm Approaches to Prebiotic Chemistry Modeling

    Science.gov (United States)

    Lohn, Jason; Colombano, Silvano

    1997-01-01

    We model an artificial chemistry comprised of interacting polymers by specifying two initial conditions: a distribution of polymers and a fixed set of reversible catalytic reactions. A genetic algorithm is used to find a set of reactions that exhibit a desired dynamical behavior. Such a technique is useful because it allows an investigator to determine whether a specific pattern of dynamics can be produced, and if it can, the reaction network found can be then analyzed. We present our results in the context of studying simplified chemical dynamics in theorized protocells - hypothesized precursors of the first living organisms. Our results show that given a small sample of plausible protocell reaction dynamics, catalytic reaction sets can be found. We present cases where this is not possible and also analyze the evolved reaction sets.

  6. Bayesian online algorithms for learning in discrete Hidden Markov Models

    OpenAIRE

    Alamino, Roberto C.; Caticha, Nestor

    2008-01-01

    We propose and analyze two different Bayesian online algorithms for learning in discrete Hidden Markov Models and compare their performance with the already known Baldi-Chauvin Algorithm. Using the Kullback-Leibler divergence as a measure of generalization we draw learning curves in simplified situations for these algorithms and compare their performances.

  7. Bouc–Wen hysteresis model identification using Modified Firefly Algorithm

    Energy Technology Data Exchange (ETDEWEB)

    Zaman, Mohammad Asif, E-mail: zaman@stanford.edu [Department of Electrical Engineering, Stanford University (United States); Sikder, Urmita [Department of Electrical Engineering and Computer Sciences, University of California, Berkeley (United States)

    2015-12-01

    The parameters of the Bouc–Wen hysteresis model are identified using a Modified Firefly Algorithm. The proposed algorithm uses dynamic process control parameters to improve its performance. The algorithm is used to find the model parameter values that result in the least error between a set of given data points and points obtained from the Bouc–Wen model. The performance of the algorithm is compared with that of the conventional Firefly Algorithm, Genetic Algorithm and Differential Evolution algorithm in terms of convergence rate and accuracy. Compared to the other three optimization algorithms, the proposed algorithm is found to have a good convergence rate with a high degree of accuracy in identifying Bouc–Wen model parameters. Finally, the proposed method is used to find the Bouc–Wen model parameters from experimental data. The obtained model is found to be in good agreement with measured data. - Highlights: • We describe a new method to find the Bouc–Wen hysteresis model parameters. • We propose a Modified Firefly Algorithm. • We compare our method with existing methods and find that the proposed method performs better. • We use our model to fit experimental results. Good agreement is found.
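
    The model being fitted is the Bouc–Wen differential equation for the hysteretic variable z. A forward-Euler simulation of it for one parameter set looks as follows (the parameter values and the input signal are arbitrary illustrations, not the identified values from the paper):

```python
import numpy as np

# Bouc-Wen hysteresis: dz/dt = A*dx - beta*|dx|*|z|**(n-1)*z - gamma*dx*|z|**n
A, beta, gamma, n = 1.0, 0.5, 0.5, 1.0   # example parameters (assumed)
dt = 1e-3
t = np.arange(0, 10, dt)
x = np.sin(np.pi * t)                    # sinusoidal displacement input
z = np.zeros_like(t)
for k in range(len(t) - 1):
    dx = (x[k + 1] - x[k]) / dt
    dz = (A * dx
          - beta * abs(dx) * abs(z[k]) ** (n - 1) * z[k]
          - gamma * dx * abs(z[k]) ** n)
    z[k + 1] = z[k] + dz * dt            # forward-Euler step
```

    An identification run then wraps a simulation like this in an error function, |z_model - z_measured|, which the (modified) firefly algorithm minimizes over (A, beta, gamma, n).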

  8. Motion Model Employment using interacting Motion Model Algorithm

    DEFF Research Database (Denmark)

    Hussain, Dil Muhammad Akbar

    2006-01-01

    The paper presents a simulation study to track a maneuvering target using a selective approach in choosing the Interacting Multiple Model (IMM) algorithm to provide wider coverage to track such targets. Initially, there are two motion models in the system to track a target. The probability of each model being correct is computed through a likelihood function for each model. The study presents a simple technique to introduce additional models into the system using deterministic acceleration, which basically defines the dynamics of the system. Therefore, based on this value more motion models can be employed to increase the coverage. Finally, the combined estimate is obtained using a posteriori probabilities from the different filter models. The implemented approach provides an adaptive scheme for selecting various numbers of motion models. Motion model description is important as it defines the kind …

  9. Algorithm for Realistic Modeling of Graphitic Systems

    Directory of Open Access Journals (Sweden)

    A.V. Khomenko

    2011-01-01

    Full Text Available An algorithm for molecular dynamics simulations of graphitic systems using realistic semiempirical interaction potentials of carbon atoms taking into account both short-range and long-range contributions is proposed. Results of the use of the algorithm for a graphite sample are presented. The scalability of the algorithm depending on the system size and the number of processor cores involved in the calculations is analyzed.

  10. Modeling of higher order systems using artificial bee colony algorithm

    Directory of Open Access Journals (Sweden)

    Aytekin Bağış

    2016-05-01

    Full Text Available In this work, modeling of higher-order systems based on the artificial bee colony (ABC) algorithm was examined. Model parameters for sample systems from the literature were obtained using the algorithm, and its performance is presented in comparison with other methods. Simulation results show that the ABC-algorithm-based system modeling approach can be used as an efficient and powerful method for higher-order systems.

  11. A randomised controlled trial of oxytocin 5IU and placebo infusion versus oxytocin 5IU and 30IU infusion for the control of blood loss at elective caesarean section--pilot study. ISRCTN 40302163.

    LENUS (Irish Health Repository)

    Murphy, Deirdre J

    2012-02-01

    OBJECTIVE: To compare the blood loss at elective lower segment caesarean section with administration of oxytocin 5IU bolus versus oxytocin 5IU bolus and oxytocin 30IU infusion and to establish whether a large multi-centre trial is feasible. STUDY DESIGN: Women booked for an elective caesarean section were recruited to a pilot randomised controlled trial and randomised to either oxytocin 5IU bolus and placebo infusion or oxytocin 5IU bolus and oxytocin 30IU infusion. We wished to establish whether the study design was feasible and acceptable and to establish sample size estimates for a definitive multi-centre trial. The outcome measures were total estimated blood loss at caesarean section and in the immediate postpartum period and the need for an additional uterotonic agent. RESULTS: A total of 115 women were randomised and 110 were suitable for analysis (5 protocol violations). Despite strict exclusion criteria 84% of the target population were considered eligible for study participation and of those approached only 15% declined to participate and 11% delivered prior to the planned date. The total mean estimated blood loss was lower in the oxytocin infusion arm compared to placebo (567 ml versus 624 ml) and fewer women had a major haemorrhage (>1000 ml, 14% versus 17%) or required an additional uterotonic agent (5% versus 11%). A sample size of 1500 in each arm would be required to demonstrate a 3% absolute reduction in major haemorrhage (from baseline 10%) with >80% power. CONCLUSION: An additional oxytocin infusion at elective caesarean section may reduce blood loss and warrants evaluation in a large multi-centre trial.

  12. Polynomial search and global modeling: Two algorithms for modeling chaos.

    Science.gov (United States)

    Mangiarotti, S; Coudret, R; Drapeau, L; Jarlan, L

    2012-10-01

    Global modeling aims to build mathematical models of concise description. Polynomial Model Search (PoMoS) and Global Modeling (GloMo) are two complementary algorithms (freely downloadable at the following address: http://www.cesbio.ups-tlse.fr/us/pomos_et_glomo.html) designed for the modeling of observed dynamical systems based on a small set of time series. Models considered in these algorithms are based on ordinary differential equations built on a polynomial formulation. More specifically, PoMoS aims at finding polynomial formulations from a given set of 1 to N time series, whereas GloMo is designed for single time series and aims to identify the parameters for a selected structure. GloMo also provides basic features to visualize integrated trajectories and to characterize their structure when it is simple enough: One allows for drawing the first return map for a chosen Poincaré section in the reconstructed space; another one computes the Lyapunov exponent along the trajectory. In the present paper, global modeling from single time series is considered. A description of the algorithms is given and three examples are provided. The first example is based on the three variables of the Rössler attractor. The second one comes from an experimental analysis of the copper electrodissolution in phosphoric acid for which a less parsimonious global model was obtained in a previous study. The third example is an exploratory case and concerns the cycle of rainfed wheat under semiarid climatic conditions as observed through a vegetation index derived from a spatial sensor.
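
    The first example system, the Rössler attractor, is easy to reproduce. A fourth-order Runge–Kutta integration of its three polynomial ODEs (with the standard parameter values, which may differ from those used in the paper) generates the kind of time series such global modeling tools take as input:

```python
import numpy as np

def rossler(v, a=0.2, b=0.2, c=5.7):
    """Rössler system: x' = -y - z, y' = x + a*y, z' = b + z*(x - c)."""
    x, y, z = v
    return np.array([-y - z, x + a * y, b + z * (x - c)])

def rk4_step(f, v, h):
    """One classical fourth-order Runge-Kutta step."""
    k1 = f(v)
    k2 = f(v + h / 2 * k1)
    k3 = f(v + h / 2 * k2)
    k4 = f(v + h * k3)
    return v + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

h = 0.01
v = np.array([1.0, 1.0, 1.0])
traj = np.empty((20000, 3))
for i in range(20000):
    v = rk4_step(rossler, v, h)
    traj[i] = v
```

    A global modeling run would take one column of `traj` (a single observed variable) and search for a polynomial ODE system that reproduces it.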

  13. Critical dynamics of cluster algorithms in the dilute Ising model

    Science.gov (United States)

    Hennecke, M.; Heyken, U.

    1993-08-01

    Autocorrelation times for thermodynamic quantities at T_C are calculated from Monte Carlo simulations of the site-diluted simple cubic Ising model, using the Swendsen-Wang and Wolff cluster algorithms. Our results show that for these algorithms the autocorrelation times decrease when reducing the concentration of magnetic sites from 100% down to 40%. This is of crucial importance when estimating static properties of the model, since the variances of these estimators increase with autocorrelation time. The dynamical critical exponents are calculated for both algorithms, observing pronounced finite-size effects in the energy autocorrelation data for the algorithm of Wolff. We conclude that, when applied to the dilute Ising model, cluster algorithms become even more effective than local algorithms, for which increasing autocorrelation times are expected.

  14. Performance analysis of FXLMS algorithm with secondary path modeling error

    Institute of Scientific and Technical Information of China (English)

    SUN Xu; CHEN Duanshi

    2003-01-01

    Performance analysis of the filtered-X LMS (FXLMS) algorithm with secondary path modeling error is carried out in both the time and frequency domains. It is first shown that the effects of secondary path modeling error on the performance of the FXLMS algorithm are determined by the distribution of the relative error of the secondary path model along frequency. When the distribution of the relative error is uniform, the modeling error of the secondary path has no effect on the performance of the algorithm. In addition, a limitation property of the FXLMS algorithm is proved, which implies that the negative effects of secondary path modeling error can be compensated for by increasing the adaptive filter length. Finally, some insights into the "spillover" phenomenon of the FXLMS algorithm are given.
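
    A bare-bones FXLMS loop makes the role of the secondary path model explicit: the reference is filtered through the model Ŝ before entering the LMS update (the FIR paths, tonal reference and step size below are invented for illustration; here Ŝ is taken equal to S, i.e. no modeling error):

```python
import numpy as np

N = 4000
x = np.sin(2 * np.pi * 0.05 * np.arange(N))      # reference (tonal noise)
P = np.array([0.0, 0.0, 0.9, 0.4])               # primary path (assumed FIR)
S = np.array([0.0, 0.5, 0.3])                    # secondary path (assumed FIR)
S_hat = S.copy()                                 # secondary-path model: exact here

d = np.convolve(x, P)[:N]                        # disturbance at the error mic
xf = np.convolve(x, S_hat)[:N]                   # reference filtered through S_hat
L = 8                                            # adaptive filter length
w = np.zeros(L)
ybuf = np.zeros(len(S))                          # recent controller outputs
e = np.zeros(N)
mu = 0.01
for n in range(N):
    xv = x[max(0, n - L + 1):n + 1][::-1]
    y = w[:len(xv)] @ xv                         # controller output
    ybuf = np.roll(ybuf, 1); ybuf[0] = y
    e[n] = d[n] - S @ ybuf                       # residual after secondary path
    xfv = xf[max(0, n - L + 1):n + 1][::-1]
    w[:len(xfv)] += mu * e[n] * xfv              # FXLMS weight update
```

    Replacing `S_hat` with a perturbed copy of `S` reproduces the modeling-error situation the paper analyzes; the limitation property suggests a longer `L` can then recover performance.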

  15. Fireworks algorithm for mean-VaR/CVaR models

    Science.gov (United States)

    Zhang, Tingting; Liu, Zhifeng

    2017-10-01

    Intelligent algorithms have been widely applied to portfolio optimization problems. In this paper, we introduce a novel intelligent algorithm, named the fireworks algorithm, to solve the mean-VaR/CVaR model for the first time. The results show that, compared with the classical genetic algorithm, the fireworks algorithm not only improves the optimization accuracy and speed, but also makes the optimal solution more stable. We repeat our experiments at different confidence levels and different degrees of risk aversion, and the results are robust. This suggests that the fireworks algorithm has more advantages than the genetic algorithm in solving the portfolio optimization problem, and that applying it in this field is feasible and promising.

  16. Kriging-approximation simulated annealing algorithm for groundwater modeling

    Science.gov (United States)

    Shen, C. H.

    2015-12-01

    Optimization algorithms are often applied to search for the best parameters of complex groundwater models. Running the complex groundwater models to evaluate the objective function can be time-consuming. This research proposes a Kriging-approximation simulated annealing algorithm. Kriging is a spatial statistics method used to interpolate unknown variables based on surrounding given data. In the algorithm, the Kriging method is used to estimate the complicated objective function and is incorporated into simulated annealing. The contribution of the Kriging-approximation simulated annealing algorithm is to reduce calculation time and increase efficiency.
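
    The simulated annealing half of the method can be sketched on a cheap stand-in objective (in the actual algorithm most calls to the expensive groundwater model would be replaced by a Kriging estimate built from past runs; that surrogate step is omitted here and all numbers are illustrative):

```python
import math
import random

random.seed(5)

def objective(p):
    # Stand-in for one expensive groundwater-model run; the paper replaces
    # most such calls with a cheap Kriging estimate of this function
    return (p[0] - 2.0) ** 2 + (p[1] + 1.0) ** 2

p = [0.0, 0.0]
f = objective(p)
best, f_best = p[:], f
T = 1.0
for _ in range(5000):
    q = [p[0] + random.gauss(0, 0.1), p[1] + random.gauss(0, 0.1)]
    fq = objective(q)
    if fq < f or random.random() < math.exp(-(fq - f) / T):
        p, f = q, fq                # accept: always downhill, sometimes uphill
    if f < f_best:
        best, f_best = p[:], f      # remember the best parameters seen
    T *= 0.999                      # geometric cooling schedule
```

    The saving comes from how rarely `objective` must be the real model: once a Kriging surrogate is trusted, only occasional candidate points need a true model run.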

  17. Engineering of Algorithms for Hidden Markov models and Tree Distances

    DEFF Research Database (Denmark)

    Sand, Andreas

    grown exponentially because of drastic improvements in the technology behind DNA and RNA sequencing, and focus on the research field has increased due to its potential to expand our knowledge about biological mechanisms and to improve public health. There has therefore been a continuously growing demand … of the algorithms to exploit the parallel architecture of modern computers. In this PhD dissertation, I present my work with algorithmic optimizations and parallelizations in primarily two areas of algorithmic bioinformatics: algorithms for analyzing hidden Markov models and algorithms for computing distance measures between phylogenetic trees. Hidden Markov models are a class of probabilistic models used in a number of core applications in bioinformatics, such as modeling of proteins, gene finding and reconstruction of species and population histories. I show how a relatively simple parallelization can …

  18. Model-Free Adaptive Control Algorithm with Data Dropout Compensation

    OpenAIRE

    Xuhui Bu; Fashan Yu; Zhongsheng Hou; Hongwei Zhang

    2012-01-01

    The convergence of model-free adaptive control (MFAC) algorithm can be guaranteed when the system is subject to measurement data dropout. The system output convergent speed gets slower as dropout rate increases. This paper proposes a MFAC algorithm with data compensation. The missing data is first estimated using the dynamical linearization method, and then the estimated value is introduced to update control input. The convergence analysis of the proposed MFAC algorithm is given, and the effe...

  19. Performance modeling and prediction for linear algebra algorithms

    OpenAIRE

    Iakymchuk, Roman

    2012-01-01

    This dissertation incorporates two research projects: performance modeling and prediction for dense linear algebra algorithms, and high-performance computing on clouds. The first project is focused on dense matrix computations, which are often used as computational kernels for numerous scientific applications. To solve a particular mathematical operation, linear algebra libraries provide a variety of algorithms. The algorithm of choice depends, obviously, on its performance. Performance of su...

  20. DEVELOPMENT OF 2D HUMAN BODY MODELING USING THINNING ALGORITHM

    Directory of Open Access Journals (Sweden)

    K. Srinivasan

    2010-11-01

    Full Text Available Monitoring the behavior and activities of people in video surveillance has gained many applications in computer vision. This paper proposes a new approach to model the human body in 2D view for activity analysis using a thinning algorithm. The first step of this work is background subtraction, which is achieved by the frame differencing algorithm. The thinning algorithm is then used to find the skeleton of the human body. After thinning, thirteen feature points, such as terminating points, intersecting points, and shoulder, elbow and knee points, are extracted. This research work represents the body model in three different ways: a stick figure model, a patch model and a rectangle body model. The activities of humans are analyzed with the help of the 2D model for pre-defined poses from monocular video data. Finally, the time consumption and efficiency of the proposed algorithm are evaluated.
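
    The background subtraction step can be sketched directly: frame differencing thresholds the absolute difference of consecutive frames (the synthetic frames and threshold below are illustrative assumptions):

```python
import numpy as np

# Two synthetic grayscale frames: a bright 10x10 "person" moves 5 px right
prev = np.zeros((60, 80), dtype=float)
curr = np.zeros((60, 80), dtype=float)
prev[20:30, 30:40] = 255
curr[20:30, 35:45] = 255

diff = np.abs(curr - prev)          # frame differencing
mask = diff > 50                    # threshold -> foreground mask

ys, xs = np.nonzero(mask)           # pixel coordinates of detected motion
```

    The region where the blob overlaps itself cancels out, so only the leading and trailing edges of the motion survive the threshold; the mask would then be cleaned up and passed to the thinning stage.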

  1. Methodology and basic algorithms of the Livermore Economic Modeling System

    Energy Technology Data Exchange (ETDEWEB)

    Bell, R.B.

    1981-03-17

    The methodology and the basic pricing algorithms used in the Livermore Economic Modeling System (EMS) are described. The report explains the derivations of the EMS equations in detail; however, it could also serve as a general introduction to the modeling system. A brief but comprehensive explanation of what EMS is and does, and how it does it is presented. The second part examines the basic pricing algorithms currently implemented in EMS. Each algorithm's function is analyzed and a detailed derivation of the actual mathematical expressions used to implement the algorithm is presented. EMS is an evolving modeling system; improvements in existing algorithms are constantly under development and new submodels are being introduced. A snapshot of the standard version of EMS is provided and areas currently under study and development are considered briefly.

  2. Models and algorithms for biomolecules and molecular networks

    CERN Document Server

    DasGupta, Bhaskar

    2016-01-01

    By providing expositions to modeling principles, theories, computational solutions, and open problems, this reference presents a full scope on relevant biological phenomena, modeling frameworks, technical challenges, and algorithms. * Up-to-date developments of structures of biomolecules, systems biology, advanced models, and algorithms * Sampling techniques for estimating evolutionary rates and generating molecular structures * Accurate computation of probability landscape of stochastic networks, solving discrete chemical master equations * End-of-chapter exercises

  3. Model-Free Adaptive Control Algorithm with Data Dropout Compensation

    Directory of Open Access Journals (Sweden)

    Xuhui Bu

    2012-01-01

    Full Text Available The convergence of the model-free adaptive control (MFAC) algorithm can be guaranteed when the system is subject to measurement data dropout. The system output convergence speed gets slower as the dropout rate increases. This paper proposes an MFAC algorithm with data compensation. The missing data is first estimated using the dynamical linearization method, and then the estimated value is introduced to update the control input. The convergence analysis of the proposed MFAC algorithm is given, and the effectiveness is also validated by simulations. It is shown that the proposed algorithm can compensate for the effect of the data dropout, and that better output performance can be obtained.

  4. A motion retargeting algorithm based on model simplification

    Institute of Scientific and Technical Information of China (English)

    2005-01-01

    A new motion retargeting algorithm is presented, which adapts the motion capture data to a new character. To make the resulting motion realistic, the physically-based optimization method is adopted. However, the optimization process is difficult to converge to the optimal value because of high complexity of the physical human model. In order to address this problem, an appropriate simplified model automatically determined by a motion analysis technique is utilized, and then motion retargeting with this simplified model as an intermediate agent is implemented. The entire motion retargeting algorithm involves three steps of nonlinearly constrained optimization: forward retargeting, motion scaling and inverse retargeting. Experimental results show the validity of this algorithm.

  5. Quantum Monte Carlo methods algorithms for lattice models

    CERN Document Server

    Gubernatis, James; Werner, Philipp

    2016-01-01

    Featuring detailed explanations of the major algorithms used in quantum Monte Carlo simulations, this is the first textbook of its kind to provide a pedagogical overview of the field and its applications. The book provides a comprehensive introduction to the Monte Carlo method, its use, and its foundations, and examines algorithms for the simulation of quantum many-body lattice problems at finite and zero temperature. These algorithms include continuous-time loop and cluster algorithms for quantum spins, determinant methods for simulating fermions, power methods for computing ground and excited states, and the variational Monte Carlo method. Also discussed are continuous-time algorithms for quantum impurity models and their use within dynamical mean-field theory, along with algorithms for analytically continuing imaginary-time quantum Monte Carlo data. The parallelization of Monte Carlo simulations is also addressed. This is an essential resource for graduate students, teachers, and researchers interested in ...

  6. An Automatic Registration Algorithm for 3D Maxillofacial Model

    Science.gov (United States)

    Qiu, Luwen; Zhou, Zhongwei; Guo, Jixiang; Lv, Jiancheng

    2016-09-01

    3D image registration aims at aligning two 3D data sets in a common coordinate system, and has been widely used in computer vision, pattern recognition and computer-assisted surgery. One challenging problem in 3D registration is that point-wise correspondences between two point sets are often unknown a priori. In this work, we develop an automatic algorithm for registering 3D maxillofacial models, including facial surface and skull models. Our proposed registration algorithm can achieve a good alignment between partial and whole maxillofacial models in spite of ambiguous matching, which has a potential application in oral and maxillofacial reparative and reconstructive surgery. The proposed algorithm includes three steps: (1) 3D-SIFT feature extraction and FPFH descriptor construction; (2) feature matching using SAC-IA; (3) coarse rigid alignment and refinement by ICP. Experiments on facial surfaces and mandible skull models demonstrate the efficiency and robustness of our algorithm.
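
The refinement step (3) can be illustrated with a minimal point-to-point ICP sketch. This is a generic illustration under simplifying assumptions (brute-force nearest neighbours, no outlier rejection), not the authors' implementation, and the function names are my own:

```python
import numpy as np

def best_rigid_transform(src, dst):
    # Kabsch/SVD solve: least-squares rotation R and translation t
    # mapping the points in src onto the points in dst.
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:        # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, cd - R @ cs

def icp(src, dst, iters=50):
    # Iterative Closest Point: pair each source point with its nearest
    # destination point, solve for the best rigid transform, repeat.
    cur = src.copy()
    for _ in range(iters):
        d2 = ((cur[:, None, :] - dst[None, :, :]) ** 2).sum(axis=-1)
        R, t = best_rigid_transform(cur, dst[d2.argmin(axis=1)])
        cur = cur @ R.T + t
    return cur
```

In practice ICP needs a reasonable initial alignment, which is exactly what the SAC-IA feature-matching stage provides.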

  7. Syn-Extensional Constrictional Folding of the Gwoira Rider Block, a Large Fault-Bounded Slice Atop the Mai'iu Low-Angle Normal Fault, Woodlark Rift.

    Science.gov (United States)

    Little, T. A.; Webber, S. M.; Norton, K. P.; Mizera, M.; Oesterle, J.; Ellis, S. M.

    2016-12-01

    uppermost part of a LANF, Coulomb fault mechanical analysis (after Choi and Buck, 2012) can be applied to field observations to provide an upper limit on LANF frictional strength (µf). Modelling constrains the µf for the Mai'iu Fault to ≤0.25, which suggests that the Mai'iu Fault is frictionally very weak.

  8. New Model and Algorithm for Hardware/Software Partitioning

    Institute of Scientific and Technical Information of China (English)

    Ji-Gang Wu; Thambipillai Srikanthan; Guang-Wei Zou

    2008-01-01

    This paper focuses on the algorithmic aspects of hardware/software (HW/SW) partitioning, which seeks a composition of hardware and software components that both satisfies the hardware area constraint and optimizes execution time. The computational model is extended so that all possible types of communication can be taken into account in the HW/SW partitioning. A new dynamic programming algorithm is also proposed on the basis of this computational model, in which the source data of basic scheduling blocks, rather than the speedups used in previous work, are directly utilized to compute the optimal solution. The proposed algorithm runs in O(n·A) time for n code fragments and available hardware area A. Simulation results show that the proposed algorithm solves the HW/SW partitioning without an increase in running time, compared with the algorithm cited in the literature.
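
A knapsack-style dynamic program with this O(n·A) shape can be sketched as follows. The sketch ignores communication costs, which the paper's extended model explicitly includes, so it illustrates only the area-constrained time minimization:

```python
def hw_sw_partition(frags, A):
    # frags: list of (sw_time, hw_time, hw_area) per code fragment.
    # Returns the minimal total execution time using at most A units of
    # hardware area.  Simplified model: additive times, no communication.
    INF = float("inf")
    dp = [0.0] * (A + 1)           # dp[a]: min time with area budget a
    for sw, hw, area in frags:
        new = [INF] * (A + 1)
        for a in range(A + 1):
            new[a] = dp[a] + sw                       # run in software
            if a >= area and dp[a - area] + hw < new[a]:
                new[a] = dp[a - area] + hw            # run in hardware
        dp = new
    return dp[A]
```

For example, with three fragments and area budget 5, the DP picks the subset of fragments to move into hardware that minimizes total time within the budget.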

  9. Comparison of parameter estimation algorithms in hydrological modelling

    DEFF Research Database (Denmark)

    Blasone, Roberta-Serena; Madsen, Henrik; Rosbjerg, Dan

    2006-01-01

    Local search methods have been applied successfully in calibration of simple groundwater models, but might fail in locating the optimum for models of increased complexity, due to the more complex shape of the response surface. Global search algorithms have been demonstrated to perform well for these types of models, although at a more expensive computational cost. The main purpose of this study is to investigate the performance of a global and a local parameter optimization algorithm: respectively, the Shuffled Complex Evolution (SCE) algorithm and the gradient-based Gauss-Marquardt-Levenberg algorithm (implemented in the PEST software), when applied to a steady-state and a transient groundwater model. The results show that PEST can have severe problems in locating the global optimum and can be trapped in local regions of attraction. The global SCE procedure is, in general, more effective...

  10. A Mining Algorithm for Extracting Decision Process Data Models

    Directory of Open Access Journals (Sweden)

    Cristina-Claudia DOLEAN

    2011-01-01

    Full Text Available The paper introduces an algorithm that mines logs of user interaction with simulation software. It outputs a model that explicitly shows the data perspective of the decision process, namely the Decision Data Model (DDM). In the first part of the paper we focus on how the DDM is extracted by our mining algorithm. We introduce it as pseudo-code and then provide explanations and examples of how it actually works. In the second part of the paper, we use a series of small case studies to prove the robustness of the mining algorithm and how it deals with the most common patterns found in real logs.

  11. Efficient Cluster Algorithm for CP(N-1) Models

    CERN Document Server

    Beard, B B; Riederer, S; Wiese, U J

    2006-01-01

    Despite several attempts, no efficient cluster algorithm has been constructed for CP(N-1) models in the standard Wilson formulation of lattice field theory. In fact, there is a no-go theorem that prevents the construction of an efficient Wolff-type embedding algorithm. In this paper, we construct an efficient cluster algorithm for ferromagnetic SU(N)-symmetric quantum spin systems. Such systems provide a regularization for CP(N-1) models in the framework of D-theory. We present detailed studies of the autocorrelations and find a dynamical critical exponent that is consistent with z = 0.

  12. Efficient cluster algorithm for CP(N-1) models

    Science.gov (United States)

    Beard, B. B.; Pepe, M.; Riederer, S.; Wiese, U.-J.

    2006-11-01

    Despite several attempts, no efficient cluster algorithm has been constructed for CP(N-1) models in the standard Wilson formulation of lattice field theory. In fact, there is a no-go theorem that prevents the construction of an efficient Wolff-type embedding algorithm. In this paper, we construct an efficient cluster algorithm for ferromagnetic SU(N)-symmetric quantum spin systems. Such systems provide a regularization for CP(N-1) models in the framework of D-theory. We present detailed studies of the autocorrelations and find a dynamical critical exponent that is consistent with z=0.

  13. Petri net model for analysis of concurrently processed complex algorithms

    Science.gov (United States)

    Stoughton, John W.; Mielke, Roland R.

    1986-01-01

    This paper presents a Petri-net model suitable for analyzing the concurrent processing of computationally complex algorithms. The decomposed operations are to be processed in a multiple processor, data driven architecture. Of particular interest is the application of the model to both the description of the data/control flow of a particular algorithm, and to the general specification of the data driven architecture. A candidate architecture is also presented.
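
The token-firing semantics underlying such a Petri-net description can be sketched minimally. The places, transitions and markings below are generic illustrations, not the paper's architecture specification; in a data-driven reading, tokens represent available data and a transition is an operation that fires once all its inputs are present:

```python
def enabled(marking, pre):
    # A transition is enabled when every input place holds enough tokens.
    return all(marking.get(p, 0) >= n for p, n in pre.items())

def fire(marking, pre, post):
    # Firing consumes tokens from input places and produces tokens on
    # output places, returning the new marking.
    m = dict(marking)
    for p, n in pre.items():
        m[p] -= n
    for p, n in post.items():
        m[p] = m.get(p, 0) + n
    return m
```

Repeatedly firing whichever transitions are enabled simulates the data/control flow of the decomposed algorithm.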

  14. A simple algorithm for optimization and model fitting: AGA (asexual genetic algorithm)

    Science.gov (United States)

    Cantó, J.; Curiel, S.; Martínez-Gómez, E.

    2009-07-01

    Context: Mathematical optimization can be used as a computational tool to obtain the optimal solution to a given problem in a systematic and efficient way. For example, for twice-differentiable functions and problems with no constraints, the optimization consists of finding the points where the gradient of the objective function is zero and using the Hessian matrix to classify the type of each point. Sometimes, however, it is impossible to compute these derivatives, and other types of techniques must be employed, such as the steepest descent/ascent method and more sophisticated methods such as those based on evolutionary algorithms. Aims: We present a simple algorithm based on the idea of genetic algorithms (GA) for optimization. We refer to this algorithm as AGA (asexual genetic algorithm) and apply it to two kinds of problems: the maximization of a function where classical methods fail, and model fitting in astronomy. For the latter case, we minimize the chi-square function to estimate the parameters in two examples: the orbits of exoplanets from a set of radial velocity data, and the spectral energy distribution (SED) observed towards a YSO (Young Stellar Object). Methods: The algorithm AGA may also be called genetic, although it differs from standard genetic algorithms in two main aspects: (a) the initial population is not encoded; and (b) the new generations are constructed by asexual reproduction. Results: Applying our algorithm to the optimization of some complicated functions, we find the global maxima within a few iterations. For model fitting to the orbits of exoplanets and the SED of a YSO, we estimate the parameters and their associated errors.
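
The two distinguishing aspects (no encoding, asexual reproduction) can be sketched as follows. This is an illustrative one-dimensional toy, not the paper's implementation; the elitist structure and the contraction schedule of the mutation range are my own assumptions:

```python
import random

def aga(f, lo, hi, pop=20, gens=60, seed=1):
    # Asexual GA sketch: individuals are plain real numbers (no encoding)
    # and there is no crossover.  Each generation keeps the best
    # individual and builds the rest as mutated (asexually reproduced)
    # copies of it, while the mutation range gradually contracts.
    rng = random.Random(seed)
    xs = [rng.uniform(lo, hi) for _ in range(pop)]
    span = (hi - lo) / 2.0
    for _ in range(gens):
        best = max(xs, key=f)
        xs = [best] + [min(hi, max(lo, best + rng.uniform(-span, span)))
                       for _ in range(pop - 1)]
        span *= 0.9                # contract the search interval
    return max(xs, key=f)
```

For chi-square model fitting one would instead maximize the negative chi-square of a parameter vector, but the generational structure is the same.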

  15. Quantitative Methods in Supply Chain Management Models and Algorithms

    CERN Document Server

    Christou, Ioannis T

    2012-01-01

    Quantitative Methods in Supply Chain Management presents some of the most important methods and tools available for modeling and solving problems arising in the context of supply chain management. In the context of this book, “solving problems” usually means designing efficient algorithms for obtaining high-quality solutions. The first chapter is an extensive optimization review covering continuous unconstrained and constrained linear and nonlinear optimization algorithms, as well as dynamic programming and discrete optimization exact methods and heuristics. The second chapter presents time-series forecasting methods together with prediction market techniques for demand forecasting of new products and services. The third chapter details models and algorithms for planning and scheduling with an emphasis on production planning and personnel scheduling. The fourth chapter presents deterministic and stochastic models for inventory control with a detailed analysis on periodic review systems and algorithmic dev...

  16. An efficient Cellular Potts Model algorithm that forbids cell fragmentation

    Science.gov (United States)

    Durand, Marc; Guesnet, Etienne

    2016-11-01

    The Cellular Potts Model (CPM) is a lattice-based modeling technique which is widely used for simulating cellular patterns such as foams or biological tissues. Despite its realism and generality, the standard Monte Carlo algorithm used in the scientific literature to evolve this model preserves connectivity of cells only over a limited range of simulation temperatures. We present a new algorithm in which cell fragmentation is forbidden at all simulation temperatures. This significantly enhances the realism of the simulated patterns. It also increases computational efficiency compared with the standard CPM algorithm, even at the same simulation temperature, thanks to the time saved by not attempting unrealistic moves. Moreover, our algorithm restores the detailed balance equation, ensuring that the long-term stage is independent of the chosen acceptance rate and chosen path in the temperature space.

  17. Dynamical behavior of the Niedermayer algorithm applied to Potts models

    Science.gov (United States)

    Girardi, D.; Penna, T. J. P.; Branco, N. S.

    2012-08-01

    In this work, we make a numerical study of the dynamic universality class of the Niedermayer algorithm applied to the two-dimensional Potts model with 2, 3, and 4 states. This algorithm updates clusters of spins and has a free parameter, E0, which controls the size of these clusters, such that E0=1 is the Metropolis algorithm and E0=0 regains the Wolff algorithm, for the Potts model. For -1 ≤ E0 < 0, only clusters of equal spins can be formed: we show that the mean size of the clusters of (possibly) turned spins initially grows with the linear size of the lattice, L, but eventually saturates at a given lattice size L̃, which depends on E0. For L ≥ L̃, the Niedermayer algorithm is in the same dynamic universality class as the Metropolis one, i.e., they have the same dynamic exponent. For E0 > 0, spins in different states may be added to the cluster, but the dynamic behavior is less efficient than for the Wolff algorithm (E0=0). Therefore, our results show that the Wolff algorithm is the best choice for Potts models, compared to Niedermayer's generalization.
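
The E0 = 0 limit mentioned above, the Wolff algorithm for the q-state Potts model, can be sketched minimally. This assumes ferromagnetic coupling J = 1 on a periodic L x L lattice, so the bond-activation probability for equal neighbouring spins is 1 - exp(-beta); it is a generic textbook sketch, not the paper's Niedermayer implementation:

```python
import math, random

def wolff_update(spins, L, q, beta, rng):
    # One Wolff cluster flip for the 2D q-state Potts model: grow a
    # cluster of equal spins through activated bonds, then flip the
    # whole cluster to a different randomly chosen state.
    p_add = 1.0 - math.exp(-beta)
    seed = rng.randrange(L * L)
    old = spins[seed]
    new = rng.choice([s for s in range(q) if s != old])
    stack, cluster = [seed], {seed}
    while stack:
        i = stack.pop()
        x, y = i % L, i // L
        for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            j = (nx % L) + (ny % L) * L    # periodic boundaries
            if j not in cluster and spins[j] == old and rng.random() < p_add:
                cluster.add(j)
                stack.append(j)
    for i in cluster:
        spins[i] = new
    return len(cluster)
```

Niedermayer's generalization replaces the fixed p_add with an E0-dependent probability that can also admit unequal spins.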

  18. Transmission function models of finite population genetic algorithms

    NARCIS (Netherlands)

    Kemenade, C.H.M. van; Kok, J.N.; La Poutré, J.A.; Thierens, D.

    1998-01-01

    Infinite population models show deterministic behaviour. Genetic algorithms with finite populations behave non-deterministically. For small population sizes, the results obtained with these models differ strongly from the results predicted by the infinite population model. When the population size i

  19. DiamondTorre Algorithm for High-Performance Wave Modeling

    Directory of Open Access Journals (Sweden)

    Vadim Levchenko

    2016-08-01

    Full Text Available Effective algorithms for the numerical modeling of physical media are discussed. The computation rate of such problems is limited by memory bandwidth when they are implemented with traditional algorithms. The numerical solution of the wave equation is considered. A finite difference scheme with a cross stencil and a high order of approximation is used. The DiamondTorre algorithm is constructed with regard to the specifics of the memory hierarchy and parallelism of the GPGPU (general-purpose graphics processing unit). The advantages of this algorithm are a high level of data localization, as well as the property of asynchrony, which allows one to effectively utilize all levels of GPGPU parallelism. The computational intensity of the algorithm is greater than that of the best traditional algorithms with stepwise synchronization. As a consequence, it becomes possible to overcome the above-mentioned limitation. The algorithm is implemented with CUDA. For the scheme with the second order of approximation, a calculation performance of 50 billion cells per second is achieved. This exceeds the result of the best traditional algorithm by a factor of five.
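
The cross-stencil update at the heart of such schemes can be shown in its simplest form, a second-order scheme for the 1D wave equation. This plain Python version illustrates only the arithmetic of one time step, not the GPGPU DiamondTorre traversal:

```python
def wave_step(u_prev, u, c2):
    # One leapfrog step of u_tt = c^2 u_xx using the three-point cross
    # stencil; c2 = (c*dt/dx)^2 is the squared Courant number (the scheme
    # is stable for c2 <= 1).  Boundaries are held fixed at zero.
    n = len(u)
    nxt = [0.0] * n
    for i in range(1, n - 1):
        nxt[i] = 2.0 * u[i] - u_prev[i] + c2 * (u[i + 1] - 2.0 * u[i] + u[i - 1])
    return nxt
```

Each output value touches three inputs from the previous level and one from two levels back, which is why memory bandwidth, not arithmetic, dominates naive implementations.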

  20. Models and algorithms for stochastic online scheduling

    NARCIS (Netherlands)

    Megow, N.; Uetz, Marc Jochen; Vredeveld, T.

    We consider a model for scheduling under uncertainty. In this model, we combine the main characteristics of online and stochastic scheduling in a simple and natural way. Job processing times are assumed to be stochastic, but in contrast to traditional stochastic scheduling models, we assume that

  1. A NEW GENETIC SIMULATED ANNEALING ALGORITHM FOR FLOOD ROUTING MODEL

    Institute of Scientific and Technical Information of China (English)

    KANG Ling; WANG Cheng; JIANG Tie-bing

    2004-01-01

    In this paper, a new approach, Genetic Simulated Annealing (GSA), is proposed for optimizing the parameters in the Muskingum routing model. By integrating the simulated annealing method into the genetic algorithm, the hybrid method avoids some troubles of traditional methods, such as the arduous trial-and-error procedure, premature convergence in the genetic algorithm, and search blindness in simulated annealing. The principle and implementation procedure of this algorithm are described. Numerical experiments show that the GSA can adjust the optimization population, prevent premature convergence and seek the global optimal result. Applications to the Nanyunhe River and Qingjiang River show that the proposed approach achieves higher forecast accuracy and practicability.
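
The hybrid idea, GA-style population moves filtered by a Metropolis acceptance test with a cooling temperature, can be sketched generically. This is an illustrative toy on a one-dimensional function, not the paper's Muskingum calibration, and the mutation scale and cooling rate are my own assumptions:

```python
import math, random

def gsa_minimize(f, lo, hi, pop=20, gens=200, t0=1.0, seed=3):
    # Genetic simulated annealing sketch: a GA-style population in which
    # a mutated child replaces its parent according to the Metropolis
    # criterion, with the temperature annealed each generation.
    rng = random.Random(seed)
    xs = [rng.uniform(lo, hi) for _ in range(pop)]
    best = min(xs, key=f)
    t = t0
    for _ in range(gens):
        for i in range(pop):
            child = min(hi, max(lo, xs[i] + rng.gauss(0.0, (hi - lo) * 0.05)))
            d = f(child) - f(xs[i])
            if d < 0 or rng.random() < math.exp(-d / t):
                xs[i] = child              # Metropolis acceptance
                if f(child) < f(best):
                    best = child
        t *= 0.98                          # annealing (cooling) schedule
    return best
```

Early on, the high temperature lets the population escape local minima; as t falls, the search becomes greedy, which is the premature-convergence safeguard the abstract describes.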

  2. Implementing Modified Burg Algorithms in Multivariate Subset Autoregressive Modeling

    Directory of Open Access Journals (Sweden)

    A. Alexandre Trindade

    2003-02-01

    Full Text Available The large number of parameters in subset vector autoregressive models often leads one to procure fast, simple, and efficient alternatives or precursors to maximum likelihood estimation. We present the solution of the multivariate subset Yule-Walker equations as one such alternative. In recent work, Brockwell, Dahlhaus, and Trindade (2002) show that the Yule-Walker estimators can actually be obtained as a special case of a general recursive Burg-type algorithm. We illustrate the structure of this algorithm, and discuss its implementation in a high-level programming language. Applications of the algorithm in univariate and bivariate modeling are showcased in examples. Univariate and bivariate versions of the algorithm written in Fortran 90 are included in the appendix, and their use is illustrated.
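
The scalar Burg recursion underlying such Burg-type algorithms can be sketched as follows. This is the textbook univariate version, not the paper's multivariate subset implementation:

```python
def burg(x, p):
    # Burg's method: estimate AR(p) coefficients a[0..p-1] in
    #   x[t] ~ a[0]*x[t-1] + ... + a[p-1]*x[t-p]
    # by minimizing forward+backward prediction error at each lattice
    # stage, then propagating via the Levinson recursion.
    n = len(x)
    f, b = list(x), list(x)          # forward / backward prediction errors
    a = []
    for m in range(1, p + 1):
        num = 2.0 * sum(f[i] * b[i - 1] for i in range(m, n))
        den = sum(f[i] ** 2 + b[i - 1] ** 2 for i in range(m, n))
        k = num / den                # reflection coefficient, |k| <= 1
        a = [a[j] - k * a[m - 2 - j] for j in range(m - 1)] + [k]
        for i in range(n - 1, m - 1, -1):    # update errors in place
            f[i], b[i] = f[i] - k * b[i - 1], b[i - 1] - k * f[i]
    return a
```

The multivariate version replaces the scalar reflection coefficients with matrices, but the lattice structure is the same.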

  3. The Cosparse Analysis Model and Algorithms

    CERN Document Server

    Nam, Sangnam; Elad, Michael; Gribonval, Rémi

    2011-01-01

    After a decade of extensive study of the sparse representation synthesis model, we can safely say that this is a mature and stable field, with clear theoretical foundations and appealing applications. Alongside this approach, there is an analysis counterpart model, which, despite its similarity to the synthesis alternative, is markedly different. Surprisingly, the analysis model has not received similar attention, and its understanding today is shallow and partial. In this paper we take a closer look at the analysis approach, better define it as a generative model for signals, and contrast it with the synthesis one. This work proposes effective pursuit methods that aim to solve inverse problems regularized with the analysis-model prior, accompanied by a preliminary theoretical study of their performance. We demonstrate the effectiveness of the analysis model in several experiments.

  4. Application of firefly algorithm to the dynamic model updating problem

    Science.gov (United States)

    Shabbir, Faisal; Omenzetter, Piotr

    2015-04-01

    Model updating can be considered a branch of optimization problems in which calibration of the finite element (FE) model is undertaken by comparing the modal properties of the actual structure with those of the FE predictions. The attainment of a global solution in a multidimensional search space is a challenging problem. Nature-inspired algorithms have gained increasing attention in the previous decade for solving such complex optimization problems. This study applies the Firefly Algorithm (FA), a global optimization search technique, to a dynamic model updating problem. To the authors' best knowledge, this is the first time the FA has been applied to model updating. The working of the FA is inspired by the flashing characteristics of fireflies. Each firefly represents a randomly generated solution which is assigned a brightness according to the value of the objective function. The physical structure under consideration is a full-scale cable-stayed pedestrian bridge with a composite bridge deck. Data from dynamic testing of the bridge were used to correlate and update the initial model using the FA. The algorithm aimed at minimizing the difference between the natural frequencies and mode shapes of the structure. The performance of the algorithm is analyzed in finding the optimal solution in a multidimensional search space. The paper concludes with an investigation of the efficacy of the algorithm in obtaining a reference finite element model which correctly represents the as-built original structure.
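
The FA mechanics described above (brightness from the objective, attractiveness decaying with distance, a damped random walk) can be sketched generically. The parameter values below are illustrative assumptions, not those used for the bridge model, and the objective here is a simple test function rather than a modal-property mismatch:

```python
import math, random

def firefly_minimize(f, lo, hi, dim, n=15, iters=150, seed=2,
                     beta0=1.0, gamma=0.01, alpha=0.2):
    # Firefly algorithm: every firefly moves toward each brighter one
    # (lower objective value here), with attractiveness
    # beta0*exp(-gamma*r^2) decaying with distance r, plus a damped
    # random-walk term of size alpha.
    rng = random.Random(seed)
    xs = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n)]
    bright = [f(x) for x in xs]
    for _ in range(iters):
        for i in range(n):
            for j in range(n):
                if bright[j] < bright[i]:
                    r2 = sum((a - b) ** 2 for a, b in zip(xs[i], xs[j]))
                    beta = beta0 * math.exp(-gamma * r2)
                    xs[i] = [min(hi, max(lo, a + beta * (b - a)
                                             + alpha * (rng.random() - 0.5)))
                             for a, b in zip(xs[i], xs[j])]
                    bright[i] = f(xs[i])
        alpha *= 0.97              # damp the random walk over time
    return min(xs, key=f)
```

In a model-updating setting, f would measure the discrepancy between measured and FE-predicted natural frequencies and mode shapes.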

  5. Multiple QoS modeling and algorithm in computational grid

    Institute of Scientific and Technical Information of China (English)

    Li Chunlin; Feng Meilai; Li Layuan

    2007-01-01

    Multiple-QoS modeling and algorithms in grid systems are considered. Grid QoS requirements can be formulated as a utility function for each task, expressed as a weighted sum of its per-dimension QoS utility functions. Multiple-QoS-constrained resource scheduling optimization in a computational grid is decomposed into two subproblems: optimization of the grid user and of the grid resource provider. Grid QoS scheduling can be achieved by solving the subproblems via an iterative algorithm.

  6. A LOAD BALANCING MODEL USING FIREFLY ALGORITHM IN CLOUD COMPUTING

    Directory of Open Access Journals (Sweden)

    A. Paulin Florence

    2014-01-01

    Full Text Available Cloud computing is a model that aims at streamlining the on-demand provisioning of software, hardware and data as services and providing end-users with flexible and scalable services accessible through the Internet. The main objective of the proposed approach is to maximize resource utilization and provide a well-balanced load among all the resources in cloud servers. Initially, a load model of every resource is derived based on several factors such as memory usage, processing time and access rate. Based on the newly derived load index, the current load is computed for all the resources shared in the virtual machines of cloud servers. Once the load index is computed for all the resources, a load balancing operation is initiated to use the resources dynamically and effectively, assigning resources to the corresponding nodes to reduce the load value. Assigning resources to the proper nodes is an optimal distribution problem, so optimization algorithms such as the genetic algorithm and the modified genetic algorithm have been utilized for load balancing. These algorithms are not very effective at generating neighbour solutions, since they do not balance exploration and exploitation. Replacing the genetic algorithm, a comparatively old technique, with a more effective optimization procedure can therefore lead to better load balancing. Accordingly, we utilize a recent optimization algorithm, the firefly algorithm, to perform the load balancing operation in our proposed work. At first, an index table is maintained by considering the availability of virtual servers and the sequence of requests. Then, the load index is computed based on the newly derived formulae. Based on the load index, the load balancing operation is carried out using the firefly algorithm. The performance analysis produced the expected results and thus proved that the proposed approach is efficient in optimizing schedules by balancing the

  7. Basic Research on Adaptive Model Algorithmic Control

    Science.gov (United States)

    1985-12-01

    Richalet, J., A. Rault, J.L. Testud and J. Papon (1978). Model predictive heuristic control: applications to industrial processes. pp. 977-982.

  8. Immune System Model Calibration by Genetic Algorithm

    NARCIS (Netherlands)

    Presbitero, A.; Krzhizhanovskaya, V.; Mancini, E.; Brands, R.; Sloot, P.

    2016-01-01

    We aim to develop a mathematical model of the human immune system for advanced individualized healthcare, where the medication plan is fine-tuned to fit a patient's conditions through monitored biochemical processes. One of the challenges is calibrating model parameters to satisfy existing experimental

  9. Approximation Algorithms for Model-Based Diagnosis

    NARCIS (Netherlands)

    Feldman, A.B.

    2010-01-01

    Model-based diagnosis is an area of abductive inference that uses a system model, together with observations about system behavior, to isolate sets of faulty components (diagnoses) that explain the observed behavior, according to some minimality criterion. This thesis presents greedy approximation a


  11. An Algorithm for Optimally Fitting a Wiener Model

    Directory of Open Access Journals (Sweden)

    Lucas P. Beverlin

    2011-01-01

    Full Text Available The purpose of this work is to present a new methodology for fitting Wiener networks to datasets with a large number of variables. Wiener networks have the ability to model a wide range of data types, and their structures can yield parameters with phenomenological meaning. There are several challenges to fitting such a model: model stiffness, the nonlinear nature of a Wiener network, possible overfitting, and the large number of parameters inherent with large input sets. This work describes a methodology to overcome these challenges by using several iterative algorithms under supervised learning and fitting subsets of the parameters at a time. This methodology is applied to Wiener networks that are used to predict blood glucose concentrations. The predictions of validation sets from models fit to four subjects using this methodology yielded a higher correlation between observed and predicted values than other algorithms, including the Gauss-Newton and Levenberg-Marquardt algorithms.
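
As a much-simplified illustration of the Wiener structure (a linear dynamic block followed by a static nonlinearity): if the nonlinearity is assumed known and invertible, fitting the linear block reduces to ordinary least squares on the inverted outputs. This is far weaker than the paper's iterative supervised-learning methodology, but it shows the model form:

```python
def fit_wiener_fir(u, y, order, g_inv):
    # Wiener model y[t] = g(sum_k w[k] * u[t-k]).  Assuming the static
    # nonlinearity g is known and invertible, apply g_inv to the outputs
    # and solve ordinary least squares (normal equations, Gaussian
    # elimination) for the FIR weights w of the linear block.
    n, m = len(u), order
    rows = [[u[t - k] for k in range(m)] for t in range(m - 1, n)]
    z = [g_inv(y[t]) for t in range(m - 1, n)]
    ata = [[sum(r[i] * r[j] for r in rows) for j in range(m)] for i in range(m)]
    atz = [sum(r[i] * v for r, v in zip(rows, z)) for i in range(m)]
    for c in range(m):                       # forward elimination
        for r2 in range(c + 1, m):
            fct = ata[r2][c] / ata[c][c]
            for cc in range(c, m):
                ata[r2][cc] -= fct * ata[c][cc]
            atz[r2] -= fct * atz[c]
    w = [0.0] * m
    for c in range(m - 1, -1, -1):           # back substitution
        w[c] = (atz[c] - sum(ata[c][cc] * w[cc]
                             for cc in range(c + 1, m))) / ata[c][c]
    return w
```

When the nonlinearity must itself be estimated, as in the paper, one alternates between fitting the linear and nonlinear parts, which is where the iterative algorithms come in.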

  12. On Models of Nonlinear Evolution Paths in Adiabatic Quantum Algorithms

    Institute of Scientific and Technical Information of China (English)

    SUN Jie; LU Song-Feng; Samuel L.Braunstein

    2013-01-01

    In this paper, we study two different nonlinear interpolating paths in adiabatic evolution algorithms for solving a particular class of quantum search problems where both the initial and final Hamiltonians are one-dimensional projector Hamiltonians on the corresponding ground state. If the overlap between the initial state and final state of the quantum system is not equal to zero, both of these models can provide a constant-time speedup over the usual adiabatic algorithms by increasing another corresponding "complexity". But when the initial state has a zero overlap with the solution state, the second model leads to an infinite time complexity of the algorithm regardless of the interpolating functions applied, while the first one can still provide a constant running time. However, inspired by a related reference, a variant of the first model can be constructed which also fails for the problem when the overlap is exactly zero, if we want to make up for the "intrinsic" fault of the second model: an increase in energy. Two concrete theorems are given to explain why neither of these two models can improve on the usual adiabatic evolution algorithms in the situation above. These results indicate what should be noted when using certain nonlinear evolution paths in adiabatic quantum algorithms for this special kind of problem.
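
The setup can be written out explicitly (notation assumed for illustration, not taken from the paper): with initial and final states $|\psi_0\rangle$ and $|\psi_f\rangle$, the projector Hamiltonians and the interpolated path are

```latex
H_0 = I - |\psi_0\rangle\langle\psi_0|, \qquad
H_1 = I - |\psi_f\rangle\langle\psi_f|, \qquad
H(s) = \bigl(1 - f(s)\bigr)\,H_0 + f(s)\,H_1 ,
```

with $f(0)=0$ and $f(1)=1$. The linear choice $f(s)=s$ recovers the usual adiabatic algorithm, while monotone nonlinear choices of $f$ give the interpolating paths compared in the paper.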

  13. A simple algorithm for optimization and model fitting: AGA (asexual genetic algorithm)

    CERN Document Server

    Canto, J.; Martinez-Gomez, E.; DOI: 10.1051/0004-6361/200911740

    2009-01-01

    Context. Mathematical optimization can be used as a computational tool to obtain the optimal solution to a given problem in a systematic and efficient way. For example, for twice-differentiable functions and problems with no constraints, the optimization consists of finding the points where the gradient of the objective function is zero and using the Hessian matrix to classify the type of each point. Sometimes, however, it is impossible to compute these derivatives, and other types of techniques must be employed, such as the steepest descent/ascent method and more sophisticated methods such as those based on evolutionary algorithms. Aims. We present a simple algorithm based on the idea of genetic algorithms (GA) for optimization. We refer to this algorithm as AGA (Asexual Genetic Algorithm) and apply it to two kinds of problems: the maximization of a function where classical methods fail, and model fitting in astronomy. For the latter case, we minimize the chi-square function to estimate the parameters in two e...

  14. Development of Improved Algorithms and Multiscale Modeling Capability with SUNTANS

    Science.gov (United States)

    2015-09-30

    High-resolution simulations using nonhydrostatic models like SUNTANS are crucial for understanding multiscale processes that are unresolved... Oliver B. Fringer, Dept. of Civil and Environmental Engineering, Stanford University, 473 Via Ortega, Room 187.

  15. A general model for matroids and the greedy algorithm

    NARCIS (Netherlands)

    Faigle, U.; Fujishige, Saturo

    2009-01-01

    We present a general model for set systems to be independence families with respect to set families which determine classes of proper weight functions on a ground set. Within this model, matroids arise from a natural subclass and can be characterized by the optimality of the greedy algorithm. This


  17. Hospital Case Cost Estimates Modelling - Algorithm Comparison

    CERN Document Server

    Andru, Peter

    2008-01-01

    Ontario (Canada) health system stakeholders support the idea and necessity of an integrated source of data that would include both clinical (e.g. diagnosis, intervention, length of stay, case mix group) and financial (e.g. cost per weighted case, cost per diem) characteristics of Ontario healthcare system activities at the patient-specific level. At present, actual patient-level case costs in explicit form are not available in the financial databases for all hospitals. The goal of this research effort is to develop financial models that will assign each clinical case in the patient-specific data warehouse a dollar value representing the cost incurred by the Ontario health care facility which treated the patient. Five mathematical models have been developed and verified using a real dataset. All models can be classified into two groups based on their underlying method: (1) models based on relative intensity weights of the cases, and (2) models based on cost per diem.
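
The first group of models can be illustrated with a minimal relative-intensity-weight allocation. This is a generic sketch of proportional cost assignment, not one of the five models from the study:

```python
def allocate_costs(riw, total_cost):
    # Relative-intensity-weight allocation: each case receives a share of
    # the facility's total cost proportional to its weight, so heavier
    # (more resource-intensive) cases absorb more of the cost.
    s = sum(riw)
    return [w / s * total_cost for w in riw]
```

By construction the allocated amounts sum to the facility's total cost, which is the accounting constraint such models must respect.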

  18. A modified EM algorithm for estimation in generalized mixed models.

    Science.gov (United States)

    Steele, B M

    1996-12-01

    Application of the EM algorithm for estimation in the generalized mixed model has been largely unsuccessful because the E-step cannot be determined in most instances. The E-step computes the conditional expectation of the complete data log-likelihood and when the random effect distribution is normal, this expectation remains an intractable integral. The problem can be approached by numerical or analytic approximations; however, the computational burden imposed by numerical integration methods and the absence of an accurate analytic approximation have limited the use of the EM algorithm. In this paper, Laplace's method is adapted for analytic approximation within the E-step. The proposed algorithm is computationally straightforward and retains much of the conceptual simplicity of the conventional EM algorithm, although the usual convergence properties are not guaranteed. The proposed algorithm accommodates multiple random factors and random effect distributions besides the normal, e.g., the log-gamma distribution. Parameter estimates obtained for several data sets and through simulation show that this modified EM algorithm compares favorably with other generalized mixed model methods.
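
For contrast with the intractable E-step discussed above, here is the conventional EM algorithm in a setting where the E-step is available in closed form: a two-component, unit-variance Gaussian mixture. This is an illustration of the E/M structure only, not the paper's Laplace-approximated generalized-mixed-model estimator:

```python
import math

def em_gaussian_mixture(x, iters=50):
    # Conventional EM for a two-component, unit-variance Gaussian mixture.
    # The E-step (posterior responsibilities) has a closed form here,
    # which is precisely what fails in the generalized mixed model.
    mu1, mu2, pi = min(x), max(x), 0.5
    for _ in range(iters):
        # E-step: responsibility of component 1 for each observation
        r = []
        for xi in x:
            p1 = pi * math.exp(-0.5 * (xi - mu1) ** 2)
            p2 = (1.0 - pi) * math.exp(-0.5 * (xi - mu2) ** 2)
            r.append(p1 / (p1 + p2))
        # M-step: re-estimate the means and the mixing weight
        s = sum(r)
        mu1 = sum(ri * xi for ri, xi in zip(r, x)) / s
        mu2 = sum((1.0 - ri) * xi for ri, xi in zip(r, x)) / (len(x) - s)
        pi = s / len(x)
    return mu1, mu2, pi
```

In the generalized mixed model the responsibilities are replaced by an integral over the random effects, which the paper approximates with Laplace's method inside the E-step.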

  19. Image processing algorithm acceleration using reconfigurable macro processor model

    Institute of Scientific and Technical Information of China (English)

    孙广富; 陈华明; 卢焕章

    2004-01-01

    The concept and advantages of reconfigurable technology are introduced. A processor architecture, the reconfigurable macro processor (RMP) model based on an FPGA array and a DSP, is put forward and has been implemented. Two image algorithms are developed: template-based automatic target recognition and zone labeling. The first estimates motion direction against an infrared image background; the second is a line pick-up algorithm based on image zone labeling and the phase-grouping technique. Each is a "hardware" function that can be called by the DSP from a high-level algorithm, i.e., a hardware algorithm of the DSP. Experimental results show that reconfigurable computing based on the RMP is an ideal means of accelerating high-speed image processing tasks; high real-time performance is obtained in both applications on the RMP.

  20. Multi-level Algorithm for the Anderson Impurity Model

    Science.gov (United States)

    Chandrasekharan, S.; Yoo, J.; Baranger, H. U.

    2004-03-01

    We develop a new quantum Monte Carlo algorithm to solve the Anderson impurity model. Instead of integrating out the Fermions, we work in the Fermion occupation number basis and thus have direct access to the Fermionic physics. The sign problem that arises in this formulation can be solved by a multi-level technique developed by Luscher and Weisz in the context of lattice QCD [JHEP, 0109 (2001) 010]. We use the directed-loop algorithm to update the degrees of freedom. Further, this algorithm allows us to work directly in the Euclidean time continuum limit for arbitrary values of the interaction strength thus avoiding time discretization errors. We present results for the impurity susceptibility and the properties of the screening cloud obtained using the algorithm.

  1. Co-clustering models, algorithms and applications

    CERN Document Server

    Govaert, Gérard

    2013-01-01

    Cluster or co-cluster analyses are important tools in a variety of scientific areas. The introduction of this book presents a state of the art of already well-established, as well as more recent methods of co-clustering. The authors mainly deal with the two-mode partitioning under different approaches, but pay particular attention to a probabilistic approach. Chapter 1 concerns clustering in general and the model-based clustering in particular. The authors briefly review the classical clustering methods and focus on the mixture model. They present and discuss the use of different mixture

  2. Impulsive Neural Networks Algorithm Based on the Artificial Genome Model

    Directory of Open Access Journals (Sweden)

    Yuan Gao

    2014-05-01

    To describe gene regulatory networks, this article takes the framework of the artificial genome model and proposes an impulsive (spiking) neural network algorithm based on it. First, gene expression and the cell division tree are applied to generate spiking neurons with specific attributes, the neural network structure, connection weights and the specific learning rules of each neuron. Next, the gene segment duplication and divergence model is applied to design an evolutionary algorithm for impulsive neural networks at the level of the artificial genome; the dynamic changes of the developmental gene regulatory networks are controlled during the whole evolutionary process. Finally, the nerve-driven food-collecting behavior of an autonomous intelligent agent is simulated. Experimental results demonstrate that the algorithm has evolutionary capability on large-scale impulsive neural networks.

  3. Differential Evolution algorithm applied to FSW model calibration

    Science.gov (United States)

    Idagawa, H. S.; Santos, T. F. A.; Ramirez, A. J.

    2014-03-01

    Friction Stir Welding (FSW) is a solid state welding process that can be modelled using a Computational Fluid Dynamics (CFD) approach. These models use adjustable parameters to control the heat transfer and the heat input to the weld. These parameters are used to calibrate the model and they are generally determined using the conventional trial and error approach. Since this method is not very efficient, we used the Differential Evolution (DE) algorithm to successfully determine these parameters. In order to improve the success rate and to reduce the computational cost of the method, this work studied different characteristics of the DE algorithm, such as the evolution strategy, the objective function, the mutation scaling factor and the crossover rate. The DE algorithm was tested using a friction stir weld performed on a UNS S32205 Duplex Stainless Steel.
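
As a hedged illustration of the optimizer named above, here is a minimal DE/rand/1/bin sketch in Python. The toy quadratic objective stands in for the (expensive) CFD-model-to-experiment mismatch, and the population size, mutation scaling factor F and crossover rate CR are illustrative assumptions, not the values studied in the paper:

```python
import random

def differential_evolution(f, bounds, pop_size=20, F=0.8, CR=0.9,
                           gens=200, seed=1):
    """Minimize f over the box `bounds` with the classic DE/rand/1/bin
    scheme. F is the mutation scaling factor and CR the crossover rate,
    the two strategy parameters the calibration study varies."""
    random.seed(seed)
    dim = len(bounds)
    pop = [[random.uniform(lo, hi) for lo, hi in bounds]
           for _ in range(pop_size)]
    cost = [f(x) for x in pop]
    for _ in range(gens):
        for i in range(pop_size):
            a, b, c = random.sample([j for j in range(pop_size) if j != i], 3)
            j_rand = random.randrange(dim)  # force at least one mutated gene
            trial = []
            for j in range(dim):
                if random.random() < CR or j == j_rand:
                    v = pop[a][j] + F * (pop[b][j] - pop[c][j])
                    lo, hi = bounds[j]
                    v = min(max(v, lo), hi)  # clamp to the search box
                else:
                    v = pop[i][j]
                trial.append(v)
            tc = f(trial)
            if tc <= cost[i]:          # greedy one-to-one selection
                pop[i], cost[i] = trial, tc
    best = min(range(pop_size), key=lambda i: cost[i])
    return pop[best], cost[best]

# toy objective standing in for the model-vs-experiment calibration error
best_x, best_f = differential_evolution(lambda x: (x[0] - 3) ** 2 + (x[1] + 1) ** 2,
                                        [(-5, 5), (-5, 5)])
```

Swapping in a different evolution strategy (e.g. DE/best/1) or objective function only changes the mutation line and `f`, which is what makes DE convenient for this kind of calibration study.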

  4. Software Model Checking for Verifying Distributed Algorithms

    Science.gov (United States)

    2014-10-28

    The verification procedure is an intelligent exhaustive search of the state space of the design (model checking), applied to verifying synchronous distributed applications. [Presentation slides by Sagar Chaki, June 11, 2014, Carnegie Mellon University; tool usage and tutorial material on the project webpage, http://mcda.googlecode.com]

  5. Economic Models and Algorithms for Distributed Systems

    CERN Document Server

    Neumann, Dirk; Altmann, Jorn; Rana, Omer F

    2009-01-01

    Distributed computing models for sharing resources such as Grids, Peer-to-Peer systems, or voluntary computing are becoming increasingly popular. This book aims to open fresh avenues of research and to amend existing technologies, with a view to the successful deployment of commercial distributed systems.

  6. A tractable algorithm for the wellfounded model

    NARCIS (Netherlands)

    Jonker, C.M.; Renardel de Lavalette, G.R.

    In the area of general logic programming (negated atoms allowed in the bodies of rules) and reason maintenance systems, the wellfounded model (first defined by Van Gelder, Ross and Schlipf in 1988) is generally considered to be the declarative semantics of the program. In this paper we present

  7. Vitamin D supplementation in older people (VDOP): Study protocol for a randomised controlled intervention trial with monthly oral dosing with 12,000 IU, 24,000 IU or 48,000 IU of vitamin D3

    Science.gov (United States)

    2013-01-01

    The randomised, double blind intervention trial ‘Optimising Vitamin D Status in Older People’ (VDOP) will test the effect of three oral dosages of vitamin D given for one year on bone mineral density (BMD) and biochemical markers of vitamin D metabolism, bone turnover and safety in older people. VDOP is funded by Arthritis Research UK, supported through Newcastle University and MRC Human Nutrition Research and sponsored by the Newcastle upon Tyne Hospitals NHS Foundation Trust. Background: Vitamin D insufficiency is common in older people and may lead to secondary hyperparathyroidism, bone loss, impairment of muscle function and increased risk of falls and fractures. Vitamin D supplementation trials have yielded conflicting results with regard to decreasing rates of bone loss, falls and fractures, and the optimal plasma concentration of 25-hydroxyvitamin D (25OHD) for skeletal health remains unclear. Method/design: Older (≥70 years) community-dwelling men and women are recruited through General Practices in Northern England and 375 participants are randomised to take 12,000 international units (IU), 24,000 IU or 48,000 IU of vitamin D3 orally each month for one year, starting in the winter or early spring. Hip BMD and anthropometry are measured at baseline and 12 months. Fasting blood samples are collected at baseline and three-month intervals for the measurement of plasma 25OHD, parathyroid hormone (PTH), biochemical markers of bone turnover and biochemistry to assess the dose–response and safety of supplementation. Questionnaire data include falls, fractures, quality of life, adverse events and outcomes, compliance, dietary calcium intake and sunshine exposure. Discussion: This is the first integrated vitamin D supplementation trial in older men and women using a range of doses given at monthly intervals to assess BMD, plasma 25OHD, PTH and biochemical markers of bone turnover and safety, quality of life and physical performance. We aim to investigate the

  8. Data mining concepts models methods and algorithms

    CERN Document Server

    Kantardzic, Mehmed

    2011-01-01

    This book reviews state-of-the-art methodologies and techniques for analyzing enormous quantities of raw data in high-dimensional data spaces, to extract new information for decision making. The goal of this book is to provide a single introductory source, organized in a systematic way, in which we could direct the readers in analysis of large data sets, through the explanation of basic concepts, models and methodologies developed in recent decades.

  9. Efficiency of Evolutionary Algorithms for Calibration of Watershed Models

    Science.gov (United States)

    Ahmadi, M.; Arabi, M.

    2009-12-01

    Since the promulgation of the Clean Water Act in the U.S. and of similar legislation around the world over the past three decades, watershed management programs have focused on the nexus of pollution prevention and mitigation. In this context, hydrologic/water quality models have been increasingly embedded in the decision-making process. Simulation models are now commonly used to investigate the hydrologic response of watershed systems under varying climatic and land use conditions, and also to study the fate and transport of contaminants at various spatiotemporal scales. Adequate calibration and corroboration of models for various outputs at varying scales is an essential component of watershed modeling. The parameter estimation process can be challenging when multiple objectives are important; for example, improving streamflow predictions at one stream location may degrade model predictions for sediments and/or nutrients at the same location or at other outlets. This paper evaluates the applicability and efficiency of single- and multi-objective evolutionary algorithms for parameter estimation of complex watershed models. To this end, the Shuffled Complex Evolution (SCE-UA) algorithm, a single-objective genetic algorithm (GA), and a multi-objective genetic algorithm (NSGA-II) were coupled with the Soil and Water Assessment Tool (SWAT) to calibrate the model at various locations within the Wildcat Creek Watershed, Indiana. The efficiency of these methods was investigated using different error statistics, including root mean square error, the coefficient of determination and the Nash-Sutcliffe efficiency coefficient, for the output variables as well as for the baseflow component of the stream discharge. A sensitivity analysis was carried out to screen model parameters that bear significant uncertainties.
    Results indicated that while flow processes can be reasonably ascertained, parameterization of nutrient and pesticide processes
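
The error statistics named in this record are standard goodness-of-fit measures. A minimal sketch of two of them, root mean square error and the Nash-Sutcliffe efficiency, with made-up observed/simulated flows purely for illustration:

```python
import math

def rmse(obs, sim):
    """Root mean square error between observed and simulated series."""
    return math.sqrt(sum((o - s) ** 2 for o, s in zip(obs, sim)) / len(obs))

def nash_sutcliffe(obs, sim):
    """Nash-Sutcliffe efficiency: 1 is a perfect fit, 0 means the model
    does no better than predicting the observed mean."""
    mean_o = sum(obs) / len(obs)
    err = sum((o - s) ** 2 for o, s in zip(obs, sim))
    var = sum((o - mean_o) ** 2 for o in obs)
    return 1.0 - err / var

# hypothetical daily streamflows (observed vs. simulated)
obs = [2.0, 3.0, 5.0, 4.0, 6.0]
sim = [2.2, 2.8, 5.1, 4.3, 5.7]
nse = nash_sutcliffe(obs, sim)   # 0.973 for these numbers
```

In a calibration loop such as SCE-UA or NSGA-II, one or more of these statistics (per output variable, per gauge) becomes the objective the evolutionary algorithm optimizes.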

  10. A tuning algorithm for model predictive controllers based on genetic algorithms and fuzzy decision making.

    Science.gov (United States)

    van der Lee, J H; Svrcek, W Y; Young, B R

    2008-01-01

    Model Predictive Control is a valuable tool for the process control engineer in a wide variety of applications. Because of this, the structure of an MPC can vary dramatically from application to application. There have been a number of works dedicated to MPC tuning for specific cases; since MPCs can differ significantly, these tuning methods become inapplicable and a trial-and-error tuning approach must be used, which can be quite time-consuming and can result in non-optimum tuning. In an attempt to resolve this, a generalized automated tuning algorithm for MPCs was developed. This approach is numerically based and combines a genetic algorithm with multi-objective fuzzy decision-making. The key advantages of this approach are that genetic algorithms are not problem-specific and only need to be adapted to account for the number and ranges of tuning parameters for a given MPC, and that multi-objective fuzzy decision-making can handle qualitative statements of what optimum control is, in addition to being able to use multiple inputs to determine tuning parameters that best match the desired results. This is particularly useful for multi-input, multi-output (MIMO) cases where the definition of "optimum" control is subject to the opinion of the control engineer tuning the system. A case study is presented to illustrate the use of the tuning algorithm, including how different definitions of "optimum" control can arise and how they are accounted for in the multi-objective decision-making algorithm. The resulting tuning parameters from each of the definition sets are compared, showing that the tuning parameters vary to meet each definition of optimum control and thus that the generalized automated tuning approach for MPCs is feasible.
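
One common way such a fuzzy multi-objective scoring step can be built (a sketch under our own assumptions, not necessarily the authors' exact formulation) is to map each control-performance metric through a membership function and let the worst criterion dominate via the min operator; the GA then maximizes this score:

```python
def membership(value, target, tol):
    """Triangular membership: 1 at the target, falling to 0 at +/- tol."""
    return max(0.0, 1.0 - abs(value - target) / tol)

def fuzzy_fitness(metrics, prefs):
    """Aggregate several performance metrics (overshoot, settling time,
    ...) into one score with the min operator, the common fuzzy
    'worst criterion dominates' aggregation."""
    return min(membership(metrics[k], *prefs[k]) for k in prefs)

# hypothetical preferences: (target, tolerance) per metric
prefs = {"overshoot": (0.0, 0.2), "settling_time": (2.0, 3.0)}
score = fuzzy_fitness({"overshoot": 0.05, "settling_time": 2.6}, prefs)
```

Different definitions of "optimum" control then correspond simply to different `prefs` dictionaries, which is why the same tuning machinery can serve multiple engineers' preferences.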

  11. Genetic Algorithm Modeling with GPU Parallel Computing Technology

    CERN Document Server

    Cavuoti, Stefano; Brescia, Massimo; Pescapé, Antonio; Longo, Giuseppe; Ventre, Giorgio

    2012-01-01

    We present a multi-purpose genetic algorithm, designed and implemented with GPGPU/CUDA parallel computing technology. The model was derived from a multi-core CPU serial implementation, named GAME, already tested and scientifically validated on massive astrophysical data classification problems through a web application resource (DAMEWARE) specialized in data mining based on Machine Learning paradigms. Since genetic algorithms are inherently parallel, the GPGPU computing paradigm exploits the internal training features of the model, permitting strong optimization in terms of processing performance and scalability.

  12. An Extended Clustering Algorithm for Statistical Language Models

    CERN Document Server

    Ueberla, J P

    1994-01-01

    Statistical language models frequently suffer from a lack of training data. This problem can be alleviated by clustering, because it reduces the number of free parameters that need to be trained. However, clustered models have the following drawback: if there is ``enough'' data to train an unclustered model, then the clustered variant may perform worse. On currently used language modeling corpora, e.g. the Wall Street Journal corpus, how do the performances of a clustered and an unclustered model compare? While trying to address this question, we develop the following two ideas. First, to get a clustering algorithm with potentially high performance, an existing algorithm is extended to deal with higher order N-grams. Second, to make it possible to cluster large amounts of training data more efficiently, a heuristic to speed up the algorithm is presented. The resulting clustering algorithm can be used to cluster trigrams on the Wall Street Journal corpus and the language models it produces can compete with exi...

  13. Numerical algorithm of distributed TOPKAPI model and its application

    Institute of Scientific and Technical Information of China (English)

    Deng Peng; Li Zhijia; Liu Zhiyu

    2008-01-01

    The TOPKAPI (TOPographic Kinematic APproximation and Integration) model is a physically based rainfall-runoff model derived from the integration in space of the kinematic wave model. In the TOPKAPI model, rainfall-runoff and runoff routing processes are described by three nonlinear reservoir differential equations that are structurally similar and describe different hydrological and hydraulic processes. Equations are integrated over grid cells that describe the geometry of the catchment, leading to a cascade of nonlinear reservoir equations. For the sake of improving the model's computation precision, this paper provides the general form of these equations and describes the solution by means of a numerical algorithm, the variable-step fourth-order Runge-Kutta algorithm. For the purpose of assessing the quality of the comprehensive numerical algorithm, this paper presents a case study application to the Buliu River Basin, which has an area of 3 310 km2, using a DEM (digital elevation model) grid with a resolution of 1 km. The results show that the variable-step fourth-order Runge-Kutta algorithm for nonlinear reservoir equations is a good approximation of subsurface flow in the soil matrix, overland flow over the slopes, and surface flow in the channel network, allowing us to retain the physical properties of the original equations at scales ranging from a few meters to 1 km.
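
A minimal sketch of a variable-step fourth-order Runge-Kutta scheme of the kind described, applied to a single nonlinear-reservoir-style ODE dV/dt = p - c·V^α; step-size control here uses simple step doubling, and all coefficients are illustrative, not TOPKAPI's calibrated values:

```python
def rk4_step(f, t, y, h):
    """One classical fourth-order Runge-Kutta step."""
    k1 = f(t, y)
    k2 = f(t + h / 2, y + h / 2 * k1)
    k3 = f(t + h / 2, y + h / 2 * k2)
    k4 = f(t + h, y + h * k3)
    return y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

def integrate_adaptive(f, t0, t1, y0, h=0.1, tol=1e-8):
    """Variable-step RK4 by step doubling: one full step is compared with
    two half steps; h shrinks when the local error exceeds tol and grows
    again while the solution is smooth."""
    t, y = t0, y0
    while t1 - t > 1e-9:
        h = min(h, t1 - t)
        y_full = rk4_step(f, t, y, h)
        y_half = rk4_step(f, t + h / 2, rk4_step(f, t, y, h / 2), h / 2)
        if abs(y_half - y_full) <= tol:
            t, y = t + h, y_half
            h *= 1.5
        else:
            h *= 0.5
    return y

# nonlinear reservoir dV/dt = p - c * V**alpha (illustrative coefficients)
p, c, alpha = 1.0, 0.5, 1.67
V_end = integrate_adaptive(lambda t, V: p - c * V ** alpha, 0.0, 10.0, 0.1)
# V_end approaches the steady state (p/c)**(1/alpha), roughly 1.514
```

In the full model one such equation is solved per grid cell and per process (soil, overland, channel), with the cells coupled in a cascade; the adaptive step keeps the stiff early transient accurate without penalizing the smooth tail.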

  14. Study on Fleet Assignment Problem Model and Algorithm

    Directory of Open Access Journals (Sweden)

    Yaohua Li

    2013-01-01

    The Fleet Assignment Problem (FAP) of aircraft scheduling in airlines is studied, and an optimization model for the FAP is proposed. The objective function of this model is revenue maximization, and it comprehensively considers the differences among scheduled flights and aircraft models in flight areas and mean passenger flows. A self-adapting genetic algorithm is proposed to solve the model: it uses natural-number coding, dynamically adjusts the crossover and mutation probabilities, and adopts intelligent heuristic adjustment to speed up the search. Simulation with production data from an airline shows that the model and algorithms suggested in this paper are feasible and have good application value.

  15. Financial Data Modeling by Using Asynchronous Parallel Evolutionary Algorithms

    Institute of Scientific and Technical Information of China (English)

    Wang Chun; Li Qiao-yun

    2003-01-01

    In this paper, the high-level knowledge of financial data modeled by ordinary differential equations (ODEs) is discovered in dynamic data by using an asynchronous parallel evolutionary modeling algorithm (APHEMA). A numerical example of Nasdaq index analysis is used to demonstrate the potential of APHEMA. The results show that the dynamic models automatically discovered in dynamic data by computer can be used to predict the financial trends.

  16. Methodology, models and algorithms in thermographic diagnostics

    CERN Document Server

    Živčák, Jozef; Madarász, Ladislav; Rudas, Imre J

    2013-01-01

    This book presents  the methodology and techniques of  thermographic applications with focus primarily on medical thermography implemented for parametrizing the diagnostics of the human body. The first part of the book describes the basics of infrared thermography, the possibilities of thermographic diagnostics and the physical nature of thermography. The second half includes tools of intelligent engineering applied for the solving of selected applications and projects. Thermographic diagnostics was applied to problematics of paraplegia and tetraplegia and carpal tunnel syndrome (CTS). The results of the research activities were created with the cooperation of the four projects within the Ministry of Education, Science, Research and Sport of the Slovak Republic entitled Digital control of complex systems with two degrees of freedom, Progressive methods of education in the area of control and modeling of complex object oriented systems on aircraft turbocompressor engines, Center for research of control of te...

  17. Computational modeling of red blood cells: A symplectic integration algorithm

    Science.gov (United States)

    Schiller, Ulf D.; Ladd, Anthony J. C.

    2010-03-01

    Red blood cells can undergo shape transformations that impact the rheological properties of blood. Computational models have to account for this deformability, and red blood cells are often modeled as elastically deformable objects. We present a symplectic integration algorithm for deformable objects. The surface is represented by a set of marker points obtained by surface triangulation, along with a set of fiber vectors that describe the orientation of the material plane. The various elastic energies are formulated in terms of these variables, and the equations of motion are obtained by exact differentiation of a discretized Hamiltonian. The integration algorithm preserves the Hamiltonian structure and leads to highly accurate energy conservation; hence the method is expected to be more stable than conventional finite element methods. We apply the algorithm to simulate the shape dynamics of red blood cells.
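
A minimal illustration of why a symplectic integrator is attractive here: velocity Verlet (a standard symplectic scheme, shown for a single harmonic mode rather than the paper's full membrane Hamiltonian) keeps the energy error bounded rather than drifting:

```python
def velocity_verlet(x, v, force, dt, steps):
    """Velocity Verlet: a symplectic integrator, so the discrete flow
    preserves a shadow Hamiltonian and the energy error stays bounded."""
    a = force(x)
    for _ in range(steps):
        x += v * dt + 0.5 * a * dt * dt
        a_new = force(x)
        v += 0.5 * (a + a_new) * dt
        a = a_new
    return x, v

# single harmonic mode F = -k*x with unit mass, a stand-in for one
# elastic degree of freedom of a triangulated membrane
k = 4.0
x, v = velocity_verlet(1.0, 0.0, lambda q: -k * q, dt=0.01, steps=10000)
energy = 0.5 * v * v + 0.5 * k * x * x  # initial energy was 0.5*k = 2.0
```

A non-symplectic scheme of the same order (e.g. classical RK4) would show a slow secular energy drift over the same 10,000 steps, which is the stability argument the abstract makes against conventional approaches.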

  18. An Efficient Cluster Algorithm for CP(N-1) Models

    CERN Document Server

    Beard, B B; Riederer, S; Wiese, U J

    2005-01-01

    We construct an efficient cluster algorithm for ferromagnetic SU(N)-symmetric quantum spin systems. Such systems provide a new regularization for CP(N-1) models in the framework of D-theory, which is an alternative non-perturbative approach to quantum field theory formulated in terms of discrete quantum variables instead of classical fields. Despite several attempts, no efficient cluster algorithm has been constructed for CP(N-1) models in the standard formulation of lattice field theory. In fact, there is even a no-go theorem that prevents the construction of an efficient Wolff-type embedding algorithm. We present various simulations for different correlation lengths, couplings and lattice sizes. We have simulated correlation lengths up to 250 lattice spacings on lattices as large as 640x640 and we detect no evidence for critical slowing down.

  19. Calibration of microscopic traffic simulation models using metaheuristic algorithms

    Directory of Open Access Journals (Sweden)

    Miao Yu

    2017-06-01

    This paper presents several metaheuristic algorithms for calibrating a microscopic traffic simulation model. The genetic algorithm (GA), Tabu Search (TS), and combinations of the two (warmed GA and warmed TS) are implemented and compared. A set of traffic data collected from the I-5 Freeway, Los Angeles, California, is used. Objective functions, built on flow and speed, are defined to minimize the difference between simulated and field traffic data. Several car-following parameters in VISSIM that significantly affect the simulation outputs are selected for calibration. The GA, TS, and warmed GA and TS all reach a better match to the field measurements than using only the default parameters in VISSIM. Overall, TS performs very well and can be used to calibrate parameters; combining metaheuristic algorithms clearly performs better and is therefore highly recommended for calibrating microscopic traffic simulation models.

  20. Numerical algorithm of distributed TOPKAPI model and its application

    Directory of Open Access Journals (Sweden)

    Deng Peng

    2008-12-01

    The TOPKAPI (TOPographic Kinematic APproximation and Integration) model is a physically based rainfall-runoff model derived from the integration in space of the kinematic wave model. In the TOPKAPI model, rainfall-runoff and runoff routing processes are described by three nonlinear reservoir differential equations that are structurally similar and describe different hydrological and hydraulic processes. Equations are integrated over grid cells that describe the geometry of the catchment, leading to a cascade of nonlinear reservoir equations. For the sake of improving the model's computation precision, this paper provides the general form of these equations and describes the solution by means of a numerical algorithm, the variable-step fourth-order Runge-Kutta algorithm. For the purpose of assessing the quality of the comprehensive numerical algorithm, this paper presents a case study application to the Buliu River Basin, which has an area of 3 310 km2, using a DEM (digital elevation model) grid with a resolution of 1 km. The results show that the variable-step fourth-order Runge-Kutta algorithm for nonlinear reservoir equations is a good approximation of subsurface flow in the soil matrix, overland flow over the slopes, and surface flow in the channel network, allowing us to retain the physical properties of the original equations at scales ranging from a few meters to 1 km.

  1. Epidemic Processes on Complex Networks: Modelling, Simulation and Algorithms

    NARCIS (Netherlands)

    Van de Bovenkamp, R.

    2015-01-01

    Local interactions on a graph lead to global dynamic behaviour. In this thesis we focus on two types of dynamic processes on graphs: the Susceptible-Infected-Susceptible (SIS) virus spreading model, and gossip-style epidemic algorithms. The largest part of this thesis is devoted to the SIS model

  2. Worm Algorithm for CP(N-1) Model

    CERN Document Server

    Rindlisbacher, Tobias

    2017-01-01

    The CP(N-1) model in 2D is an interesting toy model for 4D QCD as it possesses confinement, asymptotic freedom and a non-trivial vacuum structure. Due to the lower dimensionality and the absence of fermions, the computational cost for simulating 2D CP(N-1) on the lattice is much lower than that for simulating 4D QCD. However, to our knowledge, no efficient algorithm for simulating the lattice CP(N-1) model has been tested so far, which also works at finite density. To this end we propose a new type of worm algorithm which is appropriate to simulate the lattice CP(N-1) model in a dual, flux-variables based representation, in which the introduction of a chemical potential does not give rise to any complications. In addition to the usual worm moves where a defect is just moved from one lattice site to the next, our algorithm additionally allows for worm-type moves in the internal variable space of single links, which accelerates the Monte Carlo evolution. We use our algorithm to compare the two popular CP(N-1) l...

  3. Worm algorithm for the CP(N-1) model

    Science.gov (United States)

    Rindlisbacher, Tobias; de Forcrand, Philippe

    2017-05-01

    The CP(N-1) model in 2D is an interesting toy model for 4D QCD as it possesses confinement, asymptotic freedom and a non-trivial vacuum structure. Due to the lower dimensionality and the absence of fermions, the computational cost for simulating 2D CP(N-1) on the lattice is much lower than that for simulating 4D QCD. However, to our knowledge, no efficient algorithm for simulating the lattice CP(N-1) model for N > 2 has been tested so far, which also works at finite density. To this end we propose a new type of worm algorithm which is appropriate to simulate the lattice CP(N-1) model in a dual, flux-variables based representation, in which the introduction of a chemical potential does not give rise to any complications. In addition to the usual worm moves where a defect is just moved from one lattice site to the next, our algorithm additionally allows for worm-type moves in the internal variable space of single links, which accelerates the Monte Carlo evolution. We use our algorithm to compare the two popular CP(N-1) lattice actions and exhibit marked differences in their approach to the continuum limit.

  4. Evolving the Topology of Hidden Markov Models using Evolutionary Algorithms

    DEFF Research Database (Denmark)

    Thomsen, Réne

    2002-01-01

    Hidden Markov models (HMM) are widely used for speech recognition and have recently gained a lot of attention in the bioinformatics community, because of their ability to capture the information buried in biological sequences. Usually, heuristic algorithms such as Baum-Welch are used to estimate...

  5. A combined model reduction algorithm for controlled biochemical systems.

    Science.gov (United States)

    Snowden, Thomas J; van der Graaf, Piet H; Tindall, Marcus J

    2017-02-13

    Systems Biology continues to produce increasingly large models of complex biochemical reaction networks. In applications requiring, for example, parameter estimation, agent-based modelling approaches, or real-time simulation, this growing model complexity can present a significant hurdle. Often, however, not all portions of a model are of equal interest in a given setting. In such situations, methods of model reduction offer one possible approach for addressing the issue of complexity, by seeking to eliminate those portions of a pathway that can be shown to have the least effect upon the properties of interest. In this paper a model reduction algorithm bringing together the complementary aspects of proper lumping and empirical balanced truncation is presented. Additional contributions include the development of a criterion for the selection of state-variable elimination via conservation analysis and the use of an 'averaged' lumping inverse. This combined algorithm is highly automatable and particularly applicable in the context of 'controlled' biochemical networks. The algorithm is demonstrated here via application to two examples: an 11-dimensional model of bacterial chemotaxis in Escherichia coli and a 99-dimensional model of extracellular signal-regulated kinase (ERK) activation mediated via the epidermal growth factor (EGF) and nerve growth factor (NGF) receptor pathways. In the case of the chemotaxis model the algorithm was able to reduce the model to 2 state-variables, producing a maximal relative error between the dynamics of the original and reduced models of only 2.8% whilst yielding a 26-fold speed-up in simulation time. For the ERK activation model the algorithm was able to reduce the system to 7 state-variables, incurring a maximal relative error of 4.8% and producing an approximately 10-fold speed-up in the rate of simulation. Indices of controllability and observability are additionally developed and demonstrated throughout the paper.
    These provide
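
A toy sketch of the proper-lumping step with an 'averaged' lumping inverse (our own 3-species linear example, not the paper's chemotaxis or ERK models): pooling two species with identical kinetics via z = Lx gives an exact reduced Jacobian A_r = L A L_pinv:

```python
def matmul(A, B):
    """Plain-Python matrix product."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

# toy network x' = A x: species x1 and x2 have identical kinetics, so
# pooling them (proper lumping) loses no dynamical information
A = [[-1.0, 0.0, 0.5],
     [0.0, -1.0, 0.5],
     [1.0, 1.0, -1.0]]
L = [[1.0, 1.0, 0.0],   # z1 = x1 + x2 (the lumped state)
     [0.0, 0.0, 1.0]]   # z2 = x3
L_pinv = [[0.5, 0.0],   # 'averaged' lumping inverse: distribute the
          [0.5, 0.0],   # pooled state equally over its members
          [0.0, 1.0]]
A_r = matmul(L, matmul(A, L_pinv))   # reduced 2x2 Jacobian
# A_r comes out as [[-1.0, 1.0], [1.0, -1.0]]
```

For general nonlinear networks the lumping is only approximate, which is why the paper pairs it with balanced truncation and reports the maximal relative error of the reduced dynamics.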

  6. Study on model and algorithm of inventory routing problem

    Science.gov (United States)

    Wan, Fengjiao

    The vehicle routing problem (VRP) is an important research topic in logistics systems. There has been much research on the VRP, but it does not consider inventory cost, so its conclusions do not match reality. This paper studies the inventory routing problem (IRP) and uses a single objective function to describe these two conflicting problems, both of which are very important in logistics optimization. The paper establishes models of the single-client and multi-client inventory routing problem, and an iterative optimization algorithm is presented to solve them. With the model, the best quantity, efficiency and route of delivery can be determined. Finally, an example is given to illustrate the efficiency of the model and algorithm.

  7. Model-based Bayesian signal extraction algorithm for peripheral nerves

    Science.gov (United States)

    Eggers, Thomas E.; Dweiri, Yazan M.; McCallum, Grant A.; Durand, Dominique M.

    2017-10-01

    Objective. Multi-channel cuff electrodes have recently been investigated for extracting fascicular-level motor commands from mixed neural recordings. Such signals could provide volitional, intuitive control over a robotic prosthesis for amputee patients. Recent work has demonstrated success in extracting these signals in acute and chronic preparations using spatial filtering techniques. These extracted signals, however, had low signal-to-noise ratios and thus limited their utility to binary classification. In this work a new algorithm is proposed which combines previous source localization approaches to create a model based method which operates in real time. Approach. To validate this algorithm, a saline benchtop setup was created to allow the precise placement of artificial sources within a cuff and interference sources outside the cuff. The artificial source was taken from five seconds of chronic neural activity to replicate realistic recordings. The proposed algorithm, hybrid Bayesian signal extraction (HBSE), is then compared to previous algorithms, beamforming and a Bayesian spatial filtering method, on this test data. An example chronic neural recording is also analyzed with all three algorithms. Main results. The proposed algorithm improved the signal to noise and signal to interference ratio of extracted test signals two to three fold, as well as increased the correlation coefficient between the original and recovered signals by 10-20%. These improvements translated to the chronic recording example and increased the calculated bit rate between the recovered signals and the recorded motor activity. Significance. HBSE significantly outperforms previous algorithms in extracting realistic neural signals, even in the presence of external noise sources. These results demonstrate the feasibility of extracting dynamic motor signals from a multi-fascicled intact nerve trunk, which in turn could extract motor command signals from an amputee for the end goal of

  8. The mathematical model realization algorithm of high voltage cable

    OpenAIRE

    2006-01-01

    When realizing the algorithm of a mathematical model, it is very important to know the order in which the necessary relations are evaluated and how they are represented. Depending on how loads or signal sources are connected at selected points of the mathematical model, it is very important to formulate the equations at each such point so that all unknown variables at that point can be determined. The number of equations which describe a point must coincide with the number of unknown variables, and the matrix which describes factor...

  9. Crime Busting Model Based on Dynamic Ranking Algorithms

    Directory of Open Access Journals (Sweden)

    Yang Cao

    2013-01-01

    Full Text Available This paper proposes a crime busting model with two dynamic ranking algorithms to detect the likelihood that an individual is a suspect and the possibility that one is a leader in a complex social network. Notably, in order to obtain a priority list of suspects, an advanced network mining approach with a dynamic cumulative nominating algorithm is adopted, which is computationally much cheaper than most other topology-based approaches. Our method also greatly increases the accuracy of the solution through the addition of semantic learning filtering. Moreover, another dynamic algorithm, based on node contraction, is presented to help identify the leader among the conspirators. Test results are given to verify the theoretical results, and they show strong performance on both small and large datasets.

  10. Threat Modeling-Oriented Attack Path Evaluating Algorithm

    Institute of Scientific and Technical Information of China (English)

    LI Xiaohong; LIU Ran; FENG Zhiyong; HE Ke

    2009-01-01

    In order to evaluate all attack paths in a threat tree, based on threat modeling theory, a weight distribution algorithm for the root node of a threat tree is designed, which computes the threat coefficients of leaf nodes in two ways: threat occurrence possibility and degree of damage. Besides, an algorithm for searching attack paths is also obtained in accordance with its definition. Finally, an attack path evaluation system was implemented which can output the threat coefficients of the leaf nodes in a target threat tree, the weight distribution information, and the attack paths. An example threat tree is given to verify the effectiveness of the algorithms.

  11. Gray Cerebrovascular Image Skeleton Extraction Algorithm Using Level Set Model

    Directory of Open Access Journals (Sweden)

    Jian Wu

    2010-06-01

    Full Text Available The ambiguity and complexity of medical cerebrovascular images make the skeleton obtained by conventional skeleton algorithms discontinuous, sensitive at weak edges, lacking in robustness and prone to burrs. This paper proposes a cerebrovascular image skeleton extraction algorithm based on the Level Set model, using the Euclidean distance field and an improved gradient vector flow to obtain two different energy functions. The first energy function controls the acquisition of the topological nodes at which skeleton curves begin. The second energy function controls the extraction of the skeleton surface. This algorithm avoids the locating and classifying of the skeleton connection points which guide the skeleton extraction. Because all its parameters are obtained by analysis and reasoning, no manual intervention is needed.

  12. Time-Based Dynamic Trust Model Using Ant Colony Algorithm

    Institute of Scientific and Technical Information of China (English)

    TANG Zhuo; LU Zhengding; LI Kai

    2006-01-01

    The trust in a distributed environment is uncertain and varies with many factors. This paper introduces TDTM, a model for time-based dynamic trust. Every entity in the distributed environment is endowed with a trust vector, which represents the trust intensity between this entity and the others. The trust intensity is dynamic due to time and the interoperation between two entities; a method inspired by the ant colony algorithm is proposed to quantify this change, and an algorithm for the transfer of trust relations is also proposed. Furthermore, this paper analyses the influence on the trust intensity among all entities aroused by a change of trust intensity between two entities, and presents an algorithm to resolve the problem. Finally, we show the process of trust change aroused by the lapse of time and by interoperation through an instance.
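
    The pheromone analogy described above can be sketched in a few lines. This is an illustrative toy, not the paper's TDTM formulation: the evaporation rate `rho` and reinforcement gain `reward` are hypothetical parameters.

```python
# Ant-colony-style trust update sketch: trust decays over time (evaporation)
# and is reinforced by successful interactions, like pheromone deposits.
def update_trust(trust, interactions, rho=0.1, reward=0.3):
    """trust: dict[(a, b)] -> intensity in [0, 1]; interactions: set of pairs
    that interacted successfully this time step; rho is the evaporation rate."""
    new_trust = {}
    for pair, t in trust.items():
        t = (1.0 - rho) * t                 # time-based evaporation
        if pair in interactions:
            t = t + reward * (1.0 - t)      # reinforcement, bounded above by 1
        new_trust[pair] = t
    return new_trust

trust = {("A", "B"): 0.5, ("A", "C"): 0.5}
trust = update_trust(trust, interactions={("A", "B")})
# ("A","B") rises toward 1 while the idle pair ("A","C") decays toward 0
```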

  13. A Software Pattern of the Genetic Algorithm -a Study on Reusable Object Model of Genetic Algorithm

    Institute of Scientific and Technical Information of China (English)

    2001-01-01

    The Genetic Algorithm (GA) has been a popular research field, but there has been little attention to GA from the viewpoint of Software Engineering, and this results in a series of problems. In this paper, we extract a GA software pattern, draw a model diagram of the reusable objects, analyze the advantages and disadvantages of the pattern, and give sample code at the end. We are then able to improve the reusability and expansibility of GA. The results make it easier to program a new GA by reusing existing successful operators, thereby reducing the difficulties and workload of programming a GA, and facilitate GA application.

  14. Improving permafrost distribution modelling using feature selection algorithms

    Science.gov (United States)

    Deluigi, Nicola; Lambiel, Christophe; Kanevski, Mikhail

    2016-04-01

    The availability of an increasing number of spatial data on the occurrence of mountain permafrost allows the employment of machine learning (ML) classification algorithms for modelling the distribution of the phenomenon. One of the major problems when dealing with high-dimensional datasets is the number of input features (variables) involved. Applying ML classification algorithms to a large number of variables leads to the risk of overfitting, with the consequence of poor generalization/prediction. For this reason, applying feature selection (FS) techniques helps simplify the set of factors required and improves the knowledge of the adopted features and their relation with the studied phenomenon. Moreover, removing irrelevant or redundant variables from the dataset effectively improves the quality of the ML prediction. This research deals with a comparative analysis of permafrost distribution models supported by FS variable importance assessment. The input dataset (dimension = 20-25, 10 m spatial resolution) was constructed using landcover maps, climate data and DEM-derived variables (altitude, aspect, slope, terrain curvature, solar radiation, etc.). It was completed with permafrost evidence (geophysical and thermal data and rock glacier inventories) that serves as permafrost training data. The FS algorithms used indicate which variables appear less statistically important for permafrost presence/absence. Three different algorithms were compared: Information Gain (IG), Correlation-based Feature Selection (CFS) and Random Forest (RF). IG is a filter technique that evaluates the worth of a predictor by measuring the information gain with respect to permafrost presence/absence. Conversely, CFS evaluates the worth of a subset of predictors by considering the individual predictive ability of each variable along with the degree of redundancy between them. Finally, RF is a ML algorithm that performs FS as part of its
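
    The Information Gain criterion mentioned above can be computed directly from entropies. A minimal sketch with a hypothetical toy dataset (the variable names `slope` and `aspect` and their values are invented for illustration):

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy (bits) of a label sequence."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def information_gain(feature, labels):
    """IG of a categorical feature with respect to presence/absence labels:
    label entropy minus the expected entropy after splitting on the feature."""
    n = len(labels)
    by_value = {}
    for f, y in zip(feature, labels):
        by_value.setdefault(f, []).append(y)
    remainder = sum(len(ys) / n * entropy(ys) for ys in by_value.values())
    return entropy(labels) - remainder

# Toy data: 'slope' perfectly predicts permafrost, 'aspect' is uninformative
permafrost = [1, 1, 0, 0]
slope = ["steep", "steep", "flat", "flat"]
aspect = ["N", "S", "N", "S"]
ig_slope = information_gain(slope, permafrost)    # 1.0 bit (fully informative)
ig_aspect = information_gain(aspect, permafrost)  # 0.0 bits
```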

  15. Improved Marquardt Algorithm for Training Neural Networks for Chemical Process Modeling

    Institute of Scientific and Technical Information of China (English)

    吴建昱; 何小荣

    2002-01-01

    Back-propagation (BP) artificial neural networks have been widely used to model chemical processes. BP networks are often trained using the generalized delta-rule (GDR) algorithm but application of such networks is limited because of the low convergent speed of the algorithm. This paper presents a new algorithm incorporating the Marquardt algorithm into the BP algorithm for training feedforward BP neural networks. The new algorithm was tested with several case studies and used to model the Reid vapor pressure (RVP) of stabilizer gasoline. The new algorithm has faster convergence and is much more efficient than the GDR algorithm.
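
    The core of the Marquardt (Levenberg-Marquardt) idea is an update that interpolates between gradient descent and Gauss-Newton via a damping factor. A minimal numpy sketch on a generic curve-fitting problem follows; it is not the paper's BP-network formulation, and the damping schedule and test function are illustrative assumptions.

```python
import numpy as np

def levenberg_marquardt(f, p0, x, y, n_iter=50, lam=1e-2):
    """Minimal Levenberg-Marquardt: large lam behaves like gradient descent,
    small lam like Gauss-Newton; lam adapts based on whether a step helps."""
    p = np.asarray(p0, dtype=float)
    def residuals(q):
        return y - f(q, x)
    for _ in range(n_iter):
        r = residuals(p)
        J = np.empty((len(x), len(p)))          # numerical Jacobian of residuals
        eps = 1e-6
        for j in range(len(p)):
            dp = np.zeros_like(p)
            dp[j] = eps
            J[:, j] = (residuals(p + dp) - r) / eps
        step = np.linalg.solve(J.T @ J + lam * np.eye(len(p)), -J.T @ r)
        if np.sum(residuals(p + step) ** 2) < np.sum(r ** 2):
            p, lam = p + step, lam * 0.5        # accept: move toward Gauss-Newton
        else:
            lam *= 2.0                          # reject: move toward gradient descent
    return p

x = np.linspace(0.0, 1.0, 20)
y = 2.0 * np.exp(1.5 * x)                       # noiseless target: a=2.0, b=1.5
p = levenberg_marquardt(lambda q, t: q[0] * np.exp(q[1] * t), [1.0, 1.0], x, y)
```

    The adaptive damping is what gives the method its fast convergence relative to plain gradient-based training such as GDR.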

  16. Effect of two prophylactic bolus vitamin D dosing regimens (1000 IU/day vs. 400 IU/day) on bone mineral content in new-onset and infrequently-relapsing nephrotic syndrome: a randomised clinical trial.

    Science.gov (United States)

    Muske, Sravani; Krishnamurthy, Sriram; Kamalanathan, Sadish Kumar; Rajappa, Medha; Harichandrakumar, K T; Sivamurukan, Palanisamy

    2017-05-03

    To examine the efficacy of two vitamin D dosages (1000 vs. 400 IU/day) for osteoprotection in children with new-onset and infrequently-relapsing nephrotic syndrome (IFRNS) receiving corticosteroids. This parallel-group, open label, randomised clinical trial enrolled 92 children with new-onset nephrotic syndrome (NS) (n = 28) or IFRNS (n = 64) to receive 1000 IU/day (Group A, n = 46) or 400 IU/day (Group B, n = 46) vitamin D (administered as a single bolus initial supplemental dose) by block randomisation in a 1:1 allocation ratio. In Group A, vitamin D (cholecalciferol in a Calcirol® sachet) was administered as a single stat dose of 84,000 IU on Day 1 of steroid therapy (for new-onset NS, calculated for a period of 12 weeks @ 1000 IU/day) and 42,000 IU on Day 1 of steroid therapy (for IFRNS, calculated for a period of 6 weeks @ 1000 IU/day). In Group B, vitamin D (cholecalciferol in a Calcirol® sachet) was administered as a single stat dose of 33,600 IU on Day 1 of steroid therapy (for new-onset NS, calculated for a period of 12 weeks @ 400 IU/day) and 16,800 IU on Day 1 of steroid therapy (for IFRNS, calculated for a period of 6 weeks @ 400 IU/day). The proportionate change in bone mineral content (BMC) was analysed in both groups after vitamin D supplementation. Of the 92 children enrolled, 84 (n = 42 new onset, n = 42 IFRNS) completed the study and were included in the final analysis. Baseline characteristics including initial BMC, bone mineral density, cumulative prednisolone dosage and serum 25-hydroxycholecalciferol levels were comparable in the two groups. There was a greater median proportionate change in BMC in the children who received 1000 IU/day vitamin D (3.25%, IQR -1.2 to 12.4) than in those who received 400 IU/day vitamin D (1.2%, IQR -2.5 to 3.8, p = 0.048). The difference in proportionate change in BMC was only statistically significant in the combined new-onset and IFRNS, but not for IFRNS alone. There was a greater
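
    The bolus doses quoted in the abstract are just daily dose times treatment duration, which a few lines of arithmetic confirm:

```python
# Bolus dose = daily dose (IU/day) x steroid-course duration (days)
daily_doses = {"Group A": 1000, "Group B": 400}          # IU/day
durations = {"new-onset NS": 12 * 7, "IFRNS": 6 * 7}     # 12 or 6 weeks, in days

boluses = {(g, c): daily * days
           for g, daily in daily_doses.items()
           for c, days in durations.items()}
# Group A: 84,000 IU (new-onset), 42,000 IU (IFRNS)
# Group B: 33,600 IU (new-onset), 16,800 IU (IFRNS)
```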

  17. Modeling and Algorithmic Approaches to Constitutively-Complex, Microstructured Fluids

    Energy Technology Data Exchange (ETDEWEB)

    Miller, Gregory H. [Univ. of California, Davis, CA (United States); Forest, Gregory [Univ. of California, Davis, CA (United States)

    2014-05-01

    We present a new multiscale model for complex fluids based on three scales: microscopic, kinetic, and continuum. We choose the microscopic level as Kramers' bead-rod model for polymers, which we describe as a system of stochastic differential equations with an implicit constraint formulation. The associated Fokker-Planck equation is then derived, and adiabatic elimination removes the fast momentum coordinates. Approached in this way, the kinetic level reduces to a dispersive drift equation. The continuum level is modeled with a finite volume Godunov-projection algorithm. We demonstrate computation of viscoelastic stress divergence using this multiscale approach.

  18. Underground water quality model inversion of genetic algorithm

    Institute of Scientific and Technical Information of China (English)

    MA Ruijie; LI Xin

    2009-01-01

    The inversion of the underground water quality model is an ill-posed nonlinear problem, and it boils down to finding the minimum of a nonlinear function. Genetic algorithms find the optimal solution by iteratively searching over a population of individuals, taking encoded strings as their operational objects and carrying out the iterative calculations through genetic operators. This is an effective method for inverse problems of groundwater, with distinct advantages and practical significance.
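
    The population-based iterative search can be sketched with a toy real-coded GA. This is illustrative only: the paper uses string encodings, and the misfit function, operators and parameters below are assumptions.

```python
import random
random.seed(0)

def genetic_minimize(f, bounds, pop_size=40, generations=150, sigma=0.1):
    """Toy real-coded GA: tournament selection, blend crossover, Gaussian
    mutation. Stands in for the paper's string-encoded iterative search."""
    lo, hi = bounds
    pop = [random.uniform(lo, hi) for _ in range(pop_size)]
    for _ in range(generations):
        nxt = []
        for _ in range(pop_size):
            a, b = random.sample(pop, 2)
            p1 = a if f(a) < f(b) else b            # tournament selection
            a, b = random.sample(pop, 2)
            p2 = a if f(a) < f(b) else b
            w = random.random()
            child = w * p1 + (1 - w) * p2           # blend crossover
            child += random.gauss(0.0, sigma)       # Gaussian mutation
            nxt.append(min(max(child, lo), hi))     # keep within bounds
        pop = nxt
    return min(pop, key=f)

# Hypothetical scalar "misfit" standing in for the water-quality model misfit;
# its minimum is at k = 2
best = genetic_minimize(lambda k: (k - 2.0) ** 2, (-10.0, 10.0))
```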

  19. DR-model-based estimation algorithm for NCS

    Institute of Scientific and Technical Information of China (English)

    HUANG Si-niu; CHEN Zong-ji; WEI Chen

    2006-01-01

    A novel estimation scheme based on a dead reckoning (DR) model for networked control systems (NCS) is proposed in this paper. Both the detailed DR estimation algorithm and the stability analysis of the system are given. By using DR estimation of the state, the effect of communication delays is overcome. This makes a controller designed without considering delays still applicable in NCS. Moreover, the scheme can effectively solve the problem of data packet loss or timeout.
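
    The dead-reckoning idea, in its simplest form, is extrapolating the state from the last successfully received packet. A constant-velocity sketch (the paper's DR model and update law are not reproduced; the numbers are illustrative):

```python
# Dead-reckoning sketch: when a state packet is delayed or lost, the controller
# extrapolates from the last received state using a constant-velocity model.
def dr_predict(last_pos, last_vel, dt):
    return last_pos + last_vel * dt

received = {0.0: (10.0, 2.0)}        # time -> (position, velocity)
# Suppose the packets at t = 0.1 and t = 0.2 were lost; estimate anyway:
t_last, (pos, vel) = max(received.items())
estimate = dr_predict(pos, vel, 0.2 - t_last)   # 10.0 + 2.0 * 0.2 = 10.4
```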

  20. Modelling Agro-Met Station Observations Using Genetic Algorithm

    Directory of Open Access Journals (Sweden)

    Prashant Kumar

    2014-01-01

    Full Text Available The present work discusses the development of a nonlinear data-fitting technique based on genetic algorithm (GA for the prediction of routine weather parameters using observations from Agro-Met Stations (AMS. The algorithm produces the equations that best describe the temporal evolutions of daily minimum and maximum near-surface (at 2.5-meter height air temperature and relative humidity and daily averaged wind speed (at 10-meter height at selected AMS locations. These enable the forecasts of these weather parameters, which could have possible use in crop forecast models. The forecast equations developed in the present study use only the past observations of the above-mentioned parameters. This approach, unlike other prediction methods, provides explicit analytical forecast equation for each parameter. The predictions up to 3 days in advance have been validated using independent datasets, unknown to the training algorithm, with impressive results. The power of the algorithm has also been demonstrated by its superiority over persistence forecast used as a benchmark.

  1. Evaluating Multicore Algorithms on the Unified Memory Model

    Directory of Open Access Journals (Sweden)

    John E. Savage

    2009-01-01

    Full Text Available One of the challenges to achieving good performance on multicore architectures is the effective utilization of the underlying memory hierarchy. While this is an issue for single-core architectures, it is a critical problem for multicore chips. In this paper, we formulate the unified multicore model (UMM) to help understand the fundamental limits on cache performance on these architectures. The UMM seamlessly handles different types of multiple-core processors with varying degrees of cache sharing at different levels. We demonstrate that our model can be used to study a variety of multicore architectures on a variety of applications. In particular, we use it to analyze an option pricing problem using the trinomial model and develop an algorithm for it that has near-optimal memory traffic between cache levels. We have implemented the algorithm on two Quad-Core Intel Xeon 5310 1.6 GHz processors (8 cores). It achieves a peak performance of 19.5 GFLOPs, which is 38% of the theoretical peak of the multicore system. We demonstrate that our algorithm outperforms compiler-optimized and auto-parallelized code by a factor of up to 7.5.
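
    For reference, the trinomial option-pricing kernel the cache study builds on looks as follows. This is a plain backward-induction sketch (Boyle-style parameterization, assumed here); the paper's cache-optimal blocked traversal of the tree is not reproduced.

```python
import math

def trinomial_call(S, K, r, sigma, T, n):
    """European call priced on a recombining trinomial tree with n time steps."""
    dt = T / n
    u = math.exp(sigma * math.sqrt(2.0 * dt))            # up factor; down = 1/u
    a = math.exp(r * dt / 2.0)
    b = math.exp(sigma * math.sqrt(dt / 2.0))
    pu = ((a - 1.0 / b) / (b - 1.0 / b)) ** 2            # risk-neutral up prob
    pd = ((b - a) / (b - 1.0 / b)) ** 2                  # down prob
    pm = 1.0 - pu - pd                                   # middle prob
    disc = math.exp(-r * dt)
    # Terminal payoffs over the 2n+1 leaves, then backward induction
    vals = [max(S * u ** (n - i) - K, 0.0) for i in range(2 * n + 1)]
    for step in range(n, 0, -1):
        vals = [disc * (pu * vals[i] + pm * vals[i + 1] + pd * vals[i + 2])
                for i in range(2 * step - 1)]
    return vals[0]

price = trinomial_call(S=100, K=100, r=0.05, sigma=0.2, T=1.0, n=200)
# converges to the Black-Scholes value of roughly 10.45 for these inputs
```

    Each backward step reads three adjacent nodes to produce one, which is precisely the stencil access pattern whose cache traffic the UMM analysis optimizes.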

  2. Electromagnetic Model and Image Reconstruction Algorithms Based on EIT System

    Institute of Scientific and Technical Information of China (English)

    CAO Zhang; WANG Huaxiang

    2006-01-01

    An intuitive 2D model of a circular electrical impedance tomography (EIT) sensor with small electrodes is established based on the theory of analytic functions. The validity of the model is proved using the solution of the Laplace equation. Suggestions on electrode optimization and an explanation of the ill-conditioned property of the sensitivity matrix are provided based on the model, which takes electrode distance into account and can be generalized to a sensor over any simply connected region through a conformal transformation. Image reconstruction algorithms based on the model are implemented to show the feasibility of the model using experimental data collected from the EIT system developed in Tianjin University. In a simulation with a human chest-like configuration, electrical conductivity distributions are reconstructed using equi-potential backprojection (EBP) and Tikhonov regularization (TR) based on a conformal transformation of the model. The algorithms based on the model are suitable for online image reconstruction and the reconstructed results are good both in size and position.

  3. Experiments in Model-Checking Optimistic Replication Algorithms

    CERN Document Server

    Boucheneb, Hanifa

    2008-01-01

    This paper describes a series of model-checking experiments to verify optimistic replication algorithms based on the Operational Transformation (OT) approach used to support collaborative editing. We formally define, using the UPPAAL tool, the behavior and the main consistency requirement (i.e. the convergence property) of collaborative editing systems, as well as the abstract behavior of the environment in which these systems are supposed to operate. Due to data replication and the unpredictable nature of user interactions, such systems have infinitely many states. So, we show how to exploit some features of the UPPAAL specification language to attenuate the severe state explosion problem. Two models are proposed. The first one, called the concrete model, is very close to the system implementation but runs up against a severe explosion of states. The second model, called the symbolic model, aims to overcome the limitation of the concrete model by delaying the effective selection and execution of editing operations until th...

  4. Motion Model Employment using interacting Motion Model Algorithm

    DEFF Research Database (Denmark)

    Hussain, Dil Muhammad Akbar

    2006-01-01

    model being correct is computed through a likelihood function for each model.  The study presented a simple technique to introduce additional models into the system using deterministic acceleration which basically defines the dynamics of the system.  Therefore, based on this value more motion models can...... be employed to increase the coverage.  Finally, the combined estimate is obtained using posteriori probabilities from different filter models.   The implemented approach provides an adaptive scheme for selecting various number of motion models.  Motion model description is important as it defines the kind...

  5. Routine Discovery of Complex Genetic Models using Genetic Algorithms.

    Science.gov (United States)

    Moore, Jason H; Hahn, Lance W; Ritchie, Marylyn D; Thornton, Tricia A; White, Bill C

    2004-02-01

    Simulation studies are useful in various disciplines for a number of reasons including the development and evaluation of new computational and statistical methods. This is particularly true in human genetics and genetic epidemiology where new analytical methods are needed for the detection and characterization of disease susceptibility genes whose effects are complex, nonlinear, and partially or solely dependent on the effects of other genes (i.e. epistasis or gene-gene interaction). Despite this need, the development of complex genetic models that can be used to simulate data is not always intuitive. In fact, only a few such models have been published. We have previously developed a genetic algorithm approach to discovering complex genetic models in which two single nucleotide polymorphisms (SNPs) influence disease risk solely through nonlinear interactions. In this paper, we extend this approach for the discovery of high-order epistasis models involving three to five SNPs. We demonstrate that the genetic algorithm is capable of routinely discovering interesting high-order epistasis models in which each SNP influences risk of disease only through interactions with the other SNPs in the model. This study opens the door for routine simulation of complex gene-gene interactions among SNPs for the development and evaluation of new statistical and computational approaches for identifying common, complex multifactorial disease susceptibility genes.
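
    The kind of purely epistatic model the abstract describes can be illustrated with a hypothetical two-SNP "XOR"-style penetrance table (not one of the paper's discovered models): risk depends only on the genotype combination, and with allele frequency 0.5 each SNP alone shows no marginal effect.

```python
import random

# Genotypes coded 0/1/2 (copies of the minor allele). Penetrance depends only
# on whether the two genotypes have matching parity -- a pure interaction:
# P(case | g1) works out to 0.5 for every g1, so no single-SNP signal exists.
penetrance = {(g1, g2): 0.1 if g1 % 2 == g2 % 2 else 0.9
              for g1 in range(3) for g2 in range(3)}

def simulate(n, maf=0.5, seed=1):
    """Draw n individuals under Hardy-Weinberg genotypes and the table above."""
    random.seed(seed)
    data = []
    for _ in range(n):
        g1 = sum(random.random() < maf for _ in range(2))
        g2 = sum(random.random() < maf for _ in range(2))
        data.append((g1, g2, random.random() < penetrance[(g1, g2)]))
    return data

dataset = simulate(500)
```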

  6. Novel Hierarchical Fall Detection Algorithm Using a Multiphase Fall Model

    Science.gov (United States)

    Hsieh, Chia-Yeh; Liu, Kai-Chun; Huang, Chih-Ning; Chu, Woei-Chyn; Chan, Chia-Tai

    2017-01-01

    Falls are the primary cause of accidents for the elderly in the living environment. Reducing hazards in the living environment and performing exercises for training balance and muscles are the common strategies for fall prevention. However, falls cannot be avoided completely; fall detection provides an alarm that can decrease injuries or death caused by the lack of rescue. The automatic fall detection system has opportunities to provide real-time emergency alarms for improving the safety and quality of home healthcare services. Two common technical challenges are also tackled in order to provide a reliable fall detection algorithm, including variability and ambiguity. We propose a novel hierarchical fall detection algorithm involving threshold-based and knowledge-based approaches to detect a fall event. The threshold-based approach efficiently supports the detection and identification of fall events from continuous sensor data. A multiphase fall model is utilized, including free fall, impact, and rest phases for the knowledge-based approach, which identifies fall events and has the potential to deal with the aforementioned technical challenges of a fall detection system. Seven kinds of falls and seven types of daily activities arranged in an experiment are used to explore the performance of the proposed fall detection algorithm. The overall performances of the sensitivity, specificity, precision, and accuracy using a knowledge-based algorithm are 99.79%, 98.74%, 99.05% and 99.33%, respectively. The results show that the proposed novel hierarchical fall detection algorithm can cope with the variability and ambiguity of the technical challenges and fulfill the reliability, adaptability, and flexibility requirements of an automatic fall detection system with respect to the individual differences. PMID:28208694
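
    The threshold-based stage of the multiphase model (free fall, impact, rest) can be sketched as a simple state check over the acceleration magnitude. The thresholds, sampling rate and synthetic trace below are invented for illustration, not the paper's tuned values.

```python
# Multiphase fall sketch on an accelerometer-magnitude trace (units of g):
# a free-fall dip (<< 1 g), then an impact spike (>> 1 g), then a rest phase.
def detect_fall(acc_mag, free_thr=0.4, impact_thr=2.5, rest_thr=0.15, fs=50):
    for i, a in enumerate(acc_mag):
        if a < free_thr:                                  # free-fall phase
            window = acc_mag[i:i + fs]                    # look ahead ~1 s
            if any(x > impact_thr for x in window):       # impact phase
                tail = acc_mag[i + fs:i + 3 * fs]         # following ~2 s
                if tail and all(abs(x - 1.0) < rest_thr for x in tail):
                    return True                           # rest phase -> fall
    return False

# Synthetic trace: normal activity -> free fall -> impact -> lying still
trace = [1.0] * 50 + [0.2] * 10 + [3.0, 2.8] + [1.0] * 150
fell = detect_fall(trace)
```

    The knowledge-based stage of the paper then disambiguates such candidate events from fall-like daily activities.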

  7. Adjustment Criterion and Algorithm in Adjustment Model with Uncertain

    Directory of Open Access Journals (Sweden)

    SONG Yingchun

    2015-02-01

    Full Text Available Uncertainty often exists in the process of obtaining measurement data, which affects the reliability of parameter estimation. This paper establishes a new adjustment model in which uncertainty is incorporated into the function model as a parameter. A new adjustment criterion and its iterative algorithm are given based on the uncertainty propagation law in the residual error, in which the maximum possible uncertainty is minimized. This paper also analyzes, with examples, the different adjustment criteria and the features of the optimal solutions of the least-squares adjustment, the uncertainty adjustment and the total least-squares adjustment. Existing error theory is extended with a new method for processing observational data with uncertainty.

  8. Linguistically motivated statistical machine translation models and algorithms

    CERN Document Server

    Xiong, Deyi

    2015-01-01

    This book provides a wide variety of algorithms and models to integrate linguistic knowledge into Statistical Machine Translation (SMT). It helps advance conventional SMT to linguistically motivated SMT by enhancing the following three essential components: translation, reordering and bracketing models. It also serves the purpose of promoting the in-depth study of the impacts of linguistic knowledge on machine translation. Finally it provides a systematic introduction of Bracketing Transduction Grammar (BTG) based SMT, one of the state-of-the-art SMT formalisms, as well as a case study of linguistically motivated SMT on a BTG-based platform.

  9. Comparison of evolutionary algorithms in gene regulatory network model inference.

    LENUS (Irish Health Repository)

    2010-01-01

    ABSTRACT: BACKGROUND: The evolution of high throughput technologies that measure gene expression levels has created a data base for inferring GRNs (a process also known as reverse engineering of GRNs). However, the nature of these data has made this process very difficult. At the moment, several methods of discovering qualitative causal relationships between genes with high accuracy from microarray data exist, but large scale quantitative analysis on real biological datasets cannot be performed, to date, as existing approaches are not suitable for real microarray data which are noisy and insufficient. RESULTS: This paper performs an analysis of several existing evolutionary algorithms for quantitative gene regulatory network modelling. The aim is to present the techniques used and offer a comprehensive comparison of approaches, under a common framework. Algorithms are applied to both synthetic and real gene expression data from DNA microarrays, and ability to reproduce biological behaviour, scalability and robustness to noise are assessed and compared. CONCLUSIONS: Presented is a comparison framework for assessment of evolutionary algorithms, used to infer gene regulatory networks. Promising methods are identified and a platform for development of appropriate model formalisms is established.

  10. An adaptive correspondence algorithm for modeling scenes with strong interreflections.

    Science.gov (United States)

    Xu, Yi; Aliaga, Daniel G

    2009-01-01

    Modeling real-world scenes, beyond diffuse objects, plays an important role in computer graphics, virtual reality, and other commercial applications. One active approach is projecting binary patterns in order to obtain correspondence and reconstruct a densely sampled 3D model. In such structured-light systems, determining whether a pixel is directly illuminated by the projector is essential to decoding the patterns. When a scene has abundant indirect light, this process is especially difficult. In this paper, we present a robust pixel classification algorithm for this purpose. Our method correctly establishes the lower and upper bounds of the possible intensity values of an illuminated pixel and of a non-illuminated pixel. Based on the two intervals, our method classifies a pixel by determining whether its intensity is within one interval but not in the other. Our method performs better than the standard method because it avoids gross errors, caused by strong interreflections, during the decoding process. For the remaining uncertain pixels, we apply an iterative algorithm to reduce the interreflection within the scene. Thus, more points can be decoded and reconstructed after each iteration. Moreover, the iterative algorithm is carried out in an adaptive fashion for fast convergence.
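
    The interval test described above is simple to state in code. The intensity bounds below are hypothetical placeholders; in the actual system they are estimated per pixel from the pattern sequence.

```python
# A pixel is "directly lit" if its intensity falls inside the lit interval and
# outside the unlit one; overlapping evidence is deferred as uncertain.
def classify_pixel(intensity, lit_interval, unlit_interval):
    lo_l, hi_l = lit_interval
    lo_u, hi_u = unlit_interval
    in_lit = lo_l <= intensity <= hi_l
    in_unlit = lo_u <= intensity <= hi_u
    if in_lit and not in_unlit:
        return "lit"
    if in_unlit and not in_lit:
        return "unlit"
    return "uncertain"   # handled later by the iterative interreflection pass

print(classify_pixel(200, (150, 255), (0, 120)))   # lit
print(classify_pixel(60, (150, 255), (0, 120)))    # unlit
print(classify_pixel(130, (150, 255), (0, 120)))   # uncertain
```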

  11. Hierarchical Stochastic Simulation Algorithm for SBML Models of Genetic Circuits

    Directory of Open Access Journals (Sweden)

    Leandro eWatanabe

    2014-11-01

    Full Text Available This paper describes a hierarchical stochastic simulation algorithm which has been implemented within iBioSim, a tool used to model, analyze, and visualize genetic circuits. Many biological analysis tools flatten out hierarchy before simulation, but there are many disadvantages associated with this approach. First, the memory required to represent the model can quickly expand in the process. Second, the flattening process is computationally expensive. Finally, when modeling a dynamic cellular population within iBioSim, inlining the hierarchy of the model is inefficient since models must grow dynamically over time. This paper discusses a new approach to handle hierarchy on the fly to make the tool faster and more memory-efficient. This approach yields significant performance improvements as compared to the former flat analysis method.

  12. The Distance Field Model and Distance Constrained MAP Adaptation Algorithm

    Institute of Scientific and Technical Information of China (English)

    YUPeng; WANGZuoying

    2003-01-01

    Spatial structure information, i.e., the relative position information of phonetic states in the feature space, has yet to be carefully researched. In this paper, a new model named “Distance Field” is proposed to describe the spatial structure information. Based on this model, a modified MAP adaptation algorithm named distance-constrained maximum a posteriori (DCMAP) is introduced. The distance field model gives a large penalty when the spatial structure is destroyed. As a result, DCMAP preserves the spatial structure information in the adaptation process. Experiments show the Distance Field Model improves the performance of MAP adaptation. Further results show DCMAP has strong cross-state estimation ability, which is used to train a well-performed speaker-dependent model by data from only part of pho-

  13. Multiobjective Route Planning Model and Algorithm for Emergency Management

    Directory of Open Access Journals (Sweden)

    Wen-mei Gai

    2015-01-01

    Full Text Available In order to model the route planning problem for emergency logistics management, taking both route timeliness and safety into account, a multiobjective mathematical model is proposed based on the theories of bounded rationality. Route safety is modeled as the product of the safety of the arcs included in the path. For solving this model, we convert the multiobjective optimization problem into its equivalent deterministic form, taking into account the uncertainty of the weight coefficient of each objective function in actual multiobjective optimization. Finally, we develop an easy-to-implement heuristic to quickly obtain an efficient, feasible solution and its corresponding vector of weight coefficients. Simulation results show the effectiveness and feasibility of the models and algorithms presented in this paper.
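
    A useful property of the multiplicative safety objective: maximizing the product of per-arc safeties is equivalent to minimizing the sum of -log(safety), so standard shortest-path machinery applies to the safety objective alone. A sketch with a hypothetical toy graph (this is not the paper's heuristic, which trades safety off against timeliness):

```python
import heapq
import math

def safest_path(graph, src, dst):
    """Dijkstra on arc weights -log(safety); graph: {node: [(nbr, safety)]}
    with each safety in (0, 1]. Returns the path and its product safety."""
    dist, prev = {src: 0.0}, {}
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == dst:
            break
        if d > dist.get(u, math.inf):
            continue                          # stale queue entry
        for v, s in graph.get(u, []):
            nd = d - math.log(s)
            if nd < dist.get(v, math.inf):
                dist[v], prev[v] = nd, u
                heapq.heappush(pq, (nd, v))
    path, node = [dst], dst
    while node != src:
        node = prev[node]
        path.append(node)
    return path[::-1], math.exp(-dist[dst])

graph = {"A": [("B", 0.9), ("C", 0.99)], "B": [("D", 0.9)], "C": [("D", 0.8)]}
path, safety = safest_path(graph, "A", "D")
# A-B-D has safety 0.9 * 0.9 = 0.81; A-C-D has 0.99 * 0.8 = 0.792
```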

  14. IIR Filter Modeling Using an Algorithm Inspired on Electromagnetism

    Directory of Open Access Journals (Sweden)

    Cuevas-Jiménez E.

    2013-01-01

    Full Text Available Infinite-impulse-response (IIR) filtering provides a powerful approach for solving a variety of problems. However, its design represents a very complicated task: since the error surface of IIR filters is generally multimodal, global optimization techniques are required in order to avoid local minima. In this paper, a new method based on the Electromagnetism-Like Optimization Algorithm (EMO) is proposed for IIR filter modeling. EMO originates from the electromagnetism theory of physics by treating potential solutions as electrically charged particles which spread around the solution space. The charge of each particle depends on its objective function value. This algorithm employs a collective attraction-repulsion mechanism to move the particles towards optimality. The experimental results confirm the high performance of the proposed method in solving various benchmark identification problems.

  15. High speed railway track dynamics models, algorithms and applications

    CERN Document Server

    Lei, Xiaoyan

    2017-01-01

    This book systematically summarizes the latest research findings on high-speed railway track dynamics, made by the author and his research team over the past decade. It explores cutting-edge issues concerning the basic theory of high-speed railways, covering the dynamic theories, models, algorithms and engineering applications of the high-speed train and track coupling system. Presenting original concepts, systematic theories and advanced algorithms, the book places great emphasis on the precision and completeness of its content. The chapters are interrelated yet largely self-contained, allowing readers to either read through the book as a whole or focus on specific topics. It also combines theories with practice to effectively introduce readers to the latest research findings and developments in high-speed railway track dynamics. It offers a valuable resource for researchers, postgraduates and engineers in the fields of civil engineering, transportation, highway & railway engineering.

  16. [A new algorithm for NIR modeling based on manifold learning].

    Science.gov (United States)

    Hong, Ming-Jian; Wen, Zhi-Yu; Zhang, Xiao-Hong; Wen, Quan

    2009-07-01

    Manifold learning is a new kind of algorithm originating from the field of machine learning that finds the intrinsic dimensionality of numerous and complex data and extracts the most important information from the raw data to develop a regression or classification model. The basic assumption of manifold learning is that high-dimensional data measured from the same object must reside on a manifold of much lower dimension, determined by a few properties of the object. Since NIR spectra are characterized by high dimensionality and complicated band assignment, the authors assume that the NIR spectra of the same kind of substance at different chemical concentrations should reside on a manifold of much lower dimension, determined by the concentrations. As one of the best known manifold learning algorithms, locally linear embedding (LLE) further assumes that the underlying manifold is locally linear, so every data point on the manifold should be a linear combination of its neighbors. Based on these assumptions, the present paper proposes a new algorithm named least squares locally weighted regression (LS-LWR), a kind of LWR with weights determined by least squares instead of a predefined function. The NIR spectra of glucose solutions at various concentrations are measured using an NIR spectrometer, and LS-LWR is verified by quantitatively predicting the concentrations of the glucose solutions. Compared with existing algorithms such as principal component regression (PCR) and partial least squares regression (PLSR), LS-LWR has better predictive ability, as measured by the standard error of prediction (SEP), and generates an elegant model with good stability and efficiency.
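The local-linearity assumption can be made concrete with a small sketch. Below, a query point is reconstructed from its nearest neighbors using least-squares weights that sum to one (the LLE-style construction described above), and the same weights are then applied to the neighbors' known concentrations. This is an illustrative simplification on synthetic data, not the paper's exact LS-LWR formulation:

```python
import numpy as np

def lle_weights(x, neighbors, reg=1e-3):
    """Least-squares reconstruction weights (summing to 1) that express
    x as a linear combination of its neighbors, as in LLE."""
    Z = neighbors - x                        # shift neighbors to the query
    G = Z @ Z.T                              # local Gram matrix
    G += reg * np.trace(G) * np.eye(len(G))  # regularize for stability
    w = np.linalg.solve(G, np.ones(len(G)))
    return w / w.sum()

def predict(x, X_train, y_train, k=3):
    """Predict a scalar property (e.g. concentration) by applying the
    reconstruction weights to the neighbors' known property values."""
    d = np.linalg.norm(X_train - x, axis=1)
    idx = np.argsort(d)[:k]
    w = lle_weights(x, X_train[idx])
    return float(w @ y_train[idx])

# Synthetic "spectra": a 1-D manifold embedded in 5 dimensions,
# parameterized by the concentration c.
c = np.linspace(0.0, 1.0, 40)
X = np.stack([0.3 * np.sin(2 * np.pi * c * (i + 1) / 5) + c
              for i in range(5)], axis=1)
y_hat = predict(X[17] + 0.001, X, c)
```

Because the manifold is parameterized by the concentration alone, the locally linear reconstruction of the spectrum transfers directly to a prediction of the concentration.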

  17. Model reduction using the genetic algorithm and routh approximations

    Institute of Scientific and Technical Information of China (English)

    2005-01-01

    A new method of model reduction combining the genetic algorithm (GA) with the Routh approximation method is presented. It is suggested that a high-order system can be approximated by a low-order model with a time delay. The denominator parameters of the reduced-order model are determined by the Routh approximation method; the numerator parameters and the time delay are then identified by the GA. The reduced-order models obtained by the proposed method are always stable if the original system is stable, and provide a good approximation to the original system in both the frequency domain and the time domain. Two numerical examples show that the method is computationally simple and efficient.
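The GA-based identification step can be illustrated generically. The sketch below is a minimal real-coded GA (tournament selection, blend crossover, Gaussian mutation, elitism) fitting the parameters of a toy model to data; the paper applies the same idea to the reduced model's numerator parameters and time delay, and its actual operators may differ:

```python
import numpy as np

rng = np.random.default_rng(0)

def ga_minimize(f, n_params, pop=40, gens=100, sigma=0.3):
    """Minimal real-coded GA: tournament selection, blend crossover,
    Gaussian mutation, and elitism. A generic sketch, not the paper's
    exact operators."""
    X = rng.uniform(-2.0, 2.0, (pop, n_params))
    for _ in range(gens):
        fit = np.array([f(x) for x in X])
        new = [X[fit.argmin()].copy()]                 # elitism
        while len(new) < pop:
            # Binary tournament: the fitter of two random picks.
            i1, i2 = rng.integers(0, pop, 2), rng.integers(0, pop, 2)
            a = X[i1[0] if fit[i1[0]] < fit[i1[1]] else i1[1]]
            b = X[i2[0] if fit[i2[0]] < fit[i2[1]] else i2[1]]
            w = rng.random()
            child = w * a + (1.0 - w) * b              # blend crossover
            mask = rng.random(n_params) < 0.2          # per-gene mutation
            child = child + mask * rng.normal(0.0, sigma, n_params)
            new.append(child)
        X = np.array(new)
    fit = np.array([f(x) for x in X])
    return X[fit.argmin()], float(fit.min())

# Toy identification task: recover (1.5, -0.7) in y = p0*u + p1*u**2.
u = np.linspace(-1.0, 1.0, 30)
y = 1.5 * u - 0.7 * u ** 2
p, err = ga_minimize(
    lambda p: float(((p[0] * u + p[1] * u ** 2 - y) ** 2).sum()), 2)
```

In the paper's setting the fitness would instead measure the mismatch between the high-order system's response and that of the delayed low-order model.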

  18. Exploration of Deep Learning Algorithms Using OpenACC Parallel Programming Model

    KAUST Repository

    Hamam, Alwaleed A.

    2017-03-13

    Deep learning is based on a set of algorithms that attempt to model high-level abstractions in data. The Restricted Boltzmann Machine (RBM) is a deep learning algorithm whose runtime performance is improved in this project through an efficient parallel implementation with the OpenACC tool, applying the best possible optimizations to harness the massively parallel power of NVIDIA GPUs. GPU development in the last few years has contributed to the growth of deep learning. OpenACC is a directive-based approach to computing in which directives provide compiler hints to accelerate code. The traditional Restricted Boltzmann Machine is a stochastic neural network that essentially performs a binary version of factor analysis. The RBM is a useful building block for larger modern deep learning models, such as the Deep Belief Network. RBM parameters are estimated using an efficient training method called Contrastive Divergence. Parallel implementations of RBM are available using other models such as OpenMP and CUDA, but this project is the first attempt to apply the OpenACC model to RBM.
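Contrastive Divergence itself is compact enough to sketch. The following is a plain NumPy CD-1 update for a binary RBM (sequential, not the OpenACC-parallelized version the project describes), trained on a trivial two-pattern dataset:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_step(W, b, c, v0, lr=0.1):
    """One contrastive-divergence (CD-1) update for a binary RBM.
    W: (n_visible, n_hidden) weights; b, c: visible/hidden biases."""
    ph0 = sigmoid(v0 @ W + c)                        # positive phase
    h0 = (rng.random(ph0.shape) < ph0).astype(float)
    pv1 = sigmoid(h0 @ W.T + b)                      # one Gibbs step back
    v1 = (rng.random(pv1.shape) < pv1).astype(float)
    ph1 = sigmoid(v1 @ W + c)
    W += lr * (v0.T @ ph0 - v1.T @ ph1) / len(v0)    # <vh>_data - <vh>_model
    b += lr * (v0 - v1).mean(axis=0)
    c += lr * (ph0 - ph1).mean(axis=0)
    return W, b, c

# Toy data: two repeated complementary binary patterns.
data = np.array([[1, 1, 0, 0], [0, 0, 1, 1]] * 50, dtype=float)
W = rng.normal(0.0, 0.01, (4, 8))
b, c = np.zeros(4), np.zeros(8)
for _ in range(500):
    W, b, c = cd1_step(W, b, c, data)

# Mean-field reconstruction of the first pattern.
recon = sigmoid(sigmoid(data[:1] @ W + c) @ W.T + b)
```

The matrix products in the positive and negative phases are exactly the operations that a GPU offload (via OpenACC, OpenMP, or CUDA) would accelerate.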

  19. A hybrid multiview stereo algorithm for modeling urban scenes.

    Science.gov (United States)

    Lafarge, Florent; Keriven, Renaud; Brédif, Mathieu; Vu, Hoang-Hiep

    2013-01-01

    We present an original multiview stereo reconstruction algorithm which allows the 3D modeling of urban scenes as a combination of meshes and geometric primitives. The method provides a compact model while preserving details: irregular elements such as statues and ornaments are described by meshes, whereas regular structures such as columns and walls are described by primitives (planes, spheres, cylinders, cones, and tori). We adopt a two-step strategy consisting first in segmenting the initial mesh-based surface using a multilabel Markov Random Field-based model and second in sampling primitive and mesh components simultaneously on the obtained partition by a Jump-Diffusion process. The quality of a reconstruction is measured by a multi-object energy model which takes into account both photo-consistency and semantic considerations (i.e., geometry and shape layout). The segmentation and sampling steps are embedded in an iterative refinement procedure which provides an increasingly accurate hybrid representation. Experimental results on complex urban structures and large scenes are presented and compared to state-of-the-art multiview stereo meshing algorithms.

  20. A nonlinear regression model-based predictive control algorithm.

    Science.gov (United States)

    Dubay, R; Abu-Ayyad, M; Hernandez, J M

    2009-04-01

    This paper presents a unique approach for designing a nonlinear regression model-based predictive controller (NRPC) for single-input-single-output (SISO) and multi-input-multi-output (MIMO) processes that are common in industrial applications. The innovation of this strategy is that the controller structure allows nonlinear open-loop modeling to be conducted while closed-loop control is executed at every sampling instant. Consequently, the system matrix is regenerated every sampling instant using a continuous function, providing a more accurate prediction of the plant. Computer simulations are carried out on nonlinear plants, demonstrating that the new approach is easily implemented and provides tight control. The proposed algorithm is also implemented on two real-time SISO applications, a DC motor and a plastic injection molding machine, and on a nonlinear MIMO thermal system comprising three temperature zones with interacting effects. The experimental closed-loop responses of the proposed algorithm were compared to those of a multi-model dynamic matrix controller (MPC), with improved results for various set-point trajectories. Good disturbance rejection was attained, resulting in improved tracking of multi-set-point profiles in comparison to the multi-model MPC.

  1. A Building Model Framework for a Genetic Algorithm Multi-objective Model Predictive Control

    DEFF Research Database (Denmark)

    Arendt, Krzysztof; Ionesi, Ana; Jradi, Muhyiddine

    2016-01-01

    implemented only in a few buildings. The following difficulties hinder the widespread use of MPC: (1) significant model development time, (2) limited portability of models, (3) model computational demand. In the present study a new model development framework for an MPC system based on a Genetic Algorithm (GA...

  2. Modeling the Swift BAT Trigger Algorithm with Machine Learning

    Science.gov (United States)

    Graff, Philip B.; Lien, Amy Y.; Baker, John G.; Sakamoto, Takanori

    2015-01-01

    To draw inferences about gamma-ray burst (GRB) source populations based on Swift observations, it is essential to understand the detection efficiency of the Swift Burst Alert Telescope (BAT). This study considers the problem of modeling the Swift BAT triggering algorithm for long GRBs, a computationally expensive procedure, and models it using machine learning algorithms. A large sample of simulated GRBs from Lien et al. (2014) is used to train various models: random forests, boosted decision trees (with AdaBoost), support vector machines, and artificial neural networks. The best models have accuracies of approximately 97% or greater (approximately 3% error or less), which is a significant improvement on a cut in GRB flux, which has an accuracy of 89.6% (10.4% error). These models are then used to measure the detection efficiency of Swift as a function of redshift z, which is used to perform Bayesian parameter estimation on the GRB rate distribution. We find a local GRB rate density of eta_0 ≈ 0.48 (+0.41/-0.23) Gpc^-3 yr^-1 with power-law indices of eta_1 ≈ 1.7 (+0.6/-0.5) and eta_2 ≈ -5.9 (+5.7/-0.1) for GRBs above and below a break point of z_1 ≈ 6.8 (+2.8/-3.2). This methodology is able to improve upon earlier studies by more accurately modeling Swift detection and using this for fully Bayesian model fitting. The code used in this analysis is publicly available online.
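The core idea, replacing an expensive trigger simulation with a cheap learned surrogate, can be sketched without the BAT pipeline. Here a hypothetical "expensive" threshold rule labels simulated events, and a small logistic-regression surrogate (standing in for the paper's random forests and other learners) is fit to reproduce its decisions:

```python
import numpy as np

rng = np.random.default_rng(1)

def expensive_trigger(flux, bg):
    """Stand-in for a costly detector simulation: 'detect' when a
    signal-to-noise proxy exceeds a threshold (hypothetical rule)."""
    return (flux / np.sqrt(bg) > 2.0).astype(float)

# Label a training set with the expensive simulator once...
X = np.column_stack([rng.uniform(0.1, 10.0, 2000),   # flux
                     rng.uniform(1.0, 25.0, 2000)])  # background
y = expensive_trigger(X[:, 0], X[:, 1])

# ...then fit a cheap surrogate: logistic regression by gradient
# descent on standardized features (including the SNR proxy itself).
feats = np.column_stack([X, X[:, 0] / np.sqrt(X[:, 1])])
feats = (feats - feats.mean(axis=0)) / feats.std(axis=0)
feats = np.column_stack([np.ones(len(X)), feats])
w = np.zeros(feats.shape[1])
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-feats @ w))
    w -= 0.5 * feats.T @ (p - y) / len(y)

pred = (1.0 / (1.0 + np.exp(-feats @ w)) > 0.5).astype(float)
accuracy = (pred == y).mean()
```

Once trained, the surrogate can be evaluated millions of times (e.g. across a grid of redshifts) at negligible cost, which is what enables the fully Bayesian rate fitting described above.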

  3. Modeling the Swift Bat Trigger Algorithm with Machine Learning

    Science.gov (United States)

    Graff, Philip B.; Lien, Amy Y.; Baker, John G.; Sakamoto, Takanori

    2016-01-01

    To draw inferences about gamma-ray burst (GRB) source populations based on Swift observations, it is essential to understand the detection efficiency of the Swift burst alert telescope (BAT). This study considers the problem of modeling the Swift/BAT triggering algorithm for long GRBs, a computationally expensive procedure, and models it using machine learning algorithms. A large sample of simulated GRBs from Lien et al. is used to train various models: random forests, boosted decision trees (with AdaBoost), support vector machines, and artificial neural networks. The best models have accuracies of greater than or equal to 97 percent (less than or equal to 3 percent error), which is a significant improvement on a cut in GRB flux, which has an accuracy of 89.6 percent (10.4 percent error). These models are then used to measure the detection efficiency of Swift as a function of redshift z, which is used to perform Bayesian parameter estimation on the GRB rate distribution. We find a local GRB rate density of n_0 ≈ 0.48 (+0.41/-0.23) per cubic gigaparsec per year with power-law indices of n_1 ≈ 1.7 (+0.6/-0.5) and n_2 ≈ -5.9 (+5.7/-0.1) for GRBs above and below a break point of redshift z_1 ≈ 6.8 (+2.8/-3.2). This methodology is able to improve upon earlier studies by more accurately modeling Swift detection and using this for fully Bayesian model fitting.

  4. Stochastic geometry, spatial statistics and random fields models and algorithms

    CERN Document Server

    2015-01-01

    Providing a graduate level introduction to various aspects of stochastic geometry, spatial statistics and random fields, this volume places a special emphasis on fundamental classes of models and algorithms as well as on their applications, for example in materials science, biology and genetics. This book has a strong focus on simulations and includes extensive codes in Matlab and R, which are widely used in the mathematical community. It can be regarded as a continuation of the recent volume 2068 of Lecture Notes in Mathematics, where other issues of stochastic geometry, spatial statistics and random fields were considered, with a focus on asymptotic methods.

  5. Space resection model calculation based on Random Sample Consensus algorithm

    Science.gov (United States)

    Liu, Xinzhu; Kang, Zhizhong

    2016-03-01

    Resection has long been one of the most important topics in photogrammetry. It aims to recover the position and attitude of the camera at the moment of exposure. In some cases, however, the observations used in the computation contain gross errors. This paper presents a robust algorithm that uses the RANSAC method with the DLT model, effectively avoiding the difficulty of determining initial values when using the collinearity equations. The results also show that our strategy can exclude gross errors and leads to an accurate and efficient way to obtain the exterior orientation elements.
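The gross-error rejection logic is independent of the resection model, so it can be illustrated on a simpler problem. The sketch below runs the generic RANSAC loop on 2-D line fitting with planted gross errors; the paper plugs the DLT resection model into the same hypothesize-and-verify scheme:

```python
import numpy as np

rng = np.random.default_rng(0)

def ransac_line(pts, iters=200, tol=0.1):
    """Generic RANSAC loop, illustrated on 2-D line fitting: hypothesize
    from a minimal sample, score by consensus, refit on the inliers."""
    best_inliers = np.zeros(len(pts), dtype=bool)
    for _ in range(iters):
        i, j = rng.choice(len(pts), 2, replace=False)
        (x1, y1), (x2, y2) = pts[i], pts[j]
        # Line through the 2-point minimal sample: a*x + b*y + c = 0.
        a, b = y2 - y1, x1 - x2
        n = np.hypot(a, b)
        if n < 1e-12:
            continue
        c = -(a * x1 + b * y1)
        dist = np.abs(pts @ np.array([a, b]) + c) / n
        inliers = dist < tol
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    # Final least-squares fit on the consensus set only.
    x, y = pts[best_inliers].T
    slope, intercept = np.polyfit(x, y, 1)
    return slope, intercept, best_inliers

x = np.linspace(0.0, 1.0, 50)
pts = np.column_stack([x, 2.0 * x + 1.0 + rng.normal(0.0, 0.01, 50)])
pts[:10, 1] += 5.0          # ten planted gross errors
slope, intercept, inliers = ransac_line(pts)
```

For resection, the minimal sample would be a set of control points, the model the DLT parameters, and the residual the reprojection error, but the loop is unchanged.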

  6. Load-balancing algorithms for the parallel community climate model

    Energy Technology Data Exchange (ETDEWEB)

    Foster, I.T.; Toonen, B.R.

    1995-01-01

    Implementations of climate models on scalable parallel computer systems can suffer from load imbalances resulting from temporal and spatial variations in the amount of computation required for physical parameterizations such as solar radiation and convective adjustment. We have developed specialized techniques for correcting such imbalances. These techniques are incorporated in a general-purpose, programmable load-balancing library that allows the mapping of computation to processors to be specified as a series of maps generated by a programmer-supplied load-balancing module. The communication required to move from one map to another is performed automatically by the library, without programmer intervention. In this paper, we describe the load-balancing problem and the techniques that we have developed to solve it. We also describe specific load-balancing algorithms that we have developed for PCCM2, a scalable parallel implementation of the Community Climate Model, and present experimental results that demonstrate the effectiveness of these algorithms on parallel computers. The load-balancing library developed in this work is available for use in other climate models.

  7. Gravitational Lens Modeling with Genetic Algorithms and Particle Swarm Optimizers

    CERN Document Server

    Rogers, Adam

    2011-01-01

    Strong gravitational lensing of an extended object is described by a mapping from source to image coordinates that is nonlinear and cannot generally be inverted analytically. Determining the structure of the source intensity distribution also requires a description of the blurring effect due to a point spread function. This initial study uses an iterative gravitational lens modeling scheme based on the semilinear method to determine the linear parameters (source intensity profile) of a strongly lensed system. Our 'matrix-free' approach avoids construction of the lens and blurring operators while retaining the least squares formulation of the problem. The parameters of an analytical lens model are found through nonlinear optimization by an advanced genetic algorithm (GA) and particle swarm optimizer (PSO). These global optimization routines are designed to explore the parameter space thoroughly, mapping model degeneracies in detail. We develop a novel method that determines the L-curve for each solution automa...

  8. Ant Colony Optimization Algorithm for Continuous Domains Based on Position Distribution Model of Ant Colony Foraging

    OpenAIRE

    Liqiang Liu; Yuntao Dai; Jinyu Gao

    2014-01-01

    Ant colony optimization algorithm for continuous domains is a major research direction for ant colony optimization algorithm. In this paper, we propose a distribution model of ant colony foraging, through analysis of the relationship between the position distribution and food source in the process of ant colony foraging. We design a continuous domain optimization algorithm based on the model and give the form of solution for the algorithm, the distribution model of pheromone, the update rules...

  9. An Improved Technique Based on Firefly Algorithm to Estimate the Parameters of the Photovoltaic Model

    Directory of Open Access Journals (Sweden)

    Issa Ahmed Abed

    2016-12-01

    Full Text Available This paper presents a method to enhance the firefly algorithm by coupling it with a local search. The constructed technique is applied to identify the parameters of the photovoltaic model, and the method has proved its ability to obtain them. The standard firefly algorithm (FA), the electromagnetism-like (EM) algorithm, and the electromagnetism-like algorithm without local search (EMW) are all compared with the suggested method to test its capability to solve this model.

  10. Model-checking mean-field models: algorithms & applications

    NARCIS (Netherlands)

    Kolesnichenko, Anna Victorovna

    2014-01-01

    Large systems of interacting objects are highly prevalent in today's world. In this thesis we primarily address such large systems in computer science. We model such large systems using mean-field approximation, which allows to compute the limiting behaviour of an infinite population of identical objects.

  11. Wolff algorithm and anisotropic continuous-spin models: An application to the spin-van der Waals model

    Science.gov (United States)

    D'onorio de Meo, Marco; Oh, Suhk Kun

    1992-07-01

    The problem of applying Wolff's cluster algorithm to anisotropic classical spin models is resolved by modifying a part of the Wolff algorithm. To test the effectiveness of our modified algorithm, the spin-van der Waals model is investigated in detail. Our estimate of the dynamical exponent of the model is z=0.19+/-0.04.
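For reference, the unmodified Wolff step for the isotropic 2-D Ising model looks as follows; the paper's contribution is precisely a modification of this step so that it remains valid for anisotropic continuous-spin models:

```python
import numpy as np

rng = np.random.default_rng(0)

def wolff_step(spins, beta):
    """One Wolff cluster update for the isotropic 2-D Ising model:
    grow a cluster of aligned spins with bond probability
    1 - exp(-2*beta), then flip the whole cluster."""
    L = spins.shape[0]
    p_add = 1.0 - np.exp(-2.0 * beta)
    seed = tuple(rng.integers(0, L, 2))
    s0 = spins[seed]
    cluster = {seed}
    stack = [seed]
    while stack:
        x, y = stack.pop()
        for nx, ny in ((x + 1) % L, y), ((x - 1) % L, y), \
                      (x, (y + 1) % L), (x, (y - 1) % L):
            if (nx, ny) not in cluster and spins[nx, ny] == s0 \
                    and rng.random() < p_add:
                cluster.add((nx, ny))
                stack.append((nx, ny))
    for (x, y) in cluster:
        spins[x, y] = -s0       # flip the whole cluster at once
    return len(cluster)

L = 16
spins = np.ones((L, L), dtype=int)
for _ in range(200):
    wolff_step(spins, beta=0.6)   # beta > beta_c: ordered phase
m = abs(spins.sum()) / L ** 2     # magnetization magnitude stays high
```

Flipping entire clusters rather than single spins is what suppresses critical slowing down and yields the small dynamical exponents the record reports.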

  12. Epidemic Modelling by Ripple-Spreading Network and Genetic Algorithm

    Directory of Open Access Journals (Sweden)

    Jian-Qin Liao

    2013-01-01

    Full Text Available Mathematical analysis and modelling is central to infectious disease epidemiology. This paper, inspired by the natural ripple-spreading phenomenon, proposes a novel ripple-spreading network model for the study of infectious disease transmission. The new epidemic model naturally has good potential for capturing many spatial and temporal features observed in the outbreak of plagues. In particular, using a stochastic ripple-spreading process simulates the effect of random contacts and movements of individuals on the probability of infection well, which is usually a challenging issue in epidemic modeling. Some ripple-spreading related parameters such as threshold and amplifying factor of nodes are ideal to describe the importance of individuals’ physical fitness and immunity. The new model is rich in parameters to incorporate many real factors such as public health service and policies, and it is highly flexible to modifications. A genetic algorithm is used to tune the parameters of the model by referring to historic data of an epidemic. The well-tuned model can then be used for analyzing and forecasting purposes. The effectiveness of the proposed method is illustrated by simulation results.

  13. A MATLAB GUI based algorithm for modelling Magnetotelluric data

    Science.gov (United States)

    Timur, Emre; Onsen, Funda

    2016-04-01

    The magnetotelluric method is an electromagnetic survey technique that images the electrical resistivity distribution of subsurface layers. It simultaneously measures the total electromagnetic field components, i.e. the time-varying magnetic field B(t) and the induced electric field E(t). Forward modeling of the magnetotelluric method is beneficial for survey planning, for understanding the method (especially for students), and as part of the iteration process when inverting measured data. The MTINV program can be used to model and interpret geophysical electromagnetic (EM) magnetotelluric (MT) measurements using a horizontally layered earth model. The program uses either the apparent resistivity and phase components of the MT data together or the apparent resistivity data alone. Parameter optimization, based on a linearized inversion method, can be utilized in 1D interpretations. In this study, a new MATLAB GUI based algorithm has been written for the 1D forward modeling of the magnetotelluric response function for multiple layers, for use in educational studies. The code also includes an automatic Gaussian noise option for a chosen noise ratio. Numerous applications were carried out for 2-, 3- and 4-layer models, and the resulting theoretical data were interpreted using MTINV in order to evaluate the initial parameters and the effect of noise. Keywords: Education, Forward Modelling, Inverse Modelling, Magnetotelluric
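One-dimensional MT forward modeling for a horizontally layered earth reduces to a short impedance recursion (Wait's recursion). The following is a sketch of the computation such a program performs, not the MTINV or GUI code itself:

```python
import numpy as np

MU0 = 4e-7 * np.pi   # vacuum magnetic permeability (H/m)

def mt1d_forward(resistivities, thicknesses, freq):
    """1-D magnetotelluric forward response for a layered half-space
    via Wait's impedance recursion. resistivities lists the layers from
    top to basement (ohm*m); thicknesses omits the basement (m).
    Returns apparent resistivity (ohm*m) and phase (degrees)."""
    w = 2.0 * np.pi * freq
    # Intrinsic impedance of the bottom half-space.
    Z = np.sqrt(1j * w * MU0 * resistivities[-1])
    # Recurse upward through the finite layers.
    for rho, h in zip(resistivities[-2::-1], thicknesses[::-1]):
        k = np.sqrt(1j * w * MU0 / rho)      # propagation constant
        Z0 = np.sqrt(1j * w * MU0 * rho)     # intrinsic impedance
        t = np.tanh(k * h)
        Z = Z0 * (Z + Z0 * t) / (Z0 + Z * t)
    rho_a = abs(Z) ** 2 / (w * MU0)
    phase = np.degrees(np.angle(Z))
    return rho_a, phase

# Homogeneous half-space sanity check: rho_a = rho, phase = 45 degrees.
rho_a, phase = mt1d_forward([100.0], [], 1.0)
```

Repeating this over a range of frequencies produces the sounding curves that the described GUI plots and that MTINV inverts.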

  14. "Updates to Model Algorithms & Inputs for the Biogenic ...

    Science.gov (United States)

    We have developed new canopy emission algorithms and land use data for BEIS. Simulations with BEIS v3.4 including these updates in CMAQ v5.0.2 are compared to the Model of Emissions of Gases and Aerosols from Nature (MEGAN), and the simulations are evaluated against observations. This has resulted in improvements in model evaluations of modeled isoprene, NOx, and O3. The National Exposure Research Laboratory (NERL) Atmospheric Modeling and Analysis Division (AMAD) conducts research in support of the EPA's mission to protect human health and the environment. The AMAD research program is engaged in developing and evaluating predictive atmospheric models on all spatial and temporal scales for forecasting air quality and for assessing changes in air quality and air pollutant exposures, as affected by changes in ecosystem management and regulatory decisions. AMAD is responsible for providing a sound scientific and technical basis for regulatory policies based on air quality models to improve ambient air quality. The models developed by AMAD are being used by EPA, NOAA, and the air pollution community in understanding and forecasting not only the magnitude of the air pollution problem, but also in developing emission control policies and regulations for air quality improvements.

  15. An improved fiber tracking algorithm based on fiber assignment using the continuous tracking algorithm and two-tensor model

    Institute of Scientific and Technical Information of China (English)

    Liuhong Zhu; Gang Guo

    2012-01-01

    This study tested an improved fiber tracking algorithm, which was based on fiber assignment using a continuous tracking algorithm and a two-tensor model. Different models and tracking decisions were used by judging the type of estimation of each voxel. This method should solve the cross-track problem. This study included eight healthy subjects, two axonal injury patients and seven demyelinating disease patients. This new algorithm clearly exhibited a difference in nerve fiber direction between axonal injury and demyelinating disease patients and healthy control subjects. Compared with fiber assignment with a continuous tracking algorithm, our novel method can track more and longer nerve fibers, and also can solve the fiber crossing problem.

  16. Integer programming model for optimizing bus timetable using genetic algorithm

    Science.gov (United States)

    Wihartiko, F. D.; Buono, A.; Silalahi, B. P.

    2017-01-01

    A bus timetable provides information for passengers and ensures the availability of bus services. A timetable is optimal when the bus trip frequency can adapt to passenger demand: in peak hours the number of bus trips should be larger than in off-peak hours. If the number of bus trips is higher than the optimal condition, it creates a high operating cost for the bus operator; conversely, if the number of trips is lower than the optimal condition, it results in poor quality of service for passengers. In this paper, the bus timetabling problem is solved by an integer programming model with a modified genetic algorithm. The modifications are placed in the chromosome design, the initial population recovery technique, chromosome reconstruction, and chromosome extermination in specific generations. The model gives the optimal solution with 99.1% accuracy.

  17. An Intelligent Model for Pairs Trading Using Genetic Algorithms.

    Science.gov (United States)

    Huang, Chien-Feng; Hsu, Chi-Jen; Chen, Chi-Chung; Chang, Bao Rong; Li, Chen-An

    2015-01-01

    Pairs trading is an important and challenging research area in computational finance, in which pairs of stocks are bought and sold in pair combinations for arbitrage opportunities. Traditional methods that solve this set of problems mostly rely on statistical methods such as regression. In contrast to the statistical approaches, recent advances in computational intelligence (CI) are leading to promising opportunities for solving problems in the financial applications more effectively. In this paper, we present a novel methodology for pairs trading using genetic algorithms (GA). Our results showed that the GA-based models are able to significantly outperform the benchmark and our proposed method is capable of generating robust models to tackle the dynamic characteristics in the financial application studied. Based upon the promising results obtained, we expect this GA-based method to advance the research in computational intelligence for finance and provide an effective solution to pairs trading for investment in practice.

  18. An Algorithm for Solution of an Interval Valued EOQ Model

    Directory of Open Access Journals (Sweden)

    Susovan CHAKRABORTTY

    2013-01-01

    Full Text Available This paper deals with the problem of determining the economic order quantity (EOQ) in the interval sense: a purchasing inventory model with shortages and lead time, whose carrying cost, shortage cost, setup cost, demand quantity and lead time are considered as interval numbers instead of real numbers. First, a brief survey of the existing works on comparing and ranking any two interval numbers on the real line is presented. A common algorithm for the optimum production quantity (economic lot size) per cycle of a single product (so as to minimize the total average cost) is developed, which works well on the interval number optimization under consideration. A numerical example is presented for better understanding of the solution procedure. Finally, a sensitivity analysis of the optimal solution with respect to the parameters of the model is examined.
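For orientation, the crisp EOQ formula and a naive interval extension are easy to state: EOQ is increasing in demand and setup cost and decreasing in holding cost, so evaluating at the appropriate endpoints bounds the interval. This is a sketch for intuition only, not the paper's algorithm, which also handles shortages and lead time:

```python
import math

def eoq(demand, setup_cost, holding_cost):
    """Classical EOQ formula: sqrt(2*D*K/h)."""
    return math.sqrt(2.0 * demand * setup_cost / holding_cost)

def interval_eoq(d, k, h):
    """Naive interval extension via monotonicity: EOQ is increasing in
    demand d and setup cost k, decreasing in holding cost h, so the
    parameter-interval endpoints give the EOQ interval bounds."""
    lo = eoq(d[0], k[0], h[1])
    hi = eoq(d[1], k[1], h[0])
    return lo, hi

# Interval data: demand 900-1100 units/yr, setup cost 45-55, holding cost 4-6.
lo, hi = interval_eoq((900.0, 1100.0), (45.0, 55.0), (4.0, 6.0))
```

Choosing a single order quantity inside such an interval (and ranking the resulting interval-valued costs) is exactly where the interval-comparison machinery surveyed in the paper comes in.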

  19. An Intelligent Model for Pairs Trading Using Genetic Algorithms

    Science.gov (United States)

    Hsu, Chi-Jen; Chen, Chi-Chung; Li, Chen-An

    2015-01-01

    Pairs trading is an important and challenging research area in computational finance, in which pairs of stocks are bought and sold in pair combinations for arbitrage opportunities. Traditional methods that solve this set of problems mostly rely on statistical methods such as regression. In contrast to the statistical approaches, recent advances in computational intelligence (CI) are leading to promising opportunities for solving problems in the financial applications more effectively. In this paper, we present a novel methodology for pairs trading using genetic algorithms (GA). Our results showed that the GA-based models are able to significantly outperform the benchmark and our proposed method is capable of generating robust models to tackle the dynamic characteristics in the financial application studied. Based upon the promising results obtained, we expect this GA-based method to advance the research in computational intelligence for finance and provide an effective solution to pairs trading for investment in practice. PMID:26339236

  20. NONSMOOTH MODEL FOR PLASTIC LIMIT ANALYSIS AND ITS SMOOTHING ALGORITHM

    Institute of Scientific and Technical Information of China (English)

    LI Jian-yu; PAN Shao-hua; LI Xing-si

    2006-01-01

    By means of the Lagrange duality theory of the convex program, a dual problem of Hill's maximum plastic work principle under Mises' yield condition has been derived, and thereby a non-differentiable convex optimization model for limit analysis is developed. With this model it is not necessary to linearize the yield condition, and its discrete form becomes a minimization problem of a sum of Euclidean norms subject to linear constraints. Aimed at resolving the non-differentiability of the Euclidean norms, a smoothing algorithm for the limit analysis of perfect-plastic continuum media is proposed. Its efficiency is demonstrated by computing the limit load factor and the collapse state for some plane stress and plane strain problems.
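A standard smoothing device for a sum of Euclidean norms replaces each ||x|| with a differentiable approximation; a minimal sketch (the paper's specific smoothing function may differ):

```python
import numpy as np

def smooth_norm(x, eps):
    """Smooth approximation of the Euclidean norm: sqrt(||x||^2 + eps^2)
    is differentiable everywhere (including x = 0, where the true norm
    is not) and converges to ||x|| as eps -> 0."""
    x = np.asarray(x, dtype=float)
    return np.sqrt(x @ x + eps * eps)

approx = smooth_norm([3.0, 4.0], 1e-3)   # close to the exact norm 5.0
```

Minimizing the smoothed sum with a gradient-based method, while driving eps toward zero, is the usual way such non-differentiable convex programs are made tractable.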

  1. Identification of Hammerstein Model Based on Quantum Genetic Algorithm

    Directory of Open Access Journals (Sweden)

    Zhang Hai Li

    2013-07-01

    Full Text Available Nonlinear system identification is a main topic of modern identification. A new method for nonlinear system identification is presented using the Quantum Genetic Algorithm (QGA). The problem of nonlinear system identification is cast as function optimization over a parameter space, and the Quantum Genetic Algorithm is adopted to solve the optimization problem. Simulation experiments show that, compared with the genetic algorithm, the quantum genetic algorithm is an effective swarm intelligence algorithm: its salient features are few algorithm parameters, a small population size, and the use of quantum gates to update the population, greatly improving the speed and accuracy of the identification optimization. Simulation results show the effectiveness of the proposed method.

  2. Performance of a distributed DCA algorithm under inhomogeneous traffic modelled from an operational GSM network

    NARCIS (Netherlands)

    Kennedy, K.D.; Vries, E.T. de; Koorevaar, P.

    1998-01-01

    This paper presents results obtained from two different Dynamic Channel Allocation (DCA) algorithms, namely the Timid and Persistent Polite Aggressive (PPA) algorithms, simulated under both static homogeneous and dynamic inhomogeneous traffic. The dynamic inhomogeneous traffic is modelled upon real

  3. A dynamic model reduction algorithm for atmospheric chemistry models

    Science.gov (United States)

    Santillana, Mauricio; Le Sager, Philippe; Jacob, Daniel J.; Brenner, Michael

    2010-05-01

    Understanding the dynamics of the chemical composition of our atmosphere is essential to address a wide range of environmental issues from air quality to climate change. Current models solve a very large and stiff system of nonlinear advection-reaction coupled partial differential equations in order to calculate the time evolution of the concentration of over a hundred chemical species. The numerical solution of this system of equations is difficult and the development of efficient and accurate techniques to achieve this has inspired research for the past four decades. In this work, we propose an adaptive method that dynamically adjusts the chemical mechanism to be solved to the local environment and we show that the use of our approach leads to accurate results and considerable computational savings. Our strategy consists of partitioning the computational domain in active and inactive regions for each chemical species at every time step. In a given grid-box, the concentration of active species is calculated using an accurate numerical scheme, whereas the concentration of inactive species is calculated using a simple and computationally inexpensive formula. We demonstrate the performance of the method by application to the GEOS-Chem global chemical transport model.

  4. Iterative learning control algorithm for spiking behavior of neuron model

    Science.gov (United States)

    Li, Shunan; Li, Donghui; Wang, Jiang; Yu, Haitao

    2016-11-01

    Controlling neurons to generate a desired or normal spiking behavior is a fundamental building block of the treatment of many neurologic diseases. The objective of this work is to develop a novel control method, a closed-loop proportional-integral (PI) type iterative learning control (ILC) algorithm, to control the spiking behavior of model neurons. In order to verify the feasibility and effectiveness of the proposed method, two single-compartment standard models of different neuronal excitability are considered: the Hodgkin-Huxley (HH) model for class 1 neural excitability and the Morris-Lecar (ML) model for class 2 neural excitability. ILC has remarkable advantages for processes that are repetitive in nature. To further highlight the superiority of the proposed method, the performance of the iterative learning controller is compared to that of a classical PI controller. In both the classical PI control and the PI control combined with ILC, appropriate background noise is added to the neuron models to approach the problem under more realistic biophysical conditions. Simulation results show that the controller performance is more favorable when ILC is used, regardless of the neuron's excitability class and of the kind of firing pattern in the desired trajectory. The error between the real and desired output is much smaller under the ILC control signal, which suggests that ILC of a neuron's spiking behavior is more accurate.
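The iteration-domain learning update is easy to sketch on a toy plant. Below, a simplified P-type ILC law (the paper uses a PI-type law, and neuron models in place of this linear plant) drives the trial-to-trial tracking error down by correcting the whole input trajectory after each repetition:

```python
import numpy as np

def run_trial(u, a=0.3, b=1.0):
    """Toy first-order plant y[t+1] = a*y[t] + b*u[t], standing in for
    the neuron models used in the paper."""
    y = np.zeros(len(u) + 1)
    for t in range(len(u)):
        y[t + 1] = a * y[t] + b * u[t]
    return y

def ilc(ref, trials=30, gain=0.8):
    """Iterative learning control: repeat the task, and after each
    repetition correct the input trajectory using that repetition's
    error signal (a simplified P-type update)."""
    u = np.zeros(len(ref) - 1)
    errs = []
    for _ in range(trials):
        y = run_trial(u)
        e = ref - y
        u = u + gain * e[1:]          # u[t] corrected by the error at t+1
        errs.append(float(np.abs(e[1:]).max()))
    return errs

ref = np.concatenate([[0.0], np.sin(np.linspace(0.0, 2.0 * np.pi, 49))])
errs = ilc(ref)                       # errs decays across repetitions
```

With this plant and gain the trial-to-trial error map is a contraction (|1 - gain*b| plus the decaying memory terms sums below one), so the maximum tracking error shrinks geometrically across repetitions.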

  5. The Integration of Cooperation Model and Genetic Algorithm

    Institute of Scientific and Technical Information of China (English)

    2002-01-01

    In photogrammetry, some researchers have applied genetic algorithms to aerial image texture classification and to reducing hyper-spectral remote sensing data. A genetic algorithm can rapidly find solutions that are close to the optimal solution, but it does not easily find the optimal solution itself. To solve this problem, a cooperative evolution idea integrating the genetic algorithm and the ant colony algorithm is presented in this paper. Building on the advantages of the ant colony algorithm, the paper proposes a method that integrates genetic algorithms with the ant colony algorithm to overcome this drawback of genetic algorithms. Moreover, the paper takes the design of texture classification masks for aerial images as an example to illustrate the integration theory and procedures.

  6. Efficient decoding algorithms for generalized hidden Markov model gene finders

    Directory of Open Access Journals (Sweden)

    Delcher Arthur L

    2005-01-01

    Full Text Available Abstract Background The Generalized Hidden Markov Model (GHMM has proven a useful framework for the task of computational gene prediction in eukaryotic genomes, due to its flexibility and probabilistic underpinnings. As the focus of the gene finding community shifts toward the use of homology information to improve prediction accuracy, extensions to the basic GHMM model are being explored as possible ways to integrate this homology information into the prediction process. Particularly prominent among these extensions are those techniques which call for the simultaneous prediction of genes in two or more genomes at once, thereby increasing significantly the computational cost of prediction and highlighting the importance of speed and memory efficiency in the implementation of the underlying GHMM algorithms. Unfortunately, the task of implementing an efficient GHMM-based gene finder is already a nontrivial one, and it can be expected that this task will only grow more onerous as our models increase in complexity. Results As a first step toward addressing the implementation challenges of these next-generation systems, we describe in detail two software architectures for GHMM-based gene finders, one comprising the common array-based approach, and the other a highly optimized algorithm which requires significantly less memory while achieving virtually identical speed. We then show how both of these architectures can be accelerated by a factor of two by optimizing their content sensors. We finish with a brief illustration of the impact these optimizations have had on the feasibility of our new homology-based gene finder, TWAIN. 
    Conclusions In describing a number of optimizations for GHMM-based gene finders and making available two complete open-source software systems embodying these methods, it is our hope that others will be better able to explore promising extensions to the GHMM framework, thereby improving the state of the art in gene prediction.

  7. Identification of Hammerstein Model Based on Quantum Genetic Algorithm

    OpenAIRE

    Zhang Hai Li

    2013-01-01

    Nonlinear system identification is a main topic of modern identification. A new method for nonlinear system identification is presented by using the Quantum Genetic Algorithm (QGA). The problems of nonlinear system identification are cast as function optimization over parameter space, and the Quantum Genetic Algorithm is adopted to solve the optimization problem. Simulation experiments show that, compared with the genetic algorithm, the quantum genetic algorithm is an effective swarm intelligence algorith...

  8. Analysis of Multivariate Experimental Data Using A Simplified Regression Model Search Algorithm

    Science.gov (United States)

    Ulbrich, Norbert Manfred

    2013-01-01

    A new regression model search algorithm was developed in 2011 that may be used to analyze both general multivariate experimental data sets and wind tunnel strain-gage balance calibration data. The new algorithm is a simplified version of a more complex search algorithm that was originally developed at the NASA Ames Balance Calibration Laboratory. The new algorithm has the advantage that it needs only about one tenth of the original algorithm's CPU time for the completion of a search. In addition, extensive testing showed that the prediction accuracy of math models obtained from the simplified algorithm is similar to the prediction accuracy of math models obtained from the original algorithm. The simplified algorithm, however, cannot guarantee that search constraints related to a set of statistical quality requirements are always satisfied in the optimized regression models. Therefore, the simplified search algorithm is not intended to replace the original search algorithm. Instead, it may be used to generate an alternate optimized regression model of experimental data whenever the application of the original search algorithm either fails or requires too much CPU time. Data from a machine calibration of NASA's MK40 force balance is used to illustrate the application of the new regression model search algorithm.

  9. Modeling of genetic algorithms with a finite population

    NARCIS (Netherlands)

    Kemenade, C.H.M. van

    1997-01-01

    Cross-competition between non-overlapping building blocks can strongly influence the performance of evolutionary algorithms. The choice of the selection scheme can have a strong influence on the performance of a genetic algorithm. This paper describes a number of different genetic algorithms, all in

  10. Elastic-plastic model identification for rock surrounding an underground excavation based on immunized genetic algorithm.

    Science.gov (United States)

    Gao, Wei; Chen, Dongliang; Wang, Xu

    2016-01-01

    To compute the stability of underground engineering, a constitutive model of the surrounding rock must be identified. Many constitutive models for rock mass have been proposed. In this model identification study, a generalized constitutive law for an elastic-plastic constitutive model is applied. Using the generalized constitutive law, the problem of model identification is transformed into a problem of parameter identification, which is a typical and complicated optimization. To improve the efficiency of the traditional optimization method, an immunized genetic algorithm proposed by the authors is applied in this study. In this new algorithm, the principle of the artificial immune algorithm is combined with the genetic algorithm, which improves the overall computational efficiency of model identification. Using this new model identification method, a numerical example and an engineering example are used to verify the computing ability of the algorithm. The results show that this new model identification algorithm can significantly improve both computational efficiency and effectiveness.

  11. Ripple-Spreading Network Model Optimization by Genetic Algorithm

    Directory of Open Access Journals (Sweden)

    Xiao-Bing Hu

    2013-01-01

    Full Text Available Small-world and scale-free properties are widely acknowledged in many real-world complex network systems, and many network models have been developed to capture these network properties. The ripple-spreading network model (RSNM) is a newly reported complex network model, which is inspired by the natural ripple-spreading phenomenon on a calm water surface. The RSNM exhibits good potential for describing both spatial and temporal features in the development of many real-world networks, where the influence of a few local events spreads out through nodes and then largely determines the final network topology. However, the relationships between the ripple-spreading related parameters (RSRPs) of the RSNM and small-world and scale-free topologies are not as obvious or straightforward as in many other network models. This paper attempts to apply a genetic algorithm (GA) to tune the values of the RSRPs, so that the RSNM may generate these two most important network topologies. The study demonstrates that, once the RSRPs are properly tuned by the GA, the RSNM is capable of generating both network topologies and therefore has great flexibility for studying many real-world complex network systems.

  12. Toward Developing Genetic Algorithms to Aid in Critical Infrastructure Modeling

    Energy Technology Data Exchange (ETDEWEB)

    2007-05-01

    Today’s society relies upon an array of complex national and international infrastructure networks such as transportation, telecommunication, financial and energy. Understanding these interdependencies is necessary in order to protect our critical infrastructure. The Critical Infrastructure Modeling System, CIMS©, examines the interrelationships between infrastructure networks. CIMS© development is sponsored by the National Security Division at the Idaho National Laboratory (INL) in its ongoing mission for providing critical infrastructure protection and preparedness. A genetic algorithm (GA) is an optimization technique based on Darwin’s theory of evolution. A GA can be coupled with CIMS© to search for optimum ways to protect infrastructure assets. This includes identifying optimum assets to enforce or protect, testing the addition of or change to infrastructure before implementation, or finding the optimum response to an emergency for response planning. This paper describes the addition of a GA to infrastructure modeling for infrastructure planning. It first introduces the CIMS© infrastructure modeling software used as the modeling engine to support the GA. Next, the GA techniques and parameters are defined. Then a test scenario illustrates the integration with CIMS© and the preliminary results.
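
The GA machinery described above (a population of candidate solutions evolved by selection, crossover and mutation) can be sketched for a toy asset-protection problem; the fitness function, budget and parameter values are invented for illustration and are not part of CIMS©:

```python
import random

def genetic_algorithm(fitness, n_bits, pop_size=40, gens=60,
                      cx_rate=0.9, mut_rate=0.02, seed=1):
    """Plain generational GA: tournament selection, one-point
    crossover, bit-flip mutation (illustrative, not the CIMS coupling)."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    best = max(pop, key=fitness)
    for _ in range(gens):
        new_pop = []
        while len(new_pop) < pop_size:
            # tournament selection of two parents
            p1 = max(rng.sample(pop, 3), key=fitness)
            p2 = max(rng.sample(pop, 3), key=fitness)
            c1, c2 = p1[:], p2[:]
            if rng.random() < cx_rate:               # one-point crossover
                cut = rng.randrange(1, n_bits)
                c1, c2 = p1[:cut] + p2[cut:], p2[:cut] + p1[cut:]
            for c in (c1, c2):                       # bit-flip mutation
                for i in range(n_bits):
                    if rng.random() < mut_rate:
                        c[i] ^= 1
                new_pop.append(c)
        pop = new_pop[:pop_size]
        best = max(pop + [best], key=fitness)        # track best ever seen
    return best

# toy objective: choose which assets to protect, maximizing value under a budget
values = [4, 2, 7, 1, 5, 3, 6, 2]
costs  = [3, 1, 5, 1, 4, 2, 4, 2]
budget = 10

def fitness(bits):
    cost = sum(c for b, c in zip(bits, costs) if b)
    val = sum(v for b, v in zip(bits, values) if b)
    return val if cost <= budget else -cost          # penalize infeasible picks
```

In the CIMS© setting the fitness evaluation would be a call into the infrastructure simulation rather than this toy knapsack score.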

  13. The Loop-Cluster Algorithm for the Case of the 6 Vertex Model

    CERN Document Server

    Evertz, H G

    1993-01-01

    We present the loop algorithm, a new type of cluster algorithm that we recently introduced for the F model. Using the framework of Kandel and Domany, we show how to generalize the algorithm to the arrow-flip symmetric 6 vertex model. We propose the principle of least possible freezing as the guide to choosing the values of free parameters in the algorithm. Finally, we briefly discuss the application of our algorithm to simulations of quantum spin systems. In particular, all necessary information is provided for the simulation of spin $\\half$ Heisenberg and $xxz$ models.

  14. Calibration of Uncertainty Analysis of the SWAT Model Using Genetic Algorithms and Bayesian Model Averaging

    Science.gov (United States)

    In this paper, the Genetic Algorithms (GA) and Bayesian model averaging (BMA) were combined to simultaneously conduct calibration and uncertainty analysis for the Soil and Water Assessment Tool (SWAT). In this hybrid method, several SWAT models with different structures are first selected; next GA i...

  15. An efficient algorithm for corona simulation with complex chemical models

    Science.gov (United States)

    Villa, Andrea; Barbieri, Luca; Gondola, Marco; Leon-Garzon, Andres R.; Malgesini, Roberto

    2017-05-01

    The simulation of cold plasma discharges is a leading field of applied sciences with many applications ranging from pollutant control to surface treatment. Many of these applications call for the development of novel numerical techniques to implement fully three-dimensional corona solvers that can utilize complex and physically detailed chemical databases. This is a challenging task since it multiplies the difficulties inherent to a three-dimensional approach by the complexity of databases comprising tens of chemical species and hundreds of reactions. In this paper a novel approach, capable of reducing significantly the computational burden, is developed. The proposed method is based on a proper time stepping algorithm capable of decomposing the original problem into simpler ones: each of them has then been tackled with either finite element, finite volume or ordinary differential equations solvers. This last solver deals with the chemical model and its efficient implementation is one of the main contributions of this work.
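
The kind of time-stepping decomposition described, where the original problem is split into simpler sub-problems each handed to a dedicated solver, can be illustrated with classical Strang splitting on a toy 1-D advection-decay system; the upwind scheme, linear decay chemistry and step sizes are assumptions, not the authors' method:

```python
import numpy as np

def advect(c, dt, v=1.0, dx=1.0):
    """First-order upwind advection on a periodic 1-D grid
    (stand-in for the finite element / finite volume transport solvers)."""
    lam = v * dt / dx
    return c - lam * (c - np.roll(c, 1))

def react(c, dt, k=0.05):
    """Exact solution of the linear chemistry sub-problem dc/dt = -k*c
    (stand-in for the ODE solver handling the chemical model)."""
    return c * np.exp(-k * dt)

def strang_step(c, dt):
    """One Strang-splitting step: half advection, full chemistry,
    half advection."""
    c = advect(c, dt / 2)
    c = react(c, dt)
    return advect(c, dt / 2)
```

In the paper's fully three-dimensional setting the chemistry sub-step is the expensive part, since it integrates tens of species and hundreds of reactions per cell; the splitting is what lets that work be isolated in an ODE solver.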

  16. Model Versions and Fast Algorithms for Network Epidemiology

    Institute of Scientific and Technical Information of China (English)

    Petter Holme

    2014-01-01

    Network epidemiology has become a core framework for investigating the role of human contact patterns in the spreading of infectious diseases. In network epidemiology, one represents the contact structure as a network of nodes (individuals) connected by links (sometimes as a temporal network where the links are not continuously active) and the disease as a compartmental model (where individuals are assigned states with respect to the disease and follow certain transition rules between the states). In this paper, we discuss fast algorithms for such simulations and also compare two commonly used versions: one where there is a constant recovery rate (the number of individuals that stop being infectious per unit time is proportional to the number of such people); the other where the duration of the disease is constant. The results show that, for most practical purposes, these versions are qualitatively the same.
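
A minimal discrete-time sketch of the two model versions, constant recovery rate versus fixed infectious duration, might look like this (the synchronous update order and parameters are illustrative; the algorithms discussed in the paper are event-based and considerably faster):

```python
import random

def sir_network(adj, beta, recovery, seed=0):
    """Discrete-time SIR on a contact network given as an adjacency list.
    recovery = ('rate', mu): recover with probability mu per step;
    recovery = ('fixed', d): stay infectious for exactly d steps."""
    rng = random.Random(seed)
    n = len(adj)
    state = ['S'] * n
    age = [0] * n                      # steps spent infectious
    state[0] = 'I'                     # index case
    kind, p = recovery
    while 'I' in state:
        infectious = [i for i in range(n) if state[i] == 'I']
        newly = set()
        for i in infectious:           # transmission along each link
            for j in adj[i]:
                if state[j] == 'S' and rng.random() < beta:
                    newly.add(j)
        for i in infectious:           # recovery of the current infectious set
            age[i] += 1
            if (kind == 'rate' and rng.random() < p) or \
               (kind == 'fixed' and age[i] >= p):
                state[i] = 'R'
        for j in newly:                # newly infected start next step
            state[j] = 'I'
    return sum(s == 'R' for s in state)
```

Running both variants on the same network makes the paper's comparison concrete: only the recovery rule differs, and the final outbreak sizes can be compared directly.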

  17. Two New PRP Conjugate Gradient Algorithms for Minimization Optimization Models.

    Directory of Open Access Journals (Sweden)

    Gonglin Yuan

    Full Text Available Two new PRP conjugate gradient algorithms are proposed in this paper based on two modified PRP conjugate gradient methods: the first algorithm is proposed for solving unconstrained optimization problems, and the second algorithm is proposed for solving nonlinear equations. The first method contains two aspects of information: function value and gradient value. The two methods both possess some good properties, as follows: (1) βk ≥ 0; (2) the search direction has the trust-region property without the use of any line search method; (3) the search direction has the sufficient descent property without the use of any line search method. Under some suitable conditions, we establish the global convergence of the two algorithms. We conduct numerical experiments to evaluate our algorithms. The numerical results indicate that the first algorithm is effective and competitive for solving unconstrained optimization problems and that the second algorithm is effective for solving large-scale nonlinear equations.
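
A sketch of a conjugate gradient iteration with the nonnegative PRP parameter (the βk ≥ 0 property above) is shown below; the Armijo backtracking step is our stand-in and not the line-search-free scheme analyzed in the paper:

```python
import numpy as np

def prp_plus_cg(f, grad, x0, iters=500, tol=1e-8):
    """Conjugate gradient with the nonnegative PRP parameter
    beta_k = max(0, g_{k+1}^T (g_{k+1} - g_k) / ||g_k||^2),
    which enforces beta_k >= 0."""
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g
    for _ in range(iters):
        if np.linalg.norm(g) < tol:
            break
        if g.dot(d) >= 0:              # safeguard: restart on non-descent
            d = -g
        alpha = 1.0                    # Armijo backtracking line search
        while f(x + alpha * d) > f(x) + 1e-4 * alpha * g.dot(d):
            alpha *= 0.5
        x_new = x + alpha * d
        g_new = grad(x_new)
        beta = max(0.0, g_new.dot(g_new - g) / g.dot(g))   # PRP+
        d = -g_new + beta * d
        x, g = x_new, g_new
    return x
```

On a simple strongly convex quadratic this iteration recovers the exact minimizer; the paper's contribution is proving descent and global convergence without relying on the line search used here.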

  18. Maximum Likelihood in a Generalized Linear Finite Mixture Model by Using the EM Algorithm

    NARCIS (Netherlands)

    Jansen, R.C.

    A generalized linear finite mixture model and an EM algorithm to fit the model to data are described. By this approach the finite mixture model is embedded within the general framework of generalized linear models (GLMs). Implementation of the proposed EM algorithm can be readily done in statistical

  19. An implementation of continuous genetic algorithm in parameter estimation of predator-prey model

    Science.gov (United States)

    Windarto

    2016-03-01

    Genetic algorithm is an optimization method based on the principles of genetics and natural selection in living organisms. The main components of this algorithm are the chromosome (individual) population, parent selection, crossover to produce new offspring, and random mutation. In this paper, a continuous genetic algorithm was implemented to estimate parameters in a predator-prey model of Lotka-Volterra type. For simplicity, all genetic algorithm parameters (selection rate and mutation rate) are held constant throughout the run of the algorithm. It was found that, by selecting a suitable mutation rate, the algorithm can estimate these parameters well.
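
A minimal version of this setup, a real-coded GA with constant rates fitting Lotka-Volterra parameters to simulated data, could look like the following; the operators, rates and bounds are illustrative choices, not the paper's exact configuration:

```python
import numpy as np

def lotka_volterra(z, p):
    x, y = z
    a, b, c, d = p
    return np.array([a * x - b * x * y, -c * y + d * x * y])

def simulate(p, z0, dt=0.01, steps=150):
    """Fixed-step RK4 integration of the predator-prey system;
    the clip guards against blow-up for poor parameter guesses."""
    z = np.array(z0, dtype=float)
    out = [z.copy()]
    for _ in range(steps):
        k1 = lotka_volterra(z, p)
        k2 = lotka_volterra(z + dt / 2 * k1, p)
        k3 = lotka_volterra(z + dt / 2 * k2, p)
        k4 = lotka_volterra(z + dt * k3, p)
        z = np.clip(z + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4), 0.0, 1e6)
        out.append(z.copy())
    return np.array(out)

def continuous_ga(cost, bounds, pop=30, gens=40, mut_rate=0.2, seed=0):
    """Real-coded GA with constant rates: truncation selection,
    blend crossover, Gaussian mutation."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds[:, 0], bounds[:, 1]
    P = rng.uniform(lo, hi, size=(pop, len(lo)))
    for _ in range(gens):
        P = P[np.argsort([cost(ind) for ind in P])]
        elite = P[: pop // 2]                             # truncation selection
        children = []
        while len(children) < pop - len(elite):
            i, j = rng.integers(0, len(elite), size=2)
            w = rng.random()
            child = w * elite[i] + (1 - w) * elite[j]     # blend crossover
            mask = rng.random(child.shape) < mut_rate     # Gaussian mutation
            child = np.clip(child + mask * rng.normal(0.0, 0.1, child.shape),
                            lo, hi)
            children.append(child)
        P = np.vstack([elite, children])
    return P[np.argmin([cost(ind) for ind in P])]
```

The cost function compares a candidate's simulated trajectory against the observed one, so parameter estimation reduces to minimizing a trajectory-misfit over the bounded parameter box.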

  20. A simple and efficient parallel FFT algorithm using the BSP model

    NARCIS (Netherlands)

    Bisseling, R.H.; Inda, M.A.

    2000-01-01

    In this paper we present a new parallel radix FFT algorithm based on the BSP model. Our parallel algorithm uses the group-cyclic distribution family, which makes it simple to understand and easy to implement. We show how to reduce the communication cost of the algorithm by a factor of three in the case

  1. A Cost-Effective Tracking Algorithm for Hypersonic Glide Vehicle Maneuver Based on Modified Aerodynamic Model

    Directory of Open Access Journals (Sweden)

    Yu Fan

    2016-10-01

    Full Text Available In order to defend against the hypersonic glide vehicle (HGV), a cost-effective single-model tracking algorithm using the Cubature Kalman filter (CKF) is proposed in this paper, based on a modified aerodynamic model (MAM) as the process equation and a radar measurement model as the measurement equation. In the existing aerodynamic model, the two control variables, attack angle and bank angle, cannot be measured by existing radar equipment, and their control laws cannot be known by defenders. To establish the process equation, the MAM for HGV tracking is proposed by using additive white noise to model the rates of change of the two control variables. For ease of comparison, several multiple-model algorithms based on the CKF are presented, including the interacting multiple model (IMM) algorithm, the adaptive grid interacting multiple model (AGIMM) algorithm and the hybrid grid multiple model (HGMM) algorithm. The performances of these algorithms are compared and analyzed according to the simulation results. The simulation results indicate that the proposed tracking algorithm based on the modified aerodynamic model has the best tracking performance, with the best accuracy and the least computational cost, among all the tracking algorithms in this paper. The proposed algorithm is cost-effective for HGV tracking.

  2. Enhanced hybrid search algorithm for protein structure prediction using the 3D-HP lattice model.

    Science.gov (United States)

    Zhou, Changjun; Hou, Caixia; Zhang, Qiang; Wei, Xiaopeng

    2013-09-01

    The problem of protein structure prediction in the hydrophobic-polar (HP) lattice model is the prediction of protein tertiary structure. This problem is usually referred to as the protein folding problem. This paper presents a method for applying an enhanced hybrid search algorithm to the problem of protein folding prediction, using the three-dimensional (3D) HP lattice model. The enhanced hybrid search algorithm is a combination of the particle swarm optimizer (PSO) and tabu search (TS) algorithms. Since the PSO algorithm is easily trapped in local minima in the later stages of evolution, we combined PSO with the TS algorithm, which has good global optimization properties. Because crossover and mutation are applied many times within the PSO and TS algorithms, the enhanced hybrid search algorithm is called the MCMPSO-TS (multiple crossover and mutation PSO-TS) algorithm. Experimental results show that the MCMPSO-TS algorithm can find the best solutions so far for the listed benchmarks, which will facilitate comparison with future approaches. Moreover, real protein sequences and Fibonacci sequences are verified in the 3D HP lattice model for the first time. Compared with previous evolutionary algorithms, the new hybrid search algorithm is novel and can be used effectively to predict 3D protein folding structure. With continuous development and changes in amino acid sequences, the new algorithm will also make a contribution to the study of new protein sequences.

  3. Ant colony optimization algorithm for continuous domains based on position distribution model of ant colony foraging.

    Science.gov (United States)

    Liu, Liqiang; Dai, Yuntao; Gao, Jinyu

    2014-01-01

    Ant colony optimization for continuous domains is a major research direction in ant colony optimization algorithms. In this paper, we propose a position distribution model of ant colony foraging, derived from an analysis of the relationship between the position distribution and the food source in the process of ant colony foraging. We design a continuous-domain optimization algorithm based on the model and give the form of solution for the algorithm, the distribution model of the pheromone, the update rules for ant colony positions, and the processing method for constraint conditions. Algorithm performance was tested on a set of unconstrained optimization test functions, and the test results were compared with those of other algorithms and analyzed to verify the correctness and effectiveness of the proposed algorithm.
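
For comparison, the best-known continuous ACO scheme, ACO_R (Socha and Dorigo), can be sketched as below; this is a generic illustration of Gaussian sampling guided by a ranked solution archive, not the authors' position-distribution model:

```python
import numpy as np

def aco_continuous(cost, bounds, ants=20, archive=10, q=0.3, xi=0.85,
                   iters=120, seed=0):
    """Continuous ACO in the ACO_R style: a ranked solution archive plays
    the role of the pheromone; new solutions are drawn from Gaussians
    centred on archive members."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds[:, 0], bounds[:, 1]
    dim = len(lo)
    A = rng.uniform(lo, hi, (archive, dim))
    A = A[np.argsort([cost(s) for s in A])]
    # rank-based selection weights (the pheromone analogue)
    ranks = np.arange(1, archive + 1)
    w = np.exp(-((ranks - 1) ** 2) / (2 * (q * archive) ** 2))
    w /= w.sum()
    for _ in range(iters):
        new = []
        for _ in range(ants):
            k = rng.choice(archive, p=w)          # pick a guiding solution
            # per-dimension std: mean distance to the other archive members
            sigma = xi * np.abs(A - A[k]).mean(axis=0)
            new.append(np.clip(rng.normal(A[k], sigma + 1e-12), lo, hi))
        merged = np.vstack([A, new])
        merged = merged[np.argsort([cost(s) for s in merged])]
        A = merged[:archive]                      # keep the best as archive
    return A[0]
```

As the archive concentrates, the sampling standard deviations shrink, so the search automatically narrows from exploration to local refinement.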

  4. Ant Colony Optimization Algorithm for Continuous Domains Based on Position Distribution Model of Ant Colony Foraging

    Directory of Open Access Journals (Sweden)

    Liqiang Liu

    2014-01-01

    Full Text Available Ant colony optimization for continuous domains is a major research direction in ant colony optimization algorithms. In this paper, we propose a position distribution model of ant colony foraging, derived from an analysis of the relationship between the position distribution and the food source in the process of ant colony foraging. We design a continuous-domain optimization algorithm based on the model and give the form of solution for the algorithm, the distribution model of the pheromone, the update rules for ant colony positions, and the processing method for constraint conditions. Algorithm performance was tested on a set of unconstrained optimization test functions, and the test results were compared with those of other algorithms and analyzed to verify the correctness and effectiveness of the proposed algorithm.

  5. An Iterative Algorithm to Build Chinese Language Models

    CERN Document Server

    Luo, X; Luo, Xiaoqiang; Roukos, Salim

    1996-01-01

    We present an iterative procedure to build a Chinese language model (LM). We segment Chinese text into words based on a word-based Chinese language model. However, the construction of a Chinese LM itself requires word boundaries. To get out of this chicken-and-egg problem, we propose an iterative procedure that alternates two operations: segmenting text into words and building an LM. Starting with an initial segmented corpus and an LM based upon it, we use a Viterbi-like algorithm to segment another set of data. Then we build an LM based on the second set and use the resulting LM to re-segment the first corpus. The alternating procedure provides a self-organized way for the segmenter to automatically detect unseen words and correct segmentation errors. Our preliminary experiment shows that the alternating procedure not only improves the accuracy of our segmentation, but also discovers unseen words surprisingly well. The resulting word-based LM has a perplexity of 188 on a general Chinese corpus.
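
The alternating segment/re-estimate loop can be sketched with a toy unigram model; the substring bootstrap, smoothing constant and maximum word length are our assumptions, not the paper's word-based LM:

```python
import math
from collections import Counter

def viterbi_segment(text, probs, max_len=4):
    """Best segmentation of `text` under a unigram word model
    (a Viterbi-like dynamic program over word boundaries)."""
    n = len(text)
    best = [0.0] + [-math.inf] * n       # best log-prob of each prefix
    back = [0] * (n + 1)
    for i in range(1, n + 1):
        for j in range(max(0, i - max_len), i):
            lp = best[j] + math.log(probs.get(text[j:i], 1e-8))
            if lp > best[i]:
                best[i], back[i] = lp, j
    words, i = [], n
    while i > 0:
        words.append(text[back[i]:i])
        i = back[i]
    return words[::-1]

def iterate_lm(corpus, rounds=5, max_len=4):
    """Alternate segmenting and re-estimating the unigram LM, the
    chicken-and-egg loop described above. Bootstrap: count every
    substring up to max_len as a candidate word."""
    counts = Counter(line[j:i] for line in corpus
                     for i in range(1, len(line) + 1)
                     for j in range(max(0, i - max_len), i))
    for _ in range(rounds):
        total = sum(counts.values())
        probs = {w: c / total for w, c in counts.items()}
        counts = Counter(w for line in corpus
                         for w in viterbi_segment(line, probs, max_len))
    return probs
```

Each round, frequently recurring multi-character strings gain probability mass and start to be segmented as units, which is the mechanism by which the procedure discovers unseen words.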

  6. Spatial optimum collocation model of urban land and its algorithm

    Science.gov (United States)

    Kong, Xiangqiang; Li, Xinyun

    2007-06-01

    Optimizing the allocation of urban land means laying out and positioning the various types of land use in space so as to maximize the overall benefits of urban space (economic, social and environmental) using appropriate methods and techniques. Two problems must be dealt with when optimizing the allocation of urban land: one is the quantitative structure, the other the spatial structure. To address these problems, and following the principle of spatial coordination, a new optimum collocation model for urban land is put forward in this text. In the model, we give a target function and a set of "soft" constraint conditions, and the area proportions of the various types of land use are restricted to corresponding allowed ranges. A spatial genetic algorithm is used to manipulate and search the space of urban land, so that the optimum spatial collocation scheme can be gradually approached; the three basic operations of reproduction, crossover and mutation are all performed in space. Taking the built-up areas of Jinan as an example, we carried out a spatial optimum collocation experiment on urban land; the spatial aggregation of the various land-use types was improved, and a satisfactory result was obtained.

  7. Hybrid nested sampling algorithm for Bayesian model selection applied to inverse subsurface flow problems

    Energy Technology Data Exchange (ETDEWEB)

    Elsheikh, Ahmed H., E-mail: aelsheikh@ices.utexas.edu [Institute for Computational Engineering and Sciences (ICES), University of Texas at Austin, TX (United States); Institute of Petroleum Engineering, Heriot-Watt University, Edinburgh EH14 4AS (United Kingdom); Wheeler, Mary F. [Institute for Computational Engineering and Sciences (ICES), University of Texas at Austin, TX (United States); Hoteit, Ibrahim [Department of Earth Sciences and Engineering, King Abdullah University of Science and Technology (KAUST), Thuwal (Saudi Arabia)

    2014-02-01

    A Hybrid Nested Sampling (HNS) algorithm is proposed for efficient Bayesian model calibration and prior model selection. The proposed algorithm combines the Nested Sampling (NS) algorithm, Hybrid Monte Carlo (HMC) sampling and gradient estimation using the Stochastic Ensemble Method (SEM). NS is an efficient sampling algorithm that can be used for Bayesian calibration and for estimating the Bayesian evidence for prior model selection, and it has the advantage of computational feasibility. Within the nested sampling algorithm, a constrained sampling step is performed. For this step, we utilize HMC to reduce the correlation between successive sampled states. HMC relies on the gradient of the logarithm of the posterior distribution, which we estimate using a stochastic ensemble method based on an ensemble of directional derivatives. SEM requires only forward model runs; the simulator is used as a black box and no adjoint code is needed. The developed HNS algorithm is successfully applied for Bayesian calibration and prior model selection of several nonlinear subsurface flow problems.

  8. A Cluster Algorithm for the 2-D SU(3) × SU(3) Chiral Model

    Science.gov (United States)

    Ji, Da-ren; Zhang, Jian-bo

    1996-07-01

    To extend the cluster algorithm to SU(N) × SU(N) chiral models, a variant version of Wolff's cluster algorithm is proposed and tested for the 2-dimensional SU(3) × SU(3) chiral model. The results show that the new method can reduce the critical slowing down in SU(3) × SU(3) chiral model.

  9. The semi-detached binary system IU Per and its intrinsic oscillation

    Institute of Scientific and Technical Information of China (English)

    Xiao-Bin Zhang; Rong-Xian Zhang; Qi-Sheng Li

    2009-01-01

    We present long-term time-resolved photometry of the short-period eclipsing binary IU Per. It confirms the intrinsic δ Scuti-like pulsation of the system reported by Kim et al. With the obtained data, an orbital period study and an eclipsing light curve synthesis based on the Wilson-Devinney method were carried out. The photometric solution reveals a semi-detached configuration with the less-massive component filling its own Roche lobe. By subtracting the eclipsing light changes from the data, we obtained the pure pulsating light curve of the mass-accreting primary component. A Fourier analysis reveals four pulsation modes with confidence larger than 99%. A mode identification based on the results of the photometric solution was made. It suggests that the star may be in radial pulsation with a fundamental period of about 0.0628 d. A brief discussion concerning the evolutionary status and the pulsation nature is finally given.

  10. Oral supplementation with cholecalciferol 800 IU ameliorates albuminuria in Chinese type 2 diabetic patients with nephropathy.

    Directory of Open Access Journals (Sweden)

    Yan Huang

    Full Text Available BACKGROUND: Low vitamin D levels can be associated with albuminuria, and vitamin D analogs are effective anti-proteinuric agents. The aim of this study was to investigate differences in vitamin D levels between those with micro- and those with macroalbuminuria, and to determine whether low-dose cholecalciferol increases vitamin D levels and ameliorates albuminuria. METHODS: Two studies were performed in which 25-OH vitamin D3 (25(OH)D3) concentrations were determined by electrochemiluminescence immunoassay: (1) a cross-sectional study of patients with type 2 diabetes mellitus (T2DM) (n = 481) and healthy controls (n = 78); and (2) a longitudinal study of T2DM patients with albuminuria treated with conventional doses, 800 IU, of cholecalciferol for 6 months (n = 22), and a control group (n = 24). RESULTS: (1) Cross-sectional study: Compared to controls and T2DM patients with normoalbuminuria, serum 25(OH)D3 concentrations were significantly lower in patients with macroalbuminuria, but not in those with microalbuminuria. Serum 25(OH)D3 levels were independently correlated with microalbuminuria. (2) Longitudinal study: Cholecalciferol significantly decreased microalbuminuria in the early stages of treatment, in conjunction with an increase in serum 25(OH)D3 levels. CONCLUSIONS: Low vitamin D levels are common in type 2 diabetic patients with albuminuria, particularly in patients with macroalbuminuria, but not in those with microalbuminuria. Conventional doses of cholecalciferol may have antiproteinuric effects on Chinese type 2 diabetic patients with nephropathy.

  11. Modeling of genetic algorithms with a finite population

    NARCIS (Netherlands)

    C.H.M. van Kemenade

    1997-01-01

    Cross-competition between non-overlapping building blocks can strongly influence the performance of evolutionary algorithms. The choice of the selection scheme can have a strong influence on the performance of a genetic algorithm. This paper describes a number of different genetic

  12. Updates to Model Algorithms & Inputs for the Biogenic Emissions Inventory System (BEIS) Model

    Science.gov (United States)

    We have developed new canopy emission algorithms and land use data for BEIS. Simulations with BEIS v3.4 and these updates in CMAQ v5.0.2 are compared to the Model of Emissions of Gases and Aerosols from Nature (MEGAN) and evaluated against observatio...

  13. Algorithms for a parallel implementation of Hidden Markov Models with a small state space

    DEFF Research Database (Denmark)

    Nielsen, Jesper; Sand, Andreas

    2011-01-01

    Two of the most important algorithms for Hidden Markov Models are the forward and the Viterbi algorithms. We show how formulating these using linear algebra naturally lends itself to parallelization. Although the obtained algorithms are slow for Hidden Markov Models with large state spaces, they require very little communication between processors, and are fast in practice on models with a small state space. We have tested our implementation against two other implementations on artificial data and observe a speed-up of roughly a factor of 5 for the forward algorithm and more than 6 for the Viterbi algorithm. We also tested our algorithm in the Coalescent Hidden Markov Model framework, where it gave a significant speed-up.
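
The linear-algebra formulation of the forward algorithm can be sketched as follows (a generic textbook form, not the authors' parallel implementation):

```python
import numpy as np

def forward_matrix(T, E, pi, obs):
    """Likelihood P(obs) via the forward algorithm in matrix form:
    alpha_0 = pi * E[:, obs_0],  alpha_t = E[:, obs_t] * (T^T alpha_{t-1}).
    Each step is one matrix-vector product, which is what makes the
    linear-algebra formulation straightforward to parallelise."""
    alpha = pi * E[:, obs[0]]
    for o in obs[1:]:
        alpha = E[:, o] * (T.T @ alpha)
    return float(alpha.sum())
```

Because every step is a dense matrix-vector product, the per-step work parallelises with standard BLAS-style techniques, while the communication between steps is just the small alpha vector, matching the low-communication property described above.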

  14. A Distributed and Deterministic TDMA Algorithm for Write-All-With-Collision Model

    CERN Document Server

    Arumugam, Mahesh

    2008-01-01

    Several self-stabilizing time division multiple access (TDMA) algorithms are proposed for sensor networks. In addition to providing a collision-free communication service, such algorithms enable the transformation of programs written in abstract models considered in distributed computing literature into a model consistent with sensor networks, i.e., write all with collision (WAC) model. Existing TDMA slot assignment algorithms have one or more of the following properties: (i) compute slots using a randomized algorithm, (ii) assume that the topology is known upfront, and/or (iii) assign slots sequentially. If these algorithms are used to transform abstract programs into programs in WAC model then the transformed programs are probabilistically correct, do not allow the addition of new nodes, and/or converge in a sequential fashion. In this paper, we propose a self-stabilizing deterministic TDMA algorithm where a sensor is aware of only its neighbors. We show that the slots are assigned to the sensors in a concu...

  15. Firefly algorithm versus genetic algorithm as powerful variable selection tools and their effect on different multivariate calibration models in spectroscopy: A comparative study

    Science.gov (United States)

    Attia, Khalid A. M.; Nassar, Mohammed W. I.; El-Zeiny, Mohamed B.; Serag, Ahmed

    2017-01-01

    For the first time, a new variable selection method based on swarm intelligence, namely the firefly algorithm, is coupled with three different multivariate calibration models, namely concentration residual augmented classical least squares, artificial neural network and support vector regression, in UV spectral data. A comparative study between the firefly algorithm and the well-known genetic algorithm was developed. The discussion revealed the superiority of using this new powerful algorithm over the well-known genetic algorithm. Moreover, different statistical tests were performed and no significant differences were found between all the models regarding their predictabilities. This ensures that simpler and faster models were obtained without any deterioration of the quality of the calibration.

  16. PARALLELISATION OF THE MODEL-BASED ITERATIVE RECONSTRUCTION ALGORITHM DIRA.

    Science.gov (United States)

    Örtenberg, A; Magnusson, M; Sandborg, M; Alm Carlsson, G; Malusek, A

    2016-06-01

    New paradigms for parallel programming have been devised to simplify software development on multi-core processors and many-core graphical processing units (GPU). Despite their obvious benefits, the parallelisation of existing computer programs is not an easy task. In this work, the use of the Open Multiprocessing (OpenMP) and Open Computing Language (OpenCL) frameworks is considered for the parallelisation of the model-based iterative reconstruction algorithm DIRA with the aim to significantly shorten the code's execution time. Selected routines were parallelised using OpenMP and OpenCL libraries; some routines were converted from MATLAB to C and optimised. Parallelisation of the code with OpenMP was easy and resulted in an overall speedup of 15 on a 16-core computer. Parallelisation with OpenCL was more difficult owing to differences between the central processing unit and GPU architectures. The resulting speedup was substantially lower than the theoretical peak performance of the GPU; the cause was explained.

  17. Parallel algorithms for interactive manipulation of digital terrain models

    Science.gov (United States)

    Davis, E. W.; Mcallister, D. F.; Nagaraj, V.

    1988-01-01

    Interactive three-dimensional graphics applications, such as terrain data representation and manipulation, require extensive arithmetic processing. Massively parallel machines are attractive for this application since they offer high computational rates, and grid connected architectures provide a natural mapping for grid based terrain models. Presented here are algorithms for data movement on the massive parallel processor (MPP) in support of pan and zoom functions over large data grids. It is an extension of earlier work that demonstrated real-time performance of graphics functions on grids that were equal in size to the physical dimensions of the MPP. When the dimensions of a data grid exceed the processing array size, data is packed in the array memory. Windows of the total data grid are interactively selected for processing. Movement of packed data is needed to distribute items across the array for efficient parallel processing. Execution time for data movement was found to exceed that for arithmetic aspects of graphics functions. Performance figures are given for routines written in MPP Pascal.

  18. Unified C/VHDL Model Generation of FPGA-based LHCb VELO algorithms

    CERN Document Server

    Muecke, Manfred

    2007-01-01

    We show an alternative design approach for signal processing algorithms implemented on FPGAs. Instead of writing VHDL code for implementation and maintaining a C-model for algorithm simulation, we derive both models from one common source, allowing generation of synthesizable VHDL and cycle- and bit-accurate C code. We have tested our approach on the LHCb VELO pre-processing algorithms and report on experiences gained during the course of our work.

  19. Target Impact Detection Algorithm Using Computer-aided Design (CAD) Model Geometry

    Science.gov (United States)

    2014-09-01

    Technical Report ARMET-TR-13024 (AD-E403 558), Target Impact Detection Algorithm Using Computer-Aided Design (CAD) Model Geometry. This report documents a method and algorithm to export geometry from a three-dimensional, computer-aided design (CAD) model in a format that can be

  20. Using memristor crossbar structure to implement a novel adaptive real time fuzzy modeling algorithm

    OpenAIRE

    Afrakoti, Iman Esmaili Paeen; Shouraki, Saeed Bagheri; Merrikhbayat, Farnood

    2013-01-01

    Although fuzzy techniques promise fast meanwhile accurate modeling and control abilities for complicated systems, different difficulties have been revealed in real situation implementations. Usually there is no escape of iterative optimization based on crisp domain algorithms. Recently memristor structures appeared promising to implement neural network structures and fuzzy algorithms. In this paper a novel adaptive real-time fuzzy modeling algorithm is proposed which uses active learning me...

  1. Adaptation of an Evolutionary Algorithm in Modeling Electric Circuits

    Directory of Open Access Journals (Sweden)

    J. Hájek

    2010-01-01

    Full Text Available This paper describes the influence of setting control parameters of a differential evolutionary algorithm (DE) and the influence of adapting these parameters on the simulation of electric circuits and their components. Various DE algorithm strategies are investigated, and also the influence of adapting the controlling parameters (Cr, F) during simulation and the effect of sample size. Optimizing an equivalent circuit diagram is chosen as a test task. Several strategies and settings of a DE algorithm are evaluated according to their convergence to the right solution.
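    The control parameters Cr and F mentioned above govern crossover and mutation in DE. A bare-bones DE/rand/1/bin sketch with fixed (non-adaptive) Cr and F is shown below; the sphere test function, bounds and settings are illustrative, not the paper's circuit-fitting task:

```python
# Minimal differential evolution (DE/rand/1/bin) sketch with fixed control
# parameters F (mutation scale) and Cr (crossover rate).
import random

def de(f, dim, bounds, np_=20, F=0.8, Cr=0.9, gens=200, seed=1):
    rng = random.Random(seed)
    lo, hi = bounds
    pop = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(np_)]
    for _ in range(gens):
        for i in range(np_):
            # three distinct individuals other than the target
            a, b, c = rng.sample([x for j, x in enumerate(pop) if j != i], 3)
            jr = rng.randrange(dim)            # guaranteed crossover index
            trial = [a[j] + F * (b[j] - c[j])
                     if (rng.random() < Cr or j == jr) else pop[i][j]
                     for j in range(dim)]
            if f(trial) <= f(pop[i]):          # greedy selection
                pop[i] = trial
    return min(pop, key=f)

best = de(lambda x: sum(v * v for v in x), dim=3, bounds=(-5, 5))
print(best)  # near [0, 0, 0]
```

    Adapting Cr and F during the run, as the paper investigates, amounts to replacing the two constants with per-generation update rules.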

  2. PM Synchronous Motor Dynamic Modeling with Genetic Algorithm ...

    African Journals Online (AJOL)

    Adel

    intelligence like neural network, genetic algorithm, etc (El Shahat and El Shewy, ..... maximum power factor has the most powerful effect on all various machine .... Artificial Intelligence, Renewable Energy, Power System, Control Systems, PV ...

  3. Image Encryption Algorithm Based on Chaotic Economic Model

    Directory of Open Access Journals (Sweden)

    S. S. Askar

    2015-01-01

    Full Text Available In the literature, chaotic economic systems have received much attention because of their complex dynamic behaviors such as bifurcation and chaos. Recently, a few studies on the usage of these systems in cryptographic algorithms have been conducted. In this paper, a new image encryption algorithm based on a chaotic economic map is proposed. An implementation of the proposed algorithm on a plain image based on the chaotic map is performed. The obtained results show that the proposed algorithm can successfully encrypt and decrypt the images with the same security keys. The security analysis is encouraging and shows that the encrypted images have good information entropy and very low correlation coefficients and the distribution of the gray values of the encrypted image has random-like behavior.
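    The overall encrypt/decrypt structure can be sketched with a chaotic map driving an XOR keystream. Note that the well-known logistic map stands in here for the paper's chaotic economic map, and the key values (x0, r) are invented, so this only illustrates the scheme's shape:

```python
# XOR stream-cipher sketch driven by a chaotic map: the same key parameters
# regenerate the same keystream, so encryption and decryption are one function.

def keystream(x0, r, n):
    x, out = x0, []
    for _ in range(n):
        x = r * x * (1.0 - x)            # logistic map iteration
        out.append(int(x * 256) % 256)   # quantize trajectory to a byte
    return out

def crypt(pixels, x0=0.3456, r=3.99):
    ks = keystream(x0, r, len(pixels))
    return [p ^ k for p, k in zip(pixels, ks)]   # XOR is its own inverse

plain = [12, 200, 47, 255, 0, 89]        # toy "image" as a byte list
cipher = crypt(plain)
print(crypt(cipher) == plain)            # True: same keys recover the image
```

    Decryption succeeds only with the exact same key parameters, since chaotic trajectories diverge rapidly for even slightly different x0 or r.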

  4. Combinatorial Clustering Algorithm of Quantum-Behaved Particle Swarm Optimization and Cloud Model

    Directory of Open Access Journals (Sweden)

    Mi-Yuan Shan

    2013-01-01

    Full Text Available We propose a combinatorial clustering algorithm of cloud model and quantum-behaved particle swarm optimization (COCQPSO) to solve the stochastic problem. The algorithm employs a novel probability model as well as a permutation-based local search method. We set the parameters of COCQPSO based on the design of experiment. In the comprehensive computational study, we scrutinize the performance of COCQPSO on a set of widely used benchmark instances. By benchmarking the combinatorial clustering algorithm with state-of-the-art algorithms, we can show that its performance compares very favorably. The fuzzy combinatorial optimization algorithm of cloud model and quantum-behaved particle swarm optimization (FCOCQPSO) in vague sets (IVSs) is more expressive than the other fuzzy sets. Finally, numerical examples show the clustering effectiveness of the COCQPSO and FCOCQPSO clustering algorithms, which is extremely remarkable.

  5. An implicit algorithm for a rate-dependent ductile failure model

    Science.gov (United States)

    Zuo, Q. H.; Rice, Jeremy R.

    2008-10-01

    An implicit numerical algorithm has been developed for a rate-dependent model for damage and failure of ductile materials under high-rate dynamic loading [F. L. Addessio and J. N. Johnson, J. Appl. Phys. 74, 1640 (1993)]. Over each time step, the algorithm first implicitly determines the equilibrium state on a Gurson surface, and then calculates the final state by solving viscous relaxation equations, also implicitly. Numerical examples are given to demonstrate the key features of the algorithm. Compared to the explicit algorithm used previously, the current algorithm allows significantly larger time steps that can be used in the analysis. As the viscosity of the material vanishes, the results of the rate-dependent model are shown here to converge to that of the corresponding rate-independent model, a result not achieved with the explicit algorithm.

  6. The Evaluation Model About the Result of Enterprise Technological Innovation Based on DAGF Algorithm

    Institute of Scientific and Technical Information of China (English)

    Like Mao; Zigang Zhang

    2004-01-01

    Based on the DAGF algorithm, an evaluation model for the results of enterprise technological innovation is proposed. The construction of its system of evaluation indicators and the DAGF algorithm itself are discussed in detail. In addition, a case study shows that the model is suitable for evaluating the results of enterprise technological innovation.

  7. Dual geometric worm algorithm for two-dimensional discrete classical lattice models

    Science.gov (United States)

    Hitchcock, Peter; Sørensen, Erik S.; Alet, Fabien

    2004-07-01

    We present a dual geometrical worm algorithm for two-dimensional Ising models. The existence of such dual algorithms was first pointed out by Prokof’ev and Svistunov [N. Prokof’ev and B. Svistunov, Phys. Rev. Lett. 87, 160601 (2001)]. The algorithm is defined on the dual lattice and is formulated in terms of bond variables and can therefore be generalized to other two-dimensional models that can be formulated in terms of bond variables. We also discuss two related algorithms formulated on the direct lattice, applicable in any dimension. These latter algorithms turn out to be less efficient but of considerable intrinsic interest. We show how such algorithms quite generally can be “directed” by minimizing the probability for the worms to erase themselves. Explicit proofs of detailed balance are given for all the algorithms. In terms of computational efficiency the dual geometrical worm algorithm is comparable to well known cluster algorithms such as the Swendsen-Wang and Wolff algorithms, however, it is quite different in structure and allows for a very simple and efficient implementation. The dual algorithm also allows for a very elegant way of calculating the domain wall free energy.

  8. A comparison of computational efficiencies of stochastic algorithms in terms of two infection models.

    Science.gov (United States)

    Banks, H Thomas; Hu, Shuhua; Joyner, Michele; Broido, Anna; Canter, Brandi; Gayvert, Kaitlyn; Link, Kathryn

    2012-07-01

    In this paper, we investigate three particular algorithms: a stochastic simulation algorithm (SSA), and explicit and implicit tau-leaping algorithms. To compare these methods, we used them to analyze two infection models: a Vancomycin-resistant enterococcus (VRE) infection model at the population level, and a Human Immunodeficiency Virus (HIV) within-host infection model. While the first has a low species count and few transitions, the second is more complex with a comparable number of species involved. The relative efficiency of each algorithm is determined based on computational time and degree of precision required. The numerical results suggest that all three algorithms have similar computational efficiency for the simpler VRE model, and the SSA is the best choice due to its simplicity and accuracy. In addition, we have found that with the larger and more complex HIV model, implementation and modification of tau-leaping methods are preferred.
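    The SSA (Gillespie) step, drawing an exponential waiting time and then a reaction proportional to its propensity, can be sketched on a toy immigration-death process; the model and rate constants below are invented and are not the paper's VRE or HIV models:

```python
# Gillespie stochastic simulation algorithm (SSA) sketch for a toy process:
# production at constant rate k, decay at rate g*X; stationary mean is k/g.
import random

def ssa(k=10.0, g=1.0, x0=0, t_end=20.0, rng=random):
    t, x = 0.0, x0
    while True:
        a1, a2 = k, g * x                # propensities of the two reactions
        a0 = a1 + a2
        t += rng.expovariate(a0)         # exponential time to next reaction
        if t > t_end:
            return x
        if rng.random() * a0 < a1:       # pick a reaction by its propensity
            x += 1
        else:
            x -= 1

rng = random.Random(42)
samples = [ssa(rng=rng) for _ in range(500)]
print(sum(samples) / len(samples))       # sample mean, close to k/g = 10
```

    Tau-leaping replaces the one-reaction-at-a-time loop with Poisson-distributed batches of reactions over a fixed step tau, trading exactness for speed on larger models.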

  9. Prediction and Research on Vegetable Price Based on Genetic Algorithm and Neural Network Model

    Institute of Scientific and Technical Information of China (English)

    2011-01-01

    Considering the complexity of vegetable price forecasting, a prediction model for vegetable prices was set up by applying a neural network based on the genetic algorithm, exploiting the characteristics of both genetic algorithms and neural networks. Taking mushrooms as an example, the parameters of the model are analyzed through experiment. In the end, the results of the genetic algorithm and the BP neural network are compared. The results show that the absolute prediction error is on the order of 10%, and mostly within the range of 15-20%. The accuracy of the genetic-algorithm-based neural network is higher than that of the BP neural network model, especially when the absolute prediction error is within 20%, which demonstrates the favorable generalization capability of the model.

  10. Pharmacokinetics of a single oral dose of vitamin D3 (70,000 IU) in pregnant and non-pregnant women

    Directory of Open Access Journals (Sweden)

    Roth Daniel E

    2012-12-01

    Full Text Available Abstract Background Improvements in antenatal vitamin D status may have maternal-infant health benefits. To inform the design of prenatal vitamin D3 trials, we conducted a pharmacokinetic study of single-dose vitamin D3 supplementation in women of reproductive age. Methods A single oral vitamin D3 dose (70,000 IU) was administered to 34 non-pregnant and 27 pregnant women (27 to 30 weeks gestation) enrolled in Dhaka, Bangladesh (23°N). The primary pharmacokinetic outcome measure was the change in serum 25-hydroxyvitamin D concentration over time, estimated using model-independent pharmacokinetic parameters. Results Baseline mean serum 25-hydroxyvitamin D concentration was 54 nmol/L (95% CI 47, 62) in non-pregnant participants and 39 nmol/L (95% CI 34, 45) in pregnant women. Mean peak rise in serum 25-hydroxyvitamin D concentration above baseline was similar in non-pregnant and pregnant women (28 nmol/L and 32 nmol/L, respectively). However, the rate of rise was slightly slower in pregnant women (i.e., lower 25-hydroxyvitamin D on day 2 and higher 25-hydroxyvitamin D on day 21 versus non-pregnant participants). Overall, average 25-hydroxyvitamin D concentration was 19 nmol/L above baseline during the first month. Supplementation did not induce hypercalcemia, and there were no supplement-related adverse events. Conclusions The response to a single 70,000 IU dose of vitamin D3 was similar in pregnant and non-pregnant women in Dhaka and consistent with previous studies in non-pregnant adults. These preliminary data support the further investigation of antenatal vitamin D3 regimens involving doses of ≤70,000 IU in regions where maternal-infant vitamin D deficiency is common. Trial registration ClinicalTrials.gov (NCT00938600)

  11. A Numerical Algorithm for the Solution of a Phase-Field Model of Polycrystalline Materials

    Energy Technology Data Exchange (ETDEWEB)

    Dorr, M R; Fattebert, J; Wickett, M E; Belak, J F; Turchi, P A

    2008-12-04

    We describe an algorithm for the numerical solution of a phase-field model (PFM) of microstructure evolution in polycrystalline materials. The PFM system of equations includes a local order parameter, a quaternion representation of local orientation and a species composition parameter. The algorithm is based on the implicit integration of a semidiscretization of the PFM system using a backward difference formula (BDF) temporal discretization combined with a Newton-Krylov algorithm to solve the nonlinear system at each time step. The BDF algorithm is combined with a coordinate projection method to maintain quaternion unit length, which is related to an important solution invariant. A key element of the Newton-Krylov algorithm is the selection of a preconditioner to accelerate the convergence of the Generalized Minimum Residual algorithm used to solve the Jacobian linear system in each Newton step. Results are presented for the application of the algorithm to 2D and 3D examples.

  12. Models based on "out-of Kilter" algorithm

    Science.gov (United States)

    Adler, M. J.; Drobot, R.

    2012-04-01

    In the case of many water users along a river stretch, it is very important during low flows and drought periods to develop an optimization model for water allocation that covers all needs under certain predefined constraints, depending on the Contingency Plan for drought management. Such a program was developed during the implementation of the WATMAN Project in Romania (WATMAN Project, 2005-2006, USTDA) for the Arges-Dambovita-Ialomita basin water transfers. This good practice was proposed for the WATER CoRe Project Good Practice Handbook for Drought Management (Interreg IVC, 2011), to be applied in the European regions. Two types of simulation-optimization models based on an improved version of the out-of-kilter algorithm as optimization technique have been developed and used in Romania: • models for founding the short-term operation of a WMS, • models generically named SIMOPT that aim at the analysis of long-term WMS operation and have as their main results the statistical WMS functional parameters. A real WMS is modeled by an arcs-nodes network, so the real WMS operation problem becomes a problem of flows in networks. The nodes and oriented arcs, as well as their characteristics such as lower and upper limits and associated costs, are the direct analog of the physical and operational WMS characteristics. Arcs represent both physical and conventional elements of the WMS such as river branches, channels or pipes, water user demands or other water management requirements, tranches of water reservoir volumes, and water levels in channels or rivers; nodes are junctions of at least two arcs and stand for locations of lakes or water reservoirs and/or confluences of river branches, water withdrawal or wastewater discharge points, etc. Quantitative features of water resources, water users and water reservoirs or other water works are expressed as constraints of non-violating the lower and upper limits assigned on arcs. Options of WMS functioning, i.e. water retention/discharge in

  13. Modeling and performance analysis of GPS vector tracking algorithms

    Science.gov (United States)

    Lashley, Matthew

    This dissertation provides a detailed analysis of GPS vector tracking algorithms and the advantages they have over traditional receiver architectures. Standard GPS receivers use a decentralized architecture that separates the tasks of signal tracking and position/velocity estimation. Vector tracking algorithms combine the two tasks into a single algorithm. The signals from the various satellites are processed collectively through a Kalman filter. The advantages of vector tracking over traditional, scalar tracking methods are thoroughly investigated. A method for making a valid comparison between vector and scalar tracking loops is developed. This technique avoids the ambiguities encountered when attempting to make a valid comparison between tracking loops (which are characterized by noise bandwidths and loop order) and the Kalman filters (which are characterized by process and measurement noise covariance matrices) that are used by vector tracking algorithms. The improvement in performance offered by vector tracking is calculated in multiple different scenarios. Rule of thumb analysis techniques for scalar Frequency Lock Loops (FLL) are extended to the vector tracking case. The analysis tools provide a simple method for analyzing the performance of vector tracking loops. The analysis tools are verified using Monte Carlo simulations. Monte Carlo simulations are also used to study the effects of carrier to noise power density (C/N0) ratio estimation and the advantage offered by vector tracking over scalar tracking. The improvement from vector tracking ranges from 2.4 to 6.2 dB in various scenarios. The difference in the performance of the three vector tracking architectures is analyzed. The effects of using a federated architecture with and without information sharing between the receiver's channels are studied. A combination of covariance analysis and Monte Carlo simulation is used to analyze the performance of the three algorithms. 
The federated algorithm without
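    As a minimal illustration of the Kalman-filter machinery at the heart of vector tracking, the sketch below estimates a single constant from noisy measurements; this is the simplest relative of the full vector tracking filters, and the noise variances and true value are made-up numbers:

```python
# Scalar Kalman filter sketch: predict (state constant, add process noise),
# then correct with the measurement residual weighted by the Kalman gain.
import random

def kalman_constant(zs, q=1e-5, r=0.5, x0=0.0, p0=1.0):
    x, p = x0, p0
    for z in zs:
        p += q                      # predict step: inflate variance
        k = p / (p + r)             # Kalman gain
        x += k * (z - x)            # update with measurement residual
        p *= (1.0 - k)              # posterior variance
    return x

rng = random.Random(7)
true = 3.0
zs = [true + rng.gauss(0.0, 0.5) for _ in range(200)]
print(kalman_constant(zs))          # estimate near the true value 3.0
```

    In a vector tracking receiver, the scalar state is replaced by position/velocity/clock states and the gain by a matrix computed from the process and measurement noise covariances mentioned in the abstract.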

  14. Split Bregman Iteration Algorithm for Image Deblurring Using Fourth-Order Total Bounded Variation Regularization Model

    Directory of Open Access Journals (Sweden)

    Yi Xu

    2013-01-01

    Full Text Available We propose a fourth-order total bounded variation regularization model which could reduce undesirable effects effectively. Based on this model, we introduce an improved split Bregman iteration algorithm to obtain the optimum solution. The convergence property of our algorithm is provided. Numerical experiments show the more excellent visual quality of the proposed model compared with the second-order total bounded variation model proposed by Liu and Huang (2010).

  15. Discrete channel modelling based on genetic algorithm and simulated annealing for training hidden Markov model

    Institute of Scientific and Technical Information of China (English)

    Zhao Zhi-Jin; Zheng Shi-Lian; Xu Chun-Yun; Kong Xian-Zheng

    2007-01-01

    Hidden Markov models (HMMs) have been used to model burst error sources of wireless channels. This paper proposes a hybrid method of using genetic algorithm (GA) and simulated annealing (SA) to train HMM for discrete channel modelling. The proposed method is compared with pure GA, and experimental results show that the HMMs trained by the hybrid method can better describe the error sequences due to SA's ability of facilitating hill-climbing at the later stage of the search. The burst error statistics of the HMMs trained by the proposed method and the corresponding error sequences are also presented to validate the proposed method.

  16. A spatially constrained generative model and an EM algorithm for image segmentation.

    Science.gov (United States)

    Diplaros, Aristeidis; Vlassis, Nikos; Gevers, Theo

    2007-05-01

    In this paper, we present a novel spatially constrained generative model and an expectation-maximization (EM) algorithm for model-based image segmentation. The generative model assumes that the unobserved class labels of neighboring pixels in the image are generated by prior distributions with similar parameters, where similarity is defined by entropic quantities relating to the neighboring priors. In order to estimate model parameters from observations, we derive a spatially constrained EM algorithm that iteratively maximizes a lower bound on the data log-likelihood, where the penalty term is data-dependent. Our algorithm is very easy to implement and is similar to the standard EM algorithm for Gaussian mixtures with the main difference that the labels posteriors are "smoothed" over pixels between each E- and M-step by a standard image filter. Experiments on synthetic and real images show that our algorithm achieves competitive segmentation results compared to other Markov-based methods, and is in general faster.
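    Under simplifying assumptions (a 1-D "image" and a two-class Gaussian mixture), the E-step/M-step loop with posterior smoothing reads as below. All data and numeric settings are invented, the paper's model is 2-D, and its spatial penalty is entropy-based rather than the plain moving average used here as a stand-in:

```python
# EM sketch for two-class segmentation of a 1-D signal: standard Gaussian
# mixture EM, with the label posteriors smoothed between the E- and M-steps.
import math, random

def smooth(w):
    """3-tap moving average of per-pixel posteriors (the spatial constraint)."""
    n = len(w)
    return [(w[max(i - 1, 0)] + w[i] + w[min(i + 1, n - 1)]) / 3.0
            for i in range(n)]

def gauss_pdf(v, m, s2):
    return math.exp(-(v - m) ** 2 / (2.0 * s2)) / math.sqrt(2.0 * math.pi * s2)

def em_segment(x, iters=50):
    mu, var, pi = [min(x), max(x)], [1.0, 1.0], [0.5, 0.5]
    for _ in range(iters):
        # E-step: posterior probability that each pixel belongs to class 1
        w = []
        for v in x:
            p0 = pi[0] * gauss_pdf(v, mu[0], var[0])
            p1 = pi[1] * gauss_pdf(v, mu[1], var[1])
            w.append(p1 / (p0 + p1))
        w = smooth(w)                 # smooth label posteriors before the M-step
        # M-step: re-estimate means, variances and mixing weights
        n1 = sum(w)
        n0 = len(x) - n1
        mu = [sum((1 - wi) * v for wi, v in zip(w, x)) / n0,
              sum(wi * v for wi, v in zip(w, x)) / n1]
        var = [max(1e-6, sum((1 - wi) * (v - mu[0]) ** 2 for wi, v in zip(w, x)) / n0),
               max(1e-6, sum(wi * (v - mu[1]) ** 2 for wi, v in zip(w, x)) / n1)]
        pi = [n0 / len(x), n1 / len(x)]
    labels = [1 if wi > 0.5 else 0 for wi in w]
    return labels, mu

rng = random.Random(0)
x = ([rng.gauss(0.0, 0.5) for _ in range(30)] +
     [rng.gauss(4.0, 0.5) for _ in range(30)])
labels, mu = em_segment(x)
print(labels, mu)
```

    The single added line, smoothing the posteriors with an image filter between the E- and M-steps, is exactly the structural difference from plain Gaussian-mixture EM that the abstract highlights.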

  17. Model predictive control algorithms and their application to a continuous fermenter

    Directory of Open Access Journals (Sweden)

    R. G. SILVA

    1999-06-01

    Full Text Available In many continuous fermentation processes, the control objective is to maximize productivity per unit time. The optimum operational point in the steady state can be obtained by maximizing the productivity rate using feed substrate concentration as the independent variable, with the equations of the static model as constraints. In the present study, three model-based control schemes have been developed and implemented for a continuous fermenter. The first method modifies the well-known dynamic matrix control (DMC) algorithm by making it adaptive. The other two use nonlinear model predictive control (NMPC) algorithms for calculation of control actions. The NMPC1 algorithm, which uses orthogonal collocation on finite elements, behaved similarly to NMPC2, which uses equidistant collocation. These algorithms are compared with DMC. The results obtained show the good performance of the nonlinear algorithms.

  18. A predictor-corrector algorithm to estimate the fractional flow in oil-water models

    Energy Technology Data Exchange (ETDEWEB)

    Savioli, Gabriela B [Laboratorio de Ingeniería de Reservorios, IGPUBA and Departamento de Ingeniería Química, Facultad de Ingeniería, Universidad de Buenos Aires, Av. Las Heras 2214 Piso 3 C1127AAR Buenos Aires (Argentina); Berdaguer, Elena M Fernández [Instituto de Cálculo, Facultad de Ciencias Exactas y Naturales, UBA-CONICET and Departamento de Matemática, Facultad de Ingeniería, Universidad de Buenos Aires, 1428 Buenos Aires (Argentina)], E-mail: gsavioli@di.fcen.uba.ar, E-mail: efernan@ic.fcen.uba.ar

    2008-11-01

    We introduce a predictor-corrector algorithm to estimate parameters in a nonlinear hyperbolic problem. It can be used to estimate the oil-fractional flow function from the Buckley-Leverett equation. The forward model is non-linear: the sought-for parameter is a function of the solution of the equation. Traditionally, the estimation of functions requires the selection of a fitting parametric model. The algorithm that we develop does not require a predetermined parameter model. Therefore, the estimation problem is carried out over a set of parameters which are functions. The algorithm is based on the linearization of the parameter-to-output mapping. This technique is new in the field of nonlinear estimation. It has the advantage of laying aside parametric models. The algorithm is iterative and is of predictor-corrector type. We present theoretical results on the inverse problem. We use synthetic data to test the new algorithm.

  19. Robust Optimization Model and Algorithm for Railway Freight Center Location Problem in Uncertain Environment

    Directory of Open Access Journals (Sweden)

    Xing-cai Liu

    2014-01-01

    Full Text Available Railway freight center location problem is an important issue in railway freight transport programming. This paper focuses on the railway freight center location problem in an uncertain environment. Seeing that the expected value model ignores the negative influence of disadvantageous scenarios, a robust optimization model was proposed. The robust optimization model takes the expected cost and the deviation value of the scenarios as the objective. A cloud adaptive clonal selection algorithm (C-ACSA) was presented. It combines an adaptive clonal selection algorithm with the cloud model, which can improve the convergence rate. Design of the code and progress of the algorithm were proposed. Results of the example demonstrate that the model and algorithm are effective. Compared with the expected value cases, the number of disadvantageous scenarios in the robust model reduces from 163 to 21, which proves the result of the robust model is more reliable.

  20. ADVANCED LIVER INJURY IN PATIENTS WITH CHRONIC HEPATITIS B AND VIRAL LOAD BELOW 2,000 IU/mL

    Science.gov (United States)

    de OLIVEIRA, Valter Oberdan Borges; OLIVEIRA, Juliana Passos Rocha; de FRANÇA, Eloy Vianey Carvalho; BRITO, Hugo Leite de Farias; NASCIMENTO, Tereza Virgínia; FRANÇA, Alex

    2016-01-01

    SUMMARY Introduction: According to the guidelines, the viral load of 2,000 IU/mL is considered the level to differentiate between inactive carriers and HBeAg(-) chronic hepatitis B patients. Even so, liver damage may be present in patients with lower viral load levels, mainly related to regional variations. This study aims to verify the presence of liver injury in patients with viral load below 2,000 IU/mL. Methods: Patients presenting HBsAg(+) for more than six months, Anti-HBe(+)/HBeAg(-), viral load below 2,000 IU/mL and serum ALT levels less than twice the upper limit of normality underwent liver biopsy. Clinical and laboratory characteristics were evaluated in relation to the degree of histologic alteration. Liver injury was considered advanced when F ≥ 2 and/or A ≥ 2 by the METAVIR classification. Results: 11/27 (40.7%) patients had advanced liver injury, with a mean viral load of 701.0 (± 653.7) IU/mL versus 482.8 (± 580.0) IU/mL in patients with mild injury. The comparison between the mean values of the two groups did not find a statistical difference (p = 0.37). The average of serum aminotransferases was not able to differentiate light liver injury from advanced injury. Conclusions: In this study, one evaluation of viral load did not exclude the presence of advanced liver damage. Pathologic assessment is an important tool to diagnose advanced liver damage and should be performed in patients with a low viral load to indicate early antiviral treatment. PMID:27680170

  1. ADVANCED LIVER INJURY IN PATIENTS WITH CHRONIC HEPATITIS B AND VIRAL LOAD BELOW 2,000 IU/mL

    Directory of Open Access Journals (Sweden)

    Valter Oberdan Borges de OLIVEIRA

    Full Text Available SUMMARY Introduction: According to the guidelines, the viral load of 2,000 IU/mL is considered the level to differentiate between inactive carriers and HBeAg(-) chronic hepatitis B patients. Even so, liver damage may be present in patients with lower viral load levels, mainly related to regional variations. This study aims to verify the presence of liver injury in patients with viral load below 2,000 IU/mL. Methods: Patients presenting HBsAg(+) for more than six months, Anti-HBe(+)/HBeAg(-), viral load below 2,000 IU/mL and serum ALT levels less than twice the upper limit of normality underwent liver biopsy. Clinical and laboratory characteristics were evaluated in relation to the degree of histologic alteration. Liver injury was considered advanced when F ≥ 2 and/or A ≥ 2 by the METAVIR classification. Results: 11/27 (40.7%) patients had advanced liver injury, with a mean viral load of 701.0 (± 653.7) IU/mL versus 482.8 (± 580.0) IU/mL in patients with mild injury. The comparison between the mean values of the two groups did not find a statistical difference (p = 0.37). The average of serum aminotransferases was not able to differentiate light liver injury from advanced injury. Conclusions: In this study, one evaluation of viral load did not exclude the presence of advanced liver damage. Pathologic assessment is an important tool to diagnose advanced liver damage and should be performed in patients with a low viral load to indicate early antiviral treatment.

  2. Single-cluster algorithm for the site-bond-correlated Ising model

    Science.gov (United States)

    Campos, P. R. A.; Onody, R. N.

    1997-12-01

    We extend the Wolff algorithm to include correlated spin interactions in diluted magnetic systems. This algorithm is applied to study the site-bond-correlated Ising model on a two-dimensional square lattice. We use a finite-size scaling procedure to obtain the phase diagram in the temperature-concentration space. We also have verified that the autocorrelation time diminishes in the presence of dilution and correlation, showing that the Wolff algorithm performs even better in such situations.
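
    For reference, the core of the standard Wolff single-cluster update for the plain (undiluted, uncorrelated) 2D Ising model can be sketched as follows; this is a minimal illustration, not the site-bond-correlated variant studied above, which modifies the bond-activation rule.

```python
import math
import random

def wolff_step(spins, L, beta, rng=random):
    """One Wolff single-cluster update on an L x L periodic Ising lattice.

    spins: dict mapping (x, y) -> +1 or -1.  Returns the cluster size."""
    p_add = 1.0 - math.exp(-2.0 * beta)          # bond-activation probability
    seed = (rng.randrange(L), rng.randrange(L))  # random seed site
    s0 = spins[seed]
    cluster, stack = {seed}, [seed]
    while stack:                                 # grow the cluster
        x, y = stack.pop()
        for n in (((x + 1) % L, y), ((x - 1) % L, y),
                  (x, (y + 1) % L), (x, (y - 1) % L)):
            if n not in cluster and spins[n] == s0 and rng.random() < p_add:
                cluster.add(n)
                stack.append(n)
    for site in cluster:                         # flip the whole cluster
        spins[site] = -s0
    return len(cluster)
```

    At large beta the activation probability approaches one, so a single update flips an entire aligned lattice; near criticality the clusters match the physical correlated regions, which is why cluster algorithms suppress critical slowing-down.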

  3. The Fuzzy Modeling Algorithm for Complex Systems Based on Stochastic Neural Network

    Institute of Scientific and Technical Information of China (English)

    李波; 张世英; 李银惠

    2002-01-01

    A fuzzy modeling method for complex systems is studied. The notion of a general stochastic neural network (GSNN) is presented, and a new modeling method is given based on the combination of the modified Takagi-Sugeno (MTS) fuzzy model and a first-order GSNN. Using the expectation-maximization (EM) algorithm, parameter estimation and model selection procedures are given. This avoids the shortcomings of other methods such as the BP algorithm: when the number of parameters is large, the BP algorithm is difficult to apply directly without fine tuning and subjective tinkering. Finally, a simulated example demonstrates the effectiveness of the method.

  4. Parameter Identification of Equivalent Circuit Models for Li-ion Batteries Based on Tree Seeds Algorithm

    Science.gov (United States)

    Chen, W. J.; Tan, X. J.; Cai, M.

    2017-07-01

    A parameter identification method for equivalent circuit models of Li-ion batteries, using the advanced tree seeds algorithm, is proposed. On one hand, since electrochemical models are not suitable for the design of battery management systems, equivalent circuit models are commonly adopted for on-board applications. On the other hand, by building up an objective function for optimization, the tree seeds algorithm can be used to identify the parameters of equivalent circuit models. Experimental verification under different profiles demonstrates that the suggested method achieves better results with lower complexity and greater accuracy and robustness, making it a reasonable alternative to other identification algorithms.

  5. Algorithms and Methods for High-Performance Model Predictive Control

    DEFF Research Database (Denmark)

    Frison, Gianluca

    routines employed in the numerical tests. The main focus of this thesis is on linear MPC problems. In this thesis, both the algorithms and their implementation are equally important. About the implementation, a novel implementation strategy for the dense linear algebra routines in embedded optimization...... is proposed, aiming at improving the computational performance in case of small matrices. About the algorithms, they are built on top of the proposed linear algebra, and they are tailored to exploit the high-level structure of the MPC problems, with special care on reducing the computational complexity....

  6. Hammerstein Model Based RLS Algorithm for Modeling the Intelligent Pneumatic Actuator (IPA) System

    Directory of Open Access Journals (Sweden)

    Siti Fatimah Sulaiman

    2017-08-01

    Full Text Available An Intelligent Pneumatic Actuator (IPA) system is highly nonlinear, which makes precise position control of this actuator difficult to achieve. Thus, it is appropriate to model the system using a nonlinear approach, because a linear model is sometimes not sufficient to represent the nonlinearity of the system in the real process. This study presents a new model of an IPA system using a Hammerstein model based on the Recursive Least Squares (RLS) algorithm. The Hammerstein model is one of the block-structured nonlinear models often used to model nonlinear systems; it consists of a static nonlinear block followed by a linear dynamic block. In this study, the static nonlinear block represents the deadzone of the pneumatic valve, while the linear block represents the dynamics of the IPA system. RLS was employed as the main algorithm to estimate the parameters of the Hammerstein model. The validity of the proposed model was verified in a real-time experiment. The proposed Hammerstein model satisfied all of the criteria outlined in the system identification procedure: it provided a stable system, a higher best fit, a lower loss function and a lower final prediction error than a previously developed linear model. The performance of the proposed Hammerstein model in controlling the IPA's positioning system is also good. Thus, the new Hammerstein model is sufficient to represent the IPA system used in this study.

  7. Effective quantum Monte Carlo algorithm for modeling strongly correlated systems

    NARCIS (Netherlands)

    Kashurnikov, V. A.; Krasavin, A. V.

    2007-01-01

    A new effective Monte Carlo algorithm based on principles of continuous time is presented. It allows calculating, in an arbitrary discrete basis, thermodynamic quantities and linear response of mixed boson-fermion, spin-boson, and other strongly correlated systems which admit no analytic description

  8. Model Predictive Control Algorithms for Pen and Pump Insulin Administration

    DEFF Research Database (Denmark)

    Boiroux, Dimitri

    (OCP) is solved using a multiple-shooting based algorithm. We use an explicit Runge-Kutta method (DOPRI45) with an adaptive stepsize for numerical integration and sensitivity computation. The OCP is solved using a Quasi-Newton sequential quadratic programming (SQP) with a linesearch and a BFGS update...

  9. Comparison of Model-Based Segmentation Algorithms for Color Images.

    Science.gov (United States)

    1987-03-01

    image. Hunt and Kubler [Ref. 3] found that for image restoration, Karhunen-Loève transformation followed by single channel image processing worked...Algorithm for Segmentation of Multichannel Images. M.S. Thesis, Naval Postgraduate School, Monterey, California, December 1993. 3. Hunt, B.R., Kubler, O.

  10. Modelling and genetic algorithm based optimisation of inverse supply chain

    Science.gov (United States)

    Bányai, T.

    2009-04-01

    (Recycling of household appliances with emphasis on reuse options). The purpose of this paper is to present a possible method for avoiding unnecessary environmental risk and landscape use caused by an unduly large supply chain in the collection systems of recycling processes. In the first part of the paper the author presents the mathematical model of recycling-related collection systems (applied especially to wastes of electric and electronic products), and in the second part a genetic algorithm based optimisation method is demonstrated, by the aid of which it is possible to determine the optimal structure of the inverse supply chain from the point of view of economic, ecological and logistic objective functions. The model of the inverse supply chain is based on a multi-level, hierarchical collection system. In the case of this static model it is assumed that technical conditions are permanent. The total costs consist of three parts: total infrastructure costs, total material handling costs and environmental risk costs. The infrastructure-related costs depend only on the specific fixed costs and the specific unit costs of the operation points (collection, pre-treatment, treatment, recycling and reuse plants). The costs of warehousing and transportation are represented by the material handling related costs. The most important factors determining the level of environmental risk cost are the number of products recycled (treated or reused) out of time, the number of supply chain objects and the length of transportation routes. The objective function is the minimization of the total cost, taking the constraints into consideration. A lot of research work has discussed the design of supply chains [8], but most of it concentrates on linear cost functions. In this model, non-linear cost functions were used.
The non-linear cost functions and the possibly high number of objects in the inverse supply chain led to the problem of choosing a

  11. A fast iterative recursive least squares algorithm for Wiener model identification of highly nonlinear systems.

    Science.gov (United States)

    Kazemi, Mahdi; Arefi, Mohammad Mehdi

    2016-12-15

    In this paper, an online identification algorithm is presented for nonlinear systems in the presence of output colored noise. The proposed method is based on the extended recursive least squares (ERLS) algorithm, where the identified system is in polynomial Wiener form. To this end, an unknown intermediate signal is estimated by using an inner iterative algorithm. The iterative recursive algorithm adaptively modifies the parameter vector of the presented Wiener model when the system parameters vary. In addition, to increase the robustness of the proposed method against variations, a robust RLS algorithm is applied to the model. Simulation results are provided to show the effectiveness of the proposed approach. The results confirm that the proposed method has a fast convergence rate and robust characteristics, which increase the efficiency of the proposed model and identification approach. For instance, the FIT criterion reaches 92% in a CSTR process where about 400 data points are used.
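
    The recursive core underlying such identification schemes is the standard RLS update; the paper's inner intermediate-signal iteration for the Wiener model is not reproduced here, only a minimal pure-Python sketch of the textbook step with an optional forgetting factor lam.

```python
def rls_update(theta, P, phi, y, lam=1.0):
    """One recursive least squares step with forgetting factor lam.

    theta: parameter estimate (list), P: covariance matrix (list of lists),
    phi: regressor vector, y: measured output."""
    n = len(theta)
    Pphi = [sum(P[i][j] * phi[j] for j in range(n)) for i in range(n)]
    denom = lam + sum(phi[i] * Pphi[i] for i in range(n))
    K = [v / denom for v in Pphi]                       # gain vector
    err = y - sum(phi[i] * theta[i] for i in range(n))  # prediction error
    theta = [theta[i] + K[i] * err for i in range(n)]
    P = [[(P[i][j] - K[i] * Pphi[j]) / lam for j in range(n)] for i in range(n)]
    return theta, P
```

    Starting from a large initial covariance, a few noise-free samples of a model such as y = 2*u1 + 3*u2 drive theta to (2, 3).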

  12. Dynamic Critical Behaviour of Wolff's Algorithm for $RP^N$ $\\sigma$-Models

    CERN Document Server

    Caracciolo, Sergio; Pelissetto, A; Sokal, A D

    1992-01-01

    We study the performance of a Wolff-type embedding algorithm for $RP^N$ $\\sigma$-models. We find that the algorithm in which we update the embedded Ising model \\`a la Swendsen-Wang has critical slowing-down as $z_\\chi \\approx 1$. If instead we update the Ising spins with a perfect algorithm which at every iteration produces a new independent configuration, we obtain $z_\\chi \\approx 0$. This shows that the Ising embedding encodes well the collective modes of the system, and that the behaviour of the first algorithm is connected to the poor performance of the Swendsen-Wang algorithm in dealing with a frustrated Ising model.

  13. Global Convergence of the EM Algorithm for Unconstrained Latent Variable Models with Categorical Indicators

    Science.gov (United States)

    Weissman, Alexander

    2013-01-01

    Convergence of the expectation-maximization (EM) algorithm to a global optimum of the marginal log likelihood function for unconstrained latent variable models with categorical indicators is presented. The sufficient conditions under which global convergence of the EM algorithm is attainable are provided in an information-theoretic context by…

  14. Automated Test Assembly for Cognitive Diagnosis Models Using a Genetic Algorithm

    Science.gov (United States)

    Finkelman, Matthew; Kim, Wonsuk; Roussos, Louis A.

    2009-01-01

    Much recent psychometric literature has focused on cognitive diagnosis models (CDMs), a promising class of instruments used to measure the strengths and weaknesses of examinees. This article introduces a genetic algorithm to perform automated test assembly alongside CDMs. The algorithm is flexible in that it can be applied whether the goal is to…
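
    The authors' assembly procedure is not given in this snippet, so the following is only a generic sketch of a genetic algorithm for fixed-length test assembly: binary item masks, parents drawn from the top half, uniform crossover, one-bit mutation, and a repair step that keeps the test length fixed. The fitness function stands in for whatever item-level objective the CDM application defines.

```python
import random

def genetic_assemble(fitness, n_items, n_select, pop=30, gens=60, rng=random):
    """Generic GA selecting n_select of n_items to maximize fitness(mask)."""
    def repair(mask):
        # enforce exactly n_select selected items
        ones = [i for i in range(n_items) if mask[i]]
        zeros = [i for i in range(n_items) if not mask[i]]
        rng.shuffle(ones)
        rng.shuffle(zeros)
        while len(ones) > n_select:
            mask[ones.pop()] = 0
        while len(ones) < n_select:
            j = zeros.pop()
            mask[j] = 1
            ones.append(j)
        return mask
    P = [repair([rng.randint(0, 1) for _ in range(n_items)]) for _ in range(pop)]
    for _ in range(gens):
        scored = sorted(P, key=fitness, reverse=True)
        nxt = scored[:2]                              # elitism
        while len(nxt) < pop:
            a, b = rng.sample(scored[:pop // 2], 2)   # parents from top half
            child = [a[i] if rng.random() < 0.5 else b[i] for i in range(n_items)]
            if rng.random() < 0.2:                    # mutation: flip one bit
                child[rng.randrange(n_items)] ^= 1
            nxt.append(repair(child))
        P = nxt
    return max(P, key=fitness)
```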

  15. Critical Dynamics Behavior of the Wolff Algorithm in the Site-Bond-Correlated Ising Model

    Science.gov (United States)

    Campos, P. R. A.; Onody, R. N.

    Here we apply the Wolff single-cluster algorithm to the site-bond-correlated Ising model and study its critical dynamical behavior. We have verified that the autocorrelation time diminishes in the presence of dilution and correlation, showing that the Wolff algorithm performs even better in such situations. The critical dynamical exponents are also estimated.

  17. Covariance Structure Model Fit Testing under Missing Data: An Application of the Supplemented EM Algorithm

    Science.gov (United States)

    Cai, Li; Lee, Taehun

    2009-01-01

    We apply the Supplemented EM algorithm (Meng & Rubin, 1991) to address a chronic problem with the "two-stage" fitting of covariance structure models in the presence of ignorable missing data: the lack of an asymptotically chi-square distributed goodness-of-fit statistic. We show that the Supplemented EM algorithm provides a…

  18. A Decomposition Algorithm for Mean-Variance Economic Model Predictive Control of Stochastic Linear Systems

    DEFF Research Database (Denmark)

    Sokoler, Leo Emil; Dammann, Bernd; Madsen, Henrik

    2014-01-01

    This paper presents a decomposition algorithm for solving the optimal control problem (OCP) that arises in Mean-Variance Economic Model Predictive Control of stochastic linear systems. The algorithm applies the alternating direction method of multipliers to a reformulation of the OCP...

  19. Atmosphere Clouds Model Algorithm for Solving Optimal Reactive Power Dispatch Problem

    Directory of Open Access Journals (Sweden)

    Lenin Kanagasabai

    2014-04-01

    Full Text Available In this paper, a new method, called the Atmosphere Clouds Model (ACM) algorithm, is used for solving the optimal reactive power dispatch problem. ACM is a stochastic optimization algorithm inspired by the behavior of clouds in the natural world: it replicates the generation, movement and spreading behavior of clouds. The proposed ACM algorithm has been tested on the standard IEEE 30-bus test system, and simulation results clearly show the superior performance of the proposed algorithm in reducing real power loss.

  20. Event-chain algorithm for the Heisenberg model: Evidence for z ≃1 dynamic scaling

    Science.gov (United States)

    Nishikawa, Yoshihiko; Michel, Manon; Krauth, Werner; Hukushima, Koji

    2015-12-01

    We apply the event-chain Monte Carlo algorithm to the three-dimensional ferromagnetic Heisenberg model. The algorithm is rejection-free and also realizes an irreversible Markov chain that satisfies global balance. The autocorrelation functions of the magnetic susceptibility and the energy indicate a dynamical critical exponent z ≈ 1 at the critical temperature, while that of the magnetization does not measure the performance of the algorithm. We show that the event-chain Monte Carlo algorithm substantially reduces the dynamical critical exponent from the conventional value of z ≃ 2.

  2. Parameter Optimization of Single-Diode Model of Photovoltaic Cell Using Memetic Algorithm

    Directory of Open Access Journals (Sweden)

    Yourim Yoon

    2015-01-01

    Full Text Available This study proposes a memetic approach for optimally determining the parameter values of a single-diode-equivalent solar cell model. The memetic algorithm, which combines metaheuristic and gradient-based techniques, has the merit of good performance in both global and local searches. First, 10 single algorithms were considered, including genetic algorithm, simulated annealing, particle swarm optimization, harmony search, differential evolution, cuckoo search, the least squares method, and pattern search; their final solutions were then used as initial vectors for the generalized reduced gradient technique. With this memetic approach, the accuracy of the estimated solar cell parameters could be further improved compared with single-algorithm approaches.
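
    The model being fitted is the implicit single-diode equation. As a sketch (with hypothetical parameter values; the memetic optimizer itself is not reproduced), the cell current at a given voltage can be solved with Newton's method:

```python
import math

def diode_current(V, Iph, I0, n, Rs, Rsh, Vt=0.02585, tol=1e-12):
    """Solve the implicit single-diode equation for the cell current I:

        I = Iph - I0*(exp((V + I*Rs)/(n*Vt)) - 1) - (V + I*Rs)/Rsh

    by Newton's method (Iph: photocurrent, I0: saturation current,
    n: ideality factor, Rs/Rsh: series/shunt resistance, Vt: thermal voltage)."""
    I = Iph                                     # initial guess: photocurrent
    for _ in range(100):
        e = math.exp((V + I * Rs) / (n * Vt))
        f = Iph - I0 * (e - 1.0) - (V + I * Rs) / Rsh - I
        df = -I0 * e * Rs / (n * Vt) - Rs / Rsh - 1.0
        step = f / df
        I -= step
        if abs(step) < tol:
            break
    return I
```

    With illustrative parameters, the short-circuit current (V = 0) is essentially Iph, and the current drops as the voltage approaches open circuit.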

  3. A new grid-associated algorithm in the distributed hydrological model simulations

    Institute of Scientific and Technical Information of China (English)

    2010-01-01

    This paper presents a new grid-associated algorithm to improve the performance of a D8-algorithm-based distributed hydrological model computation. The algorithm is based on the well known single-flow D8 algorithm of grid flow. It allocates calculation priorities according to the distance between the units and the outlet, then carries out the traversal computations of the hydrological units according to the priority division. For the parallelized algorithm, a standard thread-level shared memory system for parallel programming (OpenMP, Open specifications for Multi Processing) was introduced, and the parallel code was implemented in C. A case study showed that the absolute speed-up ratio of the grid-associated algorithm is 1.64 over the original D8 algorithm, and the speed-up ratio of the parallel associated algorithm is 2.42 on 4 cores. The parallel grid-associated algorithm can be applied to a variety of research fields that use the grid method.
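
    The priority-allocation idea can be sketched on a toy flow network: each unit's priority is its distance to the outlet along the D8 flow directions, and units are processed in decreasing distance so that every unit is evaluated before the unit it drains into. The OpenMP parallelization is not shown.

```python
def priorities_by_outlet_distance(downstream):
    """downstream: dict unit -> its D8 downstream unit (the outlet maps to None).

    Returns unit -> distance to the outlet; processing units in decreasing
    distance guarantees upstream units are computed before downstream ones."""
    dist = {}
    def d(c):
        if c not in dist:
            nxt = downstream[c]
            dist[c] = 0 if nxt is None else 1 + d(nxt)
        return dist[c]
    for c in downstream:
        d(c)
    return dist
```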

  4. Using frame correlation algorithm in a duration distribution based hidden Markov model

    Institute of Scientific and Technical Information of China (English)

    王作英; 崔小东

    2000-01-01

    The assumption of frame independence is a widely known weakness of the traditional hidden Markov model (HMM). In this paper, a frame correlation algorithm based on the duration distribution based hidden Markov model (DDBHMM) is proposed. In the algorithm, an AR model is used to depict the low-pass effect of the vocal tract, from which stems the inertia that leads to frame correlation. In a preliminary experiment on middle-vocabulary speaker-dependent isolated word recognition, the frame correlation algorithm outperforms the frame-independent one. The average error reduction is about 20%.

  5. An algorithm for solving new trust region subproblem with conic model

    Institute of Scientific and Technical Information of China (English)

    WANG JianYu; NI Qin

    2008-01-01

    The new trust region subproblem with the conic model was proposed in 2005, and was divided into three different cases. The first two cases can be converted into a quadratic model or a convex problem with quadratic constraints, while the third one is a nonconvex problem. In this paper, first we analyze the nonconvex problem, and reduce it to two convex problems. Then we discuss some dual properties of these problems and give an algorithm for solving them. Finally, we present an algorithm for solving the new trust region subproblem with the conic model and report some numerical examples to illustrate the efficiency of the algorithm.

  6. Image reconstruction algorithms for electrical capacitance tomography based on ROF model using new numerical techniques

    Science.gov (United States)

    Chen, Jiaoxuan; Zhang, Maomao; Liu, Yinyan; Chen, Jiaoliao; Li, Yi

    2017-03-01

    Electrical capacitance tomography (ECT) is a promising technique applied in many fields. However, the solutions for ECT are not unique and are highly sensitive to measurement noise. To preserve the shape of the reconstructed object and tolerate noisy data, a Rudin–Osher–Fatemi (ROF) model with total variation regularization is applied to image reconstruction in ECT. Two numerical methods, simplified augmented Lagrangian (SAL) and accelerated alternating direction method of multipliers (AADMM), are introduced to address the above problems in ECT. The effect of the parameters and the number of iterations for the different algorithms, and of the noise level in the capacitance data, is discussed. Both simulation and experimental tests were carried out to validate the feasibility of the proposed algorithms, compared to the Landweber iteration (LI) algorithm. The results show that the SAL and AADMM algorithms can handle a high level of noise and that the AADMM algorithm outperforms the others in identifying the object from its background.
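
    Neither SAL nor AADMM is reproduced here, but the ROF objective they minimize can be illustrated with plain gradient descent on a smoothed 1-D total-variation energy (eps smooths the non-differentiable absolute value; all parameter values are illustrative):

```python
import math

def rof_energy(u, f, lam=0.5, eps=1e-2):
    """Smoothed 1-D ROF energy: 0.5*||u-f||^2 + lam*sum sqrt((du)^2 + eps)."""
    fid = 0.5 * sum((a - b) ** 2 for a, b in zip(u, f))
    tv = lam * sum(math.sqrt((u[i + 1] - u[i]) ** 2 + eps)
                   for i in range(len(u) - 1))
    return fid + tv

def rof_denoise_1d(f, lam=0.5, eps=1e-2, step=0.02, iters=300):
    """Plain gradient descent on the smoothed ROF energy."""
    u = list(f)
    n = len(u)
    for _ in range(iters):
        g = [u[i] - f[i] for i in range(n)]          # fidelity gradient
        for i in range(n - 1):                       # smoothed-TV gradient
            d = u[i + 1] - u[i]
            w = d / math.sqrt(d * d + eps)
            g[i] -= lam * w
            g[i + 1] += lam * w
        u = [u[i] - step * g[i] for i in range(n)]
    return u
```

    On a spike signal the energy decreases and the spike is flattened, which is the qualitative behavior the regularization is chosen for.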

  7. MOESHA: A genetic algorithm for automatic calibration and estimation of parameter uncertainty and sensitivity of hydrologic models

    Science.gov (United States)

    Characterization of uncertainty and sensitivity of model parameters is an essential and often overlooked facet of hydrological modeling. This paper introduces an algorithm called MOESHA that combines input parameter sensitivity analyses with a genetic algorithm calibration routin...

  8. Optimization of fused deposition modeling process using teaching-learning-based optimization algorithm

    Directory of Open Access Journals (Sweden)

    R. Venkata Rao

    2016-03-01

    Full Text Available The performance of rapid prototyping (RP) processes is often measured in terms of build time, product quality, dimensional accuracy, cost of production, mechanical and tribological properties of the models and energy consumed in the process. The success of any RP process in terms of these performance measures entails selection of the optimum combination of the influential process parameters. Thus, in this work the single-objective and multi-objective optimization problems of a widely used RP process, namely, fused deposition modeling (FDM), are formulated, and the same are solved using the teaching-learning-based optimization (TLBO) algorithm and the non-dominated sorting TLBO (NSTLBO) algorithm, respectively. The results of the TLBO algorithm are compared with those obtained using the genetic algorithm (GA) and the quantum-behaved particle swarm optimization (QPSO) algorithm. The TLBO algorithm showed better performance than the GA and QPSO algorithms. The NSTLBO algorithm proposed to solve the multi-objective optimization problems of the FDM process in this work is a posteriori version of the TLBO algorithm. The NSTLBO algorithm incorporates the non-dominated sorting concept and a crowding distance assignment mechanism to obtain a dense set of Pareto optimal solutions in a single simulation run. The results of the NSTLBO algorithm are compared with those obtained using the non-dominated sorting genetic algorithm (NSGA-II) and the desirability function approach. The Pareto-optimal set of solutions for each problem is obtained and reported. These Pareto-optimal sets of solutions will help the decision maker in volatile scenarios and are useful for the FDM process.
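
    TLBO itself is simple to sketch: a teacher phase that moves each learner toward the current best and away from the class mean, and a learner phase of pairwise interaction, with greedy acceptance and no algorithm-specific tuning parameters. A minimal version for continuous minimization (not the authors' FDM formulation):

```python
import random

def tlbo_minimize(f, bounds, pop=20, iters=100, rng=random):
    """Teaching-learning-based optimization for continuous minimization."""
    dim = len(bounds)
    def clip(x):
        return [max(lo, min(hi, v)) for v, (lo, hi) in zip(x, bounds)]
    X = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop)]
    F = [f(x) for x in X]
    for _ in range(iters):
        best = X[F.index(min(F))]
        mean = [sum(x[d] for x in X) / pop for d in range(dim)]
        for i in range(pop):
            # teacher phase: move toward the best solution, away from the mean
            Tf = rng.choice((1, 2))              # teaching factor
            cand = clip([X[i][d] + rng.random() * (best[d] - Tf * mean[d])
                         for d in range(dim)])
            fc = f(cand)
            if fc < F[i]:                        # greedy acceptance
                X[i], F[i] = cand, fc
            # learner phase: move toward a better peer, away from a worse one
            j = rng.randrange(pop)
            if j != i:
                sign = 1.0 if F[j] < F[i] else -1.0
                cand = clip([X[i][d] + sign * rng.random() * (X[j][d] - X[i][d])
                             for d in range(dim)])
                fc = f(cand)
                if fc < F[i]:
                    X[i], F[i] = cand, fc
    i = F.index(min(F))
    return X[i], F[i]
```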

  9. Field Trial Efficacy of 16 000 IU/mg Bacillus thuringiensis SC on Cnaphalocrocis medinalis Guenee

    Institute of Scientific and Technical Information of China (English)

    曹春霞; 王沫; 龙同; 雷海瑞; 程贤亮

    2012-01-01

    Field trials of 16 000 IU/mg Bacillus thuringiensis SC against Cnaphalocrocis medinalis Guenee were carried out. The results showed that 16 000 IU/mg Bacillus thuringiensis SC had a good control effect 12 d after treatment at three rates (750, 1 500 and 2 250 g/hm2); the efficacy was 83.73%, 87.73% and 93.50%, respectively. It could be used in the production of green and organic agriculture.

  10. SPICE Modeling and Simulation of a MPPT Algorithm

    Directory of Open Access Journals (Sweden)

    Miona Andrejević Stošović

    2014-06-01

    Full Text Available One of several equally important subsystems of a standalone photovoltaic (PV) system is the circuit for maximum power point tracking (MPPT). There are several algorithms that may be used for it. In this paper we choose such an algorithm based on the maximum simplicity criterion, then make some small modifications to it in order to make it more robust. We synthesize a circuit built out of elements from the list of elements recognized by SPICE. The inputs are the voltage and the current at the PV panel to DC-DC converter interface. Its task is to generate a pulse-width-modulated pulse train whose duty ratio is defined to keep the input impedance of the DC-DC converter at the optimal value.
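
    The abstract does not name the chosen algorithm; a common "maximum simplicity" choice is perturb-and-observe, which nudges the duty ratio in one direction while the measured power keeps rising and reverses direction when it falls. A sketch (step size and duty limits are illustrative):

```python
def po_mppt_step(state, v, i, d_step=0.01):
    """One perturb-and-observe update of the converter duty ratio.

    state holds the previous power 'p', the duty ratio 'd' and the current
    perturbation direction 'dir' (+1 or -1).  Returns the new duty ratio."""
    p = v * i
    if p >= state['p']:
        state['d'] += state['dir'] * d_step   # power rose: keep perturbing
    else:
        state['dir'] *= -1                    # power fell: reverse direction
        state['d'] += state['dir'] * d_step
    state['d'] = min(0.95, max(0.05, state['d']))
    state['p'] = p
    return state['d']
```

    Against a concave power-versus-duty curve the duty ratio climbs to the maximum power point and then oscillates within one step of it, which is the characteristic behavior of this class of trackers.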

  11. Interchanges Safety: Forecast Model Based on ISAT Algorithm

    Directory of Open Access Journals (Sweden)

    Sascia Canale

    2013-09-01

    Full Text Available The ISAT algorithm (Interchange Safety Analysis Tool), developed by the Federal Highway Administration (FHWA), provides design and safety engineers with an automated tool for assessing the safety effects of geometric design and traffic control features at an existing interchange and the adjacent roadway network. Concerning the default calibration coefficients and crash distributions by severity and type, users should modify these default values to more accurately reflect the safety experience of their local/State agency prior to using ISAT to perform actual safety assessments. This paper presents the calibration of the FHWA algorithm to the local situation of eastern Sicily. The aim is to build an instrument for accident forecast analyses, useful to highway managers, in order to identify those infrastructural elements that can contribute to improving the safety level of interchange areas, provided the model is suitably calibrated.

  12. Algorithm Development for the Two-Fluid Plasma Model

    Science.gov (United States)

    2009-02-17

    of m=0 sausage instabilities in an axisymmetric Z-pinch", Physics of Plasmas 13, 082310 (2006). • A. Hakim and U. Shumlak, "Two-fluid physics and...accurate as the solution variables. The high-order representation of the solution variables satisfies the accuracy requirement to preserve the...here. [2] It also illustrates the dispersive nature of the waves which makes capturing the effect difficult in MHD algorithms. The electromagnetic

  13. Model-based multiobjective evolutionary algorithm optimization for HCCI engines

    OpenAIRE

    Ma, He; Xu, Hongming; Wang, Jihong; Schnier, Thorsten; Neaves, Ben; Tan, Cheng; Wang, Zhi

    2014-01-01

    Modern engines feature a considerable number of adjustable control parameters. With this increasing number of Degrees of Freedom (DoF) for engines, and the consequent considerable calibration effort required to optimize engine performance, traditional manual engine calibration or optimization methods are reaching their limits. An automated engine optimization approach is desired. In this paper, a self-learning evolutionary-algorithm-based multi-objective global optimization approach for a H...

  14. Algorithms for Model Calibration of Ground Water Simulators

    Science.gov (United States)

    2014-11-20

    cobian, and Jacobian-vector products are computed with a Monte Carlo simulation. This situation differs from the textbook case [5] in that one does not...Anderson acceleration is a natural method for multi-physics coupling (for example subsurface flow, chemistry, and heat transfer) when the individual physics...Online publication 7/12/2014. [11] J. Nance and C. T. Kelley, A sparse interpolation algorithm for dynamical simulations in computational chemistry

  15. Algorithm Development for the Multi-Fluid Plasma Model

    Science.gov (United States)

    2011-05-30

    ities of a Hall-MHD wave increase without bound with wave number. The large wave speeds increase the stiffness of the equation system, making accurate...illustrates the dispersive nature of the waves which makes capturing the effect difficult in MHD algorithms. The electromagnetic plasma shock serves to...Nonlinear full two-fluid study of m = 0 sausage instabilities in an axisymmetric Z pinch. Physics of Plasmas, 13(8):082310, 2006. [5] A. Hakim and U. Shumlak

  16. Evaluation of two modified Kalman gain algorithms for radar data assimilation in the WRF model

    Directory of Open Access Journals (Sweden)

    Chun Yang

    2015-05-01

    Full Text Available This work attempts to validate two modified Kalman gain algorithms by assimilating a single simulated radar data set into the Weather Research and Forecasting model using an Ensemble Square Root Filter. Emphasis is placed on comparing the assimilation performance of the two modified algorithms against the classical Kalman gain algorithm when the measurement operator is non-linear. Three idealized storm-scale experiments, configured identically except for the different Kalman gain algorithms, are designed in parallel for this purpose. The results show that the first modified algorithm can produce a better simulation of a storm, as measured by the root mean square error (RMSE). The second algorithm can also, to some extent, reduce the RMSE of the simulation of some state vectors, but with little improvement in the estimation of storm intensity. Overall, our preliminary experiments indicate that the two modified Kalman gain algorithms can benefit the assimilation of complex numerical models when the measurement operators are non-linear, confirming the earlier theoretical analysis and the results of simple models. Further work is needed to evaluate the impact of the modified Kalman gain algorithms on the assimilation performance of ensemble-based methods.
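
    The two modified gain algorithms are not given in the abstract; for reference, the classical Kalman gain they are compared against can be written for a single scalar observation, where no matrix inverse is needed:

```python
def kalman_gain_scalar_obs(P, H, R):
    """Classical Kalman gain K = P H^T / (H P H^T + R) for one scalar observation.

    P: n x n forecast-error covariance (list of lists), H: length-n observation
    operator row, R: scalar observation-error variance."""
    n = len(H)
    PHt = [sum(P[i][j] * H[j] for j in range(n)) for i in range(n)]
    S = sum(H[i] * PHt[i] for i in range(n)) + R   # innovation variance
    return [v / S for v in PHt]
```

    For example, with P = [[2, 0], [0, 1]], H = [1, 0] and R = 2, the gain weights the observed component by 0.5 and leaves the unobserved one untouched.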

  17. From Cells to Islands: An unified Model of Cellular Parallel Genetic Algorithms

    CERN Document Server

    Simoncini, David; Verel, Sébastien; Clergue, Manuel

    2008-01-01

    This paper presents the anisotropic selection scheme for cellular Genetic Algorithms (cGA). This new scheme makes it possible to enhance diversity and to control the selective pressure, two important issues in Genetic Algorithms, especially when trying to solve difficult optimization problems. Varying the anisotropy degree of selection allows switching between a cellular and an island model of parallel genetic algorithm. Measures of performance and diversity have been performed on one well-known problem: the Quadratic Assignment Problem, which is known to be difficult to optimize. Experiments show that, by tuning the anisotropy degree, we can find the accurate trade-off between cGA and island models to optimize the performance of parallel evolutionary algorithms. This trade-off can be interpreted as the suitable degree of migration among subpopulations in a parallel Genetic Algorithm.

  18. Use of the AIC with the EM algorithm: A demonstration of a probability model selection technique

    Energy Technology Data Exchange (ETDEWEB)

    Glosup, J.G.; Axelrod M.C. [Lawrence Livermore National Lab., CA (United States)

    1994-11-15

    The problem of discriminating between two potential probability models, a Gaussian distribution and a mixture of Gaussian distributions, is considered. The focus of our interest is a case where the models are potentially non-nested and the parameters of the mixture model are estimated through the EM algorithm. The AIC, which is frequently used as a criterion for discriminating between non-nested models, is modified to work with the EM algorithm and is shown to provide a model selection tool for this situation. A particular problem involving an infinite mixture distribution known as Middleton's Class A model is used to demonstrate the effectiveness and limitations of this method.
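
    The idea can be sketched for a 1-D case: fit a two-component Gaussian mixture by EM, then compare AIC = 2k - 2 ln L against the single-Gaussian fit, where k counts free parameters. This is a generic illustration, not Middleton's Class A model.

```python
import math

def norm_pdf(x, mu, var):
    return math.exp(-(x - mu) ** 2 / (2.0 * var)) / math.sqrt(2.0 * math.pi * var)

def em_mixture_2(xs, iters=100):
    """EM for a two-component 1-D Gaussian mixture.

    Returns ((w, mu1, var1, mu2, var2), log-likelihood)."""
    n = len(xs)
    mean = sum(xs) / n
    mu1, mu2 = min(xs), max(xs)                  # spread the initial means
    var1 = var2 = max(1e-6, sum((x - mean) ** 2 for x in xs) / n)
    w = 0.5
    for _ in range(iters):
        # E-step: responsibility of component 1 for each point
        r = []
        for x in xs:
            a = w * norm_pdf(x, mu1, var1)
            b = (1.0 - w) * norm_pdf(x, mu2, var2)
            r.append(a / (a + b))
        n1 = sum(r)
        n2 = n - n1
        # M-step: weighted means, variances and mixing weight
        mu1 = sum(ri * x for ri, x in zip(r, xs)) / n1
        mu2 = sum((1.0 - ri) * x for ri, x in zip(r, xs)) / n2
        var1 = max(1e-6, sum(ri * (x - mu1) ** 2 for ri, x in zip(r, xs)) / n1)
        var2 = max(1e-6, sum((1.0 - ri) * (x - mu2) ** 2 for ri, x in zip(r, xs)) / n2)
        w = n1 / n
    ll = sum(math.log(w * norm_pdf(x, mu1, var1) + (1.0 - w) * norm_pdf(x, mu2, var2))
             for x in xs)
    return (w, mu1, var1, mu2, var2), ll

def aic(loglik, k):
    """Akaike information criterion for k free parameters."""
    return 2.0 * k - 2.0 * loglik
```

    On clearly bimodal data the mixture (k = 5) attains a much higher likelihood than the single Gaussian (k = 2) and wins on AIC despite the parameter penalty.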

  19. Combining Diffusion and Grey Models Based on Evolutionary Optimization Algorithms to Forecast Motherboard Shipments

    Directory of Open Access Journals (Sweden)

    Fu-Kwun Wang

    2012-01-01

    Full Text Available It is important for executives to predict future trends; otherwise, their companies cannot make profitable decisions and investments. The Bass diffusion model can describe the empirical adoption curve for new products and technological innovations. The Grey model provides short-term forecasts using as few as four data points. This study develops a combined model based on the rolling Grey model (RGM) and the Bass diffusion model to forecast motherboard shipments. In addition, we investigate evolutionary optimization algorithms to determine the optimal parameters. Our results indicate that the combined model using a hybrid algorithm outperforms other methods for the fitting and forecasting processes in terms of mean absolute percentage error.
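    For intuition about the Grey-model component, a minimal GM(1,1) fit-and-forecast can be sketched as below; this is the standard grey model (the paper's rolling variant refits it on a sliding window), and the function name is an assumption of the sketch:

```python
import math

def gm11_forecast(x0, steps=1):
    """Fit a GM(1,1) grey model to a short series x0 (four points are the
    usual minimum) and forecast `steps` values ahead."""
    n = len(x0)
    # 1-AGO: accumulated generating operation
    x1, s = [], 0.0
    for v in x0:
        s += v
        x1.append(s)
    # background values z[k] and least squares for x0(k) = -a*z(k) + b
    z = [0.5 * (x1[k] + x1[k - 1]) for k in range(1, n)]
    y = x0[1:]
    m = n - 1
    sz, sy = sum(z), sum(y)
    szz = sum(v * v for v in z)
    szy = sum(zi * yi for zi, yi in zip(z, y))
    c = (m * szy - sz * sy) / (m * szz - sz * sz)   # regression slope = -a
    a, b = -c, (sy - c * sz) / m
    # time-response function of the whitened equation, then IAGO
    def x1_hat(k):                                   # 0-based index
        return (x0[0] - b / a) * math.exp(-a * k) + b / a
    return [x1_hat(n + i) - x1_hat(n + i - 1) for i in range(steps)]
```

    On a near-exponential series such as [100, 110, 121, 133.1] (10% growth), the one-step forecast lands close to the next geometric value, about 146.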

  20. The Health Benefits of Cycling as a Form of Active Leisure: The Case of Kaunas City Residents

    OpenAIRE

    Solnyškinaitė, Neringa

    2014-01-01

    Respondents view cycling positively as a form of active leisure; they are most pleased with their improved emotional state, and say the bicycle is an excellent means of transport because it saves money otherwise spent on transport. Respondents also noticed changes such as increased muscle mass, reduced weight, and a stronger heart and bones. It can be concluded that, according to the respondents, cycling above all improves emotional state; perhaps it helped them shake off gloomy thoughts...

  1. W-F STRUCTURE: A NEW ALGORITHM ON WIRE-FRAME MODELING

    Institute of Scientific and Technical Information of China (English)

    Zeng Gang; Wang Changlu

    1996-01-01

    This paper advances a new algorithm for geometry modeling (GM) using the wire-frame model. The elemental data structure of the frame model is the vertex. The algorithm provides a general and rapid method: vertices can be linked to construct elemental frames without considering the topological relations among the vertices that make up the concrete entity. The elemental frames are then combined to complete the frame model, using an aided-line method that refers to the concrete entity. Two key points are discussed in the paper, followed by a 3D geometry modeling example based on the wire-frame model using the new algorithm. Key words: CAD, computer graphics, frame model, modeling system.

  2. Estimating model error covariances in nonlinear state-space models using Kalman smoothing and the expectation-maximisation algorithm

    KAUST Repository

    Dreano, D.

    2017-04-05

    Specification and tuning of errors from dynamical models are important issues in data assimilation. In this work, we propose an iterative expectation-maximisation (EM) algorithm to estimate the model error covariances using classical extended and ensemble versions of the Kalman smoother. We show that, for additive model errors, the estimate of the error covariance converges. We also investigate other forms of model error, such as parametric or multiplicative errors. We show that additive Gaussian model error is able to compensate for non additive sources of error in the algorithms we propose. We also demonstrate the limitations of the extended version of the algorithm and recommend the use of the more robust and flexible ensemble version. This article is a proof of concept of the methodology with the Lorenz-63 attractor. We developed an open-source Python library to enable future users to apply the algorithm to their own nonlinear dynamical models.
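    For intuition, the EM update for an additive model-error variance can be sketched on a scalar toy model (a random walk observed in noise) using a Kalman filter and RTS smoother. This is a drastically simplified stand-in for the extended/ensemble smoothers in the paper; the function name and scalar model are assumptions of the sketch:

```python
def em_estimate_q(y, r, q_init=1.0, iters=50):
    """EM estimate of the additive model-error variance Q for the scalar
    state-space model x_k = x_{k-1} + w_k (w ~ N(0, Q)), y_k = x_k + v_k,
    with known observation-noise variance r."""
    T = len(y)
    q = q_init
    for _ in range(iters):
        # Kalman filter
        m, p = [0.0] * T, [0.0] * T      # filtered mean / variance
        mp, pp = [0.0] * T, [0.0] * T    # predicted mean / variance
        m[0], p[0] = y[0], r             # flat prior updated with y[0]
        for k in range(1, T):
            mp[k] = m[k - 1]
            pp[k] = p[k - 1] + q
            gain = pp[k] / (pp[k] + r)
            m[k] = mp[k] + gain * (y[k] - mp[k])
            p[k] = (1.0 - gain) * pp[k]
        # RTS smoother
        ms, ps = m[:], p[:]
        j = [0.0] * T
        for k in range(T - 2, -1, -1):
            j[k] = p[k] / pp[k + 1]
            ms[k] = m[k] + j[k] * (ms[k + 1] - mp[k + 1])
            ps[k] = p[k] + j[k] ** 2 * (ps[k + 1] - pp[k + 1])
        # M-step: Q = average expected squared state increment, using the
        # exact smoothed lag-one covariance Cov(x_{k-1}, x_k) = j[k-1]*ps[k]
        s = 0.0
        for k in range(1, T):
            s += (ps[k] + ps[k - 1] + (ms[k] - ms[k - 1]) ** 2
                  - 2.0 * j[k - 1] * ps[k])
        q = s / (T - 1)
    return q
```

    Each iteration increases the likelihood, and on simulated data the estimate settles near the true Q, mirroring the convergence behaviour reported for the additive-error case.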

  3. Extended Mixed-Effects Item Response Models with the MH-RM Algorithm

    Science.gov (United States)

    Chalmers, R. Philip

    2015-01-01

    A mixed-effects item response theory (IRT) model is presented as a logical extension of the generalized linear mixed-effects modeling approach to formulating explanatory IRT models. Fixed and random coefficients in the extended model are estimated using a Metropolis-Hastings Robbins-Monro (MH-RM) stochastic imputation algorithm to accommodate for…

  4. Fast and Parallel Spectral Transform Algorithms for Global Shallow Water Models

    Science.gov (United States)

    Jakob, Ruediger

    1993-01-01

    This dissertation examines spectral transform algorithms for the solution of the shallow water equations on the sphere and studies their implementation and performance on shared memory vector multiprocessors. Beginning with the standard spectral transform algorithm in vorticity divergence form and its implementation in the Fortran-based parallel programming language Force, two modifications are researched. First, the transforms and matrices associated with the meridional derivatives of the associated Legendre functions are replaced by corresponding operations with the spherical harmonic coefficients. Second, based on the fast Fourier transform and the fast multipole method, a lower complexity algorithm is derived that uses fast transformations between Legendre and interior Fourier nodes, fast surface spherical truncation and a fast spherical Helmholtz solver. The first modification is fully implemented, and comparative performance data are obtained for varying resolution and number of processes, showing a significant storage saving and slightly reduced execution time on a Cray Y-MP 8/864. The important performance parameters for the spectral transform algorithm and its implementation on vector multiprocessors are determined and validated with the measured performance data. The second modification is described at the algorithmic level, but only the novel fast surface spherical truncation algorithm is implemented. This new multipole algorithm has lower complexity than the standard algorithm, and requires asymptotically only order N^2 log N operations per time step for a grid with order N^2 points. Because the global shallow water equations are similar to the horizontal dynamical component of general circulation models, the results can be applied to spectral transform numerical weather prediction and climate models. In general, the derived algorithms may speed up the solution of time dependent partial differential equations in spherical geometry. A performance model...

  5. Belief Bisimulation for Hidden Markov Models: Logical Characterisation and Decision Algorithm

    DEFF Research Database (Denmark)

    Jansen, David N.; Nielson, Flemming; Zhang, Lijun

    2012-01-01

    This paper establishes connections between logical equivalences and bisimulation relations for hidden Markov models (HMM). Both standard and belief state bisimulations are considered. We also present decision algorithms for the bisimilarities. For standard bisimilarity, an extension of the usual...

  6. Effects of activity and energy budget balancing algorithm on laboratory performance of a fish bioenergetics model

    Science.gov (United States)

    Madenjian, Charles P.; David, Solomon R.; Pothoven, Steven A.

    2012-01-01

    We evaluated the performance of the Wisconsin bioenergetics model for lake trout Salvelinus namaycush that were fed ad libitum in laboratory tanks under regimes of low activity and high activity. In addition, we compared model performance under two different model algorithms: (1) balancing the lake trout energy budget on day t based on lake trout energy density on day t and (2) balancing the lake trout energy budget on day t based on lake trout energy density on day t + 1. Results indicated that the model significantly underestimated consumption for both inactive and active lake trout when algorithm 1 was used and that the degree of underestimation was similar for the two activity levels. In contrast, model performance substantially improved when using algorithm 2, as no detectable bias was found in model predictions of consumption for inactive fish and only a slight degree of overestimation was detected for active fish. The energy budget was accurately balanced by using algorithm 2 but not by using algorithm 1. Based on the results of this study, we recommend the use of algorithm 2 to estimate food consumption by fish in the field. Our study results highlight the importance of accurately accounting for changes in fish energy density when balancing the energy budget; furthermore, these results have implications for the science of evaluating fish bioenergetics model performance and for more accurate estimation of food consumption by fish in the field when fish energy density undergoes relatively rapid changes.

  7. Stochastic gradient algorithm for a dual-rate Box-Jenkins model based on auxiliary model and FIR model

    Institute of Scientific and Technical Information of China (English)

    Jing CHEN; Rui-feng DING

    2014-01-01

    Based on the work in Ding and Ding (2008), we develop a modified stochastic gradient (SG) parameter estimation algorithm for a dual-rate Box-Jenkins model by using an auxiliary model. We simplify the complex dual-rate Box-Jenkins model into two finite impulse response (FIR) models, present an auxiliary model to estimate the missing outputs and the unknown noise variables, and compute all the unknown parameters of the system with colored noises. Simulation results indicate that the proposed method is effective.

  8. State-space models - from the EM algorithm to a gradient approach

    DEFF Research Database (Denmark)

    Olsson, Rasmus Kongsgaard; Petersen, Kaare Brandt; Lehn-Schiøler, Tue

    2007-01-01

    Slow convergence is observed in the EM algorithm for linear state-space models. We propose to circumvent the problem by applying any off-the-shelf quasi-Newton-type optimizer, which operates on the gradient of the log-likelihood function. Such an algorithm is a practical alternative due to the fact that the exact gradient of the log-likelihood function can be computed by recycling components of the expectation-maximization (EM) algorithm. We demonstrate the efficiency of the proposed method in three relevant instances of the linear state-space model. In high signal-to-noise ratios, where EM is particularly...

  9. Empirical relations between static and dynamic exponents for Ising model cluster algorithms

    Science.gov (United States)

    Coddington, Paul D.; Baillie, Clive F.

    1992-02-01

    We have measured the autocorrelations for the Swendsen-Wang and the Wolff cluster update algorithms for the Ising model in two, three, and four dimensions. The data for the Wolff algorithm suggest that the autocorrelations are linearly related to the specific heat, in which case the dynamic critical exponent is z_int,E^W = α/ν. For the Swendsen-Wang algorithm, scaling the autocorrelations by the average maximum cluster size gives either a constant or a logarithm, which implies that z_int,E^SW = β/ν for the Ising model.
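    The integrated autocorrelation times underlying dynamic exponents like z_int can be estimated directly from a Monte Carlo time series. Below is a minimal fixed-window estimator (a simplification of the usual self-consistent windowing; the truncation rule and function name are choices of this sketch):

```python
def integrated_autocorr_time(series, window=None):
    """Estimate tau_int = 1/2 + sum_t rho(t) from a Monte Carlo time
    series, summing the normalised autocorrelation rho(t) until it
    first drops to zero or the window is exhausted."""
    n = len(series)
    mean = sum(series) / n
    var = sum((x - mean) ** 2 for x in series) / n
    if window is None:
        window = n // 10
    tau = 0.5
    for t in range(1, window + 1):
        c = sum((series[i] - mean) * (series[i + t] - mean)
                for i in range(n - t)) / (n - t)
        rho = c / var
        if rho <= 0:          # truncate once noise dominates
            break
        tau += rho
    return tau
```

    For an AR(1) process with coefficient phi the exact value is (1 + phi) / (2 (1 - phi)), which gives a convenient sanity check; for uncorrelated data the estimator returns a value near 0.5.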

  10. Dynamic Critical Behaviour of Wolff's Algorithm for $RP^N$ $\\sigma$-Models

    OpenAIRE

    Caracciolo, S.; Edwards, R. G.; Pelissetto, A.; Sokal, A. D.

    1992-01-01

    We study the performance of a Wolff-type embedding algorithm for $RP^N$ $\\sigma$-models. We find that the algorithm in which we update the embedded Ising model \\`a la Swendsen-Wang has critical slowing-down as $z_\\chi \\approx 1$. If instead we update the Ising spins with a perfect algorithm which at every iteration produces a new independent configuration, we obtain $z_\\chi \\approx 0$. This shows that the Ising embedding encodes well the collective modes of the system, and that the behaviour ...

  11. Empirical relations between static and dynamic exponents for Ising model cluster algorithms

    Energy Technology Data Exchange (ETDEWEB)

    Coddington, P.D. (Department of Physics, Syracuse University, Syracuse, New York 13244 (United States)); Baillie, C.F. (Department of Physics, University of Colorado, Boulder, Colorado 80309 (United States))

    1992-02-17

    We have measured the autocorrelations for the Swendsen-Wang and the Wolff cluster update algorithms for the Ising model in two, three, and four dimensions. The data for the Wolff algorithm suggest that the autocorrelations are linearly related to the specific heat, in which case the dynamic critical exponent is z_int,E^W = α/ν. For the Swendsen-Wang algorithm, scaling the autocorrelations by the average maximum cluster size gives either a constant or a logarithm, which implies that z_int,E^SW = β/ν for the Ising model.

  12. Polynomial algorithm for exact calculation of partition function for binary spin model on planar graphs

    CERN Document Server

    Karandashev, Yakov M

    2016-01-01

    In this paper we propose and implement an algorithm for the exact calculation of the partition function of planar-graph models with binary spins (the code is publicly available at https://github.com/Thrawn1985/2D-Partition-Function). The complexity of the algorithm is O(N^2). Test experiments show good agreement with Onsager's analytical solution for the two-dimensional Ising model of infinite size.
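    The planar-graph algorithm itself (typically built on Pfaffian/dimer techniques) is involved, but such implementations are commonly validated against brute-force enumeration on tiny lattices, which is exponential in the number of spins yet exact. A minimal check, with open boundaries and naming chosen for this sketch:

```python
import math
from itertools import product

def ising_partition_function(rows, cols, beta):
    """Brute-force Z = sum_s exp(beta * sum_<ij> s_i s_j) for a small
    open-boundary 2D Ising lattice. Exponential cost, so only usable as
    a correctness check for polynomial planar-graph algorithms."""
    # nearest-neighbour bonds on an open rows x cols grid
    bonds = []
    for r in range(rows):
        for c in range(cols):
            if c + 1 < cols:
                bonds.append((r * cols + c, r * cols + c + 1))
            if r + 1 < rows:
                bonds.append((r * cols + c, (r + 1) * cols + c))
    z = 0.0
    for spins in product((-1, 1), repeat=rows * cols):
        energy = sum(spins[i] * spins[j] for i, j in bonds)
        z += math.exp(beta * energy)
    return z
```

    Two exact checks: a single bond (1x2 lattice) gives Z = 4 cosh(beta), and at beta = 0 every lattice gives Z = 2^N.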

  13. Worm algorithms for the 3-state Potts model with magnetic field and chemical potential

    CERN Document Server

    Delgado, Ydalia; Gattringer, Christof

    2012-01-01

    We discuss worm algorithms for the 3-state Potts model with external field and chemical potential. The complex phase problem of this system can be overcome by using a flux representation where the new degrees of freedom are dimer and monomer variables. Working with this representation we discuss two different generalizations of the conventional Prokof'ev-Svistunov algorithm suitable for Monte Carlo simulations of the model at arbitrary chemical potential and evaluate their performance.

  14. A self-organizing algorithm for modeling protein loops.

    Directory of Open Access Journals (Sweden)

    Pu Liu

    2009-08-01

    Full Text Available Protein loops, the flexible short segments connecting two stable secondary structural units in proteins, play a critical role in protein structure and function. Constructing chemically sensible conformations of protein loops that seamlessly bridge the gap between the anchor points without introducing any steric collisions remains an open challenge. A variety of algorithms have been developed to tackle the loop closure problem, ranging from inverse kinematics to knowledge-based approaches that utilize pre-existing fragments extracted from known protein structures. However, many of these approaches focus on the generation of conformations that mainly satisfy the fixed end point condition, leaving the steric constraints to be resolved in subsequent post-processing steps. In the present work, we describe a simple solution that simultaneously satisfies not only the end point and steric conditions, but also chirality and planarity constraints. Starting from random initial atomic coordinates, each individual conformation is generated independently by using a simple alternating scheme of pairwise distance adjustments of randomly chosen atoms, followed by fast geometric matching of the conformationally rigid components of the constituent amino acids. The method is conceptually simple, numerically stable and computationally efficient. Very importantly, additional constraints, such as those derived from NMR experiments, hydrogen bonds or salt bridges, can be incorporated into the algorithm in a straightforward and inexpensive way, making the method ideal for solving more complex multi-loop problems. The remarkable performance and robustness of the algorithm are demonstrated on a set of protein loops of length 4, 8, and 12 that have been used in previous studies.

  15. Genetic Algorithms for a Parameter Estimation of a Fermentation Process Model: A Comparison

    Directory of Open Access Journals (Sweden)

    Olympia Roeva

    2005-12-01

    Full Text Available In this paper the problem of parameter estimation using genetic algorithms is examined. A case study considering the estimation of 6 parameters of a nonlinear dynamic model of E. coli fermentation is presented as a test problem. The parameter estimation problem is stated as a nonlinear programming problem subject to nonlinear differential-algebraic constraints. This problem is known to be frequently ill-conditioned and multimodal, so traditional (gradient-based) local optimization methods fail to arrive at satisfactory solutions. To overcome their limitations, the use of different genetic algorithms as stochastic global optimization methods is explored. These algorithms are proved to be very suitable for the optimization of highly nonlinear problems with many variables, and offer global search capability and robustness. These facts make them advantageous for the parameter identification of fermentation models. A comparison between simple, modified and multi-population genetic algorithms is presented. The best result is obtained using the modified genetic algorithm. The considered algorithms converged to very similar cost values, but the modified algorithm is several times faster than the other two.
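    A minimal real-coded GA for parameter estimation, with tournament selection, blend crossover, elitism and Gaussian mutation, can be sketched as below. It fits a toy exponential model rather than the E. coli fermentation model, and all names, operator choices and hyperparameters are assumptions of this sketch:

```python
import math
import random

def ga_fit(ts, ys, pop_size=60, gens=120):
    """Real-coded GA estimating the two parameters (a, b) of the toy
    model y = a * exp(-b * t) by minimising the squared error."""
    def cost(ind):
        a, b = ind
        return sum((y - a * math.exp(-b * t)) ** 2 for t, y in zip(ts, ys))

    pop = [[random.uniform(0.0, 10.0), random.uniform(0.0, 2.0)]
           for _ in range(pop_size)]
    for _ in range(gens):
        new_pop = [min(pop, key=cost)]                  # elitism
        while len(new_pop) < pop_size:
            p1 = min(random.sample(pop, 3), key=cost)   # tournament
            p2 = min(random.sample(pop, 3), key=cost)
            w = random.random()                         # blend crossover
            child = [w * g1 + (1 - w) * g2 for g1, g2 in zip(p1, p2)]
            if random.random() < 0.3:                   # gaussian mutation
                i = random.randrange(2)
                child[i] += random.gauss(0.0, 0.1)
            new_pop.append(child)
        pop = new_pop
    return min(pop, key=cost)
```

    Real problems, like the fermentation model above, replace `cost` with an integration of the model ODEs followed by a fit-to-data criterion; the GA machinery is unchanged.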

  16. Query Optimization Using Genetic Algorithms in the Vector Space Model

    CERN Document Server

    Mashagba, Eman Al; Nassar, Mohammad Othman

    2011-01-01

    In information retrieval research, genetic algorithms (GA) can be used to find global solutions to many difficult problems. This study used different similarity measures (Dice, Inner Product) in the VSM; for each similarity measure we compared ten different GA approaches based on different fitness functions, mutations and crossover strategies, to find the best strategy and fitness function for an Arabic-language document collection. Our results show that the GA approach using the one-point crossover operator, point mutation and Inner Product similarity as a fitness function is the best IR system in the VSM.

  17. A face recognition algorithm based on multiple individual discriminative models

    DEFF Research Database (Denmark)

    Fagertun, Jens; Gomez, David Delgado; Ersbøll, Bjarne Kjær

    2005-01-01

    In this paper, a novel algorithm for facial recognition is proposed. The technique combines the color texture and geometrical configuration provided by face images. Landmarks and pixel intensities are used by Principal Component Analysis and Fisher Linear Discriminant Analysis to associate ... facial image corresponds to a person in the database. Each projection is also able to visualize the most discriminative facial features of the person associated with the projection. The performance of the proposed method is tested in two experiments. Results point out the proposed technique as an accurate and robust tool for facial identification and unknown detection.

  18. Tools and Algorithms to Link Horizontal Hydrologic and Vertical Hydrodynamic Models and Provide a Stochastic Modeling Framework

    Science.gov (United States)

    Salah, Ahmad M.; Nelson, E. James; Williams, Gustavious P.

    2010-04-01

    We present algorithms and tools we developed to automatically link an overland flow model to a hydrodynamic water quality model with different spatial and temporal discretizations. These tools run the linked models, which provide a stochastic simulation framework. We also briefly present the tools and algorithms we developed to facilitate and analyze stochastic simulations of the linked models. We demonstrate the algorithms by linking the Gridded Surface Subsurface Hydrologic Analysis (GSSHA) model for overland flow with the CE-QUAL-W2 model for water quality and reservoir hydrodynamics. GSSHA uses a two-dimensional horizontal grid while CE-QUAL-W2 uses a two-dimensional vertical grid. We implemented the algorithms and tools in the Watershed Modeling System (WMS), which allows modelers to easily create and use models. The algorithms are general and could be used for other models. Our tools create and analyze stochastic simulations to help understand uncertainty in the model application. While a number of examples of linked models exist, the ability to perform automatic, unassisted linking is a step forward and provides the framework to easily implement stochastic modeling studies.
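    The temporal part of such a linking step, mapping output saved at one model's time step onto another model's coarser step, can be illustrated with a simple conservative (interval-averaging) aggregation. This is a sketch of the general idea only, not WMS's actual linking code; the function name and integer step-ratio requirement are assumptions:

```python
def aggregate_to_coarser(values, dt_in, dt_out):
    """Average a uniformly sampled series (step dt_in) onto a coarser
    uniform step dt_out, e.g. fine overland-flow output onto a
    hydrodynamic model's input interval. Conserves the integral of the
    series over the covered interval; any incomplete tail is dropped."""
    assert dt_out % dt_in == 0, "sketch assumes an integer step ratio"
    ratio = dt_out // dt_in
    usable = len(values) - len(values) % ratio
    return [sum(values[i:i + ratio]) / ratio
            for i in range(0, usable, ratio)]
```

    Spatial linking (horizontal grid cells onto vertical layers) requires an analogous area- or volume-weighted mapping, which is where most of the real tooling effort lies.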

  19. Tools and Algorithms to Link Horizontal Hydrologic and Vertical Hydrodynamic Models and Provide a Stochastic Modeling Framework

    Directory of Open Access Journals (Sweden)

    Ahmad M Salah

    2010-12-01

    Full Text Available We present algorithms and tools we developed to automatically link an overland flow model to a hydrodynamic water quality model with different spatial and temporal discretizations. These tools run the linked models, which provide a stochastic simulation framework. We also briefly present the tools and algorithms we developed to facilitate and analyze stochastic simulations of the linked models. We demonstrate the algorithms by linking the Gridded Surface Subsurface Hydrologic Analysis (GSSHA) model for overland flow with the CE-QUAL-W2 model for water quality and reservoir hydrodynamics. GSSHA uses a two-dimensional horizontal grid while CE-QUAL-W2 uses a two-dimensional vertical grid. We implemented the algorithms and tools in the Watershed Modeling System (WMS), which allows modelers to easily create and use models. The algorithms are general and could be used for other models. Our tools create and analyze stochastic simulations to help understand uncertainty in the model application. While a number of examples of linked models exist, the ability to perform automatic, unassisted linking is a step forward and provides the framework to easily implement stochastic modeling studies.

  20. Comparison of the Most Adaptive Meta-Model with a Newly Created Quality Meta-Model Using the CART Algorithm

    Directory of Open Access Journals (Sweden)

    Jasbir Malik

    2012-09-01

    Full Text Available To ensure that the software developed is of high quality, it is now widely accepted that the various artifacts generated during the development process should be rigorously evaluated using a domain-specific quality model. However, a domain-specific quality model should be derived from a generic quality model that is time-proven, well-validated and widely accepted. This thesis lays down a clear definition of a quality meta-model and then identifies the various quality meta-models existing in research and practice. The existing quality meta-models are then compared, against a set of criteria, to identify which model is the most adaptable to various domains. The categories are specified using the CART algorithm, which is a purely binary tree architecture operating on true/false decisions over the meta-model: if an item is found in a category it falls under the true branch, otherwise under the false branch.

  1. Parallel flow accumulation algorithms for graphical processing units with application to RUSLE model

    Science.gov (United States)

    Sten, Johan; Lilja, Harri; Hyväluoma, Jari; Westerholm, Jan; Aspnäs, Mats

    2016-04-01

    Digital elevation models (DEMs) are widely used in the modeling of surface hydrology, which typically includes the determination of flow directions and flow accumulation. The use of high-resolution DEMs increases the accuracy of flow accumulation computation, but as a drawback, the computational time may become excessively long if large areas are analyzed. In this paper we investigate the use of graphical processing units (GPUs) for efficient flow accumulation calculations. We present two new parallel flow accumulation algorithms based on dependency transfer and topological sorting and compare them to previously published flow transfer and indegree-based algorithms. We benchmark the GPU implementations against industry standards, ArcGIS and SAGA. With the flow-transfer D8 flow routing model and binary input data, a speed up of 19 is achieved compared to ArcGIS and 15 compared to SAGA. We show that on GPUs the topological sort-based flow accumulation algorithm leads on average to a speedup by a factor of 7 over the flow-transfer algorithm. Thus a total speed up of the order of 100 is achieved. We test the algorithms by applying them to the Revised Universal Soil Loss Equation (RUSLE) erosion model. For this purpose we present parallel versions of the slope, LS factor and RUSLE algorithms and show that the RUSLE erosion results for an area of 12 km x 24 km containing 72 million cells can be calculated in less than a second. Since flow accumulation is needed in many hydrological models, the developed algorithms may find use in many other applications than RUSLE modeling. The algorithm based on topological sorting is particularly promising for dynamic hydrological models where flow accumulations are repeatedly computed over an unchanged DEM.
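    The topological-sort-based flow accumulation can be sketched in serial form with Kahn's algorithm over a single-flow-direction (D8-style) receiver array; the GPU variants above parallelise this dependency structure. Array layout and names below are illustrative assumptions:

```python
from collections import deque

def flow_accumulation(receiver):
    """Flow accumulation over a single-flow-direction grid.

    receiver[i] is the index of the cell that cell i drains into, or -1
    for an outlet. Cells are processed in topological order (Kahn's
    algorithm): a cell is emitted once all of its upstream contributors
    have been accumulated. Each cell contributes one unit of area.
    """
    n = len(receiver)
    acc = [1] * n
    indeg = [0] * n
    for r in receiver:
        if r >= 0:
            indeg[r] += 1
    queue = deque(i for i in range(n) if indeg[i] == 0)  # ridge cells
    while queue:
        i = queue.popleft()
        r = receiver[i]
        if r >= 0:
            acc[r] += acc[i]
            indeg[r] -= 1
            if indeg[r] == 0:    # all upstream inputs received
                queue.append(r)
    return acc
```

    The receiver array is what a D8 flow-direction pass over the DEM produces; because it does not change between model runs, the sort can be reused across repeated accumulations, which is the property the paper exploits for dynamic models.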

  2. An adaptive algorithm for the cornea modeling from keratometric data

    CERN Document Server

    Martinez-Finkelshtein, Andrei; Castro-Luna, Gracia M; Alio, Jorge L

    2010-01-01

    In this paper we describe an adaptive, multi-scale algorithm for the parsimonious fit of corneal surface data that adapts the number of functions used in the reconstruction to the conditions of each cornea. The method also implements a dynamic selection of the parameters and the management of noise. It can be used for the real-time reconstruction of both altimetric data and corneal power maps from the data collected by keratoscopes, such as Placido-ring-based topographers, which is decisive for the early detection of corneal diseases such as keratoconus. Numerical experiments show that the algorithm exhibits a steady exponential error decay, independently of the level of aberration of the cornea. The complexity of each anisotropic Gaussian basis function in the functional representation is the same, but their parameters vary to fit the current scale. This scale is determined only by the residual errors and not by the iteration number. Finally, the position and clustering of their centers,...

  3. Finding model parameters: Genetic algorithms and the numerical modelling of quartz luminescence

    Energy Technology Data Exchange (ETDEWEB)

    Adamiec, Grzegorz [Department of Radioisotopes, Institute of Physics, Silesian University of Technology, ul. Krzywoustego 2, 44-100 Gliwice (Poland)]. E-mail: grzegorz.adamiec@polsl.pl; Bluszcz, Andrzej [Department of Radioisotopes, Institute of Physics, Silesian University of Technology, ul. Krzywoustego 2, 44-100 Gliwice (Poland); Bailey, Richard [Department of Geography, Royal Holloway, University of London, Egham, Surrey, TW20 0EX (United Kingdom); Garcia-Talavera, Marta [LIBRA, Centro I-D, Campus Miguel Delibes, 47011 Valladolid (Spain)

    2006-08-15

    The paper presents an application of genetic algorithms (GAs) to the problem of finding appropriate parameter values for the numerical simulation of quartz thermoluminescence (TL). We show that with the use of GAs it is possible to achieve a very good match between simulated and experimentally measured characteristics of quartz, for example the thermal activation characteristics of fired quartz. The rate equations of charge transport in the numerical model of luminescence in quartz contain a large number of parameters (trap depths, frequency factors, populations, charge capture probabilities, optical detrapping probabilities, and recombination probabilities). Given that comprehensive models consist of over 10 traps, finding model parameters proves a very difficult task. Manual parameter changes are very time consuming and allow only a limited degree of accuracy. GAs provide a semi-automatic way of finding appropriate parameters.

  4. Programming Non-Trivial Algorithms in the Measurement Based Quantum Computation Model

    Energy Technology Data Exchange (ETDEWEB)

    Alsing, Paul [United States Air Force Research Laboratory, Wright-Patterson Air Force Base; Fanto, Michael [United States Air Force Research Laboratory, Wright-Patterson Air Force Base; Lott, Capt. Gordon [United States Air Force Research Laboratory, Wright-Patterson Air Force Base; Tison, Christoper C. [United States Air Force Research Laboratory, Wright-Patterson Air Force Base

    2014-01-01

    We provide a set of prescriptions for implementing a quantum circuit model algorithm as a measurement based quantum computing (MBQC) algorithm [1, 2] via a large cluster state. As a means of illustration we draw upon our numerical modeling experience to describe a large graph state capable of searching a logical 8-element list (a non-trivial version of Grover's algorithm [3] with feedforward). We develop several prescriptions based on analytic evaluation of cluster states and graph state equations which can be generalized into any circuit model operations. Such a resulting cluster state will be able to carry out the desired operation with appropriate measurements and feedforward error correction. We also discuss the physical implementation and the analysis of the principal 3-qubit entangling gate (Toffoli) required for a non-trivial feedforward realization of an 8-element Grover search algorithm.

  5. Algorithm comparison and benchmarking using a parallel spectra transform shallow water model

    Energy Technology Data Exchange (ETDEWEB)

    Worley, P.H. [Oak Ridge National Lab., TN (United States); Foster, I.T.; Toonen, B. [Argonne National Lab., IL (United States)

    1995-04-01

    In recent years, a number of computer vendors have produced supercomputers based on a massively parallel processing (MPP) architecture. These computers have been shown to be competitive in performance with conventional vector supercomputers for some applications. As spectral weather and climate models are heavy users of vector supercomputers, it is interesting to determine how these models perform on MPPs, and which MPPs are best suited to the execution of spectral models. The benchmarking of MPPs is complicated by the fact that different algorithms may be more efficient on different architectures. Hence, a comprehensive benchmarking effort must answer two related questions: which algorithm is most efficient on each computer, and how do the most efficient algorithms compare across computers? In general, these are difficult questions to answer because of the high cost associated with implementing and evaluating a range of different parallel algorithms on each MPP platform.

  6. A New 3D Wireless Directional Sensing Model and Coverage Enhancement Algorithm

    Institute of Scientific and Technical Information of China (English)

    Xiaojun Bi; Pengfei Diao

    2016-01-01

    Conventionally, coverage control for each sensor in directional sensor networks is based on a 2D directional sensing model, but the 2D model cannot accurately characterize the real environment. To solve this problem, a new 3D directional sensor model and coverage enhancement algorithm are proposed. The pitch angle and deviation angle can be adjusted to enhance the coverage rate. The coverage enhancement algorithm is based on an improved gravitational search algorithm (GSA); the two improved strategies are a directional mutation strategy and an individual evolution strategy. A set of simulations shows that our coverage enhancement algorithm performs well in improving the coverage rate of the wireless directional sensor network for different numbers of nodes, different virtual angles and different sensing radii.
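    As a 2D illustration of directional sensing (the paper's model adds pitch and deviation angles in 3D), a point-coverage test for a sector-shaped field of view can be written as below; the sensor tuple layout and function name are assumptions of this sketch:

```python
import math

def covers(sensor, point):
    """Whether a directional sensor sees a point.

    A sensor here is (x, y, heading, half_angle, radius): a sector of
    aperture 2*half_angle around `heading`, out to `radius`. Coverage
    rate over a region is then the fraction of sample points covered
    by at least one sensor.
    """
    sx, sy, heading, half_angle, radius = sensor
    dx, dy = point[0] - sx, point[1] - sy
    dist = math.hypot(dx, dy)
    if dist > radius or dist == 0:
        return dist == 0            # a sensor covers its own location
    angle = math.atan2(dy, dx)
    # smallest signed angular difference, wrapped to (-pi, pi]
    diff = (angle - heading + math.pi) % (2 * math.pi) - math.pi
    return abs(diff) <= half_angle
```

    A coverage-enhancement search (GSA, PSO, or similar) would then adjust the heading-type angles of all sensors to maximise the covered fraction of sampled points.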

  7. Genetic Algorithm Calibration of Probabilistic Cellular Automata for Modeling Mining Permit Activity

    Science.gov (United States)

    Louis, S.J.; Raines, G.L.

    2003-01-01

    We use a genetic algorithm to calibrate a spatially and temporally resolved cellular automaton to model mining activity on public land in Idaho and western Montana. The genetic algorithm searches through a space of transition rule parameters of a two-dimensional cellular automaton model to find rule parameters that fit observed mining activity data. Previous work by one of the authors in calibrating the cellular automaton took weeks - the genetic algorithm takes a day and produces rules leading to about the same (or better) fit to observed data. These preliminary results indicate that genetic algorithms are a viable tool for calibrating cellular automata for this application. Experience gained during the calibration of this cellular automaton suggests that mineral resource information is a critical factor in the quality of the results. With automated calibration, further refinements of how the mineral-resource information is provided to the cellular automaton will probably improve our model.

  8. An Expectation Maximization Algorithm to Model Failure Times by Continuous-Time Markov Chains

    Directory of Open Access Journals (Sweden)

    Qihong Duan

    2010-01-01

    Full Text Available In many applications, the failure rate function may present a bathtub-shaped curve. In this paper, an expectation maximization algorithm is proposed to construct a suitable continuous-time Markov chain which models the failure time data as the first time of reaching the absorbing state. The system is assumed to be described by methods such as supplementary variables and the device of stages. Given a data set, the maximum likelihood estimators of the initial distribution and the infinitesimal transition rates of the Markov chain can be obtained by our novel algorithm. Suppose that there are m transient states in the system and that there are n failure time data. The devised algorithm only needs to compute the exponential of m×m upper triangular matrices O(nm²) times in each iteration. Finally, the algorithm is applied to two real data sets, which indicates the practicality and efficiency of our algorithm.
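The likelihood ingredient at the core of this model is the phase-type density f(t) = α·exp(Qt)·q, where Q is the (upper triangular) generator restricted to the m transient states and q holds the absorption rates. A minimal sketch follows; the Taylor-series matrix exponential and the Erlang-2 example are illustrative choices, not the paper's implementation.

```python
import math

def mat_mult(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def expm(A, terms=40):
    """Plain Taylor-series matrix exponential; adequate for the small
    upper triangular matrices used here (illustration only)."""
    n = len(A)
    result = [[float(i == j) for j in range(n)] for i in range(n)]
    term = [row[:] for row in result]
    for k in range(1, terms):
        term = [[v / k for v in row] for row in mat_mult(term, A)]  # A^k / k!
        result = [[result[i][j] + term[i][j] for j in range(n)]
                  for i in range(n)]
    return result

def phase_type_density(alpha, Q, t):
    """f(t) = alpha * exp(Q t) * q, with q the absorption-rate vector."""
    n = len(Q)
    q = [-sum(Q[i]) for i in range(n)]          # rate of leaving to absorption
    P = expm([[Q[i][j] * t for j in range(n)] for i in range(n)])
    return sum(alpha[i] * P[i][j] * q[j] for i in range(n) for j in range(n))

# Erlang-2 with rate 1 as a sanity check: the exact density is t * exp(-t).
alpha = [1.0, 0.0]
Q = [[-1.0, 1.0], [0.0, -1.0]]
```

The E-step of the paper's algorithm evaluates many such matrix exponentials, which is why the O(nm²) count per iteration matters.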

  9. Autotuning algorithm of particle swarm PID parameter based on D-Tent chaotic model

    Institute of Scientific and Technical Information of China (English)

    Min Zhu; Chunling Yang; Weiliang Li

    2013-01-01

    An improved particle swarm algorithm based on the D-Tent chaotic model is put forward as an improvement on the standard particle swarm algorithm. The late-stage convergence rate of the proposed algorithm is improved by revising the inertia weight of the global optimal particles and by introducing a D-Tent chaotic sequence. After tests on typical functions and autotuning tests of proportional-integral-derivative (PID) parameters, a simulation is finally made of the servo control system of a permanent magnet synchronous motor (PMSM) under double-loop control of rotating speed and current, utilizing the chaotic particle swarm algorithm. Studies show that the proposed algorithm can reduce the number of iterations and improve the convergence rate while still obtaining the global optimal solution.
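A tent-map chaotic sequence of the kind such algorithms inject into the inertia weight can be sketched as below. The plain tent map and the linear-decay inertia formula are generic stand-ins; the paper's D-Tent model is a modified variant.

```python
def tent_sequence(x0, n, mu=0.7):
    """n iterates of the tent map: x -> x/mu if x < mu else (1-x)/(1-mu)."""
    xs, x = [], x0
    for _ in range(n):
        x = x / mu if x < mu else (1.0 - x) / (1.0 - mu)
        xs.append(x)
    return xs

def chaotic_inertia(w_max, w_min, k, k_max, z):
    """Inertia weight that decays linearly over iterations k = 0..k_max
    and is perturbed by a chaotic value z in [0, 1]."""
    return (w_max - w_min) * (k_max - k) / k_max + w_min * z

seq = tent_sequence(0.321, 100)
```

In a chaotic PSO, each particle (or the global best) would draw its `z` from such a sequence instead of a pseudo-random generator, which spreads the search more evenly over the unit interval.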

  10. An adaptive attitude algorithm based on a current statistical model for maneuvering acceleration

    Directory of Open Access Journals (Sweden)

    Menglong Wang

    2017-02-01

    Full Text Available A current statistical model for maneuvering acceleration using an adaptive extended Kalman filter (CS-MAEKF algorithm is proposed to solve problems existing in conventional extended Kalman filters such as large estimation error and divergent tendencies in the presence of continuous maneuvering acceleration. A membership function is introduced in this algorithm to adaptively modify the upper and lower limits of loitering vehicles’ maneuvering acceleration and for real-time adjustment of maneuvering acceleration variance. This allows the algorithm to have superior static and dynamic performance for loitering vehicles undergoing different maneuvers. Digital simulations and dynamic flight testing show that the yaw angle accuracy of the algorithm is 30% better than conventional algorithms, and pitch and roll angle calculation precision is improved by 60%. The mean square deviation of heading and attitude angle error during dynamic flight is less than 3.05°. Experimental results show that CS-MAEKF meets the application requirements of miniature loitering vehicles.

  11. A genetic algorithm based global search strategy for population pharmacokinetic/pharmacodynamic model selection.

    Science.gov (United States)

    Sale, Mark; Sherer, Eric A

    2015-01-01

    The current algorithm for selecting a population pharmacokinetic/pharmacodynamic model is based on the well-established forward addition/backward elimination method. A central strength of this approach is the opportunity for a modeller to continuously examine the data and postulate new hypotheses to explain observed biases. This algorithm has served the modelling community well, but the model selection process has essentially remained unchanged for the last 30 years. During this time, more robust approaches to model selection have been made feasible by new technology and dramatic increases in computation speed. We review these methods, with emphasis on genetic algorithm approaches and discuss the role these methods may play in population pharmacokinetic/pharmacodynamic model selection.
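The advantage of a global GA search over one-step stepwise selection can be sketched with a toy scoring function. The bit-string "models", the invented score standing in for an information criterion, and the GA settings are all illustrative; the point is the interaction term that greedy forward addition cannot discover.

```python
import random

def score(bits):
    """Lower is better. Features 0 and 1 only help jointly, so a greedy
    forward search that adds one feature at a time never selects them."""
    s = 10.0 + sum(bits)        # complexity penalty per included feature
    if bits[0] and bits[1]:
        s -= 5.0                # joint effect of features 0 and 1
    return s

def ga_select(n_bits=6, pop=30, gens=40, seed=0):
    """Elitist GA over bit strings with per-bit mutation."""
    rng = random.Random(seed)
    P = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop)]
    for _ in range(gens):
        P.sort(key=score)
        elite = P[: pop // 2]
        P = elite + [
            [b if rng.random() < 0.9 else 1 - b for b in rng.choice(elite)]
            for _ in range(pop - len(elite))
        ]
    return min(P, key=score)

best = ga_select()
```

Forward addition from the empty model scores 10.0, sees that every single feature makes things worse (11.0), and stops; the GA reaches the true optimum of 7.0 with features 0 and 1 on.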

  12. Hybrid nested sampling algorithm for Bayesian model selection applied to inverse subsurface flow problems

    KAUST Repository

    Elsheikh, Ahmed H.

    2014-02-01

    A Hybrid Nested Sampling (HNS) algorithm is proposed for efficient Bayesian model calibration and prior model selection. The proposed algorithm combines the Nested Sampling (NS) algorithm, Hybrid Monte Carlo (HMC) sampling, and gradient estimation using the Stochastic Ensemble Method (SEM). NS is an efficient sampling algorithm that can be used for Bayesian calibration and for estimating the Bayesian evidence needed for prior model selection, and it has the advantage of computational feasibility. Within the nested sampling algorithm, a constrained sampling step is performed; for this step, we utilize HMC to reduce the correlation between successive sampled states. HMC relies on the gradient of the logarithm of the posterior distribution, which we estimate using a stochastic ensemble method based on an ensemble of directional derivatives. SEM requires only forward model runs, so the simulator is used as a black box and no adjoint code is needed. The developed HNS algorithm is successfully applied to Bayesian calibration and prior model selection for several nonlinear subsurface flow problems. © 2013 Elsevier Inc.
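The basic nested-sampling loop that HNS builds on can be sketched as follows. This toy uses plain rejection sampling for the constrained step (where the paper substitutes HMC) and an invented 1-D Gaussian likelihood with a uniform prior on [-5, 5], whose evidence is known in closed form.

```python
import math, random

def log_add(a, b):
    """log(exp(a) + exp(b)) without overflow."""
    if a == -math.inf:
        return b
    hi, lo = max(a, b), min(a, b)
    return hi + math.log1p(math.exp(lo - hi))

def log_like(x):
    return -0.5 * x * x              # unnormalised Gaussian log-likelihood

def nested_sampling(n_live=100, iters=500, seed=0):
    """Estimate the log-evidence of log_like under a uniform prior on [-5, 5]."""
    rng = random.Random(seed)
    live = [rng.uniform(-5, 5) for _ in range(n_live)]
    log_Z = -math.inf
    log_width = math.log(1.0 - math.exp(-1.0 / n_live))
    for _ in range(iters):
        worst = min(live, key=log_like)
        threshold = log_like(worst)
        log_Z = log_add(log_Z, threshold + log_width)   # Z += L_i * w_i
        log_width -= 1.0 / n_live                       # prior mass shrinks
        while True:   # constrained step: rejection sampling from the prior
            x = rng.uniform(-5, 5)
            if log_like(x) > threshold:
                live[live.index(worst)] = x
                break
    return log_Z

log_Z = nested_sampling()   # true value: log(sqrt(2*pi)/10) ~ -1.38
```

The rejection loop is exactly what becomes prohibitive in high dimensions; replacing it with HMC moves inside the likelihood constraint is the "hybrid" part of the paper's method.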

  13. Optimizing the Forward Algorithm for Hidden Markov Model on IBM Roadrunner clusters

    Directory of Open Access Journals (Sweden)

    SOIMAN, S.-I.

    2015-05-01

    Full Text Available In this paper we present a parallel solution of the Forward algorithm for Hidden Markov Models. The Forward algorithm recursively computes the probability of being in a hidden state of a Markov model at a certain time, a process that requires large computational resources for models with many states and long observation sequences. To reduce the computational time, our solution parallelizes the Forward algorithm on multiple levels. Two types of cores, integrated on the same PowerXCell8i chip, were used in our implementation, one for each level of parallelization. This hybrid processor architecture allowed us to obtain a speedup factor of over 40 relative to the sequential algorithm for a model with 24 states and 25 million observable symbols. Experimental results showed that the parallel Forward algorithm can evaluate the probability of an observation sequence on a hidden Markov model 40 times faster than the classic one does. Based on the performance obtained, we demonstrate the applicability of this parallel implementation of the Forward algorithm to complex problems such as large-vocabulary speech recognition.
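The sequential recursion being parallelized is the textbook Forward algorithm. A reference version on a toy two-state HMM (all numbers invented) looks like this:

```python
def forward(obs, pi, A, B):
    """Forward algorithm: returns P(obs | model) for an HMM with initial
    probabilities pi, transition matrix A and emission matrix B."""
    n = len(pi)
    alpha = [pi[i] * B[i][obs[0]] for i in range(n)]       # initialisation
    for o in obs[1:]:                                      # recursion over time
        alpha = [B[j][o] * sum(alpha[i] * A[i][j] for i in range(n))
                 for j in range(n)]
    return sum(alpha)                                      # termination

pi = [0.5, 0.5]
A = [[0.5, 0.5], [0.5, 0.5]]
B = [[0.9, 0.1], [0.1, 0.9]]
```

Each time step depends on the previous `alpha`, so the parallelism exploited in the paper comes from splitting the work within a step (and across cores), not from reordering the recursion. Real implementations also scale or work in log space to avoid underflow on long sequences.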

  14. Modelling Systems of Classical/Quantum Identical Particles by Focusing on Algorithms

    Science.gov (United States)

    Guastella, Ivan; Fazio, Claudio; Sperandeo-Mineo, Rosa Maria

    2012-01-01

    A procedure modelling ideal classical and quantum gases is discussed. The proposed approach is mainly based on the idea that modelling and algorithm analysis can provide a deeper understanding of particularly complex physical systems. Appropriate representations and physical models able to mimic possible pseudo-mechanisms of functioning and having…

  16. Efficacy and tolerability of a high loading dose (25,000 IU weekly) vitamin D3 supplementation in obese children with vitamin D insufficiency/deficiency

    NARCIS (Netherlands)

    Radhakishun, Nalini N E; van Vliet, Mariska; Poland, Dennis C W; Weijer, Olivier; Beijnen, Jos H; Brandjes, Dees P M; Diamant, Michaela; von Rosenstiel, Ines A

    2014-01-01

    BACKGROUND: The recommended dose of vitamin D supplementation of 400 IU/day might be inadequate to treat obese children with vitamin D insufficiency. Therefore, we tested the efficacy and tolerability of a high loading dose vitamin D3 supplementation of 25,000 IU weekly in multiethnic obese children.

  18. Melanoma prognostic model using tissue microarrays and genetic algorithms.

    Science.gov (United States)

    Gould Rothberg, Bonnie E; Berger, Aaron J; Molinaro, Annette M; Subtil, Antonio; Krauthammer, Michael O; Camp, Robert L; Bradley, William R; Ariyan, Stephan; Kluger, Harriet M; Rimm, David L

    2009-12-01

    As a result of the questionable risk-to-benefit ratio of adjuvant therapies, stage II melanoma is currently managed by observation because available clinicopathologic parameters cannot identify the 20% to 60% of such patients likely to develop metastatic disease. Here, we propose a multimarker molecular prognostic assay that can help triage patients at increased risk of recurrence. Protein expression for 38 candidates relevant to melanoma oncogenesis was evaluated using the automated quantitative analysis (AQUA) method for immunofluorescence-based immunohistochemistry in formalin-fixed, paraffin-embedded specimens from a cohort of 192 primary melanomas collected during 1959 to 1994. The prognostic assay was built using a genetic algorithm and validated on an independent cohort of 246 serial primary melanomas collected from 1997 to 2004. Multiple iterations of the genetic algorithm yielded a consistent five-marker solution. A favorable prognosis was predicted by ATF2 ln(non-nuclear/nuclear AQUA score ratio) of more than -0.052, p21(WAF1) nuclear compartment AQUA score of more than 12.98, p16(INK4A) ln(non-nuclear/nuclear AQUA score ratio) of ≤ -0.083, beta-catenin total AQUA score of more than 38.68, and fibronectin total AQUA score of ≤ 57.93. Primary tumors that met at least four of these five conditions were considered a low-risk group, and those that met three or fewer conditions formed a high-risk group (log-rank P < .0001). Multivariable proportional hazards analysis adjusting for clinicopathologic parameters shows that the high-risk group has significantly reduced survival in both the discovery (hazard ratio = 2.84; 95% CI, 1.46 to 5.49; P = .002) and validation (hazard ratio = 2.72; 95% CI, 1.12 to 6.58; P = .027) cohorts. This multimarker prognostic assay, an independent determinant of melanoma survival, might be beneficial in improving the selection of stage II patients for adjuvant therapy.
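The published five-condition rule can be restated as code. The thresholds are taken from the abstract; the dictionary keys are hypothetical names for the AQUA-derived marker values of a single tumour.

```python
def risk_group(m):
    """Count how many of the five marker conditions a tumour meets;
    meeting at least four predicts the low-risk group."""
    conditions = [
        m["atf2_ln_ratio"] > -0.052,        # ATF2 ln(non-nuclear/nuclear)
        m["p21_nuclear"] > 12.98,           # p21(WAF1) nuclear AQUA score
        m["p16_ln_ratio"] <= -0.083,        # p16(INK4A) ln(non-nuclear/nuclear)
        m["beta_catenin_total"] > 38.68,    # beta-catenin total AQUA score
        m["fibronectin_total"] <= 57.93,    # fibronectin total AQUA score
    ]
    return "low risk" if sum(conditions) >= 4 else "high risk"
```
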

  19. An algorithm of multi-model spatial overlay based on three-dimensional terrain model TIN and its application

    Institute of Scientific and Technical Information of China (English)

    WANG Shao-an; ZHANG Zi-ping; GONG Jian-ya

    2001-01-01

    3D-GIS spatial overlay analysis is attracting broad attention in international academia and is a research focus; it is one of the important functions of spatial analysis using GIS technology. An algorithm of multi-model spatial overlay based on the three-dimensional terrain model TIN is introduced in this paper, which can be used to solve the TIN-based three-dimensional overlay operation in spatial analysis. The feasibility and validity of this algorithm are verified, and the algorithm has been used successfully in three-dimensional overlay and region variation overlay analysis.

  1. Optimized vehicle scheduling and filling model based on effective space and integrated solving algorithm

    Institute of Scientific and Technical Information of China (English)

    ZHAO Peng; MU Xin; YAO Jin-hua; WANG Yong; YANG Xiu-tai

    2007-01-01

    We established an integrated and optimized model combining the vehicle scheduling problem and the vehicle filling problem, to solve an extremely complex delivery mode (multi-type vehicles, non-full loads, pickup and delivery) in logistics and delivery systems. The integrated and optimized model is based on our previous research result, the effective space method. An integrated algorithm suitable for the model is proposed and corresponding computer programs were designed to solve practical problems. The results indicate that the programs can work out optimized delivery routes and concrete loading plans. The model and algorithm have many virtues and are valuable in practice.

  2. Forward and backward models for fault diagnosis based on parallel genetic algorithms

    Institute of Scientific and Technical Information of China (English)

    Yi LIU; Ying LI; Yi-jia CAO; Chuang-xin GUO

    2008-01-01

    In this paper, a mathematical model consisting of forward and backward models is built on parallel genetic algorithms (PGAs) for fault diagnosis in a transmission power system. A new method to reduce the scale of fault sections is developed in the forward model, and the message passing interface (MPI) approach is chosen to parallelize the genetic algorithms by the global single-population master-slave method (GPGAs). The proposed approach is applied to a sample system consisting of 28 sections, 84 protective relays and 40 circuit breakers. Simulation results show that the new model based on GPGAs can achieve very fast computation in online applications to large-scale power systems.

  3. The PX-EM algorithm for fast stable fitting of Henderson's mixed model

    Directory of Open Access Journals (Sweden)

    Van Dyk David A

    2000-03-01

    Full Text Available This paper presents procedures for implementing the PX-EM algorithm of Liu, Rubin and Wu to compute REML estimates of variance-covariance components in Henderson's linear mixed models. The class of models considered encompasses several correlated random factors having the same vector length, e.g., as in random regression models for longitudinal data analysis and in sire-maternal grandsire models for genetic evaluation. Numerical examples are presented to illustrate the procedures. Much better results in terms of convergence characteristics (number of iterations and time required for convergence) are obtained for PX-EM relative to the basic EM algorithm in the random regression case.

  4. A study on the application of topic models to motif finding algorithms.

    Science.gov (United States)

    Basha Gutierrez, Josep; Nakai, Kenta

    2016-12-22

    Topic models are statistical algorithms which try to discover the structure of a set of documents according to the abstract topics contained in them. Here we apply this approach to the discovery of the structure of the transcription factor binding sites (TFBS) contained in a set of biological sequences, a fundamental problem in molecular biology research for the understanding of transcriptional regulation. We present two methods that make use of topic models for motif finding. First, we developed an algorithm in which a set of biological sequences is treated as a collection of text documents, and the k-mers contained in them as words, in order to build a correlated topic model (CTM) and iteratively reduce its perplexity. We also used the perplexity measurement of CTMs to improve our previous algorithm, based on a genetic algorithm and several statistical coefficients. The algorithms were tested with 56 data sets from four different species and compared to 14 other methods by the use of several coefficients, both at nucleotide and site level. The results of our first approach showed a performance comparable to the other methods studied, especially at site level and in sensitivity scores, in which it scored better than any of the 14 existing tools. In the case of our previous algorithm, the new approach with the addition of the perplexity measurement clearly outperformed all of the other methods in sensitivity, both at nucleotide and site level, and in overall performance at site level. The statistics obtained show that the performance of a motif finding method based on the use of a CTM is satisfying enough to conclude that the application of topic models is a valid method for developing motif finding algorithms. Moreover, the addition of topic models to a previously developed method dramatically increased its performance, suggesting that this combined algorithm can be a useful tool to successfully predict motifs in different kinds of DNA sequence sets.

  5. Development of a multi-objective optimization algorithm using surrogate models for coastal aquifer management

    Science.gov (United States)

    Kourakos, George; Mantoglou, Aristotelis

    2013-02-01

    The demand for fresh water in coastal areas and islands can be very high due to increased local needs and tourism. A multi-objective optimization methodology is developed, involving minimization of economic and environmental costs while satisfying water demand. The methodology considers desalinization of pumped water and injection of treated water into the aquifer. Variable-density aquifer models are computationally intractable when integrated in optimization algorithms. In order to alleviate this problem, a multi-objective optimization algorithm is developed combining surrogate models based on Modular Neural Networks [MOSA(MNNs)]. The surrogate models are trained adaptively during optimization based on a genetic algorithm. In the crossover step, each pair of parents generates a pool of offspring which are evaluated using the fast surrogate model. Then, the most promising offspring are evaluated using the exact numerical model. This procedure eliminates errors in the Pareto solution due to imprecise predictions of the surrogate model. The method offers important advances over previous methods, such as precise evaluation of the Pareto set and mitigation of the propagation of errors due to surrogate model approximations. The method is applied to an aquifer on the Greek island of Santorini. The results show that the new MOSA(MNN) algorithm offers a significant reduction in computational time compared to previous methods (in the case study it requires only 5% of the time required by other methods). Further, the Pareto solution is better than the solution obtained by alternative algorithms.
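The surrogate-screened crossover step described above can be sketched as follows. Both models are invented stand-ins: the exact model for the expensive variable-density simulation, the surrogate for the cheap neural-network approximation (deliberately given a small bias).

```python
import random

def exact_model(x):
    """Stand-in for the expensive simulation: minimum at x = (1, 1, 1)."""
    return sum((xi - 1.0) ** 2 for xi in x)

def surrogate_model(x):
    """Stand-in for the cheap surrogate: slightly biased minimum at 0.9."""
    return sum((xi - 0.9) ** 2 for xi in x)

def screened_offspring(p1, p2, pool_size=20, keep=2, seed=0):
    """Each parent pair breeds a pool of candidates; the surrogate ranks
    the whole pool and only the `keep` most promising children are
    evaluated with the exact model."""
    rng = random.Random(seed)
    pool = []
    for _ in range(pool_size):
        child = [rng.uniform(min(a, b), max(a, b)) for a, b in zip(p1, p2)]
        child = [c + rng.gauss(0.0, 0.05) for c in child]   # blend + mutation
        pool.append(child)
    pool.sort(key=surrogate_model)               # cheap screening of 20
    return min(pool[:keep], key=exact_model)     # only 2 expensive runs

child = screened_offspring([0.0, 0.0, 0.0], [2.0, 2.0, 2.0])
```

The final exact-model evaluation of the survivors is what prevents surrogate-approximation errors from propagating into the Pareto set.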

  6. Convergence analysis of the alternating RGLS algorithm for the identification of the reduced complexity Volterra model.

    Science.gov (United States)

    Laamiri, Imen; Khouaja, Anis; Messaoud, Hassani

    2015-03-01

    In this paper we provide a convergence analysis of the alternating RGLS (Recursive Generalized Least Squares) algorithm used for the identification of the reduced-complexity Volterra model describing stochastic non-linear systems. The reduced Volterra model used is the 3rd-order SVD-PARAFAC-Volterra model obtained using the Singular Value Decomposition (SVD) and the Parallel Factor (PARAFAC) tensor decomposition of the quadratic and the cubic kernels, respectively, of the classical Volterra model. The Alternating RGLS (ARGLS) algorithm consists of executing the classical RGLS algorithm in an alternating way. The ARGLS convergence was proved using the Ordinary Differential Equation (ODE) method. It is noted that the algorithm convergence cannot be ensured when the disturbance acting on the system to be identified has specific features. The ARGLS algorithm is tested in simulations on a numerical example satisfying the determined convergence conditions. To highlight the merits of the proposed algorithm, we compare it with the classical Alternating Recursive Least Squares (ARLS) algorithm presented in the literature. The comparison is carried out on a non-linear satellite channel and a benchmark CSTR (Continuous Stirred Tank Reactor) system. Moreover, the efficiency of the proposed identification approach is demonstrated on an experimental Communicating Two Tank System (CTTS).

  7. An Evolutionary Algorithm for Multiobjective Fuzzy Portfolio Selection Models with Transaction Cost and Liquidity

    Directory of Open Access Journals (Sweden)

    Wei Yue

    2015-01-01

    Full Text Available The major issues for mean-variance-skewness models are the errors in estimations that cause corner solutions and low diversity in the portfolio. In this paper, a multiobjective fuzzy portfolio selection model with transaction cost and liquidity is proposed to maintain the diversity of portfolio. In addition, we have designed a multiobjective evolutionary algorithm based on decomposition of the objective space to maintain the diversity of obtained solutions. The algorithm is used to obtain a set of Pareto-optimal portfolios with good diversity and convergence. To demonstrate the effectiveness of the proposed model and algorithm, the performance of the proposed algorithm is compared with the classic MOEA/D and NSGA-II through some numerical examples based on the data of the Shanghai Stock Exchange Market. Simulation results show that our proposed algorithm is able to obtain better diversity and a more evenly distributed Pareto front than the other two algorithms and that the proposed model can maintain quite well the diversity of portfolio. The purpose of this paper is to deal with portfolio problems in the weighted possibilistic mean-variance-skewness (MVS and possibilistic mean-variance-skewness-entropy (MVS-E frameworks with transaction cost and liquidity, and to provide investors with a set of Pareto-optimal investment strategies, as diversified as possible, rather than a single strategy.

  8. Clustering dynamic textures with the hierarchical EM algorithm for modeling video.

    Science.gov (United States)

    Mumtaz, Adeel; Coviello, Emanuele; Lanckriet, Gert R G; Chan, Antoni B

    2013-07-01

    Dynamic texture (DT) is a probabilistic generative model, defined over space and time, that represents a video as the output of a linear dynamical system (LDS). The DT model has been applied to a wide variety of computer vision problems, such as motion segmentation, motion classification, and video registration. In this paper, we derive a new algorithm for clustering DT models that is based on the hierarchical EM algorithm. The proposed clustering algorithm is capable of both clustering DTs and learning novel DT cluster centers that are representative of the cluster members in a manner that is consistent with the underlying generative probabilistic model of the DT. We also derive an efficient recursive algorithm for sensitivity analysis of the discrete-time Kalman smoothing filter, which is used as the basis for computing expectations in the E-step of the HEM algorithm. Finally, we demonstrate the efficacy of the clustering algorithm on several applications in motion analysis, including hierarchical motion clustering, semantic motion annotation, and learning bag-of-systems (BoS) codebooks for dynamic texture recognition.

  9. MIP models and hybrid algorithms for simultaneous job splitting and scheduling on unrelated parallel machines.

    Science.gov (United States)

    Eroglu, Duygu Yilmaz; Ozmutlu, H Cenk

    2014-01-01

    We developed mixed integer programming (MIP) models and hybrid genetic-local search algorithms for the scheduling problem of unrelated parallel machines with job-sequence- and machine-dependent setup times and with a job splitting property. The first contribution of this paper is to introduce novel algorithms which perform splitting and scheduling simultaneously with a variable number of subjobs. We propose a simple chromosome structure constituted by random key numbers in the hybrid genetic-local search algorithm (GAspLA). Random key numbers are used frequently in genetic algorithms, but they create additional difficulty when hybrid local search factors are implemented. We developed algorithms that adapt the results of local search into the genetic algorithm with a minimum of relocation operations on the genes' random key numbers; this is the second contribution of the paper. The third contribution is three new MIP models which perform splitting and scheduling simultaneously. The fourth contribution is the implementation of GAspLAMIP, which lets us verify the optimality of GAspLA for the studied combinations. The proposed methods are tested on a set of problems taken from the literature, and the results validate the effectiveness of the proposed algorithms.
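One common way to realize a random-key chromosome for parallel-machine scheduling is sketched below. The decoding rule (integer part of the scaled key picks the machine, fractional part orders the jobs on it) is a generic construction, not necessarily GAspLA's exact scheme; keys are assumed to lie in [0, 1).

```python
def decode(keys, n_machines):
    """keys: one float in [0, 1) per job. The integer part of
    key * n_machines selects the machine; sorting the fractional
    parts gives the processing order on each machine."""
    schedule = {m: [] for m in range(n_machines)}
    for job, key in enumerate(keys):
        machine = int(key * n_machines)
        frac = key * n_machines - machine
        schedule[machine].append((frac, job))
    return {m: [job for _, job in sorted(pairs)]
            for m, pairs in schedule.items()}
```

Because any vector of keys decodes to a feasible schedule, standard crossover and mutation never produce invalid offspring, which is the main appeal of random keys; the difficulty the paper addresses is writing a local-search improvement *back* into the keys with minimal relocation.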

  10. A robust and rapid algorithm for generating and transmitting multi-resolution three-dimensional models

    Institute of Scientific and Technical Information of China (English)

    2006-01-01

    Recent advances in 3D spatial data capture, such as high resolution satellite images and laser scanning as well as corresponding data processing and modeling technologies, have led to the generation of large datasets on terrains, buildings, roads and other features. The rapid transmission and visualization of 3D models has become a 'bottleneck' of internet-based applications. This paper proposes a robust algorithm to generate multi-resolution models for rapid visualization and network transmission of 3D models. Experiments were undertaken to evaluate the performance of the proposed algorithm. Experimental results demonstrate that the proposed algorithm achieves good performance in terms of running speed, accuracy, encoding of multi-resolution models, and network transmission.

  11. A variational surface hopping algorithm for the sub-Ohmic spin-boson model

    CERN Document Server

    Yao, Yao

    2013-01-01

    The Davydov D1 ansatz, which assigns an individual bosonic trajectory to each spin state, is an efficient, yet extremely accurate trial state for time-dependent variation of the sub-Ohmic spin-boson model [J. Chem. Phys. 138, 084111 (2013)]. A surface hopping algorithm is developed employing the Davydov D1 ansatz to study the spin dynamics with a sub-Ohmic bosonic bath. The algorithm takes into account both coherent and incoherent dynamics of the population evolution in a unified manner, and, compared with semiclassical surface hopping algorithms, hopping rates calculated in this work follow the Marcus formula more closely.

  12. A Branch and Bound Algorithm for the Protein Folding Problem in the HP Lattice Model

    Institute of Scientific and Technical Information of China (English)

    Mao Chen; Wen-Qi Huang

    2005-01-01

    A branch and bound algorithm is proposed for the two-dimensional protein folding problem in the HP lattice model. In this algorithm, the benefit of each possible location of hydrophobic monomers is evaluated and only promising nodes are kept for further branching at each level. The proposed algorithm is compared with other well-known methods for 10 benchmark sequences with lengths ranging from 20 to 100 monomers. The results indicate that our method is a very efficient and promising tool for the protein folding problem.
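The search space the branch and bound explores can be illustrated with a plain depth-first enumeration over self-avoiding conformations on the square lattice. This uses only the trivial self-avoidance prune, far weaker than the paper's benefit-based bounding, but it shows the tree structure and the H-H contact energy being optimized.

```python
def hp_energy(seq, path):
    """Energy = -(number of H-H contacts between non-consecutive residues)."""
    pos = {p: i for i, p in enumerate(path)}
    contacts = 0
    for (x, y), i in pos.items():
        if seq[i] != 'H':
            continue
        for q in ((x + 1, y), (x, y + 1)):   # right/up only: count each pair once
            j = pos.get(q)
            if j is not None and seq[j] == 'H' and abs(i - j) > 1:
                contacts += 1
    return -contacts

def best_fold(seq):
    """Exhaustive DFS over conformations; the first bond is fixed to (1, 0)
    to remove rotational symmetry."""
    best = [0]
    def dfs(path):
        if len(path) == len(seq):
            best[0] = min(best[0], hp_energy(seq, path))
            return
        x, y = path[-1]
        for q in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if q not in path:                # self-avoidance prune
                dfs(path + [q])
    dfs([(0, 0), (1, 0)])
    return best[0]
```

Exhaustive search like this is only feasible for very short chains; the point of the paper's bound on the achievable benefit of unplaced hydrophobic monomers is to cut this tree down for sequences of 20 to 100 monomers.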

  13. Rejection-free Monte Carlo algorithms for models with continuous degrees of freedom.

    Science.gov (United States)

    Muñoz, J D; Novotny, M A; Mitchell, S J

    2003-02-01

    We construct a rejection-free Monte Carlo algorithm for a system with continuous degrees of freedom. We illustrate the algorithm by applying it to the classical three-dimensional Heisenberg model with canonical Metropolis dynamics. We obtain the lifetime of the metastable state following a reversal of the external magnetic field. Our rejection-free algorithm obtains results in agreement with a direct implementation of the Metropolis dynamic and requires orders of magnitude less computational time at low temperatures. The treatment is general and can be extended to other dynamics and other systems with continuous degrees of freedom.

  14. Assessing the Graphical and Algorithmic Structure of Hierarchical Coloured Petri Net Models

    Directory of Open Access Journals (Sweden)

    George Benwell

    1994-11-01

    Full Text Available Petri nets, as a modelling formalism, are utilised for the analysis of processes, whether for explicit understanding, database design or business process re-engineering. The formalism, however, can be represented on a virtual continuum from highly graphical to largely algorithmic. The use and understanding of the formalism will, in part, therefore depend on the resultant complexity and power of the representation and, on the graphical or algorithmic preference of the user. This paper develops a metric which will indicate the graphical or algorithmic tendency of hierarchical coloured Petri nets.

  15. Interface tension of the 3d 4-state Potts model using the Wang-Landau algorithm

    CERN Document Server

    Hietanen, A

    2011-01-01

    We study the interface tension of the 4-state Potts model in three dimensions using the Wang-Landau algorithm. The interface tension is given by the ratio of the partition function with a twisted boundary condition in one direction and periodic boundary conditions in all other directions over the partition function with periodic boundary conditions in all directions. With the Wang-Landau algorithm we can explicitly calculate both partition functions and obtain the result for all temperatures. We find solid numerical evidence for perfect wetting. Our algorithm is tested by calculating thermodynamic quantities at the phase transition point.
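The flat-histogram machinery can be sketched on a deliberately tiny toy problem, where the "energy" is the sum of two dice and the exact density of states (1, 2, 3, 4, 5, 6, 5, 4, 3, 2, 1 for sums 2..12) is known, so the Wang-Landau estimate is easy to check. The Potts study above applies the same idea to the spin-model partition functions.

```python
import math, random

def wang_landau(f_final=1e-5, flat=0.8, seed=0):
    rng = random.Random(seed)
    energies = range(2, 13)
    log_g = {e: 0.0 for e in energies}     # running log density of states
    state = (1, 1)
    log_f = 1.0                            # log of the modification factor
    while log_f > f_final:
        hist = {e: 0 for e in energies}
        while True:
            die = rng.randrange(2)         # propose re-rolling one die
            prop = list(state)
            prop[die] = rng.randint(1, 6)
            diff = log_g[sum(state)] - log_g[sum(prop)]
            if rng.random() < math.exp(min(0.0, diff)):  # accept w.p. min(1, g_old/g_new)
                state = tuple(prop)
            e = sum(state)
            log_g[e] += log_f              # penalise the visited energy level
            hist[e] += 1
            if min(hist.values()) > flat * (sum(hist.values()) / len(hist)):
                break                      # histogram flat enough: refine
        log_f /= 2.0
    return log_g

log_g = wang_landau()
```

Because the random walk is biased by 1/g(E), every energy level, including the rare corners of the spectrum, is visited roughly equally often, which is what lets the method estimate ratios of partition functions at all temperatures from a single run.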

  16. Loop algorithm for classical Heisenberg models with spin-ice type degeneracy

    Science.gov (United States)

    Shinaoka, Hiroshi; Motome, Yukitoshi

    2010-10-01

    In many frustrated Ising models, a single-spin flip dynamics is frozen out at low temperatures compared to the dominant interaction energy scale because of the discrete “multiple valley” structure of the degenerate ground-state manifold. This makes it difficult to study low-temperature physics of these frustrated systems by using Monte Carlo simulation with the standard single-spin flip algorithm. A typical example is the so-called spin-ice model, frustrated ferromagnets on the pyrochlore lattice. The difficulty can be avoided by a global-flip algorithm, the loop algorithm, that enables sampling over the entire discrete manifold and investigation of low-temperature properties. We extend the loop algorithm to Heisenberg spin systems with strong easy-axis anisotropy in which the ground-state manifold is continuous but still retains the spin-ice type degeneracy. We examine different ways of loop flips and compare their efficiency. The extended loop algorithm is applied to two models, a Heisenberg antiferromagnet with easy-axis anisotropy along the z axis, and a Heisenberg spin-ice model with the local ⟨111⟩ easy-axis anisotropy. For both models, we demonstrate high efficiency of our loop algorithm by revealing the low-temperature properties which were hard to access by the standard single-spin flip algorithm. For the former model, we examine the possibility of order from disorder and critically check its absence. For the latter model, we elucidate a gas-liquid-solid transition, namely, crossover or phase transition among paramagnet, spin-ice liquid, and ferromagnetically ordered ice-rule state.

  17. Adaptive Grouping Cloud Model Shuffled Frog Leaping Algorithm for Solving Continuous Optimization Problems

    Directory of Open Access Journals (Sweden)

    Haorui Liu

    2016-01-01

    Full Text Available The shuffled frog leaping algorithm (SFLA) easily falls into local optima when solving multi-optimum function optimization problems, which impairs its accuracy and convergence speed. This paper therefore presents a grouped SFLA for solving continuous optimization problems, combined with the cloud model's ability to transform between qualitative and quantitative representations. The algorithm divides the definition domain into several groups and assigns each group a set of frogs. The frogs of each region search within their memeplex, and during the search the algorithm uses an “elite strategy”, updating the location information of the elite frogs through the cloud model algorithm. This narrows the search space and effectively mitigates entrapment in local optima, so both convergence speed and accuracy improve significantly. Computer simulation results confirm this conclusion.
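    A minimal sketch of such a grouped SFLA loop might look as follows. The normal-cloud "elite strategy" is represented here by Gaussian resampling around the memeplex elite with assumed entropy parameters (En, He); the paper's exact update rules are not reproduced, and all names and settings are illustrative:

```python
import random

def sfla_cloud(f, bounds, n_groups=5, frogs_per_group=10,
               memetic_steps=10, shuffles=30, seed=0):
    """Grouped shuffled frog leaping with a cloud-model elite step (sketch).
    Minimizes f over the box `bounds`."""
    rng = random.Random(seed)
    lo, hi = bounds
    dim = len(lo)

    def rand_frog():
        return [rng.uniform(lo[d], hi[d]) for d in range(dim)]

    def clip(x):
        return [min(max(x[d], lo[d]), hi[d]) for d in range(dim)]

    pop = [rand_frog() for _ in range(n_groups * frogs_per_group)]
    for _ in range(shuffles):
        pop.sort(key=f)
        best_global = pop[0]
        # partition the sorted population into memeplexes by interleaving
        memeplexes = [pop[g::n_groups] for g in range(n_groups)]
        for mem in memeplexes:
            for _ in range(memetic_steps):
                mem.sort(key=f)
                best, worst = mem[0], mem[-1]
                cand = clip([w + rng.random() * (b - w) for w, b in zip(worst, best)])
                if f(cand) >= f(worst):   # fall back: jump toward the global best
                    cand = clip([w + rng.random() * (g - w)
                                 for w, g in zip(worst, best_global)])
                if f(cand) >= f(worst):   # last resort: random frog
                    cand = rand_frog()
                mem[-1] = cand
            # cloud-model "elite strategy": resample the runner-up around the elite
            Ex, En, He = mem[0], 0.1, 0.01          # assumed cloud parameters
            En_p = [abs(rng.gauss(En, He)) for _ in range(dim)]
            mem[1] = clip([rng.gauss(Ex[d], En_p[d]) for d in range(dim)])
        pop = [frog for mem in memeplexes for frog in mem]
    return min(pop, key=f)
```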

  18. Fuzzy Neural Network-Based Interacting Multiple Model for Multi-Node Target Tracking Algorithm

    Directory of Open Access Journals (Sweden)

    Baoliang Sun

    2016-11-01

    Full Text Available An interacting multiple model for multi-node target tracking algorithm was proposed based on a fuzzy neural network (FNN) to solve the multi-node target tracking problem of wireless sensor networks (WSNs). The measured error variance was adaptively adjusted during the multiple-model interacting output stage using the difference between the theoretical and estimated values of the measured error covariance matrix. The FNN fusion system was established during multi-node fusion to integrate the target state estimates from different nodes and consequently obtain the network-level target state estimate. The feasibility of the algorithm was verified on a network of nine detection nodes. Experimental results indicated that the proposed algorithm could track the maneuvering target effectively under sensor failure and unknown system measurement errors. The proposed algorithm exhibited great practicability in the multi-node target tracking of WSNs.

  19. Adaptive Grouping Cloud Model Shuffled Frog Leaping Algorithm for Solving Continuous Optimization Problems.

    Science.gov (United States)

    Liu, Haorui; Yi, Fengyan; Yang, Heli

    2016-01-01

    The shuffled frog leaping algorithm (SFLA) easily falls into local optima when solving multi-optimum function optimization problems, which impairs its accuracy and convergence speed. This paper therefore presents a grouped SFLA for solving continuous optimization problems, combined with the cloud model's ability to transform between qualitative and quantitative representations. The algorithm divides the definition domain into several groups and assigns each group a set of frogs. The frogs of each region search within their memeplex, and during the search the algorithm uses an "elite strategy", updating the location information of the elite frogs through the cloud model algorithm. This narrows the search space and effectively mitigates entrapment in local optima, so both convergence speed and accuracy improve significantly. Computer simulation results confirm this conclusion.

  20. Application of a single-objective, hybrid genetic algorithm approach to pharmacokinetic model building.

    Science.gov (United States)

    Sherer, Eric A; Sale, Mark E; Pollock, Bruce G; Belani, Chandra P; Egorin, Merrill J; Ivy, Percy S; Lieberman, Jeffrey A; Manuck, Stephen B; Marder, Stephen R; Muldoon, Matthew F; Scher, Howard I; Solit, David B; Bies, Robert R

    2012-08-01

    A limitation in traditional stepwise population pharmacokinetic model building is the difficulty in handling interactions between model components. To address this issue, a method was previously introduced which couples NONMEM parameter estimation and model fitness evaluation to a single-objective, hybrid genetic algorithm for global optimization of the model structure. In this study, the generalizability of this approach for pharmacokinetic model building is evaluated by comparing (1) correct and spurious covariate relationships in a simulated dataset resulting from automated stepwise covariate modeling, Lasso methods, and single-objective hybrid genetic algorithm approaches to covariate identification and (2) information criteria values, model structures, convergence, and model parameter values resulting from manual stepwise versus single-objective, hybrid genetic algorithm approaches to model building for seven compounds. Both manual stepwise and single-objective, hybrid genetic algorithm approaches to model building were applied, blinded to the results of the other approach, for selection of the compartment structure as well as inclusion and model form of inter-individual and inter-occasion variability, residual error, and covariates from a common set of model options. For the simulated dataset, stepwise covariate modeling identified three of four true covariates and two spurious covariates; Lasso identified two of four true and 0 spurious covariates; and the single-objective, hybrid genetic algorithm identified three of four true covariates and one spurious covariate. For the clinical datasets, the Akaike information criterion was a median of 22.3 points lower (range of 470.5 point decrease to 0.1 point decrease) for the best single-objective hybrid genetic-algorithm candidate model versus the final manual stepwise model: the Akaike information criterion was lower by greater than 10 points for four compounds and differed by less than 10 points for three
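    The covariate-selection part of such a hybrid genetic algorithm can be imitated on a toy scale by replacing NONMEM runs with ordinary least-squares fits scored by AIC. Everything below (the helpers `ols_rss`, `aic`, `ga_select` and the GA settings) is an illustrative sketch, not the authors' implementation:

```python
import math
import random

def ols_rss(X, y):
    """Residual sum of squares of an OLS fit via the normal equations."""
    n, p = len(X), len(X[0])
    A = [[sum(X[i][r] * X[i][c] for i in range(n)) for c in range(p)] for r in range(p)]
    b = [sum(X[i][r] * y[i] for i in range(n)) for r in range(p)]
    for col in range(p):                      # Gaussian elimination, partial pivoting
        piv = max(range(col, p), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, p):
            m = A[r][col] / A[col][col]
            for c in range(col, p):
                A[r][c] -= m * A[col][c]
            b[r] -= m * b[col]
    beta = [0.0] * p
    for r in range(p - 1, -1, -1):
        beta[r] = (b[r] - sum(A[r][c] * beta[c] for c in range(r + 1, p))) / A[r][r]
    return sum((y[i] - sum(X[i][c] * beta[c] for c in range(p))) ** 2 for i in range(n))

def aic(mask, X, y):
    """AIC of the model using the covariates flagged in the binary mask."""
    cols = [c for c in range(len(X[0])) if mask[c]]
    Xs = [[1.0] + [row[c] for c in cols] for row in X]   # intercept + selected covariates
    n = len(y)
    return n * math.log(ols_rss(Xs, y) / n) + 2 * (len(cols) + 1)

def ga_select(X, y, pop=20, gens=25, seed=0):
    """GA over binary covariate masks, minimizing AIC."""
    rng = random.Random(seed)
    k = len(X[0])
    P = [[rng.random() < 0.5 for _ in range(k)] for _ in range(pop)]
    for _ in range(gens):
        P.sort(key=lambda m: aic(m, X, y))
        nxt = P[:2]                                  # elitism
        while len(nxt) < pop:
            a, b = rng.sample(P[:pop // 2], 2)       # parents from the better half
            cut = rng.randrange(1, k)
            child = a[:cut] + b[cut:]                # one-point crossover
            if rng.random() < 0.2:                   # mutation: flip one bit
                j = rng.randrange(k)
                child[j] = not child[j]
            nxt.append(child)
        P = nxt
    return min(P, key=lambda m: aic(m, X, y))

# Demo on synthetic data: y depends on covariates 0 and 2 only
rng = random.Random(1)
X = [[rng.gauss(0.0, 1.0) for _ in range(6)] for _ in range(80)]
y = [2.0 * row[0] - 3.0 * row[2] + rng.gauss(0.0, 0.5) for row in X]
mask = ga_select(X, y)    # the true covariates 0 and 2 should be selected
```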

  1. Models of performance of evolutionary program induction algorithms based on indicators of problem difficulty.

    Science.gov (United States)

    Graff, Mario; Poli, Riccardo; Flores, Juan J

    2013-01-01

    Modeling the behavior of algorithms is the realm of evolutionary algorithm theory. From a practitioner's point of view, theory must provide some guidelines regarding which algorithm/parameters to use in order to solve a particular problem. Unfortunately, most theoretical models of evolutionary algorithms are difficult to apply to realistic situations. However, in recent work (Graff and Poli, 2008, 2010), where we developed a method to practically estimate the performance of evolutionary program-induction algorithms (EPAs), we started addressing this issue. The method was quite general; however, it suffered from some limitations: it required the identification of a set of reference problems, it required hand-picking a distance measure in each particular domain, and the resulting models were opaque, typically being linear combinations of 100 features or more. In this paper, we propose a significant improvement of this technique that overcomes the three limitations of our previous method. We achieve this through the use of a novel set of features for assessing problem difficulty for EPAs which are very general, essentially based on the notion of finite difference. To show the capabilities of our technique and to compare it with our previous performance models, we create models for the same two important classes of problems used in our previous work: symbolic regression on rational functions and Boolean function induction. We model a variety of EPAs. The comparison showed that for the majority of the algorithms and problem classes, the new method produced much simpler and more accurate models than before. To further illustrate the practicality of the technique and its generality (beyond EPAs), we have also used it to predict the performance of both autoregressive models and EPAs on the problem of wind speed forecasting, obtaining simpler and more accurate models that outperform our previous performance models in all cases.

  2. On source models for (192)Ir HDR brachytherapy dosimetry using model based algorithms.

    Science.gov (United States)

    Pantelis, Evaggelos; Zourari, Kyveli; Zoros, Emmanouil; Lahanas, Vasileios; Karaiskos, Pantelis; Papagiannis, Panagiotis

    2016-06-07

    A source model is a prerequisite of all model based dose calculation algorithms. Besides direct simulation, the use of pre-calculated phase space files (phsp source models) and parameterized phsp source models has been proposed for Monte Carlo (MC) to promote efficiency and ease of implementation in obtaining photon energy, position and direction. In this work, a phsp file for a generic (192)Ir source design (Ballester et al 2015) is obtained from MC simulation. This is used to configure a parameterized phsp source model comprising appropriate probability density functions (PDFs) and a sampling procedure. According to phsp data analysis 15.6% of the generated photons are absorbed within the source, and 90.4% of the emergent photons are primary. The PDFs for sampling photon energy and direction relative to the source long axis, depend on the position of photon emergence. Photons emerge mainly from the cylindrical source surface with a constant probability over  ±0.1 cm from the center of the 0.35 cm long source core, and only 1.7% and 0.2% emerge from the source tip and drive wire, respectively. Based on these findings, an analytical parameterized source model is prepared for the calculation of the PDFs from data of source geometry and materials, without the need for a phsp file. The PDFs from the analytical parameterized source model are in close agreement with those employed in the parameterized phsp source model. This agreement prompted the proposal of a purely analytical source model based on isotropic emission of photons generated homogeneously within the source core with energy sampled from the (192)Ir spectrum, and the assignment of a weight according to attenuation within the source. Comparison of single source dosimetry data obtained from detailed MC simulation and the proposed analytical source model show agreement better than 2% except for points lying close to the source longitudinal axis.

  3. The algorithmic anatomy of model-based evaluation

    OpenAIRE

    Daw, Nathaniel D.; Dayan, Peter

    2014-01-01

    Despite many debates in the first half of the twentieth century, it is now largely a truism that humans and other animals build models of their environments and use them for prediction and control. However, model-based (MB) reasoning presents severe computational challenges. Alternative, computationally simpler, model-free (MF) schemes have been suggested in the reinforcement learning literature, and have afforded influential accounts of behavioural and neural data. Here, we study the realiza...

  4. Modelling and Quantitative Analysis of LTRACK–A Novel Mobility Management Algorithm

    Directory of Open Access Journals (Sweden)

    Benedek Kovács

    2006-01-01

    Full Text Available This paper discusses the improvements and parameter optimization issues of LTRACK, a recently proposed mobility management algorithm. Mathematical modelling of the algorithm and of the behavior of the Mobile Node (MN) is used to optimize the parameters of LTRACK, and a numerical method is given to determine their optimal values. Markov chains are used to model both the base algorithm and the so-called loop removal effect. An extended qualitative and quantitative analysis is carried out to compare LTRACK to existing handover mechanisms such as MIP, Hierarchical Mobile IP (HMIP), Dynamic Hierarchical Mobility Management Strategy (DHMIP), Telecommunication Enhanced Mobile IP (TeleMIP), Cellular IP (CIP) and HAWAII. LTRACK is sensitive to network topology and MN behavior, so MN movement modelling is also introduced and discussed for different topologies. The techniques presented here can be used to model not only the LTRACK algorithm but other algorithms too. Extensive discussion and calculations support the adequacy of the mathematical model in many cases. The model is valid on various network levels, scales vertically in the ISO-OSI layers and also scales well with the number of network elements.
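    The Markov-chain side of such an analysis reduces to computing stationary distributions and expected costs. A minimal sketch with an entirely hypothetical three-state mobile-node model (the states, transition matrix, and costs are invented for illustration and are not LTRACK's):

```python
def stationary(P, iters=200):
    """Stationary distribution of a row-stochastic matrix by power iteration."""
    n = len(P)
    pi = [1.0 / n] * n
    for _ in range(iters):
        pi = [sum(pi[i] * P[i][j] for i in range(n)) for j in range(n)]
    return pi

# Hypothetical 3-state mobile-node chain: idle, moving, handover
P = [[0.90, 0.08, 0.02],
     [0.30, 0.60, 0.10],
     [0.50, 0.30, 0.20]]
cost = [0.0, 1.0, 5.0]                      # per-state signalling cost (invented)
pi = stationary(P)
expected_cost = sum(p * c for p, c in zip(pi, cost))
```

    Comparing such expected costs across topologies and handover schemes is the kind of quantitative comparison the abstract describes.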

  5. Tuning, Diagnostics & Data Preparation for Generalized Linear Models Supervised Algorithm in Data Mining Technologies

    Directory of Open Access Journals (Sweden)

    Sachin Bhaskar

    2015-07-01

    Full Text Available Data mining techniques are the result of a long process of research and product development. Data mining searches large amounts of data to find trends and patterns that go beyond simple analysis, using complex mathematical algorithms to segment the data and to evaluate the probability of future events. Each data mining model is produced by a specific algorithm, and some data mining problems are best solved by combining more than one algorithm. Data mining technologies are available through Oracle. The Generalized Linear Models (GLM) algorithm is used by the Oracle Data Mining regression and classification functions. GLM is one of the most popular statistical techniques for linear modelling, and Oracle Data Mining implements it for regression and binary classification. GLM provides row diagnostics as well as model statistics and extensive coefficient statistics, and it also supports confidence bounds. This paper outlines and analyses the GLM algorithm as a guide to understanding the tuning, diagnostics and data preparation process and the importance of the supervised Oracle Data Mining regression and classification functions, which are utilized in marketing, time series prediction, financial forecasting, overall business planning, trend analysis, environmental modelling, biomedical and drug response modelling, etc.
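    Independent of Oracle's implementation, a GLM fit for binary classification can be sketched with the textbook Newton-Raphson (IRLS-style) iteration. The sketch below handles a single feature plus intercept so the 2x2 Newton step can be inverted in closed form; the synthetic-data check at the end is illustrative:

```python
import math
import random

def fit_logistic(xs, ys, iters=25):
    """Binary-classification GLM (logit link) fitted by Newton-Raphson,
    the IRLS-style iteration; one feature plus intercept."""
    b0 = b1 = 0.0
    for _ in range(iters):
        g0 = g1 = h00 = h01 = h11 = 0.0
        for x, y in zip(xs, ys):
            p = 1.0 / (1.0 + math.exp(-(b0 + b1 * x)))
            w = p * (1.0 - p)               # IRLS working weight
            g0 += y - p                     # score vector
            g1 += (y - p) * x
            h00 += w                        # observed information matrix
            h01 += w * x
            h11 += w * x * x
        det = h00 * h11 - h01 * h01
        b0 += (h11 * g0 - h01 * g1) / det   # closed-form 2x2 Newton step
        b1 += (h00 * g1 - h01 * g0) / det
    return b0, b1

# Synthetic check: data generated with intercept -1 and slope 2
rng = random.Random(2)
xs = [rng.gauss(0.0, 1.0) for _ in range(2000)]
ys = [1 if rng.random() < 1.0 / (1.0 + math.exp(1.0 - 2.0 * x)) else 0 for x in xs]
b0, b1 = fit_logistic(xs, ys)               # recovers roughly (-1, 2)
```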

  6. An exponential modeling algorithm for protein structure completion by X-ray crystallography.

    Science.gov (United States)

    Shneerson, V L; Wild, D L; Saldin, D K

    2001-03-01

    An exponential modeling algorithm is developed for protein structure completion by X-ray crystallography and tested on experimental data from a 59-residue protein. An initial noisy difference Fourier map of missing residues of up to half of the protein is transformed by the algorithm into one that allows easy identification of the continuous tube of electron density associated with that polypeptide chain. The method incorporates the paradigm of phase hypothesis generation and cross validation within an automated scheme.

  7. Hybrid model based on Genetic Algorithms and SVM applied to variable selection within fruit juice classification.

    Science.gov (United States)

    Fernandez-Lozano, C; Canto, C; Gestal, M; Andrade-Garda, J M; Rabuñal, J R; Dorado, J; Pazos, A

    2013-01-01

    Given the background of the use of neural networks in problems of apple juice classification, this paper aims at implementing a newly developed method in the field of machine learning: Support Vector Machines (SVM). A hybrid model that combines genetic algorithms and support vector machines is therefore suggested, in such a way that, when using the SVM as the fitness function of the Genetic Algorithm (GA), the most representative variables for a specific classification problem can be selected.

  8. Critical slowing down of cluster algorithms for Ising models coupled to 2-d gravity

    Science.gov (United States)

    Bowick, Mark; Falcioni, Marco; Harris, Geoffrey; Marinari, Enzo

    1994-02-01

    We simulate single and multiple Ising models coupled to 2-d gravity using both the Swendsen-Wang and Wolff algorithms to update the spins. We study the integrated autocorrelation time and find that there is considerable critical slowing down, particularly in the magnetization. We argue that this is primarily due to the local nature of the dynamical triangulation algorithm and to the generation of a distribution of baby universes which inhibits cluster growth.
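    For reference, the Wolff single-cluster move used here can be sketched on a fixed square lattice (in the paper the spins live on a dynamical triangulation; the lattice, size, and temperatures below are illustrative):

```python
import math
import random

def wolff_sweeps(L=8, T=2.5, sweeps=200, seed=3):
    """Wolff single-cluster updates for a 2-d Ising model on a fixed
    periodic L x L square lattice; returns |m| after each cluster flip."""
    rng = random.Random(seed)
    p_add = 1.0 - math.exp(-2.0 / T)          # bond-activation probability
    s = [[1] * L for _ in range(L)]           # cold start
    mags = []
    for _ in range(sweeps):
        i, j = rng.randrange(L), rng.randrange(L)
        seed_spin = s[i][j]
        cluster = {(i, j)}
        stack = [(i, j)]
        while stack:                          # grow the cluster over aligned bonds
            x, y = stack.pop()
            for nx, ny in ((x + 1) % L, y), ((x - 1) % L, y), (x, (y + 1) % L), (x, (y - 1) % L):
                if (nx, ny) not in cluster and s[nx][ny] == seed_spin and rng.random() < p_add:
                    cluster.add((nx, ny))
                    stack.append((nx, ny))
        for x, y in cluster:                  # flip the whole cluster at once
            s[x][y] = -seed_spin
        mags.append(abs(sum(sum(row) for row in s)) / (L * L))
    return mags
```

    Below the critical temperature large clusters flip whole domains in one move; it is exactly this nonlocal update whose growth, the authors argue, the baby-universe structure of the dynamical lattice inhibits.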

  9. Critical Slowing Down of Cluster Algorithms for Ising Models Coupled to 2-d Gravity

    CERN Document Server

    Bowick, M; Harris, G; Marinari, E

    1994-01-01

    We simulate single and multiple Ising models coupled to 2-d gravity using both the Swendsen-Wang and Wolff algorithms to update the spins. We study the integrated autocorrelation time and find that there is considerable critical slowing down, particularly in the magnetization. We argue that this is primarily due to the local nature of the dynamical triangulation algorithm and to the generation of a distribution of baby universes which inhibits cluster growth.

  10. Modeling and simulation for a new virtual-clock-based collision resolution algorithm

    Institute of Scientific and Technical Information of China (English)

    Yin Rupo; Cai Yunze; He Xing; Zhang Weidong; Xu Xiaoming

    2006-01-01

    Virtual time Ethernet is a multiple access protocol proposed to provide FCFS transmission service over the predominant Ethernet bus. It incorporates a novel message-rescheduling algorithm based on the virtual clock mechanism. By backing virtual clocks up over a common virtual time axis and performing timely collision resolution, the algorithm guarantees the system's queuing strictness. The protocol is modeled as a finite state machine and implemented using OPNET tools. Simulation studies prove its correctness and effectiveness.

  11. Model-based fault diagnosis techniques design schemes, algorithms, and tools

    CERN Document Server

    Ding, Steven

    2008-01-01

    The objective of this book is to introduce basic model-based FDI schemes, advanced analysis and design algorithms, and the needed mathematical and control theory tools at a level for graduate students and researchers as well as for engineers. This is a textbook with extensive examples and references. Most methods are given in the form of an algorithm that enables a direct implementation in a programme. Comparisons among different methods are included when possible.

  12. Steganography Algorithm in Different Colour Model Using an Energy Adjustment Applied with Discrete Wavelet Transform

    Directory of Open Access Journals (Sweden)

    Carvajal-Gamez

    2012-09-01

    Full Text Available When color images are processed in different color models to implement steganographic algorithms, it is important to study the quality of the host and retrieved images, since digital filters are typically used and can visibly deform the images. When a steganographic algorithm is applied, the numerical calculations performed by the computer introduce errors and alterations into the test images, so we apply a proposed scaling factor, dependent on the number of bits of the image, to adjust for these errors.

  13. Steganography Algorithm in Different Colour Model Using an Energy Adjustment Applied with Discrete Wavelet Transform

    Directory of Open Access Journals (Sweden)

    B.E. Carvajal-Gámez

    2012-08-01

    Full Text Available When color images are processed in different color models to implement steganographic algorithms, it is important to study the quality of the host and retrieved images, since digital filters are typically used and can visibly deform the images. When a steganographic algorithm is applied, the numerical calculations performed by the computer introduce errors and alterations into the test images, so we apply a proposed scaling factor, dependent on the number of bits of the image, to adjust for these errors.

  14. High TPOAb Levels (>1300 IU/mL) Indicate Multifocal PTC in Hashimoto's Thyroiditis Patients and Support Total Thyroidectomy.

    Science.gov (United States)

    Dong, Shuai; Xia, Qing; Wu, Yi-Jun

    2015-07-01

    We aimed to identify whether thyroid peroxidase antibodies (TPOAb) are indicative of multifocal papillary thyroid cancer (PTC) in Hashimoto's thyroiditis (HT) patients and may help to determine necessity for total thyroidectomy. Retrospective cohort study. Teaching hospital. A total of 808 consecutive patients with HT alone or with HT and unifocal or multifocal PTC were included. Preoperative thyroid function tests, TPOAb determination, preoperative ultrasonography, intraoperative frozen biopsy, and postoperative routine pathologic examination to confirm thyroid nodules were performed for all patients. Patients with nodules or malignancy potential on ultrasound and fine-needle aspiration cytology were included. Patients with hyperthyroidism, concomitant chronic disease, a history of other malignant tumors, or history of major diseases were excluded. All patients underwent surgery, and HT and PTC were confirmed by postoperative pathologic results. No significant differences were found in age and sex between groups (P > .05). TPOAb ≤1300 IU/mL were more prevalent in the HT + unifocal PTC group than in the other groups (99.57% vs 15.52% and 60.75%), whereas TPOAb >1300 IU/mL were more prevalent in the HT + multifocal PTC group than in the other groups (84.48% vs 0.43% and 39.25%). The HT + multifocal PTC group had higher percentages of patients with elevated thyroid-stimulating hormone and positive central lymph node (LN) metastasis (elevated thyroid-stimulating hormone: 8.7% vs 3.2% and 6.5%, P = .008; positive central LN metastasis: 74.57% vs 67.38% and 0%). High TPOAb levels (>1300 IU/mL) are definitive indicators of multifocal PTC in HT patients, which may support surgical treatment with total thyroidectomy. © American Academy of Otolaryngology—Head and Neck Surgery Foundation 2015.

  15. Application of Parallel Algorithms in an Air Pollution Model

    DEFF Research Database (Denmark)

    Georgiev, K.; Zlatev, Z.

    1999-01-01

    Proceedings of the NATO Advanced Research Workshop on Large Scale Computations in Air Pollution Modelling, Sofia, Bulgaria, 6-10 July 1998.

  16. Online Model Learning Algorithms for Actor-Critic Control

    NARCIS (Netherlands)

    Grondman, I.

    2015-01-01

    Classical control theory requires a model to be derived for a system, before any control design can take place. This can be a hard, time-consuming process if the system is complex. Moreover, there is no way of escaping modelling errors. As an alternative approach, there is the possibility of having

  17. Computational Modeling of Teaching and Learning through Application of Evolutionary Algorithms

    Directory of Open Access Journals (Sweden)

    Richard Lamb

    2015-09-01

    Full Text Available Within the mind, there are a myriad of ideas that make sense within the bounds of everyday experience but are not reflective of how the world actually exists; this is particularly true in the domain of science. Classroom learning with teacher explanation is a bridge through which these naive understandings can be brought in line with scientific reality. The purpose of this paper is to examine how the application of a Multiobjective Evolutionary Algorithm (MOEA) can work in concert with an existing computational model to effectively model critical thinking in the science classroom. An evolutionary algorithm is an algorithm that iteratively optimizes machine-learning-based computational models. The research question is: does the application of an evolutionary algorithm provide a means to optimize the Student Task and Cognition Model (STAC-M), and does the optimized model sufficiently represent and predict teaching and learning outcomes in the science classroom? Within this computational study, the authors outline and simulate the effect of teaching on the ability of a “virtual” student to solve a Piagetian task. Using the STAC-M, a computational model of student cognitive processing in science class developed in 2013, the authors complete a computational experiment which examines the role of cognitive retraining on student learning. Comparison of the STAC-M with and without the Multiobjective Evolutionary Algorithm shows greater success in solving the Piagetian science tasks after cognitive retraining with the Multiobjective Evolutionary Algorithm. This illustrates the potential uses of cognitive and neuropsychological computational modeling in educational research. The authors also outline the limitations and assumptions of computational modeling.

  18. FastGGM: An Efficient Algorithm for the Inference of Gaussian Graphical Model in Biological Networks.

    Science.gov (United States)

    Wang, Ting; Ren, Zhao; Ding, Ying; Fang, Zhou; Sun, Zhe; MacDonald, Matthew L; Sweet, Robert A; Wang, Jieru; Chen, Wei

    2016-02-01

    Biological networks provide additional information for the analysis of human diseases, beyond the traditional analysis that focuses on single variables. The Gaussian graphical model (GGM), a probability model that characterizes the conditional dependence structure of a set of random variables by a graph, has wide applications in the analysis of biological networks, such as inferring interactions or comparing differential networks. However, existing approaches are either not statistically rigorous or are inefficient for high-dimensional data that include tens of thousands of variables. In this study, we propose an efficient algorithm to implement the estimation of GGM and obtain a p-value and confidence interval for each edge in the graph, based on a recent proposal by Ren et al., 2015. Through simulation studies, we demonstrate that the algorithm is several orders of magnitude faster than the existing implementation of the Ren et al. procedure, without losing any accuracy. We then apply our algorithm to two real data sets: transcriptomic data from a study of childhood asthma and proteomic data from a study of Alzheimer's disease. We estimate the global gene or protein interaction networks for the disease and healthy samples. The resulting networks reveal interesting interactions, and the differential networks between cases and controls show functional relevance to the diseases. In conclusion, we provide a computationally fast algorithm that implements a statistically sound procedure for constructing Gaussian graphical models and making inference with high-dimensional biological data. The algorithm has been implemented in an R package named "FastGGM".
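    The object being estimated, edgewise conditional dependence, can be illustrated in the low-dimensional textbook case by partial correlations read off the inverse sample covariance (precision) matrix. FastGGM's contribution is making this rigorous and fast in high dimensions, which this stdlib sketch does not attempt:

```python
import random

def mat_inv(A):
    """Gauss-Jordan inverse of a small dense matrix."""
    n = len(A)
    M = [row[:] + [1.0 if i == j else 0.0 for j in range(n)] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        d = M[col][col]
        M[col] = [v / d for v in M[col]]
        for r in range(n):
            if r != col and M[r][col]:
                m = M[r][col]
                M[r] = [v - m * w for v, w in zip(M[r], M[col])]
    return [row[n:] for row in M]

def partial_correlations(data):
    """GGM edges in the low-dimensional case: partial correlations from the
    precision matrix (inverse sample covariance)."""
    n, p = len(data), len(data[0])
    mean = [sum(row[j] for row in data) / n for j in range(p)]
    S = [[sum((row[i] - mean[i]) * (row[j] - mean[j]) for row in data) / (n - 1)
          for j in range(p)] for i in range(p)]
    T = mat_inv(S)
    return [[1.0 if i == j else -T[i][j] / (T[i][i] * T[j][j]) ** 0.5
             for j in range(p)] for i in range(p)]

# Demo: chain X -> Y -> Z, so X and Z are conditionally independent given Y
rng = random.Random(3)
data = []
for _ in range(2000):
    x = rng.gauss(0.0, 1.0)
    y = x + rng.gauss(0.0, 1.0)
    z = y + rng.gauss(0.0, 1.0)
    data.append([x, y, z])
pc = partial_correlations(data)   # pc[0][2] is near 0; pc[0][1] is clearly positive
```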

  19. RB Particle Filter Time Synchronization Algorithm Based on the DPM Model

    Directory of Open Access Journals (Sweden)

    Chunsheng Guo

    2015-09-01

    Full Text Available Time synchronization is essential for node localization, target tracking, data fusion, and various other Wireless Sensor Network (WSN) applications. To improve the estimation accuracy of the continuous clock offset and skew of mobile nodes in WSNs, we propose a novel time synchronization algorithm: the Rao-Blackwellised (RB) particle filter time synchronization algorithm based on the Dirichlet process mixture (DPM) model. In a state-space equation with a linear substructure, the state variables are divided into linear and non-linear variables by the RB particle filter algorithm. These two sets of variables can be estimated using a Kalman filter and a particle filter, respectively, which improves computational efficiency compared with using the particle filter alone. In addition, the DPM model is used to describe the distribution of non-deterministic delays and to automatically adjust the number of Gaussian mixture components based on the observational data. This improves the estimation accuracy of the clock offset and skew, thereby achieving time synchronization. The time synchronization performance of the algorithm is validated by computer simulations and experimental measurements. The results show that the proposed algorithm achieves higher time synchronization precision than traditional time synchronization algorithms.

  20. INTERACTING MULTIPLE MODEL ALGORITHM BASED ON JOINT LIKELIHOOD ESTIMATION

    Institute of Scientific and Technical Information of China (English)

    Sun Jie; Jiang Chaoshu; Chen Zhuming; Zhang Wei

    2011-01-01

    A novel approach is proposed for the estimation of likelihood in the Interacting Multiple-Model (IMM) filter. In this approach, the actual innovation, based on a mismatched model, can be formulated as the sum of the theoretical innovation based on a matched model and the distance between the matched and mismatched models, whose probability distributions are known. The joint likelihood of the innovation sequence can be estimated by convolution of the two known probability density functions. The likelihood of the tracking models can then be calculated by the conditional probability formula. Compared with the conventional likelihood estimation method, the proposed method improves the estimation accuracy of the likelihood and the robustness of IMM, especially when a maneuver occurs.
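    For context, the conventional per-scan update that the paper refines can be sketched as follows: model probabilities are mixed through the transition matrix and then reweighted by Gaussian innovation likelihoods (scalar innovations are assumed for brevity; the paper replaces the likelihood itself with a joint/convolution estimate):

```python
import math

def imm_update(mu, PI, innovations, S):
    """One step of the standard IMM model-probability update.

    mu:          prior model probabilities
    PI:          model transition matrix (row-stochastic)
    innovations: scalar innovation from each model-matched filter
    S:           innovation variance from each filter
    """
    r = len(mu)
    # mix priors through the Markov model-switching matrix
    pred = [sum(PI[i][j] * mu[i] for i in range(r)) for j in range(r)]
    # Gaussian innovation likelihoods N(nu_j; 0, S_j)
    lik = [math.exp(-innovations[j] ** 2 / (2.0 * S[j])) / math.sqrt(2.0 * math.pi * S[j])
           for j in range(r)]
    post = [pred[j] * lik[j] for j in range(r)]
    c = sum(post)
    return [p / c for p in post]

# The filter whose innovation is small relative to S gains probability
mu_new = imm_update([0.5, 0.5], [[0.95, 0.05], [0.05, 0.95]], [0.1, 3.0], [1.0, 1.0])
```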

  1. A Dynamic Traffic Signal Timing Model and its Algorithm for Junction of Urban Road

    DEFF Research Database (Denmark)

    Cai, Yanguang; Cai, Hao

    2012-01-01

    As an important part of Intelligent Transportation Systems, scientific traffic signal timing at junctions can improve the efficiency of urban transport. This paper presents a novel dynamic traffic signal timing model. According to the characteristics of the model, a hybrid chaotic quantum evolutionary algorithm is adopted for real-time and dynamic signal control of the junction. To obtain the optimal solution of the model by the hybrid chaotic quantum evolutionary algorithm, the model is converted into an easily solvable form. To simplify calculation, we give the expressions of the partial derivative and change rate of the objective function, such that the implementation of the algorithm only involves function assignments and arithmetic operations, avoiding complex operations such as integration and differentiation. Simulation results show that the algorithm leaves fewer remaining vehicles than the Webster method and has a higher convergence rate and speed than the quantum evolutionary algorithm.

  2. Optimization Model and Algorithm Design for Airline Fleet Planning in a Multiairline Competitive Environment

    Directory of Open Access Journals (Sweden)

    Yu Wang

    2015-01-01

    Full Text Available This paper presents a multiobjective mathematical programming model to optimize airline fleet size and structure with consideration of several critical factors that severely affect the fleet planning process. The main purpose of this paper is to reveal how multi-airline competitive behaviors impact airline fleet size and structure, by enhancing the existing route-based fleet planning model with consideration of the interaction between market share and flight frequency, and by applying the concept of equilibrium optimum to design a heuristic algorithm for solving the model. Through case study and comparison, the heuristic algorithm is proved to be effective. By using the algorithm presented in this paper, the fleet operational profit is significantly increased compared with the use of the existing route-based model. Sensitivity analysis suggests that the fleet size and structure are more sensitive to an increase in fare price than to an increase in passenger demand.

  3. Direct variational data assimilation algorithm for atmospheric chemistry data with transport and transformation model

    Science.gov (United States)

    Penenko, Alexey; Penenko, Vladimir; Nuterman, Roman; Baklanov, Alexander; Mahura, Alexander

    2015-11-01

    Atmospheric chemistry dynamics is studied with a convection-diffusion-reaction model. The numerical data assimilation algorithm presented is based on additive-averaged splitting schemes. It carries out 'fine-grained' variational data assimilation on the separate splitting stages with respect to spatial dimensions and processes, i.e. the same measurement data are assimilated into different parts of the split model. This design admits an efficient implementation thanks to direct data assimilation algorithms for the transport process along coordinate lines. Results of numerical experiments with the chemical data assimilation algorithm on in situ concentration measurements in a real-data scenario are presented. To construct the scenario, meteorological data were taken from EnviroHIRLAM model output, initial conditions from MOZART model output, and measurements from the Airbase database.

  4. Solving Optimal Pricing Model for Perishable Commodities with Imperialist Competitive Algorithm

    Directory of Open Access Journals (Sweden)

    Bo-Wen Liu

    2013-01-01

    The pricing problem for perishable commodities is important in manufacturing enterprises. In this study, a new model is proposed based on the profit maximization principle and a discrete demand function following a negative binomial distribution. This model is used to find the best combination of price and discount price. The computational results show that the optimal discount price equals the cost of the product. Because the demand function involves several different distributions, the model is too complex to solve with standard numerical methods. We therefore combine the model with an exterior penalty function and apply a novel evolutionary algorithm, the Imperialist Competitive Algorithm (ICA), to solve the problem. Particle Swarm Optimization (PSO) is also applied for comparison. The results show that ICA has a higher convergence rate and execution speed.

  5. Optimization model and algorithm for mixed traffic of urban road network with flow interference

    Institute of Scientific and Technical Information of China (English)

    SI BingFeng; LONG JianCeng; GAO ZiYou

    2008-01-01

    In this paper, the problem of interference between motorized and non-motorized traffic in an urban mixed-traffic road network is considered, and the corresponding link impedance function is presented based on travel demand. On this basis, the main factors that influence travelers' route choices are considered and a combined model including the flow-split and assignment problems is proposed. Then a bi-level model, together with its algorithm, for system optimization of an urban mixed-traffic road network is proposed. Finally, the application of the model and its algorithm is illustrated with a numerical example.

  6. Wang-Landau algorithm for continuous models and joint density of states.

    Science.gov (United States)

    Zhou, Chenggang; Schulthess, T C; Torbrügge, Stefan; Landau, D P

    2006-03-31

    We present a modified Wang-Landau algorithm for models with continuous degrees of freedom. We demonstrate this algorithm with the calculation of the joint density of states of ferromagnetic Heisenberg models and a model polymer chain. The joint density of states contains more information than the density of states of a single variable (energy), but is also much more time consuming to calculate. We present strategies to significantly speed up this calculation for large systems over a large range of energy and order parameter.

  7. Wang-Landau Algorithm for Continuous Models and Joint Density of States

    Science.gov (United States)

    Zhou, Chenggang; Schulthess, T. C.; Torbrügge, Stefan; Landau, D. P.

    2006-03-01

    We present a modified Wang-Landau algorithm for models with continuous degrees of freedom. We demonstrate this algorithm with the calculation of the joint density of states of ferromagnetic Heisenberg models and a model polymer chain. The joint density of states contains more information than the density of states of a single variable (energy), but is also much more time consuming to calculate. We present strategies to significantly speed up this calculation for large systems over a large range of energy and order parameter.
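    The random walk described in these two records can be sketched for a toy continuous model. The single coordinate with E(x) = x², the bin count, and the fixed factor-halving schedule (standing in for the usual histogram-flatness check) are illustrative assumptions, not the paper's Heisenberg or polymer setup; the joint density g(E, M) is the same walk on a 2-D bin grid.

    ```python
    import math
    import random

    def wang_landau(energy, propose, n_bins, e_min, e_max,
                    n_stages=18, sweeps=4000, seed=0):
        """Wang-Landau estimate of ln g(E) for a continuous degree of freedom.

        Energies are binned into n_bins intervals; each stage the
        modification factor ln f is halved (a fixed schedule replaces
        the usual flatness criterion to keep the sketch short).
        """
        rng = random.Random(seed)
        width = (e_max - e_min) / n_bins
        bin_of = lambda e: min(n_bins - 1, int((e - e_min) / width))
        ln_g = [0.0] * n_bins
        x = 0.0
        b = bin_of(energy(x))
        ln_f = 1.0
        for _ in range(n_stages):
            for _ in range(sweeps):
                x_new = propose(x, rng)
                e_new = energy(x_new)
                if e_min <= e_new < e_max:
                    b_new = bin_of(e_new)
                    # accept with probability min(1, g(E_old) / g(E_new))
                    d = ln_g[b] - ln_g[b_new]
                    if d >= 0 or rng.random() < math.exp(d):
                        x, b = x_new, b_new
                # update the current bin whether the move was accepted or not
                ln_g[b] += ln_f
            ln_f *= 0.5
        return ln_g

    # usage: E(x) = x**2 on x in [-1, 1]; the exact g(E) scales as E**-0.5
    ln_g_est = wang_landau(lambda x: x * x,
                           lambda x, rng: x + rng.uniform(-0.5, 0.5),
                           n_bins=10, e_min=0.0, e_max=1.0)
    ```

    Since g(E) ∝ E^(-1/2) here, the estimated log density should decrease from the lowest to the highest energy bin (up to an arbitrary additive constant).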

  8. Combined Parameter and State Estimation Algorithms for Multivariable Nonlinear Systems Using MIMO Wiener Models

    Directory of Open Access Journals (Sweden)

    Houda Salhi

    2016-01-01

    This paper deals with the parameter estimation problem for multivariable nonlinear systems described by MIMO state-space Wiener models. Recursive parameter and state estimation algorithms are presented using the least squares technique, the adjustable model, and Kalman filter theory. The basic idea is to jointly estimate the parameters, the state vector, and the internal variables of MIMO Wiener models based on a specific decomposition technique that extracts the internal vector and avoids problems related to the invertibility assumption. The effectiveness of the proposed algorithms is shown by an illustrative simulation example.

  9. Multiscale models and approximation algorithms for protein electrostatics

    CERN Document Server

    Bardhan, Jaydeep P

    2015-01-01

    Electrostatic forces play many important roles in molecular biology, but are hard to model due to the complicated interactions between biomolecules and the surrounding solvent, a fluid composed of water and dissolved ions. Continuum models have been surprisingly successful for simple biological questions, but fail for important problems such as understanding the effects of protein mutations. In this paper we highlight the advantages of boundary-integral methods for these problems, and our use of boundary integrals to design and test more accurate theories. Examples include a multiscale model based on nonlocal continuum theory, and a nonlinear boundary condition that captures atomic-scale effects at biomolecular surfaces.

  10. A Multi-model EKF Integrated Navigation Algorithm for Deep Water AUV

    Directory of Open Access Journals (Sweden)

    Dongdong Li

    2016-01-01

    A novel integrated navigation algorithm, the multi-model EKF (Extended Kalman Filter) integrated navigation algorithm, is presented in this paper for deep water autonomous underwater vehicles. When a deep water vehicle is performing tasks in the deep sea, its navigation error accumulates over time if it relies solely on its own inertial navigation system. To obtain a more accurate online position estimate, an integrated navigation system is constructed by adding an acoustic navigation system. Because it is difficult to establish accurate kinematic and measurement models for the deep water vehicle in the underwater environment, we propose the multi-model EKF integrated navigation algorithm and estimate the measurement errors of the beacons online, so that the position of the deep water vehicle can be estimated more accurately. The new algorithm has been tested by both analysis and field experiment data (lake and sea trials), and the results show that the multi-model EKF integrated navigation algorithm proposed in this paper significantly improves navigation accuracy for the deep water vehicle.

  11. A Multi-Model EKF Integrated Navigation Algorithm for Deep Water AUV

    Directory of Open Access Journals (Sweden)

    Dongdong Li

    2016-01-01

    A novel integrated navigation algorithm, the multi-model EKF (Extended Kalman Filter) integrated navigation algorithm, is presented in this paper for deep water autonomous underwater vehicles. When a deep water vehicle is performing tasks in the deep sea, its navigation error accumulates over time if it relies solely on its own inertial navigation system. To obtain a more accurate online position estimate, an integrated navigation system is constructed by adding an acoustic navigation system. Because it is difficult to establish accurate kinematic and measurement models for the deep water vehicle in the underwater environment, we propose the multi-model EKF integrated navigation algorithm and estimate the measurement errors of the beacons online, so that the position of the deep water vehicle can be estimated more accurately. The new algorithm has been tested by both analysis and field experiment data (lake and sea trials), and the results show that the multi-model EKF integrated navigation algorithm proposed in this paper significantly improves navigation accuracy for the deep water vehicle.
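    The EKF core that such an algorithm builds on can be sketched as follows. The state layout [px, py, vx, vy, bias] with a single acoustic beacon, and the noise levels q and r, are illustrative assumptions; a multi-model scheme in the spirit of this record would run a bank of such filters under different model hypotheses and weight them online.

    ```python
    import numpy as np

    def ekf_step(x, P, z, beacon, dt, q=0.01, r=0.5):
        """One predict/update cycle of an EKF whose state is
        [px, py, vx, vy, bias], where `bias` is an additive error on the
        acoustic range measurement to `beacon`, estimated online."""
        # predict: constant-velocity kinematics, random-walk bias
        F = np.eye(5)
        F[0, 2] = F[1, 3] = dt
        x = F @ x
        P = F @ P @ F.T + q * np.eye(5)
        # update with the scalar range measurement z
        dx, dy = x[0] - beacon[0], x[1] - beacon[1]
        rho = float(np.hypot(dx, dy))
        z_pred = rho + x[4]                      # range plus estimated beacon bias
        H = np.array([[dx / rho, dy / rho, 0.0, 0.0, 1.0]])
        S = H @ P @ H.T + r                      # innovation covariance (1x1)
        K = P @ H.T / S                          # Kalman gain (5x1)
        x = x + (K * (z - z_pred)).ravel()
        P = (np.eye(5) - K @ H) @ P
        return x, P

    # usage sketch: track a vehicle moving past a beacon at the origin
    x_est = np.array([0.0, 12.0, 1.0, 0.0, 0.0])  # deliberately wrong initial py
    P_est = np.eye(5)
    truth = np.array([0.0, 10.0, 1.0, 0.0])
    z = 0.0
    for _ in range(50):
        truth[:2] += truth[2:] * 0.1
        z = float(np.hypot(truth[0], truth[1]))   # exact ranges for illustration
        x_est, P_est = ekf_step(x_est, P_est, z, (0.0, 0.0), 0.1)
    ```

    After a few dozen updates the filter's predicted range (geometric range plus the bias state) should agree closely with the latest measurement.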

  12. STUDY ON THE METEOROLOGICAL PREDICTION MODEL USING THE LEARNING ALGORITHM OF NEURAL ENSEMBLE BASED ON PSO ALGORITHMS

    Institute of Scientific and Technical Information of China (English)

    WU Jian-sheng; JIN Long

    2009-01-01

    Because of the difficulty of deciding on the structure of a BP neural network in operational meteorological applications, and the tendency of the network to fall into local solutions, a hybrid Particle Swarm Optimization based Artificial Neural Network (PSO-BP) model is proposed for the monthly mean rainfall of the whole area of Guangxi. It combines Particle Swarm Optimization (PSO) with BP: the number of hidden nodes and the connection weights are optimized by PSO. The method produces a better network architecture and better initial connection weights, and then retrains the network with traditional back-propagation on the training samples. An ensemble strategy is carried out via linear programming to calculate the best weights, with the "least sum of absolute errors" as the optimality rule, yielding the weighting coefficient of each ensemble member. The results show that the method can effectively improve the learning and generalization ability of the neural network.
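    The PSO-over-network-weights idea can be sketched on a stand-in regression task. The target function, network size, and PSO constants below are assumptions for illustration; the paper additionally optimizes the hidden-node count and retrains with back-propagation afterwards.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.uniform(-1, 1, (64, 2))
    y = np.sin(X[:, 0]) + 0.5 * X[:, 1]          # toy stand-in target, not rainfall data

    H = 6                                        # hidden nodes (fixed here for brevity)
    dim = 2 * H + H + H + 1                      # all weights and biases of a 2-H-1 net

    def mse(w):
        """Mean squared error of the small tanh network encoded by vector w."""
        W1 = w[:2 * H].reshape(2, H)
        b1 = w[2 * H:3 * H]
        W2 = w[3 * H:4 * H]
        b2 = w[-1]
        out = np.tanh(X @ W1 + b1) @ W2 + b2
        return float(np.mean((out - y) ** 2))

    n_particles, n_iter = 30, 200
    pos = rng.uniform(-1, 1, (n_particles, dim))
    vel = np.zeros((n_particles, dim))
    pbest = pos.copy()
    pbest_f = np.array([mse(p) for p in pos])
    gbest = pbest[pbest_f.argmin()].copy()
    f0 = float(pbest_f.min())                    # best error before optimization

    for _ in range(n_iter):
        r1 = rng.random((n_particles, dim))
        r2 = rng.random((n_particles, dim))
        # standard PSO velocity update: inertia + cognitive + social terms
        vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
        pos = pos + vel
        f = np.array([mse(p) for p in pos])
        better = f < pbest_f
        pbest[better] = pos[better]
        pbest_f[better] = f[better]
        gbest = pbest[pbest_f.argmin()].copy()
    ```

    The swarm's global best `gbest` then serves as the initial weight vector handed to ordinary back-propagation in the hybrid scheme.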

  13. Approximation Algorithms for the Highway Problem under the Coupon Model

    Science.gov (United States)

    Hamane, Ryoso; Itoh, Toshiya; Tomita, Kouhei

    When a store sells items to customers, the store wishes to set the prices of the items to maximize its profit. Intuitively, if the store sells the items at low (resp. high) prices, the customers buy more (resp. fewer) items, which provides less profit to the store, so it is hard for the store to decide the prices of the items. Assume that the store has a set V of n items and there is a set E of m customers who wish to buy the items, and also assume that each item i ∈ V has production cost di and each customer ej ∈ E has valuation vj on the bundle ej ⊆ V of items. When the store sells an item i ∈ V at price ri, the profit for the item i is pi = ri - di. The goal of the store is to set the price of each item to maximize its total profit. We refer to this maximization problem as the item pricing problem. Most previous work considered the item pricing problem under the assumption that pi ≥ 0 for each i ∈ V; however, Balcan et al. [In Proc. of WINE, LNCS 4858, 2007] introduced the notion of "loss-leader" and showed that the store can obtain more total profit when pi < 0 is allowed. In this paper, we consider the line highway problem (in which each customer is interested in an interval on the line of the items) and the cycle highway problem (in which each customer is interested in an interval on the cycle of the items), and show approximation algorithms for the line highway problem and the cycle highway problem in which the smallest valuation is s and the largest valuation is l (called an [s, l]-valuation setting) or all valuations are identical (called a single valuation setting).
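    A tiny brute-force illustration of the item pricing objective on the line highway problem. The items, costs, customer intervals, valuations, and the price grid are made-up data; realistic instances need the approximation algorithms the paper develops.

    ```python
    from itertools import product

    items_cost = [1.0, 1.0, 1.0]                 # production cost d_i per item
    # each customer: (interval of item indices, valuation v_j on that bundle)
    customers = [((0, 1), 5.0), ((1, 2), 4.0), ((0, 2), 7.0)]

    def profit(prices):
        """Total profit when each customer j buys its interval e_j
        iff the bundle price does not exceed its valuation v_j."""
        total = 0.0
        for (lo, hi), v in customers:
            bundle = sum(prices[lo:hi + 1])
            if bundle <= v:
                total += sum(prices[i] - items_cost[i] for i in range(lo, hi + 1))
        return total

    # exhaustive search over a small price grid (feasible only for toy sizes)
    grid = [0.0, 1.0, 2.0, 3.0, 4.0]
    best = max(product(grid, repeat=len(items_cost)), key=profit)
    ```

    On this instance the optimum on the grid is the price vector (3, 2, 2): all three customers buy and the profit is 9, matching the continuous optimum obtained from the bundle constraints.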

  14. Evaluation of models generated via hybrid evolutionary algorithms ...

    African Journals Online (AJOL)

    2016-04-02

    Apr 2, 2016 ... Cyanobacteria are responsible for many problems in drinking water treatment works ... events of high cyanobacterial cell concentrations in the source water are evident. ... Improvement of the models was achieved by structure.

  15. Model classification rate control algorithm for video coding

    Institute of Scientific and Technical Information of China (English)

    2005-01-01

    A model-classification rate control method for video coding is proposed. Macroblocks are classified according to their prediction errors, and different parameters are used in the rate-quantization and distortion-quantization models. The model parameters for each class are calculated from the previous frame of the same type during coding. These models are used to estimate the relations among rate, distortion, and quantization for the current frame. Further steps, such as R-D-optimization-based quantization adjustment and smoothing of the quantization of adjacent macroblocks, are used to improve quality. Experiments prove that the technique is effective and easy to implement. The method presented in this paper is well suited to MPEG and H.264 rate control.
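    As a concrete illustration of a rate-quantization model of the kind described, the widely used quadratic form R(Q) = a·MAD/Q + b·MAD/Q² can be inverted for the quantization step. The specific model form and parameter values here are assumptions, since the abstract does not spell them out.

    ```python
    def q_from_target(target_bits, mad, a, b):
        """Invert the quadratic rate-quantization model
        R(Q) = a*MAD/Q + b*MAD/Q**2 for the quantization step Q.

        Multiplying through by Q**2 gives the quadratic
        R*Q**2 - a*MAD*Q - b*MAD = 0, whose positive root is taken.
        """
        disc = (a * mad) ** 2 + 4.0 * target_bits * b * mad
        return (a * mad + disc ** 0.5) / (2.0 * target_bits)

    # usage: with a=0.2, b=1.5, MAD=10, a target of 1.4375 bits maps back to Q=4
    q = q_from_target(1.4375, 10.0, 0.2, 1.5)
    ```

    In a per-class scheme, a and b would be refitted from the (Q, R) statistics of the previous frame of the same macroblock class.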

  16. An Introduction to Model Selection: Tools and Algorithms

    Directory of Open Access Journals (Sweden)

    Sébastien Hélie

    2006-03-01

    Model selection is a complicated matter in science, and psychology is no exception. In particular, the high variance in the object of study (i.e., humans) prevents the use of Popper's falsification principle (which is the norm in other sciences). Therefore, the desirability of quantitative psychological models must be assessed by measuring the capacity of the model to fit empirical data. In the present paper, an error measure (likelihood), as well as five methods to compare model fits (the likelihood ratio test, Akaike's information criterion, the Bayesian information criterion, bootstrapping, and cross-validation), are presented. The use of each method is illustrated by an example, and the advantages and weaknesses of each method are also discussed.
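    Three of the comparison criteria mentioned above reduce to one-line formulas in terms of the maximized log-likelihood ln L, the number of free parameters k, and the sample size n:

    ```python
    import math

    def aic(log_lik, k):
        """Akaike's information criterion: 2k - 2 ln L (lower is better)."""
        return 2 * k - 2 * log_lik

    def bic(log_lik, k, n):
        """Bayesian information criterion: k ln n - 2 ln L (lower is better)."""
        return k * math.log(n) - 2 * log_lik

    def likelihood_ratio_stat(ll_restricted, ll_full):
        """Likelihood ratio test statistic for nested models: -2 (ln L0 - ln L1).
        Under the null it is asymptotically chi-squared with df equal to the
        difference in parameter counts."""
        return -2.0 * (ll_restricted - ll_full)
    ```

    BIC's k·ln n term penalizes extra parameters more heavily than AIC's 2k once n > e² ≈ 7.4, which is why the two criteria can disagree on moderate samples.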

  17. System convergence in transport models: algorithms efficiency and output uncertainty

    DEFF Research Database (Denmark)

    Rich, Jeppe; Nielsen, Otto Anker

    2015-01-01

    …much in the literature. The paper first investigates several variants of the Method of Successive Averages (MSA) by simulation experiments on a toy network. It is found that the simulation experiments support a weighted MSA approach. The weighted MSA approach is then analysed at large scale in the Danish National Transport Model (DNTM). It is revealed that system convergence requires that either demand or supply is without random noise, but not both. In that case, if MSA is applied to the model output with random noise, it will converge effectively as the random effects are gradually dampened in the MSA process. In connection with the DNTM it is shown that MSA works well when applied to travel-time averaging, whereas trip averaging is generally affected by random noise resulting from the assignment model. The latter implies that the minimum uncertainty in the final model output is dictated…
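    A minimal sketch of the (weighted) Method of Successive Averages, assuming the common polynomial weighting w_k ∝ (k+1)^β, where β = 0 recovers plain MSA and larger β gives more weight to recent iterates. The fixed-point map and iteration counts below are illustrative stand-ins for a supply/demand equilibration step.

    ```python
    def weighted_msa(update, x0, n_iter=50, beta=1.0):
        """Weighted MSA iteration  x_{k+1} = x_k + w_k (F(x_k) - x_k)
        with w_k = (k+1)**beta / sum_{j<=k} (j+1)**beta."""
        x = x0
        denom = 0.0
        for k in range(n_iter):
            wk_num = (k + 1) ** beta
            denom += wk_num
            w = wk_num / denom          # beta=0 gives the classic 1/(k+1) step
            x = x + w * (update(x) - x)
        return x

    # usage: averaging toward the fixed point of F(x) = 0.5*x + 1, i.e. x* = 2
    x_star = weighted_msa(lambda x: 0.5 * x + 1.0, 0.0, n_iter=500, beta=1.0)
    ```

    For this contraction the error after n iterations with β = 1 shrinks like 2/(n+1), illustrating how the averaging dampens per-iteration noise while still converging.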

  18. A Spherical Model Based Keypoint Descriptor and Matching Algorithm for Omnidirectional Images

    Directory of Open Access Journals (Sweden)

    Guofeng Tong

    2014-04-01

    Omnidirectional images generally exhibit nonlinear distortion in the radial direction. Unfortunately, traditional algorithms such as the scale-invariant feature transform (SIFT) and Descriptor-Nets (D-Nets) do not work well in matching omnidirectional images because they cannot deal with this distortion. To solve this problem, a new voting algorithm is proposed based on a spherical model and the D-Nets algorithm. Because the spherical keypoint descriptor contains the distortion information of omnidirectional images, the proposed matching algorithm is invariant to distortion. Keypoint matching experiments are performed on three pairs of omnidirectional images, comparing the proposed algorithm with SIFT and D-Nets. The results show that the proposed algorithm is more robust and more precise than SIFT and D-Nets in matching omnidirectional images. Compared with SIFT and D-Nets, the proposed algorithm has two main advantages: (a) there are more real matching keypoints; (b) the coverage range of the matching keypoints is wider, including the seriously distorted areas.

  19. An analysis dictionary learning algorithm under a noisy data model with orthogonality constraint.

    Science.gov (United States)

    Zhang, Ye; Yu, Tenglong; Wang, Wenwu

    2014-01-01

    Two common problems are often encountered in analysis dictionary learning (ADL) algorithms. The first one is that the original clean signals for learning the dictionary are assumed to be known, which otherwise need to be estimated from noisy measurements. This, however, renders a computationally slow optimization process and potentially unreliable estimation (if the noise level is high), as represented by the Analysis K-SVD (AK-SVD) algorithm. The other problem is the trivial solution to the dictionary, for example, the null dictionary matrix that may be given by a dictionary learning algorithm, as discussed in the learning overcomplete sparsifying transform (LOST) algorithm. Here we propose a novel optimization model and an iterative algorithm to learn the analysis dictionary, where we directly employ the observed data to compute the approximate analysis sparse representation of the original signals (leading to a fast optimization procedure) and enforce an orthogonality constraint on the optimization criterion to avoid the trivial solutions. Experiments demonstrate the competitive performance of the proposed algorithm as compared with three baselines, namely, the AK-SVD, LOST, and NAAOLA algorithms.

  20. An Analysis Dictionary Learning Algorithm under a Noisy Data Model with Orthogonality Constraint

    Directory of Open Access Journals (Sweden)

    Ye Zhang

    2014-01-01

    Two common problems are often encountered in analysis dictionary learning (ADL) algorithms. The first is that the original clean signals for learning the dictionary are assumed to be known, and otherwise need to be estimated from noisy measurements; this, however, renders a computationally slow optimization process and potentially unreliable estimation (if the noise level is high), as represented by the Analysis K-SVD (AK-SVD) algorithm. The other problem is the trivial solution to the dictionary, for example, the null dictionary matrix that may be given by a dictionary learning algorithm, as discussed in the learning overcomplete sparsifying transform (LOST) algorithm. Here we propose a novel optimization model and an iterative algorithm to learn the analysis dictionary, where we directly employ the observed data to compute the approximate analysis sparse representation of the original signals (leading to a fast optimization procedure) and enforce an orthogonality constraint on the optimization criterion to avoid trivial solutions. Experiments demonstrate the competitive performance of the proposed algorithm as compared with three baselines, namely, the AK-SVD, LOST, and NAAOLA algorithms.

  1. How effective and efficient are multiobjective evolutionary algorithms at hydrologic model calibration?

    Directory of Open Access Journals (Sweden)

    Y. Tang

    2006-01-01

    This study provides a comprehensive assessment of the relative effectiveness of state-of-the-art evolutionary multiobjective optimization (EMO) tools in calibrating hydrologic models. The relative computational efficiency, accuracy, and ease-of-use of the following EMO algorithms are tested: the Epsilon Dominance Non-dominated Sorted Genetic Algorithm-II (ε-NSGAII), the Multiobjective Shuffled Complex Evolution Metropolis algorithm (MOSCEM-UA), and the Strength Pareto Evolutionary Algorithm 2 (SPEA2). This study uses three test cases to compare the algorithms' performance: (1) a standardized test function suite from the computer science literature, (2) a benchmark hydrologic calibration test case for the Leaf River near Collins, Mississippi, and (3) a computationally intensive integrated surface-subsurface model application in the Shale Hills watershed in Pennsylvania. One challenge and contribution of this work is the development of a methodology for comprehensively comparing EMO algorithms that have different search operators and randomization techniques. Overall, SPEA2 attained competitive to superior results for most of the problems tested in this study. The primary strengths of the SPEA2 algorithm lie in its search reliability and its diversity preservation operator. The biggest challenge in maximizing the performance of SPEA2 lies in specifying an effective archive size without a priori knowledge of the Pareto set. In practice, this would require significant trial-and-error analysis, which is problematic for more complex, computationally intensive calibration applications. ε-NSGAII appears to be superior to MOSCEM-UA and competitive with SPEA2 for hydrologic model calibration. ε-NSGAII's primary strength lies in its ease-of-use due to its dynamic population sizing and archiving, which lead to rapid convergence to very high quality solutions with minimal user input. MOSCEM-UA is best suited for hydrologic model calibration applications that have small

  2. Reasoning with probabilistic and deterministic graphical models exact algorithms

    CERN Document Server

    Dechter, Rina

    2013-01-01

    Graphical models (e.g., Bayesian and constraint networks, influence diagrams, and Markov decision processes) have become a central paradigm for knowledge representation and reasoning in both artificial intelligence and computer science in general. These models are used to perform many reasoning tasks, such as scheduling, planning and learning, diagnosis and prediction, design, hardware and software verification, and bioinformatics. These problems can be stated as the formal tasks of constraint satisfaction and satisfiability, combinatorial optimization, and probabilistic inference. It is well

  3. Development and Evaluation of Model Algorithms to Account for Chemical Transformation in the Nearroad Environment

    Science.gov (United States)

    We describe the development and evaluation of two new model algorithms for NOx chemistry in the R-LINE near-road dispersion model for traffic sources. With increased urbanization, there is increased mobility leading to higher amount of traffic related activity on a global scale. ...

  4. Efficient Algorithms for Parsing the DOP Model? A Reply to Joshua Goodman

    CERN Document Server

    Bod, R

    1996-01-01

    This note is a reply to Joshua Goodman's paper "Efficient Algorithms for Parsing the DOP Model" (Goodman, 1996; cmp-lg/9604008). In his paper, Goodman makes a number of claims about (my work on) the Data-Oriented Parsing model (Bod, 1992-1996). This note shows that some of these claims must be mistaken.

  5. Development and Evaluation of Model Algorithms to Account for Chemical Transformation in the Nearroad Environment

    Science.gov (United States)

    We describe the development and evaluation of two new model algorithms for NOx chemistry in the R-LINE near-road dispersion model for traffic sources. With increased urbanization, there is increased mobility leading to higher amount of traffic related activity on a global scale. ...

  6. New Algorithm for 3D Facial Model Reconstruction and Its Application in Virtual Reality

    Institute of Scientific and Technical Information of China (English)

    Rong-Hua Liang; Zhi-Geng Pan; Chun Chen

    2004-01-01

    3D human face model reconstruction is essential to the generation of facial animations, which are widely used in the field of virtual reality (VR). The main issues of image-based 3D facial model reconstruction by vision technologies are twofold: one is to select and match the corresponding features of the face from two images with minimal interaction, and the other is to generate a realistic-looking human face model. In this paper, a new algorithm for realistic-looking face reconstruction based on stereo vision is presented. Firstly, a pattern is printed and attached to a planar surface for camera calibration; corner generation and corner matching between two images are performed by integrating a modified image-pyramid Lucas-Kanade (PLK) algorithm with a local adjustment algorithm, and the 3D coordinates of the corners are then obtained by 3D reconstruction. An individual face model is generated by deformation of a general 3D model and interpolation of the features. Finally, a realistic-looking human face model is obtained after texture mapping and eye modeling. In addition, some application examples in the field of VR are given. Experimental results show that the proposed algorithm is robust and the 3D model is photo-realistic.

  7. Design of Learning Model of Logic and Algorithms Based on APOS Theory

    Science.gov (United States)

    Hartati, Sulis Janu

    2014-01-01

    The research questions were "what are the characteristics of a learning model of logic and algorithms according to APOS theory" and "can this learning model improve students' learning outcomes". The research was conducted using an exploratory and quantitative approach. Exploration was used in constructing a theory about the…

  8. An integer multi-objective optimization model and an enhanced non-dominated sorting genetic algorithm for contraflow scheduling problem

    Institute of Scientific and Technical Information of China (English)

    李沛恒; 楼颖燕

    2015-01-01

    To determine the onset and duration of contraflow evacuation, a multi-objective optimization (MOO) model is proposed to explicitly consider both the total system evacuation time and the operation cost. A solution algorithm that enhances the popular evolutionary algorithm NSGA-II is proposed to solve the model. The algorithm incorporates preliminary results as prior information and includes a meta-model as an alternative to evaluation by simulation. Numerical analysis of a case study suggests that the proposed formulation and solution algorithm are valid, and the enhanced NSGA-II outperforms the original algorithm in both convergence to the true Pareto-optimal set and solution diversity.

  9. A proposed Fast algorithm to construct the system matrices for a reduced-order groundwater model

    Science.gov (United States)

    Ushijima, Timothy T.; Yeh, William W.-G.

    2017-04-01

    Past research has demonstrated that a reduced-order model (ROM) can be two-to-three orders of magnitude smaller than the original model and run considerably faster with acceptable error. A standard method to construct the system matrices for a ROM is Proper Orthogonal Decomposition (POD), which projects the system matrices from the full model space onto a subspace whose range spans the full model space but has a much smaller dimension than the full model space. This projection can be prohibitively expensive to compute if it must be done repeatedly, as with a Monte Carlo simulation. We propose a Fast Algorithm to reduce the computational burden of constructing the system matrices for a parameterized, reduced-order groundwater model (i.e. one whose parameters are represented by zones or interpolation functions). The proposed algorithm decomposes the expensive system matrix projection into a set of simple scalar-matrix multiplications. This allows the algorithm to efficiently construct the system matrices of a POD reduced-order model at a significantly reduced computational cost compared with the standard projection-based method. The developed algorithm is applied to three test cases for demonstration purposes. The first test case is a small, two-dimensional, zoned-parameter, finite-difference model; the second test case is a small, two-dimensional, interpolated-parameter, finite-difference model; and the third test case is a realistically-scaled, two-dimensional, zoned-parameter, finite-element model. In each case, the algorithm is able to accurately and efficiently construct the system matrices of the reduced-order model.
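    The scalar-matrix decomposition at the heart of the proposed Fast Algorithm can be sketched directly: for a zoned parameterization A(θ) = Σ_k θ_k A_k, the reduced matrix satisfies Φᵀ A(θ) Φ = Σ_k θ_k (Φᵀ A_k Φ), so the expensive projections are computed once offline and the online cost per parameter sample is only scalar-matrix work on r × r matrices. The matrix sizes and values below are random stand-ins, and the orthonormal basis is a QR stand-in for a true POD basis.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    n, r, K = 200, 8, 3                      # full dim, reduced dim, parameter zones

    A_zone = [rng.standard_normal((n, n)) for _ in range(K)]  # per-zone matrix parts
    Phi, _ = np.linalg.qr(rng.standard_normal((n, r)))        # stand-in POD basis

    # offline, done once: project each zone matrix onto the reduced basis
    A_zone_red = [Phi.T @ A @ Phi for A in A_zone]

    def reduced_matrix(theta):
        """Online, per parameter sample: only scalar-matrix multiplications
        of r x r matrices -- no n x n projection is repeated."""
        return sum(t * Ar for t, Ar in zip(theta, A_zone_red))

    theta = [2.0, -1.0, 0.5]
    fast = reduced_matrix(theta)
    slow = Phi.T @ sum(t * A for t, A in zip(theta, A_zone)) @ Phi  # standard projection
    ```

    The two routes agree to floating-point precision, which is exactly why the decomposition pays off in Monte Carlo settings where `reduced_matrix` is called thousands of times.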

  10. Proton Exchange Membrane Fuel Cell Modeling Based on Seeker Optimization Algorithm

    Institute of Scientific and Technical Information of China (English)

    LI Qi; DAI Chao-hua; Chen Wei-rong; JIA Jun-bo; HAN Ming

    2008-01-01

    The seeker optimization algorithm (SOA) is a swarm intelligence technique for continuous search spaces. For proton exchange membrane fuel cell (PEMFC) modeling, SOA was applied to search for a set of optimized parameters in the PEMFC polarization curve model. Experimental results showed that the mean square error of this optimization modeling strategy was only 6.9 × 10^-23. Hence, the optimized model could fit the experimental data with high precision.

  11. RECONFIGURING POWER SYSTEMS TO MINIMIZE CASCADING FAILURES: MODELS AND ALGORITHMS

    Energy Technology Data Exchange (ETDEWEB)

    Bienstock, Daniel

    2014-04-11

    The main goal of this project was to develop new scientific tools, based on optimization techniques, for controlling and modeling cascading failures of electrical power transmission systems. We have developed a high-quality tool for simulating cascading failures. The problem of how to control a cascade was addressed, with the aim of stopping the cascade with a minimum of lost load. Yet another aspect of cascades is the investigation of which events would trigger a cascade, or more precisely the computation of the most harmful initiating event given some constraint on the severity of the event. One common feature of the cascade models described (indeed, of several of the cascade models found in the literature) is that we study thermally induced line tripping. We have produced a study that accounts for exogenous randomness (e.g., wind and ambient temperature) that could affect the thermal behavior of a line, with a focus on controlling the power flow of the line while maintaining a safe probability of line overload. This was done by means of a rigorous analysis of a stochastic version of the heat equation. We also incorporated a model of randomness in the behavior of wind power output, again modeling an OPF-like problem that uses chance constraints to maintain a low probability of line overloads; this work has been continued so as to account for generator dynamics as well.

  12. Log-Linear Model Based Behavior Selection Method for Artificial Fish Swarm Algorithm

    Directory of Open Access Journals (Sweden)

    Zhehuang Huang

    2015-01-01

    Artificial fish swarm algorithm (AFSA) is a population-based optimization technique inspired by the social behavior of fishes. In the past several years, AFSA has been successfully applied in many research and application areas. The behavior of the fishes has a crucial impact on the performance of AFSA, for example on its global exploration ability and convergence speed, so how to construct and select the behaviors of the fishes is an important task. To solve these problems, an improved artificial fish swarm algorithm based on a log-linear model is proposed and implemented in this paper. There are three main contributions. Firstly, we propose a new behavior selection algorithm based on a log-linear model, which enhances the decision-making ability of behavior selection. Secondly, an adaptive movement behavior based on adaptive weights is presented, which can adjust dynamically according to the diversity of the fishes. Finally, some new behaviors are defined and introduced into the artificial fish swarm algorithm for the first time to improve its global optimization capability. Experiments on high-dimensional function optimization showed that the improved algorithm has more powerful global exploration ability and a reasonable convergence speed compared with the standard artificial fish swarm algorithm.

  13. Log-linear model based behavior selection method for artificial fish swarm algorithm.

    Science.gov (United States)

    Huang, Zhehuang; Chen, Yidong

    2015-01-01

    Artificial fish swarm algorithm (AFSA) is a population-based optimization technique inspired by the social behavior of fishes. In the past several years, AFSA has been successfully applied in many research and application areas. The behavior of the fishes has a crucial impact on the performance of AFSA, for example on its global exploration ability and convergence speed, so how to construct and select the behaviors of the fishes is an important task. To solve these problems, an improved artificial fish swarm algorithm based on a log-linear model is proposed and implemented in this paper. There are three main contributions. Firstly, we propose a new behavior selection algorithm based on a log-linear model, which enhances the decision-making ability of behavior selection. Secondly, an adaptive movement behavior based on adaptive weights is presented, which can adjust dynamically according to the diversity of the fishes. Finally, some new behaviors are defined and introduced into the artificial fish swarm algorithm for the first time to improve its global optimization capability. Experiments on high-dimensional function optimization showed that the improved algorithm has more powerful global exploration ability and a reasonable convergence speed compared with the standard artificial fish swarm algorithm.
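    A log-linear selection rule of the kind these two records describe is a softmax over behavior scores. The feature vectors and weights below are placeholders, not the paper's fish-behavior features.

    ```python
    import math
    import random

    def select_behavior(features, weights, rng=random):
        """Log-linear behavior selection: P(behavior b) is proportional to
        exp(w . f(b)), where features[b] is the feature vector of behavior b.
        Returns the sampled behavior index and the full probability vector."""
        scores = [sum(w * f for w, f in zip(weights, fv)) for fv in features]
        m = max(scores)                          # subtract max for numerical stability
        exps = [math.exp(s - m) for s in scores]
        total = sum(exps)
        probs = [e / total for e in exps]
        u, acc = rng.random(), 0.0
        for b, p in enumerate(probs):            # inverse-CDF sampling
            acc += p
            if u <= acc:
                return b, probs
        return len(probs) - 1, probs

    # usage: behavior 0 has a much higher score, so it dominates the draw
    idx, probs = select_behavior([[5.0], [0.0], [0.0]], [1.0])
    ```

    Retraining the weights from the fishes' recent success statistics is what makes the selection adaptive rather than a fixed softmax.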

  14. The algorithmic anatomy of model-based evaluation.

    Science.gov (United States)

    Daw, Nathaniel D; Dayan, Peter

    2014-11-05

    Despite many debates in the first half of the twentieth century, it is now largely a truism that humans and other animals build models of their environments and use them for prediction and control. However, model-based (MB) reasoning presents severe computational challenges. Alternative, computationally simpler, model-free (MF) schemes have been suggested in the reinforcement learning literature, and have afforded influential accounts of behavioural and neural data. Here, we study the realization of MB calculations, and the ways that this might be woven together with MF values and evaluation methods. There are as yet mostly only hints in the literature as to the resulting tapestry, so we offer more preview than review.
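
    The MB/MF distinction can be illustrated on a toy, invented Markov decision process: the model-based agent plans by value iteration over known transition and reward functions, while the model-free agent caches action values learned only from sampled experience:

```python
import random

# A toy deterministic MDP (invented for illustration): 2 states, 2 actions.
P = {(0, 0): 0, (0, 1): 1, (1, 0): 0, (1, 1): 1}          # transition: next state
R = {(0, 0): 0.0, (0, 1): 1.0, (1, 0): 0.0, (1, 1): 2.0}  # immediate reward
GAMMA = 0.9

def model_based_values(iters=200):
    """MB evaluation: value iteration using the known model P, R."""
    V = [0.0, 0.0]
    for _ in range(iters):
        V = [max(R[s, a] + GAMMA * V[P[s, a]] for a in (0, 1)) for s in (0, 1)]
    return V

def model_free_values(steps=5000, alpha=0.1, eps=0.2, rng=random.Random(1)):
    """MF evaluation: Q-learning from sampled transitions; the model serves
    only as a simulator and is never consulted for planning."""
    Q = {(s, a): 0.0 for s in (0, 1) for a in (0, 1)}
    s = 0
    for _ in range(steps):
        # Epsilon-greedy action selection.
        a = rng.choice((0, 1)) if rng.random() < eps else max((0, 1), key=lambda u: Q[s, u])
        s2, r = P[s, a], R[s, a]
        Q[s, a] += alpha * (r + GAMMA * max(Q[s2, 0], Q[s2, 1]) - Q[s, a])
        s = s2
    return [max(Q[s, 0], Q[s, 1]) for s in (0, 1)]
```

    Both schemes converge to the same values here, but the MF agent needs thousands of samples where the MB agent needs only arithmetic over its model, which is the computational trade-off the paper examines.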

  15. Nonlinear unmixing of hyperspectral images: models and algorithms

    CERN Document Server

    Dobigeon, Nicolas; Richard, Cédric; Bermudez, José C M; McLaughlin, Stephen; Hero, Alfred O

    2013-01-01

    When considering the problem of unmixing hyperspectral images, most of the literature in the geoscience and image processing areas relies on the widely acknowledged linear mixing model (LMM). However, in specific but common contexts, the LMM may not be valid, and other nonlinear models should be invoked. Consequently, over the last few years, several significant contributions have been proposed to overcome the limitations inherent in the LMM. In this paper, we present an overview of recent advances that deal with the nonlinear unmixing problem. The main nonlinear models are introduced and their validity discussed. Then, we describe the main classes of unmixing strategies designed to solve the problem in supervised and unsupervised frameworks. Finally, the problem of detecting nonlinear mixtures in hyperspectral images is addressed.
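
    Under the LMM, a pixel spectrum is approximately a nonnegative, sum-to-one mixture of endmember spectra. A crude projected-gradient sketch of supervised abundance estimation under that model (the endmember matrix, step size, and projection scheme are illustrative assumptions, not from the paper):

```python
import numpy as np

def lmm_unmix(pixel, endmembers, iters=2000, lr=0.05):
    """Estimate abundances a (a >= 0, sum(a) = 1) with pixel ~ E @ a under the
    LMM, via projected gradient descent: a simple stand-in for the fully
    constrained least-squares solvers used in practice."""
    E = np.asarray(endmembers, dtype=float)   # bands x endmembers
    y = np.asarray(pixel, dtype=float)
    m = E.shape[1]
    a = np.full(m, 1.0 / m)                   # start from a uniform mixture
    for _ in range(iters):
        grad = E.T @ (E @ a - y)              # gradient of 0.5 * ||E a - y||^2
        a = np.clip(a - lr * grad, 0.0, None)  # nonnegativity projection
        s = a.sum()
        a = a / s if s > 0 else np.full(m, 1.0 / m)  # approximate sum-to-one projection
    return a
```

    Nonlinear models replace the linear map E @ a with, for example, bilinear interaction terms, which is where the strategies surveyed above depart from this baseline.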

  16. A Convex Optimization Model and Algorithm for Retinex

    Directory of Open Access Journals (Sweden)

    Qing-Nan Zhao

    2017-01-01

    Full Text Available Retinex is a theory simulating and explaining how the human visual system perceives colors under different illumination conditions. The main contribution of this paper is to put forward a new convex optimization model for Retinex. Different from existing methods, the main idea is to rewrite the multiplicative form so that the illumination variable and the reflection variable are decoupled in the spatial domain. The resulting objective function involves three terms: the Tikhonov regularization of the illumination component, the total variation regularization of the reciprocal of the reflection component, and the data-fitting term among the input image, the illumination component, and the reciprocal of the reflection component. We develop an alternating direction method of multipliers (ADMM) to solve the convex optimization model. Numerical experiments demonstrate the advantages of the proposed model, which can decompose an image into the illumination and the reflection components.

  17. Quadratic adaptive algorithm for solving cardiac action potential models.

    Science.gov (United States)

    Chen, Min-Hung; Chen, Po-Yuan; Luo, Ching-Hsing

    2016-10-01

    An adaptive integration method is proposed for computing cardiac action potential models accurately and efficiently. Time steps are adaptively chosen by solving a quadratic formula involving the first and second derivatives of the membrane action potential. To improve the numerical accuracy, we devise an extremum-locator (el) function to predict the local extremum when approaching the peak amplitude of the action potential. In addition, the time step restriction (tsr) technique is designed to limit the increase in time steps, and thus prevent the membrane potential from changing abruptly. The performance of the proposed method is tested using the Luo-Rudy phase 1 (LR1), dynamic (LR2), and human O'Hara-Rudy dynamic (ORd) ventricular action potential models, and the Courtemanche atrial model incorporating a Markov sodium channel model. Numerical experiments demonstrate that the action potential generated using the proposed method is more accurate than that using the traditional Hybrid method, especially near the peak region. The traditional Hybrid method may choose large time steps near the peak region, and sometimes causes the action potential to become distorted. In contrast, the proposed new method chooses very fine time steps in the peak region, but large time steps in the smooth region, and the profiles are smoother and closer to the reference solution. In the test on the stiff Markov ionic channel model, the Hybrid method blows up if the allowable time step is set greater than 0.1 ms. In contrast, our method can adjust the time step size automatically, and is stable. Overall, the proposed method is more accurate than and as efficient as the traditional Hybrid method, especially for the human ORd model. The proposed method shows improvement for action potentials with a non-smooth morphology, and further investigation is needed to determine whether the method is helpful during propagation of the action potential. Copyright © 2016 Elsevier Ltd. All rights reserved.
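
    The time-step selection idea can be sketched as solving the quadratic |v1|*dt + 0.5*|v2|*dt^2 = tol for dt, with a growth cap standing in for the tsr technique; the tolerance and bounds below are invented defaults, not the paper's values:

```python
import math

def adaptive_step(v1, v2, tol=0.1, dt_min=1e-3, dt_max=1.0, growth=1.5, dt_prev=None):
    """Pick dt so the predicted 2nd-order change in membrane potential is ~tol (mV):
    solve |v1|*dt + 0.5*|v2|*dt^2 = tol for the positive root dt."""
    if abs(v2) < 1e-12:
        dt = tol / max(abs(v1), 1e-12)       # linear case: |v1| * dt = tol
    else:
        a, b = 0.5 * abs(v2), abs(v1)
        dt = (-b + math.sqrt(b * b + 4.0 * a * tol)) / (2.0 * a)
    if dt_prev is not None:
        dt = min(dt, growth * dt_prev)       # time-step restriction (tsr): cap growth
    return min(max(dt, dt_min), dt_max)      # keep dt inside [dt_min, dt_max]
```

    Fast dynamics (large derivatives, as near the upstroke) yield small steps, while smooth plateau regions yield steps near dt_max.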

  18. A Model-Based Autofocus Algorithm for Ultrasonic Imaging Using a Flexible Array

    Science.gov (United States)

    Hunter, A. J.; Drinkwater, B. W.; Wilcox, P. D.

    2010-02-01

    Autofocus is a methodology for estimating and correcting errors in the assumed parameters of an imaging algorithm. It provides improved image quality and, therefore, better defect detection and characterization capabilities. In this paper, we present a new autofocus algorithm developed specifically for ultrasonic non-destructive testing and evaluation (NDE). We consider the estimation and correction of errors in the assumed element positions for a flexible ultrasonic array coupled to a specimen with an unknown surface profile. The algorithm performs a weighted least-squares minimization of the time-of-arrival errors in the echo data using assumed models for known features in the specimen. The algorithm is described for point and planar specimen features and demonstrated using experimental data from a flexible array prototype.

  19. POLYNOMIAL MODEL BASED FAST FRACTIONAL PIXEL SEARCH ALGORITHM FOR H.264/AVC

    Institute of Scientific and Technical Information of China (English)

    Xi Yinglai; Hao Chongyang; Lai Changcai

    2006-01-01

    This paper proposes a novel fast fractional-pixel search algorithm based on a polynomial model. Based on an analysis of the distribution characteristics of the motion compensation error surface inside the fractional-pixel search window, the matching error is fitted with a parabola along the horizontal and vertical directions respectively. The proposed search strategy needs to check only 6 points, rather than the 16 or 24 points used in the Hierarchical Fractional Pel Search (HFPS) algorithm for 1/4-pel and 1/8-pel Motion Estimation (ME). The experimental results show that the proposed algorithm preserves the rate-distortion performance while reducing the computation load to a large extent compared with the HFPS algorithm.
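
    The one-dimensional parabola fit can be sketched as follows: given matching errors at integer offsets -1, 0, +1, the minimum of the interpolating parabola gives a fractional-pel offset (the clamping and the degenerate-case guard are our assumptions):

```python
def subpixel_offset(e_left, e_center, e_right):
    """Fit a parabola through the matching errors at integer offsets -1, 0, +1
    and return the real-valued offset of its minimum, clamped to [-0.5, 0.5].
    Applying this once horizontally and once vertically needs 6 error values,
    matching the 6 checked points mentioned above."""
    denom = e_left - 2.0 * e_center + e_right
    if denom <= 0.0:                  # flat or non-convex fit: keep integer position
        return 0.0
    off = 0.5 * (e_left - e_right) / denom
    return max(-0.5, min(0.5, off))
```
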

  20. A smoothing expectation and substitution algorithm for the semiparametric accelerated failure time frailty model.

    Science.gov (United States)

    Johnson, Lynn M; Strawderman, Robert L

    2012-09-20

    This paper proposes an estimation procedure for the semiparametric accelerated failure time frailty model that combines smoothing with an Expectation and Maximization-like algorithm for estimating equations. The resulting algorithm permits simultaneous estimation of the regression parameter, the baseline cumulative hazard, and the parameter indexing a general frailty distribution. We develop novel moment-based estimators for the frailty parameter, including a generalized method of moments estimator. Standard error estimates for all parameters are easily obtained using a randomly weighted bootstrap procedure. For the commonly used gamma frailty distribution, the proposed algorithm is very easy to implement using widely available numerical methods. Simulation results demonstrate that the algorithm performs very well in this setting. We re-analyze several previously analyzed data sets for illustrative purposes.

  1. Gas Emission Prediction Model of Coal Mine Based on CSBP Algorithm

    Directory of Open Access Journals (Sweden)

    Xiong Yan

    2016-01-01

    Full Text Available In view of the nonlinear characteristics of gas emission in a coal working face, a prediction method is proposed based on a BP neural network optimized by the cuckoo search algorithm (CSBP). In the CSBP algorithm, cuckoo search is adopted to optimize the weight and threshold parameters of the BP network and obtain globally optimal solutions. Furthermore, the twelve main factors affecting gas emission in the coal working face are taken as the input vector of the CSBP algorithm and the gas emission as the output vector, and the prediction model of the BP neural network with optimal parameters is then established. The results show that the CSBP algorithm has better generalization ability and higher prediction accuracy, and can be utilized effectively in the prediction of coal mine gas emission.
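
    The cuckoo-search layer can be sketched generically; in the paper it would minimize the BP network's training error over its weights and thresholds, but here it minimizes a toy sphere function, and the Levy-flight step is a simplified heavy-tailed approximation rather than the full Mantegna scheme:

```python
import random

def cuckoo_search(f, dim, n=15, iters=300, pa=0.25, lo=-5.0, hi=5.0,
                  rng=random.Random(2)):
    """Minimize f over [lo, hi]^dim with a simplified cuckoo search:
    heavy-tailed (Levy-like) steps around the current best nest, greedy
    replacement, and random abandonment of a fraction pa of the nests."""
    nests = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n)]
    fit = [f(x) for x in nests]
    best = min(range(n), key=lambda i: fit[i])
    for _ in range(iters):
        for i in range(n):
            # Levy-like step toward/around the best nest plus local noise.
            step = [0.01 * rng.gauss(0, 1) / max(abs(rng.gauss(0, 1)), 1e-9) ** 0.5
                    * (x - b) for x, b in zip(nests[i], nests[best])]
            cand = [min(hi, max(lo, x + s + 0.1 * rng.gauss(0, 1)))
                    for x, s in zip(nests[i], step)]
            fc = f(cand)
            if fc < fit[i]:                    # greedy replacement
                nests[i], fit[i] = cand, fc
        for i in range(n):                     # abandon some nests (never the best)
            if i != best and rng.random() < pa:
                nests[i] = [rng.uniform(lo, hi) for _ in range(dim)]
                fit[i] = f(nests[i])
        best = min(range(n), key=lambda i: fit[i])
    return nests[best], fit[best]
```

    In the CSBP setting, each "nest" would encode one full set of BP weights and thresholds, and f would be the network's prediction error on the training data.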

  2. All-pairs Shortest Path Algorithm based on MPI+CUDA Distributed Parallel Programming Model

    Directory of Open Access Journals (Sweden)

    Qingshuang Wu

    2013-12-01

    Full Text Available Computing shortest paths in a graph is a complex and time-consuming process, and traditional algorithms that rely solely on the CPU as the computing unit cannot meet the demands of real-time processing. In this paper, we present an all-pairs shortest paths algorithm using the MPI+CUDA hybrid programming model, which exploits the overwhelming computing power of a GPU cluster to speed up the processing. The proposed algorithm combines the advantages of the MPI and CUDA programming models and realizes two-level parallel computing. At the cluster level, we use the MPI programming model to achieve coarse-grained parallel computing between the computational nodes of the GPU cluster. At the node level, we use the CUDA programming model to achieve GPU-accelerated fine-grained parallel computing within each computational node. The experimental results show that the MPI+CUDA-based parallel algorithm can take full advantage of the powerful computing capability of the GPU cluster and achieve a speedup of about several hundred times. The whole algorithm has good computing performance, reliability and scalability, and is able to meet the demands of real-time processing of massive spatial shortest-path analysis.
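
    The computational kernel being distributed is all-pairs shortest paths; a plain serial Floyd-Warshall baseline makes the parallel structure visible (the comments indicate where the two levels of parallelism described above would apply; the mapping is our reading, not code from the paper):

```python
INF = float("inf")

def floyd_warshall(w):
    """All-pairs shortest paths on an adjacency matrix (INF = no edge).
    This serial O(n^3) kernel is what an MPI+CUDA scheme distributes:
    the k loop stays sequential, while the i/j loops are data-parallel
    (e.g. rows partitioned across MPI ranks, cells across CUDA threads)."""
    n = len(w)
    d = [row[:] for row in w]
    for k in range(n):
        for i in range(n):
            dik = d[i][k]
            if dik == INF:
                continue
            for j in range(n):
                nd = dik + d[k][j]
                if nd < d[i][j]:
                    d[i][j] = nd
    return d
```
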

  3. Fitting Social Network Models Using Varying Truncation Stochastic Approximation MCMC Algorithm

    KAUST Repository

    Jin, Ick Hoon

    2013-10-01

    The exponential random graph model (ERGM) plays a major role in social network analysis. However, parameter estimation for the ERGM is a hard problem due to the intractability of its normalizing constant and the model degeneracy. The existing algorithms, such as Monte Carlo maximum likelihood estimation (MCMLE) and stochastic approximation, often fail for this problem in the presence of model degeneracy. In this article, we introduce the varying truncation stochastic approximation Markov chain Monte Carlo (SAMCMC) algorithm to tackle this problem. The varying truncation mechanism enables the algorithm to choose an appropriate starting point and an appropriate gain factor sequence, and thus to produce a reasonable parameter estimate for the ERGM even in the presence of model degeneracy. The numerical results indicate that the varying truncation SAMCMC algorithm can significantly outperform the MCMLE and stochastic approximation algorithms: for degenerate ERGMs, MCMLE and stochastic approximation often fail to produce any reasonable parameter estimates, while SAMCMC can do; for nondegenerate ERGMs, SAMCMC can work as well as or better than MCMLE and stochastic approximation. The data and source codes used for this article are available online as supplementary materials. © 2013 American Statistical Association, Institute of Mathematical Statistics, and Interface Foundation of North America.

  4. A novel computer algorithm for modeling and treating mandibular fractures: A pilot study.

    Science.gov (United States)

    Rizzi, Christopher J; Ortlip, Timothy; Greywoode, Jewel D; Vakharia, Kavita T; Vakharia, Kalpesh T

    2017-02-01

    To describe a novel computer algorithm that can model mandibular fracture repair. To evaluate the algorithm as a tool to model mandibular fracture reduction and hardware selection. Retrospective pilot study combined with cross-sectional survey. A computer algorithm utilizing Aquarius Net (TeraRecon, Inc, Foster City, CA) and Adobe Photoshop CS6 (Adobe Systems, Inc, San Jose, CA) was developed to model mandibular fracture repair. Ten different fracture patterns were selected from nine patients who had already undergone mandibular fracture repair. The preoperative computed tomography (CT) images were processed with the computer algorithm to create virtual images that matched the actual postoperative three-dimensional CT images. A survey comparing the true postoperative image with the virtual postoperative images was created and administered to otolaryngology resident and attending physicians. They were asked to rate on a scale from 0 to 10 (0 = completely different; 10 = identical) the similarity between the two images in terms of the fracture reduction and fixation hardware. Ten mandible fracture cases were analyzed and processed. There were 15 survey respondents. The mean score for overall similarity between the images was 8.41 ± 0.91; the mean score for similarity of fracture reduction was 8.61 ± 0.98; and the mean score for hardware appearance was 8.27 ± 0.97. There were no significant differences between attending and resident responses. There were no significant differences based on fracture location. This computer algorithm can accurately model mandibular fracture repair. Images created by the algorithm are highly similar to true postoperative images. The algorithm can potentially assist a surgeon planning mandibular fracture repair. 4. Laryngoscope, 2016 127:331-336, 2017. © 2016 The American Laryngological, Rhinological and Otological Society, Inc.

  5. A computationally efficient depression-filling algorithm for digital elevation models, applied to proglacial lake drainage

    Science.gov (United States)

    Berends, Constantijn J.; van de Wal, Roderik S. W.

    2016-12-01

    Many processes govern the deglaciation of ice sheets. One of the processes that is usually ignored is the calving of ice in lakes that temporarily surround the ice sheet. In order to capture this process a "flood-fill algorithm" is needed. Here we present and evaluate several optimizations to a standard flood-fill algorithm in terms of computational efficiency. As an example, we determine the land-ocean mask for a 1 km resolution digital elevation model (DEM) of North America and Greenland, a geographical area of roughly 7000 by 5000 km (roughly 35 million elements), about half of which is covered by ocean. Determining the land-ocean mask with our improved flood-fill algorithm reduces computation time by 90 % relative to using a standard stack-based flood-fill algorithm. This implies that it is now feasible to include the calving of ice in lakes as a dynamical process inside an ice-sheet model. We demonstrate this by using bedrock elevation, ice thickness and geoid perturbation fields from the output of a coupled ice-sheet-sea-level equation model at 30 000 years before present and determine the extent of Lake Agassiz, using both the standard and improved versions of the flood-fill algorithm. We show that several optimizations to the flood-fill algorithm used for filling a depression up to a water level, which is not defined beforehand, decrease the computation time by up to 99 %. The resulting reduction in computation time allows determination of the extent and volume of depressions in a DEM over large geographical grids or repeatedly over long periods of time, where computation time might otherwise be a limiting factor. The algorithm can be used for all glaciological and hydrological models, which need to trace the evolution over time of lakes or drainage basins in general.
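
    The standard stack-based flood fill that the paper optimizes can be sketched as follows (the grid, seed, and threshold are illustrative, standing in for the DEM, an ocean seed cell, and sea level):

```python
def flood_fill_mask(elev, seed, level):
    """Stack-based flood fill: mark every cell 4-connected to `seed` whose
    elevation is below `level` (e.g. ocean cells below sea level). The paper's
    optimizations speed this kernel up; the resulting mask is the same."""
    rows, cols = len(elev), len(elev[0])
    mask = [[False] * cols for _ in range(rows)]
    stack = [seed]
    while stack:
        r, c = stack.pop()
        if not (0 <= r < rows and 0 <= c < cols):
            continue                      # outside the grid
        if mask[r][c] or elev[r][c] >= level:
            continue                      # already filled, or above the water level
        mask[r][c] = True
        stack.extend([(r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)])
    return mask
```

    A lake fill where the water level is not known beforehand, as for Lake Agassiz above, repeats such fills while raising the level until the depression spills over, which is why the optimizations matter.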

  6. Testing of a pulsed He supersonic beam for plasma edge diagnostic in the TJ-IU torsatron

    Science.gov (United States)

    Tabarés, F. L.; Tafalla, D.; Herrero, V.; Tanarro, I.

    1997-02-01

    A new, compact atomic beam source based on the supersonic expansion of He has been developed for application as a plasma edge diagnostic. The beam is produced from a pulsed valve with a duration between 0.2 and 2 ms and a nominal repetition rate < 500 Hz. A terminal speed ratio > 10 and a divergence of ± 1° have been achieved at stagnation pressures below 2 bar. The diagnostic has been tested in ECRH plasmas on the TJ-IU torsatron, representing the first application of a supersonic beam to plasma characterization, to our knowledge. Operational conditions which minimized the total amount of He injected into the plasma were chosen. Non-perturbative injection conditions in the low density plasmas could be obtained at local He densities of ⋍ 1 × 10¹¹ cm⁻³ and a beam diameter < 1 cm. Due to the relatively low electron density of the ECRH plasmas, and to the good penetration characteristics of the supersonic He beam, the diagnostic could be used up to fairly low values of the normalized plasma minor radius, r/a (a = 12 cm). Details of the optimization of the atomic beam diagnostics and typical results for steady state conditions in the TJ-IU plasmas are presented.

  7. Testing of a pulsed He supersonic beam for plasma edge diagnostic in the TJ-IU torsatron

    Energy Technology Data Exchange (ETDEWEB)

    Tabares, F.L. [Association EURATOM/CIEMAT, Madrid (Spain); Tafalla, D. [Association EURATOM/CIEMAT, Madrid (Spain); Herrero, V. [Instituto de Estructura de la Materia, CSIC, 28006 Madrid (Spain); Tanarro, I. [Instituto de Estructura de la Materia, CSIC, 28006 Madrid (Spain)

    1997-02-01

    A new, compact atomic beam source based on the supersonic expansion of He has been developed for application as a plasma edge diagnostic. The beam is produced from a pulsed valve with a duration between 0.2 and 2 ms and a nominal repetition rate < 500 Hz. A terminal speed ratio > 10 and a divergence of ± 1° have been achieved at stagnation pressures below 2 bar. The diagnostic has been tested in ECRH plasmas on the TJ-IU torsatron, representing the first application of a supersonic beam to plasma characterization, to our knowledge. Operational conditions which minimized the total amount of He injected into the plasma were chosen. Non-perturbative injection conditions in the low density plasmas could be obtained at local He densities of ≈ 1 × 10¹¹ cm⁻³ and a beam diameter < 1 cm. Due to the relatively low electron density of the ECRH plasmas, and to the good penetration characteristics of the supersonic He beam, the diagnostic could be used up to fairly low values of the normalized plasma minor radius, r/a (a = 12 cm). Details of the optimization of the atomic beam diagnostics and typical results for steady state conditions in the TJ-IU plasmas are presented. (orig.)

  8. A coupled model tree genetic algorithm scheme for flow and water quality predictions in watersheds

    Science.gov (United States)

    Preis, Ami; Ostfeld, Avi

    2008-02-01

    The rapid advance in information processing systems, along with increasing data availability, has directed research towards the development of intelligent systems that evolve models of natural phenomena automatically. This is the discipline of data-driven modeling: the study of algorithms that improve automatically through experience. Applications of data-driven modeling range from data mining schemes that discover general rules in large data sets to information filtering systems that automatically learn users' interests. This study presents a data-driven modeling algorithm for flow and water quality load predictions in watersheds. The methodology comprises a coupled model tree-genetic algorithm scheme: the model tree predicts flow and water quality constituents, while the genetic algorithm is employed to calibrate the model tree parameters. The methodology is demonstrated through base runs and sensitivity analysis for daily flow and water quality load predictions on a watershed in northern Israel. The method produced close fits in most cases, but was limited in estimating the peak flows and water quality loads.

  9. New Virtual Cutting Algorithms for 3D Surface Model Reconstructed from Medical Images

    Institute of Scientific and Technical Information of China (English)

    WANG Wei-hong; QIN Xu-Jia

    2006-01-01

    This paper proposes practical algorithms for plane cutting, stereo clipping and arbitrary cutting of 3D surface models reconstructed from medical images. In the plane cutting and stereo clipping algorithms, the 3D model is cut by a plane or a polyhedron. Lists of the edges and vertices in every cut plane are established; from these lists the boundary contours are created and their containment relationships are ascertained. The region closed by the contours is triangulated using a Delaunay triangulation algorithm. The arbitrary cutting operation creates the cutting curve interactively. The cut model still maintains a correct topological structure. With these operations, internal tissues can be observed easily, which can aid doctors in diagnosis. The methods can also be used in surgery planning for radiotherapy.

  10. An API for Integrating Spatial Context Models with Spatial Reasoning Algorithms

    DEFF Research Database (Denmark)

    Kjærgaard, Mikkel Baun

    2006-01-01

    The integration of context-aware applications with spatial context models is often done using a common query language. However, algorithms that estimate and reason about spatial context information can benefit from a tighter integration. An object-oriented API makes such integration possible and … modeling. The utility of the API is evaluated in several real-world cases from an indoor location system, and spans several types of spatial reasoning algorithms.

  11. GPU-based single-cluster algorithm for the simulation of the Ising model

    Science.gov (United States)

    Komura, Yukihiro; Okabe, Yutaka

    2012-02-01

    We present the GPU calculation with the common unified device architecture (CUDA) for the Wolff single-cluster algorithm of the Ising model. Proposing an algorithm for a quasi-block synchronization, we realize the Wolff single-cluster Monte Carlo simulation with CUDA. We perform parallel computations for the newly added spins in the growing cluster. As a result, the GPU calculation speed for the two-dimensional Ising model at the critical temperature with the linear size L = 4096 is 5.60 times as fast as the calculation speed on a current CPU core. For the three-dimensional Ising model with the linear size L = 256, the GPU calculation speed is 7.90 times as fast as the CPU calculation speed. The idea of quasi-block synchronization can be used not only in the cluster algorithm but also in many fields where the synchronization of all threads is required.
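
    The Wolff single-cluster update itself can be sketched serially (without the CUDA quasi-block synchronization described above, which parallelizes the growth of the cluster front):

```python
import math
import random

def wolff_step(spins, L, beta, rng):
    """One Wolff single-cluster update for the 2D Ising model on an L x L
    periodic lattice: grow a cluster of aligned spins with bond probability
    p = 1 - exp(-2*beta), then flip the whole cluster."""
    p_add = 1.0 - math.exp(-2.0 * beta)
    seed = rng.randrange(L * L)
    s0 = spins[seed]
    cluster, stack = {seed}, [seed]
    while stack:
        i = stack.pop()
        r, c = divmod(i, L)
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            j = (nr % L) * L + (nc % L)      # periodic boundary conditions
            if j not in cluster and spins[j] == s0 and rng.random() < p_add:
                cluster.add(j)
                stack.append(j)
    for i in cluster:                         # flip the grown cluster
        spins[i] = -s0
    return len(cluster)
```

    In the GPU version, the newly added spins on the cluster boundary are examined in parallel at each growth step, which is where thread synchronization becomes the bottleneck.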

  12. GPU-based single-cluster algorithm for the simulation of the Ising model

    CERN Document Server

    Komura, Yukihiro

    2011-01-01

    We present the GPU calculation with the common unified device architecture (CUDA) for the Wolff single-cluster algorithm of the Ising model. Proposing an algorithm for a quasi-block synchronization, we realize the Wolff single-cluster Monte Carlo simulation with CUDA. We perform parallel computations for the newly added spins in the growing cluster. As a result, the GPU calculation speed for the two-dimensional Ising model at the critical temperature with the linear size L=4096 is 5.60 times as fast as the calculation speed on a current CPU core. For the three-dimensional Ising model with the linear size L=256, the GPU calculation speed is 7.90 times as fast as the CPU calculation speed. The idea of quasi-block synchronization can be used not only in the cluster algorithm but also in many fields where the synchronization of all threads is required.

  13. Stellar Structure Modeling using a Parallel Genetic Algorithm for Objective Global Optimization

    CERN Document Server

    Metcalfe, T S

    2002-01-01

    Genetic algorithms are a class of heuristic search techniques that apply basic evolutionary operators in a computational setting. We have designed a fully parallel and distributed hardware/software implementation of the generalized optimization subroutine PIKAIA, which utilizes a genetic algorithm to provide an objective determination of the globally optimal parameters for a given model against an observational data set. We have used this modeling tool in the context of white dwarf asteroseismology, i.e., the art and science of extracting physical and structural information about these stars from observations of their oscillation frequencies. The efficient, parallel exploration of parameter-space made possible by genetic-algorithm-based numerical optimization led us to a number of interesting physical results: (1) resolution of a hitherto puzzling discrepancy between stellar evolution models and prior asteroseismic inferences of the surface helium layer mass for a DBV white dwarf; (2) precise determination of...

  14. Stable reduced-order models of generalized dynamical systems using coordinate-transformed Arnoldi algorithms

    Energy Technology Data Exchange (ETDEWEB)

    Silveira, L.M.; Kamon, M.; Elfadel, I.; White, J. [Massachusetts Inst. of Technology, Cambridge, MA (United States)

    1996-12-31

    Model order reduction based on Krylov subspace iterative methods has recently emerged as a major tool for compressing the number of states in linear models used for simulating very large physical systems (VLSI circuits, electromagnetic interactions). There are currently two main methods for accomplishing such a compression: one is based on the nonsymmetric look-ahead Lanczos algorithm, which gives a numerically stable procedure for finding Padé approximations, while the other is based on a less well characterized Arnoldi algorithm. In this paper, we show that for certain classes of generalized state-space systems, the reduced-order models produced by a coordinate-transformed Arnoldi algorithm inherit the stability of the original system. Complete proofs of our results will be given in the final paper.
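
    The Krylov-subspace machinery underlying both approaches can be illustrated with a generic Arnoldi iteration (a textbook version, not the coordinate-transformed variant whose stability the paper proves):

```python
import numpy as np

def arnoldi(A, b, m):
    """Arnoldi iteration: build an orthonormal basis Q of the Krylov subspace
    span{b, Ab, ..., A^(m-1) b} and the (m+1) x m Hessenberg matrix H
    satisfying A Q[:, :m] = Q H, the relation used for model reduction."""
    n = len(b)
    Q = np.zeros((n, m + 1))
    H = np.zeros((m + 1, m))
    Q[:, 0] = b / np.linalg.norm(b)
    for j in range(m):
        v = A @ Q[:, j]
        for i in range(j + 1):                    # modified Gram-Schmidt
            H[i, j] = Q[:, i] @ v
            v -= H[i, j] * Q[:, i]
        H[j + 1, j] = np.linalg.norm(v)
        if H[j + 1, j] < 1e-12:                   # breakdown: exact invariant subspace
            return Q[:, : j + 1], H[: j + 1, : j]
        Q[:, j + 1] = v / H[j + 1, j]
    return Q, H
```

    The reduced-order model is obtained by projecting the full system onto the columns of Q; the paper's contribution concerns when that projection preserves stability.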

  15. A Business Intelligence Model to Predict Bankruptcy using Financial Domain Ontology with Association Rule Mining Algorithm

    CERN Document Server

    Martin, A; Venkatesan, Dr V Prasanna

    2011-01-01

    Today, in every organization, financial analysis provides the basis for understanding and evaluating the results of business operations and determining how well a business is doing, which means the organization can control the operational activities primarily related to corporate finance. One way of doing this is through bankruptcy prediction analysis. This paper develops an ontological model from the financial information of an organization by analyzing the semantics of the financial statements of a business. One of the best bankruptcy prediction models is the Altman Z-score model, which uses financial ratios to predict bankruptcy. From the financial ontological model, the relations between financial data are discovered using a data mining algorithm. By combining the financial domain ontological model with an association rule mining algorithm and the Z-score model, a new business intelligence model is developed to predict bankruptcy.

  16. Systems approach to modeling the Token Bucket algorithm in computer networks

    Directory of Open Access Journals (Sweden)

    Ahmed N. U.

    2002-01-01

    Full Text Available In this paper, we construct a new dynamic model for the Token Bucket (TB) algorithm used in computer networks and use a systems approach for its analysis. This model is then augmented by adding a dynamic model for a multiplexor at an access node where the TB exercises a policing function. In the model, traffic policing, multiplexing and network utilization are formally defined. Based on the model, we study issues such as quality of service (QoS), traffic sizing and network dimensioning. We also propose an algorithm using feedback control to improve QoS and network utilization. Applying MPEG video traces as the input traffic to the model, we verify the usefulness and effectiveness of our model.
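
    The TB policing function itself can be sketched in a few lines (the parameter names and units are ours, not the paper's dynamic-model notation):

```python
class TokenBucket:
    """Token Bucket policer: tokens accumulate at `rate` per second up to a depth
    of `burst`; a packet of `size` tokens conforms only if enough tokens remain."""

    def __init__(self, rate, burst):
        self.rate, self.burst = rate, burst
        self.tokens, self.t = burst, 0.0      # start with a full bucket at time 0

    def conforms(self, now, size):
        # Refill for the elapsed time, capped at the bucket depth.
        self.tokens = min(self.burst, self.tokens + (now - self.t) * self.rate)
        self.t = now
        if size <= self.tokens:
            self.tokens -= size
            return True      # packet passes the policer
        return False         # packet is dropped or marked nonconforming
```

    The bucket depth bounds the admissible burst size, and the fill rate bounds the long-term average rate, which is what makes the TB usable for the traffic sizing and dimensioning questions studied above.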

  17. Modelling the behavior of systems : Basic concepts and algorithms

    NARCIS (Netherlands)

    Cotroneo, T; Willems, JC; Powell, MJD; Scholtes, S

    2000-01-01

    In this paper we introduce the behavioral approach as a mathematical language for describing dynamical systems, in particular systems modeled by high-order constant-coefficient linear differential equations. We investigate what data have to be added in order to express the influence of the environment.

  18. Modelling the behavior of systems : Basic concepts and algorithms

    NARCIS (Netherlands)

    Cotroneo, T; Willems, JC; Powell, MJD; Scholtes, S

    2000-01-01

    In this paper we introduce the behavioral approach as a mathematical language for describing dynamical systems, in particular systems modeled by high-order constant-coefficient linear differential equations. We investigate what data have to be added in order to express the influence of the environment.

  19. Reclaiming the energy of a schedule: models and algorithms

    CERN Document Server

    Aupy, Guillaume; Dufossé, Fanny; Robert, Yves

    2012-01-01

    We consider a task graph to be executed on a set of processors. We assume that the mapping is given, say by an ordered list of tasks to execute on each processor, and we aim at optimizing the energy consumption while enforcing a prescribed bound on the execution time. While it is not possible to change the allocation of a task, it is possible to change its speed. Rather than using a local approach such as backfilling, we consider the problem as a whole and study the impact of several speed variation models on its complexity. For continuous speeds, we give a closed-form formula for trees and series-parallel graphs, and we cast the problem into a geometric programming problem for general directed acyclic graphs. We show that the classical dynamic voltage and frequency scaling (DVFS) model with discrete modes leads to an NP-complete problem, even if the modes are regularly distributed (an important particular case in practice, which we analyze as the incremental model). On the contrary, the VDD-hopping model leads...
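
    For the continuous-speed model with power p(s) = s^3, running work w at speed s takes w/s time and costs w*s^2 energy, so by convexity a single uniform speed meeting the deadline is optimal for a linear chain of tasks. A small sketch under these assumptions (the simplest special case of the closed-form results, not the paper's general DAG formulation):

```python
def chain_optimal_speed(works, deadline):
    """Continuous-speed model, linear chain: with power p(s) = s^3, convexity
    makes one uniform speed optimal, s = total work / deadline."""
    s = sum(works) / deadline
    energy = sum(w * s ** 2 for w in works)   # energy of task i = (w_i / s) * s^3
    return s, energy

def chain_energy(works, speeds):
    """Energy of a schedule that runs task i at speed speeds[i]."""
    return sum(w * s ** 2 for w, s in zip(works, speeds))
```

    Any non-uniform speed assignment meeting the same deadline costs strictly more energy, which is the Jensen-style argument behind the closed-form formula for trees and series-parallel graphs.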

  20. Modeling gene regulatory networks: A network simplification algorithm

    Science.gov (United States)

    Ferreira, Luiz Henrique O.; de Castro, Maria Clicia S.; da Silva, Fabricio A. B.

    2016-12-01

    Boolean networks have been used for some time to model Gene Regulatory Networks (GRNs), which describe cell functions. These models can help biologists to make predictions and prognoses, and even to devise specialized treatments when some disturbance of the GRN leads to a sick condition. However, the amount of information related to a GRN can be huge, making the task of inferring its Boolean network representation quite a challenge. The method shown here takes into account information about the interactome to build a network, where each node represents a protein, and uses the entropy of each node as a key to reduce the size of the network, allowing the subsequent inference process to focus only on the main protein hubs, the ones with the most potential to interfere with overall network behavior.
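
    The entropy-based reduction step can be sketched as ranking nodes by the Shannon entropy of their normalized interaction weights and keeping the top fraction; the network encoding and the keep-fraction threshold are illustrative assumptions, not the paper's exact criterion:

```python
import math

def node_entropy(dist):
    """Shannon entropy (bits) of a node's normalized interaction distribution."""
    return -sum(p * math.log2(p) for p in dist if p > 0)

def simplify(network, keep_fraction=0.5):
    """Keep only the highest-entropy nodes (the main hubs) of an
    interactome-like network given as {node: [edge weights]}."""
    scores = {}
    for node, weights in network.items():
        total = sum(weights)
        scores[node] = node_entropy([w / total for w in weights]) if total else 0.0
    ranked = sorted(scores, key=scores.get, reverse=True)
    k = max(1, int(len(ranked) * keep_fraction))
    return set(ranked[:k])
```

    A node with many evenly weighted interactions (a hub) has high entropy, while a node with a single dominant interaction has entropy near zero and is pruned.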

  1. ARCHITECTURES AND ALGORITHMS FOR COGNITIVE NETWORKS ENABLED BY QUALITATIVE MODELS

    DEFF Research Database (Denmark)

    Balamuralidhar, P.

    2013-01-01

    the qualitative models in a cognitive engine. Further I use the methodology in multiple functional scenarios of cognitive networks including self-optimization and self-monitoring. In the case of self-optimization, I integrate principles from monotonicity analysis to evaluate and enhance qualitative models......Complexity of communication networks is ever increasing and getting complicated by their heterogeneity and dynamism. Traditional techniques are facing challenges in network performance management. Cognitive networking is an emerging paradigm to make networks more intelligent, thereby overcoming...... traditional limitations and potentially achieving better performance. The vision is that networks should be able to monitor themselves, reason upon changes in self and environment, act towards the achievement of specific goals and learn from experience. The concept of a Cognitive Engine (CE) supporting...

  2. Basic Research in Digital Stochastic Model Algorithmic Control.

    Science.gov (United States)

    1980-11-01

    Astrom, 1980) approaches. They are, however, not exactly similar since the specification of reference models and the computations of control are done...predictor may be of the state type (Kalman, extended Kalman, etc.), Luenberger observer type, impulse response, or ARMA (Box and Jenkins, 1976 or Astrom...nonminimum phase systems. Such behavior is not acceptable in applications because of the nonrealizability of infinite inputs (Astrom, 1970; Astrom and

  3. Computer Model of a "Sense of Humour". I. General Algorithm

    CERN Document Server

    Suslov, I M

    1992-01-01

    A computer model of a "sense of humour" is proposed. The humorous effect is interpreted as a specific malfunction in the course of information processing due to the need for the rapid deletion of the false version transmitted into consciousness. The biological function of a sense of humour consists in speeding up the bringing of information into consciousness and in fuller use of the resources of the brain.

  4. Energy and Uncertainty: Models and Algorithms for Complex Energy Systems

    OpenAIRE

    2014-01-01

    The problem of controlling energy systems (generation, transmission, storage, investment) introduces a number of optimization problems which need to be solved in the presence of different types of uncertainty. We highlight several of these applications, using a simple energy storage problem as a case application. Using this setting, we describe a modeling framework based around five fundamental dimensions which is more natural than the standard canonical form widely used in the reinforcement ...

  5. Convection in a Single Column -- Modelling, Algorithm and Analysis

    CERN Document Server

    Bokhove, Onno; Dedner, Andreas; Esler, Gavin; Norbury, John; Turner, Matthew R; Vanneste, Jacques; Cullen, Mike

    2016-01-01

    The group focused on a model problem of idealised moist air convection in a single column of atmosphere. Height, temperature and moisture variables were chosen to simplify the mathematical representation (along the lines of the Boussinesq approximation in a height variable defined in terms of pressure). This allowed exact simple solutions of the numerical and partial differential equation problems to be found. By examining these, we identify column behaviour, stability issues and explore the feasibility of a more general solution process.

  6. Algorithm To Architecture Mapping Model (ATAMM) multicomputer operating system functional specification

    Science.gov (United States)

    Mielke, R.; Stoughton, J.; Som, S.; Obando, R.; Malekpour, M.; Mandala, B.

    1990-01-01

    A functional description of the ATAMM Multicomputer Operating System is presented. ATAMM (Algorithm to Architecture Mapping Model) is a marked graph model which describes the implementation of large grained, decomposed algorithms on data flow architectures. AMOS, the ATAMM Multicomputer Operating System, is an operating system which implements the ATAMM rules. A first generation version of AMOS which was developed for the Advanced Development Module (ADM) is described. A second generation version of AMOS being developed for the Generic VHSIC Spaceborne Computer (GVSC) is also presented.

  7. A discrete force allocation algorithm for modelling wind turbines in computational fluid dynamics

    DEFF Research Database (Denmark)

    Réthoré, Pierre-Elouan; Sørensen, Niels N.

    2012-01-01

    This paper describes an algorithm for allocating discrete forces in computational fluid dynamics (CFD). Discrete forces are useful in wind energy CFD. They are used as an approximation of the wind turbine blades’ action on the wind (actuator disc/line), to model forests and to model turbulent......, this algorithm does not address the specific cases where discrete forces are present. The velocities and pressure exhibit some significant numerical fluctuations at the position where the body forces are applied. While this issue is limited in space, it is usually critical to accurately estimate the velocity...

  8. Actuator Disc Model Using a Modified Rhie-Chow/SIMPLE Pressure Correction Algorithm

    DEFF Research Database (Denmark)

    Rethore, Pierre-Elouan; Sørensen, Niels

    2008-01-01

    An actuator disc model for the flow solver EllipSys (2D&3D) is proposed. It is based on a correction of the Rhie-Chow algorithm for using discrete body forces in a collocated-variable finite volume CFD code. It is compared with three cases where an analytical solution is known.

  9. Optimal approximation of head-related transfer function's pole-zero model based on genetic algorithm

    Institute of Scientific and Technical Information of China (English)

    ZHANG Jie; MA Hao; WU Zhen-yang

    2006-01-01

    In research on spatial hearing and virtual auditory space, it is important to model the head-related transfer functions (HRTFs) effectively. Based on an analysis of the HRTFs' spectra and some perspectives from psychoacoustics, this paper applies a parallel, real-valued-coded genetic algorithm (GA) with multiple demes to approximate the HRTFs' zero-pole model. Using a logarithmic-magnitude error criterion suited to human auditory perception, the results show that the performance of the GA is on average 39% better than that of the traditional Prony method, and 46% better than that of the Yule-Walker algorithm.

  10. A Contribution to Nyquist-Rate ADC Modeling - Detailed Algorithm Description

    OpenAIRE

    Zidek, J.; Subrt, O.; Valenta, M.; P. Martinek

    2012-01-01

    In this article, the innovative ADC modeling algorithm is described. It is well suitable for nyquist-rate ADC error back annotation. This algorithm is the next step of building a support tool for IC design engineers. The inspiration for us was the work [2]. Here, the ADC behavior is divided into HCF (High Code Frequency) and LCF (Low Code Frequency) separated independent parts. This paper is based on the same concept but the model coefficients are estimated in a different way only from INL da...

  11. Explicit incremental-update algorithm for modeling crystal elasto-viscoplastic response in finite element simulation

    Institute of Scientific and Technical Information of China (English)

    LI Hong-wei; YANG He; SUN Zhi-chao

    2006-01-01

    Computational stability and efficiency are the key problems for numerical modeling of crystal plasticity, and they will evidently limit its development and application in finite element (FE) simulation. Since implicit iterative algorithms are inefficient and have difficulty determining initial values, an explicit incremental-update algorithm for the elasto-viscoplastic constitutive relation was developed in the intermediate frame using the second Piola-Kirchhoff (P-K) stress and Green strain. The increments of stress and slip resistance were solved by a calculation loop over sets of linear equations. The reorientation of the crystal as well as the elastic strain can be obtained from a polar decomposition of the elastic deformation gradient. A user material subroutine, VUMAT, was developed to combine the crystal elasto-viscoplastic constitutive model with ABAQUS/Explicit. Numerical studies were performed on a cubic upset model with OFHC material (FCC crystal). The comparison of the numerical results with those obtained by an implicit iterative algorithm and with experiments demonstrates that the explicit algorithm is reliable. Furthermore, the effects of material anisotropy, the rate sensitivity coefficient (RSC) and loading speeds on the deformation were studied. The numerical studies indicate that the explicit algorithm is suitable and efficient for large deformation analyses where anisotropy due to texture is important.

  12. Mixed Algorithms in the Ising Model on Directed BARABÁSI-ALBERT Networks

    Science.gov (United States)

    Lima, F. W. S.

    On directed Barabási-Albert networks with two and seven neighbours selected by each added site, the Ising model does not seem to show a spontaneous magnetisation. Instead, the decay time for flipping of the magnetisation follows an Arrhenius law for the Metropolis and Glauber algorithms, but for Wolff cluster flipping the magnetisation decays exponentially with time. On these networks the magnetisation behaviour of the Ising model, with the Glauber, HeatBath, Metropolis, Wolff or Swendsen-Wang algorithm competing against Kawasaki dynamics, is studied by Monte Carlo simulations. We show that the model exhibits the phenomenon of self-organisation (= stationary equilibrium) defined in Ref. 8 when Kawasaki dynamics is not dominant in its competition with the Glauber, HeatBath and Swendsen-Wang algorithms. Only for Wolff cluster flipping does this phenomenon occur after an exponential decay of the magnetisation with time. The Metropolis results are independent of the competition. We also study the same competition process described above, but with Kawasaki dynamics at the same temperature as the other algorithms. The results obtained are similar for the Wolff cluster flipping, Metropolis and Swendsen-Wang algorithms, but different for HeatBath.
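
For reference, the single-spin-flip Metropolis dynamics used in such studies can be sketched as below, here on a small periodic square lattice rather than the record's directed Barabási-Albert graphs (those would need an explicit neighbour list, but the acceptance rule exp(-beta * dE) is the same). All parameters are illustrative.

```python
import math
import random

def metropolis_sweep(spins, L, beta, rng):
    """One sweep = L*L single-spin flip attempts with periodic boundaries."""
    for _ in range(L * L):
        i, j = rng.randrange(L), rng.randrange(L)
        nb = (spins[(i + 1) % L][j] + spins[(i - 1) % L][j]
              + spins[i][(j + 1) % L] + spins[i][(j - 1) % L])
        dE = 2 * spins[i][j] * nb              # energy change if we flip (i, j)
        if dE <= 0 or rng.random() < math.exp(-beta * dE):
            spins[i][j] = -spins[i][j]

rng = random.Random(0)
L = 16
spins = [[1] * L for _ in range(L)]            # ordered start
for _ in range(200):
    metropolis_sweep(spins, L, beta=1.0, rng=rng)   # temperature well below T_c
magnetisation = abs(sum(sum(row) for row in spins)) / (L * L)
```

Deep in the ordered phase the magnetisation stays close to 1, which is the baseline against which the decay behaviours discussed above are measured.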

  13. Efficient Parallel Implementation of Active Appearance Model Fitting Algorithm on GPU

    Directory of Open Access Journals (Sweden)

    Jinwei Wang

    2014-01-01

    Full Text Available The active appearance model (AAM) is one of the most powerful model-based object detecting and tracking methods which has been widely used in various situations. However, the high-dimensional texture representation causes very time-consuming computations, which makes the AAM difficult to apply to real-time systems. The emergence of modern graphics processing units (GPUs) that feature a many-core, fine-grained parallel architecture provides new and promising solutions to overcome the computational challenge. In this paper, we propose an efficient parallel implementation of the AAM fitting algorithm on GPUs. Our design idea is fine grain parallelism in which we distribute the texture data of the AAM, in pixels, to thousands of parallel GPU threads for processing, which makes the algorithm fit better into the GPU architecture. We implement our algorithm using the compute unified device architecture (CUDA) on Nvidia's GTX 650 GPU, which has the latest Kepler architecture. To compare the performance of our algorithm with different data sizes, we built sixteen face AAM models of different dimensional textures. The experiment results show that our parallel AAM fitting algorithm can achieve real-time performance for videos even on very high-dimensional textures.

  14. Efficient parallel implementation of active appearance model fitting algorithm on GPU.

    Science.gov (United States)

    Wang, Jinwei; Ma, Xirong; Zhu, Yuanping; Sun, Jizhou

    2014-01-01

    The active appearance model (AAM) is one of the most powerful model-based object detecting and tracking methods which has been widely used in various situations. However, the high-dimensional texture representation causes very time-consuming computations, which makes the AAM difficult to apply to real-time systems. The emergence of modern graphics processing units (GPUs) that feature a many-core, fine-grained parallel architecture provides new and promising solutions to overcome the computational challenge. In this paper, we propose an efficient parallel implementation of the AAM fitting algorithm on GPUs. Our design idea is fine grain parallelism in which we distribute the texture data of the AAM, in pixels, to thousands of parallel GPU threads for processing, which makes the algorithm fit better into the GPU architecture. We implement our algorithm using the compute unified device architecture (CUDA) on Nvidia's GTX 650 GPU, which has the latest Kepler architecture. To compare the performance of our algorithm with different data sizes, we built sixteen face AAM models of different dimensional textures. The experiment results show that our parallel AAM fitting algorithm can achieve real-time performance for videos even on very high-dimensional textures.

  15. Correlation of Wissler Human Thermal Model Blood Flow and Shiver Algorithms

    Science.gov (United States)

    Bue, Grant; Makinen, Janice; Cognata, Thomas

    2010-01-01

    The Wissler Human Thermal Model (WHTM) is a thermal math model of the human body that has been widely used to predict the human thermoregulatory response to a variety of cold and hot environments. The model has been shown to predict core and skin temperatures higher and lower, respectively, than in tests of subjects wearing a crew escape suit in a controlled hot environment. Conversely, the model predicts core and skin temperatures lower and higher, respectively, than in tests of lightly clad subjects immersed in cold water. The blood flow algorithms of the model have been investigated to allow for more and less flow, respectively, in the cold and hot cases. These changes in the model have yielded better correlation of skin and core temperatures in both the cold and hot cases. The algorithm for onset of shiver did not need to be modified to achieve good agreement in cold immersion simulations.

  16. Model and algorithm of optimizing alternate traffic restriction scheme in urban traffic network

    Institute of Scientific and Technical Information of China (English)

    徐光明; 史峰; 刘冰; 黄合来

    2014-01-01

    An optimization model and its solution algorithm for alternate traffic restriction (ATR) schemes were introduced in terms of both the restriction districts and the proportion of restricted automobiles. A bi-level programming model was proposed for the ATR scheme optimization problem, aiming at consumer surplus maximization and overload flow minimization in the upper-level model. In the lower-level model, elastic demand, mode choice and multi-class user equilibrium assignment were synthetically optimized. A genetic algorithm involving prolonging codes was constructed, demonstrating high computing efficiency in that it dynamically includes newly-appearing overload links in the codes so as to reduce the subsequent searching range. Moreover, practical processing approaches were suggested, which may improve the operability of the model-based solutions.

  17. [Determination of Virtual Surgery Mass Point Spring Model Parameters Based on Genetic Algorithms].

    Science.gov (United States)

    Chen, Ying; Hu, Xuyi; Zhu, Qiguang

    2015-12-01

    The mass point-spring model is one of the models commonly used in virtual surgery. However, its parameters have no clear physical meaning, and they are hard to set conveniently. We therefore proposed a method based on a genetic algorithm to determine the mass-spring model parameters. Computer-aided tomography (CAT) data were used to determine the mass value of each particle, and the stiffness and damping coefficients were obtained by the genetic algorithm. We used the difference between the reference deformation and the virtual deformation as the fitness function to obtain an approximately optimal solution for the model parameters. Experimental results showed that this method could obtain approximately optimal spring parameters at lower cost and could accurately reproduce the behavior of the actual deformation model.
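
The fitting idea can be illustrated with a toy genetic algorithm that recovers a spring stiffness k so that the simulated static deflection x = F / k matches a reference deformation. The single-spring "model", the GA settings, and all numbers are hypothetical stand-ins for the paper's full mass point-spring mesh.

```python
import random

FORCE = 10.0

def deflection(k):
    """Static deflection of a linear spring under a fixed load."""
    return FORCE / k

def fitness(k, reference):
    """Negative deformation error: higher is better."""
    return -abs(deflection(k) - reference)

def ga_fit(reference, pop_size=30, generations=80, seed=1):
    rng = random.Random(seed)
    pop = [rng.uniform(0.1, 100.0) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda k: fitness(k, reference), reverse=True)
        parents = pop[: pop_size // 2]           # elitist truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            child = 0.5 * (a + b) + rng.gauss(0.0, 0.5)  # blend + mutation
            children.append(max(child, 0.1))     # keep stiffness positive
        pop = parents + children
    return max(pop, key=lambda k: fitness(k, reference))

k_true = 25.0
x_ref = deflection(k_true)       # "reference deformation"
k_est = ga_fit(x_ref)
```

Because elitism preserves the best individual each generation, the recovered stiffness reproduces the reference deflection closely even though k itself has no direct observable meaning, mirroring the paper's motivation.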

  18. Parameter Estimation for Traffic Noise Models Using a Harmony Search Algorithm

    Directory of Open Access Journals (Sweden)

    Deok-Soon An

    2013-01-01

    Full Text Available A technique has been developed for predicting road traffic noise for environmental assessment, taking into account traffic volume as well as road surface conditions. The ASJ model (ASJ Prediction Model for Road Traffic Noise, 1999), which is based on the sound power level of the noise emitted by the interaction between the road surface and tires, employs regression models for two road surface types: dense-graded asphalt (DGA) and permeable asphalt (PA). However, these models are not applicable to other types of road surfaces. Accordingly, this paper introduces a parameter estimation procedure for ASJ-based noise prediction models, utilizing a harmony search (HS) algorithm. Traffic noise measurement data for four different vehicle types were used in the algorithm to determine the regression parameters for several road surface types. The parameters of the traffic noise prediction models were evaluated using another measurement set, and good agreement was observed between the predicted and measured sound power levels.

  19. Algorithms and Software for Predictive and Perceptual Modeling of Speech

    CERN Document Server

    Atti, Venkatraman

    2010-01-01

    From the early pulse code modulation-based coders to some of the recent multi-rate wideband speech coding standards, the area of speech coding made several significant strides with an objective to attain high quality of speech at the lowest possible bit rate. This book presents some of the recent advances in linear prediction (LP)-based speech analysis that employ perceptual models for narrow- and wide-band speech coding. The LP analysis-synthesis framework has been successful for speech coding because it fits well the source-system paradigm for speech synthesis. Limitations associated with th

  20. Thrombosis modeling in intracranial aneurysms: a lattice Boltzmann numerical algorithm

    Science.gov (United States)

    Ouared, R.; Chopard, B.; Stahl, B.; Rüfenacht, D. A.; Yilmaz, H.; Courbebaisse, G.

    2008-07-01

    The lattice Boltzmann numerical method is applied to model blood flow (plasma and platelets) and clotting in intracranial aneurysms at a mesoscopic level. The dynamics of blood clotting (thrombosis) is governed by mechanical variations of shear stress near the wall that influence platelet-wall interactions. Thrombosis starts and grows below a shear-rate threshold, and stops above it. Within this assumption, it is possible to account qualitatively well for partial, full or no occlusion of the aneurysm, and to explain why spontaneous thrombosis is more likely to occur in giant aneurysms than in small or medium-sized aneurysms.

  1. Implicit level set algorithms for modelling hydraulic fracture propagation.

    Science.gov (United States)

    Peirce, A

    2016-10-13

    Hydraulic fractures are tensile cracks that propagate in pre-stressed solid media due to the injection of a viscous fluid. Developing numerical schemes to model the propagation of these fractures is particularly challenging due to the degenerate, hypersingular nature of the coupled integro-partial differential equations. These equations typically involve a singular free boundary whose velocity can only be determined by evaluating a distinguished limit. This review paper describes a class of numerical schemes that have been developed to use the multiscale asymptotic behaviour typically encountered near the fracture boundary as multiple physical processes compete to determine the evolution of the fracture. The fundamental concepts of locating the free boundary using the tip asymptotics and imposing the tip asymptotic behaviour in a weak form are illustrated in two quite different formulations of the governing equations. These formulations are the displacement discontinuity boundary integral method and the extended finite-element method. Practical issues are also discussed, including new models for proppant transport able to capture 'tip screen-out'; efficient numerical schemes to solve the coupled nonlinear equations; and fast methods to solve resulting linear systems. Numerical examples are provided to illustrate the performance of the numerical schemes. We conclude the paper with open questions for further research. This article is part of the themed issue 'Energy and the subsurface'.

  2. Databases, models, and algorithms for functional genomics: a bioinformatics perspective.

    Science.gov (United States)

    Singh, Gautam B; Singh, Harkirat

    2005-02-01

    A variety of patterns have been observed on DNA and protein sequences that serve as control points for gene expression and cellular functions. Owing to the vital role of such patterns discovered on biological sequences, they are generally cataloged and maintained within internationally shared databases. Furthermore, the variability in a family of observed patterns is often represented using computational models in order to facilitate their search within an uncharacterized biological sequence. As biological data comprise a mosaic of sequence-level motifs, it is important to unravel the synergies of macromolecular coordination utilized in cell-specific differential synthesis of proteins. This article provides an overview of the various pattern representation methodologies and surveys the pattern databases available for use by molecular biologists. Our aim is to describe the principles behind the computational modeling and analysis techniques utilized in bioinformatics research, with the objective of providing the insight necessary to better understand and effectively utilize the available databases and analysis tools. We also provide a detailed review of DNA sequence-level patterns responsible for structural conformations within Scaffold or Matrix Attachment Regions (S/MARs).

  3. An Algorithm and Implementation Based on an Agricultural EOQ Model

    Directory of Open Access Journals (Sweden)

    Hu Zhineng

    2015-01-01

    Full Text Available With improving living standards, agricultural supermarkets are gradually replacing farmers' markets. However, inappropriate inventory strategies make these supermarkets wasteful and inefficient, so this paper puts forward an inventory strategy that tells the manager when, and how much, product to shelve. The strategy is significant because it can reduce losses and increase profit. The research methods are based on inventory theory and the EOQ model, to which the authors add a multiple-cycle extension to capture the deteriorating nature of agricultural products. The research procedure is as follows. First, the authors surveyed agricultural supermarkets to observe their actual practice, and then proposed the new strategy presented in this paper. Second, the authors formulated the model. Finally, the authors obtained data such as the loss rate and freshness parameters from the specialty-agriculture literature, and solved the model with MATLAB. The numerical results show that the strategy outperforms the current practice in agricultural supermarkets, which also demonstrates its feasibility.
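
The classic economic order quantity (EOQ) result that such a multi-cycle perishable-goods model builds on can be sketched directly: with demand rate D, fixed ordering cost S and unit holding cost H, the total cost D*S/Q + H*Q/2 is minimized at Q* = sqrt(2*D*S/H). All numbers below are made up; the paper additionally accounts for spoilage across cycles.

```python
from math import sqrt

def eoq(demand, order_cost, holding_cost):
    """Cost-minimizing order quantity."""
    return sqrt(2.0 * demand * order_cost / holding_cost)

def total_cost(q, demand, order_cost, holding_cost):
    """Ordering cost plus holding cost per period."""
    return demand * order_cost / q + holding_cost * q / 2.0

D, S, H = 1000.0, 50.0, 4.0
q_star = eoq(D, S, H)            # sqrt(25000), about 158.1 units per order
```

At the optimum the two cost terms are equal, which is the balance a perishable-goods extension perturbs by penalizing large, slow-moving orders.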

  4. Genetic algorithm-based multi-objective model for scheduling of linear construction projects

    OpenAIRE

    Senouci, Ahmed B.; Al-Derham, H.R.

    2007-01-01

    This paper presents a genetic algorithm-based multi-objective optimization model for the scheduling of linear construction projects. The model allows construction planners to generate and evaluate optimal/near-optimal construction scheduling plans that minimize both project time and cost. The computations in the present model are organized in three major modules. A scheduling module that develops practical schedules for linear construction projects. A cost module that computes the project's c...

  5. On the Splitting Algorithm Based on Multi-target Model for Image Segmentation

    OpenAIRE

    Yuezhongyi Sun

    2014-01-01

    To address the problems in traditional variational models for image segmentation, where membership functions indicating the different regions of an image can yield unclear segmentation and weak de-noising, this paper proposes a multi-target model for image segmentation together with a splitting algorithm. The model uses a sparse regularization method to maintain the boundaries of segmented regions, overcoming the fuzzy segmentation boundaries resulting from total variation regular...

  6. Clustering With Side Information: From a Probabilistic Model to a Deterministic Algorithm

    OpenAIRE

    Khashabi, Daniel; Wieting, John; Liu, Jeffrey Yufei; Liang, Feng

    2015-01-01

    In this paper, we propose a model-based clustering method (TVClust) that robustly incorporates noisy side information as soft-constraints and aims to seek a consensus between side information and the observed data. Our method is based on a nonparametric Bayesian hierarchical model that combines the probabilistic model for the data instance and the one for the side-information. An efficient Gibbs sampling algorithm is proposed for posterior inference. Using the small-variance asymptotics of ou...

  7. A MODEL BASED ALGORITHM FOR FAST DPIV COMPUTING

    Institute of Scientific and Technical Information of China (English)

    2001-01-01

    Traditional DPIV (Digital Particle Image Velocimetry) methods are mostly based on area correlation (Willert, C.E., 1991). Though proven to be very time-consuming and error prone, they are widely adopted because they are conceptually simple and easily implemented, and also because there are few alternatives. This paper proposes a non-correlation, conceptually new, fast and efficient approach to DPIV which takes the nature of the flow into consideration. An Incompressible Affined Flow Model (IAFM) is introduced to describe a flow that incorporates rational restraints into the computation. This IAFM, combined with a modified optical flow method named Total Optical Flow Computation (TOFC), provides a linear system solution to DPIV. Experimental results on real images showed our method to be a very promising approach for DPIV.

  8. Efficient algorithms for multiscale modeling in porous media

    KAUST Repository

    Wheeler, Mary F.

    2010-09-26

    We describe multiscale mortar mixed finite element discretizations for second-order elliptic and nonlinear parabolic equations modeling Darcy flow in porous media. The continuity of flux is imposed via a mortar finite element space on a coarse grid scale, while the equations in the coarse elements (or subdomains) are discretized on a fine grid scale. We discuss the construction of multiscale mortar basis and extend this concept to nonlinear interface operators. We present a multiscale preconditioning strategy to minimize the computational cost associated with construction of the multiscale mortar basis. We also discuss the use of appropriate quadrature rules and approximation spaces to reduce the saddle point system to a cell-centered pressure scheme. In particular, we focus on multiscale mortar multipoint flux approximation method for general hexahedral grids and full tensor permeabilities. Numerical results are presented to verify the accuracy and efficiency of these approaches. © 2010 John Wiley & Sons, Ltd.

  9. Models and Algorithms for Container Vessel Stowage Optimization

    DEFF Research Database (Denmark)

    Delgado-Ortegon, Alberto

    Containerized seaborne trade has played a key role in the transformation of the global economy in the last 50 years. In liner shipping companies, at the heart of this operation, several planning decisions are made based on the stowage capabilities of container vessels, from strategic decisions (e.g., selection of vessels to buy that satisfy specific demands), through to operational decisions (e.g., selection of containers that optimize revenue, and stowing those containers into a vessel). This thesis addresses the question of whether it is possible to formulate stowage optimization models...... container of those to be loaded in a port should be placed in a vessel, i.e., to generate stowage plans. This thesis explores two different approaches to solve this problem, both follow a 2-phase decomposition that assigns containers to vessel sections in the first phase, i.e., master planning...

  10. Model and Algorithm of BP Neural Network Based on Expanded Multichain Quantum Optimization

    Directory of Open Access Journals (Sweden)

    Baoyu Xu

    2015-01-01

    Full Text Available A model and algorithm for a BP neural network optimized by an expanded multichain quantum optimization algorithm, which offers massive parallelism and very high speed, are proposed based on an analysis of the current state of research on BP neural networks and their defects, in order to overcome overfitting, the randomness of the initial weights, and the oscillation of the fitting and generalization ability under subtle changes of the network parameters. The method optimizes the structure of the neural network effectively and can overcome a series of problems existing in BP neural networks optimized by a basic genetic algorithm, such as slow convergence speed, premature convergence, and poor computational stability. The performance of the BP neural network controller is further improved. The simulation results show that the model has good stability, high precision of the extracted parameters, and good real-time performance and adaptability in actual parameter extraction.

  11. Liver Segmentation Based on Snakes Model and Improved GrowCut Algorithm in Abdominal CT Image

    Science.gov (United States)

    He, Baochun; Ma, Zhiyuan; Zong, Mao; Zhou, Xiangrong; Fujita, Hiroshi

    2013-01-01

    A novel method based on the Snakes model and the GrowCut algorithm is proposed to segment the liver region in abdominal CT images. First, following the traditional GrowCut method, a pretreatment step using the K-means algorithm is conducted to reduce the running time. Then, the segmentation result of our improved GrowCut approach is used as an initial contour for subsequent precise segmentation based on the Snakes model. Finally, several experiments are carried out to demonstrate the performance of our proposed approach, including comparisons with the traditional GrowCut algorithm. Experimental results show that the improved approach not only has better robustness and precision but is also more efficient than the traditional GrowCut method. PMID:24066017
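
The K-means pretreatment idea can be illustrated with plain 1-D Lloyd iterations that cluster grayscale CT intensities into a few groups before a GrowCut-style segmentation runs on the labels rather than on raw pixel values. The pixel data and cluster count below are illustrative, not from the paper.

```python
def kmeans_1d(values, k=2, iters=20):
    """Lloyd's algorithm on scalar intensities; returns the k centers."""
    lo, hi = min(values), max(values)
    # Spread initial centers evenly across the intensity range.
    centers = [lo + (hi - lo) * (i + 0.5) / k for i in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for v in values:
            idx = min(range(k), key=lambda j: abs(v - centers[j]))
            clusters[idx].append(v)
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return centers

# Dark background pixels vs bright liver-like pixels:
pixels = [10, 12, 11, 14, 200, 198, 205, 199]
c = sorted(kmeans_1d(pixels, k=2))
```

Replacing thousands of raw intensities with a handful of cluster labels is what makes the subsequent region-growing pass cheaper.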

  12. LMI-Based Generation of Feedback Laws for a Robust Model Predictive Control Algorithm

    Science.gov (United States)

    Acikmese, Behcet; Carson, John M., III

    2007-01-01

    This technical note provides a mathematical proof of Corollary 1 from the paper 'A Nonlinear Model Predictive Control Algorithm with Proven Robustness and Resolvability' that appeared in the 2006 Proceedings of the American Control Conference. The proof was omitted for brevity in the publication. The paper was based on algorithms developed for the FY2005 R&TD (Research and Technology Development) project for Small-body Guidance, Navigation, and Control [2]. The framework established by the Corollary is for a robustly stabilizing MPC (model predictive control) algorithm for uncertain nonlinear systems that guarantees the resolvability of the associated finite-horizon optimal control problem in a receding-horizon implementation. Additional details of the framework are available in the publication.

  13. Comparison of the Noise Robustness of FVC Retrieval Algorithms Based on Linear Mixture Models

    Directory of Open Access Journals (Sweden)

    Hiroki Yoshioka

    2011-07-01

    Full Text Available The fraction of vegetation cover (FVC) is often estimated by unmixing a linear mixture model (LMM) to assess the horizontal spread of vegetation within a pixel based on a remotely sensed reflectance spectrum. LMM-based algorithms produce results that vary to a certain degree depending on the model assumptions; for example, the robustness of the results depends on the presence of errors in the measured reflectance spectra. The objective of this study was to derive a factor for assessing the robustness of LMM-based algorithms under a two-endmember assumption. The factor was derived from the analytical relationship between FVC values determined by several previously described algorithms, and it depends on the target spectra, the endmember spectra, and the choice of spectral vegetation index. Numerical simulations demonstrate this dependence and the usefulness of the factor in assessing robustness against measurement noise.
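
    The two-endmember unmixing step can be illustrated with a least-squares sketch; the red/NIR endmember spectra below are invented toy values, not those of the study:

```python
import numpy as np

# Hypothetical endmember spectra (red, NIR reflectance) -- illustrative values only
veg  = np.array([0.05, 0.50])   # vegetation endmember
soil = np.array([0.20, 0.25])   # soil endmember

def fvc_lmm(target):
    """Least-squares FVC under a two-endmember linear mixture model:
    target ~= f * veg + (1 - f) * soil, solved for f and clipped to [0, 1]."""
    d = veg - soil
    f = np.dot(target - soil, d) / np.dot(d, d)
    return float(np.clip(f, 0.0, 1.0))

# A pixel that is 60% vegetation, 40% soil (noise-free)
pixel = 0.6 * veg + 0.4 * soil
fvc = fvc_lmm(pixel)
```

    Adding noise to `pixel` and observing how `fvc` moves is exactly the kind of perturbation analysis the robustness factor in the abstract quantifies.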

  14. A hand tracking algorithm with particle filter and improved GVF snake model

    Science.gov (United States)

    Sun, Yi-qi; Wu, Ai-guo; Dong, Na; Shao, Yi-zhe

    2017-07-01

    To address the problem that a particle filter alone cannot obtain accurate hand information, a hand tracking algorithm based on a particle filter combined with a skin-color-adaptive gradient vector flow (GVF) snake model is proposed. An adaptive GVF and a skin-color-adaptive external guidance force are introduced into the traditional GVF snake model, guiding the curve to converge quickly to the deep concave regions of the hand contour and to capture the complex hand contour accurately. The algorithm also corrects the particle filter parameters in real time, avoiding particle drift. Experimental results show that the proposed algorithm reduces the root mean square error of hand tracking by 53% and improves tracking accuracy against complex, moving backgrounds, even under a large range of occlusion.
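
    A generic bootstrap particle filter (predict, weight, resample) is the backbone such trackers build on; the one-dimensional motion model, noise levels, and data below are invented for illustration and are not the paper's hand tracker:

```python
import numpy as np

rng = np.random.default_rng(42)

def bootstrap_pf(observations, n_particles=500, proc_std=0.5, obs_std=1.0):
    """Minimal bootstrap particle filter for a 1-D random-walk state."""
    particles = rng.normal(0.0, 1.0, n_particles)
    estimates = []
    for z in observations:
        # Predict: propagate particles through the (random-walk) motion model
        particles = particles + rng.normal(0.0, proc_std, n_particles)
        # Update: weight particles by the Gaussian likelihood of the observation
        w = np.exp(-0.5 * ((z - particles) / obs_std) ** 2)
        w /= w.sum()
        # Resample (multinomial) to avoid weight degeneracy
        particles = rng.choice(particles, size=n_particles, p=w)
        estimates.append(particles.mean())
    return np.array(estimates)

true_path = np.linspace(0.0, 5.0, 30)            # slowly moving target
obs = true_path + rng.normal(0.0, 1.0, 30)       # noisy measurements
est = bootstrap_pf(obs)
```

    The "particle drift" the abstract mentions arises when the motion/observation models are mismatched to the data; the paper's contribution is to correct the filter parameters online using the snake contour.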

  15. Proposing an Algorithm for R&Q Inventory Control Model with Stochastic Demand Influenced by Shortage

    Directory of Open Access Journals (Sweden)

    Parviz Fattahi

    2013-08-01

    Full Text Available In this article, a continuous-review inventory control system is studied. A new constraint is added to the problem in which demand depends on the average percentage of product shortage: the average demand has a direct relationship with the shortage in a period. This constraint, which reflects the cost of the organization's loss of credibility due to product shortage, is incorporated into the inventory model. The mathematical model of the problem is presented, and two heuristic approaches based on genetic and simulated annealing algorithms are developed. Computational results indicate that the simulated annealing algorithm provides better results compared to the genetic algorithm.

  16. Using genetic algorithm based simulated annealing penalty function to solve groundwater management model

    Institute of Scientific and Technical Information of China (English)

    吴剑锋; 朱学愚; 刘建立

    1999-01-01

    The genetic algorithm (GA) is a global, randomized search procedure based on the mechanics of natural selection and natural genetics. A new optimization method, the genetic algorithm-based simulated annealing penalty function (GASAPF), is presented to solve a groundwater management model. Compared with traditional gradient-based algorithms, the GA is straightforward and requires no derivatives of the objective function. The GA can generate both convex and nonconvex points within the feasible region, and the constraints are handled by a simulated annealing technique that ensures convergence to the global, or at least a near-global, optimal solution. Results for a maximum-pumping example show that solving the optimization model with the GASAPF is very efficient and robust.
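
    The annealing-penalty idea can be sketched with a toy constrained maximization; the "drawdown" constraint sum(q**2) <= cap and all GA settings below are invented stand-ins, not the paper's groundwater model:

```python
import numpy as np

rng = np.random.default_rng(1)

def gasapf(pop_size=60, n_wells=3, gens=150, q_max=10.0, cap=100.0):
    """Toy GA with an annealing-style penalty: maximize total pumping
    sum(q) subject to sum(q**2) <= cap (a stand-in drawdown constraint)."""
    pop = rng.uniform(0.0, q_max, (pop_size, n_wells))
    best, best_score = pop[0], -np.inf
    for g in range(gens):
        viol = np.maximum(0.0, (pop ** 2).sum(axis=1) - cap)
        weight = 1.0 + 50.0 * g / gens   # penalty grows as the "temperature" drops
        fit = pop.sum(axis=1) - weight * viol
        # Track the best solution under a fixed, strong penalty
        score = pop.sum(axis=1) - 100.0 * viol
        if score.max() > best_score:
            best_score, best = score.max(), pop[score.argmax()].copy()
        # Tournament selection
        i, j = rng.integers(0, pop_size, (2, pop_size))
        parents = np.where((fit[i] > fit[j])[:, None], pop[i], pop[j])
        # Arithmetic crossover plus Gaussian mutation
        alpha = rng.random((pop_size, 1))
        pop = alpha * parents + (1.0 - alpha) * parents[::-1]
        pop = np.clip(pop + rng.normal(0.0, 0.2, pop.shape), 0.0, q_max)
    return best

best = gasapf()
total, drawdown = best.sum(), (best ** 2).sum()
```

    Letting the penalty weight grow over the generations lets early populations explore infeasible points, while late populations are forced back onto the constraint boundary, mimicking the cooling schedule of simulated annealing.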

  17. Implementation of a combined algorithm designed to increase the reliability of information systems: simulation modeling

    Science.gov (United States)

    Popov, A.; Zolotarev, V.; Bychkov, S.

    2016-11-01

    This paper examines the results of experimental studies of a previously presented combined algorithm designed to increase the reliability of information systems, and provides data illustrating the organization and conduct of those studies. As part of the study, experimental data from simulation modeling were compared with data from the functioning of a real information system. A hypothesis of the homogeneity of the logical structure of information systems was formulated, which makes it possible to reconfigure the presented algorithm, more specifically, to transform it into a model for the analysis and prediction of arbitrary information systems. The results can be used for further research in this direction, the ability to predict the functioning of information systems can support strategic and economic planning, and the algorithm can serve as a means of providing information security.

  18. A comparison of two estimation algorithms for Samejima's continuous IRT model.

    Science.gov (United States)

    Zopluoglu, Cengiz

    2013-03-01

    This study compares two algorithms, as implemented in two different computer programs, that have appeared in the literature for estimating the item parameters of Samejima's continuous response model (CRM) in a simulation environment. In addition to the simulation study, a real-data illustration is provided in which the CRM is used as a potential psychometric tool for analyzing measurement outcomes in the context of curriculum-based measurement (CBM) in the field of education. The results indicate that a simplified expectation-maximization (EM) algorithm is as effective and efficient as the traditional EM algorithm for estimating the CRM item parameters. The results also show promise for using this psychometric model to analyze CBM outcomes, although more research is needed before the CRM can be recommended as standard practice in the CBM context.

  19. A Collaborative Secure Localization Algorithm Based on Trust Model in Underwater Wireless Sensor Networks.

    Science.gov (United States)

    Han, Guangjie; Liu, Li; Jiang, Jinfang; Shu, Lei; Rodrigues, Joel J P C

    2016-02-16

    Localization is one of the hottest research topics in Underwater Wireless Sensor Networks (UWSNs), since many important applications of UWSNs, e.g., event sensing, target tracking and monitoring, require the location information of sensor nodes. A large number of localization algorithms have been proposed for UWSNs, and how to improve location accuracy is well studied; however, few algorithms take location reliability or security into consideration. In this paper, we propose a Collaborative Secure Localization algorithm based on a Trust model (CSLT) for UWSNs to ensure location security. Based on the trust model, the secure localization process is divided into five sub-processes: trust evaluation of anchor nodes, initial localization of unknown nodes, trust evaluation of reference nodes, selection of reference nodes, and secondary localization of unknown nodes. Simulation results demonstrate that the proposed CSLT algorithm performs better than the related works compared in terms of location security, average localization accuracy, and localization ratio.

  20. Parameter Identification of the 2-Chlorophenol Oxidation Model Using Improved Differential Search Algorithm

    Directory of Open Access Journals (Sweden)

    Guang-zhou Chen

    2015-01-01

    Full Text Available Parameter identification plays a crucial role in simulating and using a model. This paper first carries out a sensitivity analysis of the 2-chlorophenol oxidation model in supercritical water using the Monte Carlo method. Then, to address the nonlinearity of the model, two improved differential search (DS) algorithms are proposed for identifying the model parameters. One strategy adopts Latin hypercube sampling in place of the uniform distribution of the initial population; the other combines DS with the simplex method. The sensitivity analysis reveals the sensitivity of each model parameter and the degree of difficulty in identifying it, and yields the posterior probability distribution of the parameters and the relationships between any two parameters. To verify the effectiveness of the improved algorithms, their optimization performance in kinetic parameter estimation is studied and compared with that of the basic DS algorithm, differential evolution, artificial bee colony optimization, and quantum-behaved particle swarm optimization. The experimental results demonstrate that DS with Latin hypercube sampling does not perform better, while the hybrid method combines strong global and local search ability and is more effective than the other algorithms.
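
    The Latin hypercube initialization strategy can be sketched as follows; the population size, dimension, and bounds are arbitrary illustrative choices:

```python
import numpy as np

def latin_hypercube(n, dim, low, high, seed=0):
    """Latin hypercube sample: each dimension is stratified into n equal bins,
    with exactly one point per bin (contrast with plain uniform sampling)."""
    rng = np.random.default_rng(seed)
    # One point per stratum: (stratum index + jitter) / n, for every dimension
    u = (rng.random((n, dim)) + np.arange(n)[:, None]) / n
    for d in range(dim):
        u[:, d] = u[rng.permutation(n), d]   # decouple the strata across dimensions
    return low + u * (high - low)

pop = latin_hypercube(n=10, dim=3, low=-5.0, high=5.0)
```

    Because every coordinate axis is covered by exactly one point per stratum, the initial population spreads over the search space more evenly than an i.i.d. uniform draw of the same size.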

  1. Hybrid Swarm Algorithms for Parameter Identification of an Actuator Model in an Electrical Machine

    Directory of Open Access Journals (Sweden)

    Ying Wu

    2011-01-01

    Full Text Available Efficient identification and control algorithms are needed when active vibration suppression techniques are developed for industrial machines. In this paper a new actuator for reducing rotor vibrations in electrical machines is investigated. Model-based control is needed to design the voltage input algorithm, and therefore proper models of the actuator must be available. In addition to the traditional prediction error method, a new knowledge-based Artificial Fish-Swarm optimization Algorithm (AFA) with crossover, CAFAC, is proposed to identify the parameters of the new model. Then, to obtain fast convergence of the algorithm for a 30 kW two-pole squirrel-cage induction motor, we combine the CAFAC and Particle Swarm Optimization (PSO) to identify the machine parameters and construct a linear time-invariant (LTI) state-space model. Besides that, the prediction error method (PEM) is also employed to identify the induction motor, producing a black-box model corresponding to the input-output measurements.
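
    As a sketch of the swarm half of the hybrid, a standard global-best PSO (not the CAFAC variant from the paper) minimizing a sphere function can be written as:

```python
import numpy as np

rng = np.random.default_rng(7)

def pso(f, dim=4, n=30, iters=200, lo=-5.0, hi=5.0, w=0.7, c1=1.5, c2=1.5):
    """Standard global-best particle swarm optimization."""
    x = rng.uniform(lo, hi, (n, dim))
    v = np.zeros((n, dim))
    pbest = x.copy()
    pval = np.apply_along_axis(f, 1, x)       # personal best values
    g = pbest[np.argmin(pval)]                # global best position
    for _ in range(iters):
        r1, r2 = rng.random((2, n, dim))
        # Inertia + cognitive pull (own best) + social pull (swarm best)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        val = np.apply_along_axis(f, 1, x)
        improved = val < pval
        pbest[improved], pval[improved] = x[improved], val[improved]
        g = pbest[np.argmin(pval)]
    return g, float(pval.min())

best, best_val = pso(lambda z: float(np.sum(z ** 2)))
```

    In an identification setting, `f` would be a squared prediction error between the measured and simulated machine response rather than the sphere function used here.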

  2. RECONFIGURABLE PRODUCTION LINE MODELING AND SCHEDULING USING PETRI NETS AND GENETIC ALGORITHM

    Institute of Scientific and Technical Information of China (English)

    XIE Nan; LI Aiping

    2006-01-01

    In response to production capacity and functionality variations, a genetic algorithm (GA) embedded with deterministic timed Petri nets (DTPN) is proposed to solve the scheduling problem of a reconfigurable production line (RPL). Basic DTPN modules are presented to model the corresponding variable structures in the RPL, and the scheduling model of the whole RPL is then constructed. In the scheduling algorithm, firing sequences of the Petri net model are used as chromosomes, so the selection, crossover, and mutation operators deal not with elements of the problem space but with elements of the Petri net model. Accordingly, all the GA operations embedded with the Petri net model are proposed, and a new weighted single-objective optimization based on reconfiguration cost and E/T is used. The results of scheduling a DC motor RPL suggest that the presented DTPN-GA scheduling algorithm has a significant impact on RPL scheduling and provides obvious improvements over the conventional scheduling method in practice: it meets due dates, minimizes reconfiguration cost, and enhances cost effectiveness.

  3. Generalized linear model for mapping discrete trait loci implemented with LASSO algorithm.

    Directory of Open Access Journals (Sweden)

    Jun Xing

    Full Text Available The generalized estimating equation (GEE) algorithm under a heterogeneous residual variance model is an extension of the iteratively reweighted least squares (IRLS) method from continuous traits to discrete traits. In contrast to the mixture model-based expectation-maximization (EM) algorithm, the GEE algorithm can detect quantitative trait loci (QTLs) well, especially large-effect QTLs located in large marker intervals, at high computing speed. Based on a single-QTL model, however, the GEE algorithm has very limited statistical power to detect multiple QTLs because it ignores other linked QTLs. In this study, the fast least absolute shrinkage and selection operator (LASSO) is derived for the generalized linear model (GLM) with all possible link functions. Under a heterogeneous residual variance model, the LASSO for the GLM is used to iteratively estimate the non-zero genetic effects of loci over the entire genome. The iteratively reweighted LASSO is thereby extended to mapping QTLs for discrete traits, such as ordinal, binary, and Poisson traits. Simulated and real data analyses are conducted to demonstrate the efficiency of the proposed method in simultaneously identifying multiple QTLs for binary and Poisson traits as examples.
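
    The soft-thresholding iteration at the heart of LASSO can be sketched for the simplest identity-link (ordinary linear) case; the design matrix, true effects, and penalty below are invented, and the paper's full GLM and heterogeneous-variance machinery is not reproduced:

```python
import numpy as np

def soft(z, t):
    """Soft-thresholding operator, the proximal map of the L1 penalty."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def lasso_ista(X, y, lam=1.0, iters=3000):
    """LASSO via proximal gradient (ISTA): gradient step on the squared
    error, then soft-threshold, which zeroes out weak effects exactly."""
    L = np.linalg.norm(X, 2) ** 2            # Lipschitz constant of the gradient
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        beta = soft(beta - X.T @ (X @ beta - y) / L, lam / L)
    return beta

rng = np.random.default_rng(3)
X = rng.normal(size=(100, 10))               # 10 "loci", 2 with real effects
true_beta = np.zeros(10)
true_beta[[1, 4]] = [2.0, -3.0]
y = X @ true_beta + 0.1 * rng.normal(size=100)
beta = lasso_ista(X, y, lam=5.0)
```

    The exact zeros produced by soft-thresholding are what let a LASSO-type scan report a sparse set of candidate QTLs instead of a dense effect vector.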

  4. Modelling Kara Sea phytoplankton primary production: Development and skill assessment of regional algorithms

    Science.gov (United States)

    Demidov, Andrey B.; Kopelevich, Oleg V.; Mosharov, Sergey A.; Sheberstov, Sergey V.; Vazyulya, Svetlana V.

    2017-07-01

    Empirical region-specific (RSM), depth-integrated (DIM) and depth-resolved (DRM) primary production models are developed based on data from the Kara Sea during the autumn (September-October 1993, 2007, 2011). The models are validated using field and satellite (MODIS-Aqua) observations. Our findings suggest that RSM algorithms perform better than non-region-specific algorithms (NRSM) in terms of regression analysis, root-mean-square difference (RMSD) and model efficiency. In general, the RSM and NRSM underestimate or overestimate the in situ water-column integrated primary production (IPP) by factors of 2 and 2.8, respectively. Additionally, our results suggest that the model skill of the RSM increases when the chlorophyll-specific carbon fixation rate, the efficiency of photosynthesis and the photosynthetically available radiation (PAR) are used as input variables. The parameterization of chlorophyll (chl a) vertical profiles is performed for Kara Sea waters of different trophic statuses. Model validation with field data suggests that the DIM and DRM algorithms perform equally well (RMSD of 0.29 and 0.31, respectively). No change in the performance of the DIM and DRM algorithms is observed (RMSD of 0.30 and 0.31, respectively) when satellite-derived chl a, PAR and the diffuse attenuation coefficient (Kd) are used as input variables.
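
    The two skill metrics named above can be illustrated directly; the Nash-Sutcliffe form used here is one common definition of "model efficiency", and the observed/modeled values are invented toy numbers:

```python
import numpy as np

def rmsd(obs, mod):
    """Root-mean-square difference between modeled and observed values."""
    obs, mod = np.asarray(obs, float), np.asarray(mod, float)
    return float(np.sqrt(np.mean((mod - obs) ** 2)))

def model_efficiency(obs, mod):
    """Nash-Sutcliffe efficiency: 1 is a perfect match, 0 means the model
    is no better than always predicting the observed mean."""
    obs, mod = np.asarray(obs, float), np.asarray(mod, float)
    return float(1.0 - np.sum((obs - mod) ** 2) / np.sum((obs - obs.mean()) ** 2))

obs = np.array([1.0, 2.0, 3.0, 4.0])
mod = np.array([1.1, 1.9, 3.2, 3.8])
```

    In primary-production validation these metrics are often computed on log10-transformed IPP, so an RMSD of 0.3 corresponds to roughly a factor-of-2 scatter.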

  5. Universal triple I fuzzy reasoning algorithm of function model based on quotient space

    Institute of Scientific and Technical Information of China (English)

    Lu Qiang; Shen Guanting; and Liu Xiaoping

    2012-01-01

    To address the deficiencies of multi-level analysis capacity and fuzzy treatment methods in product function modeling for conceptual design, the theory of quotient space and the universal triple I fuzzy reasoning method are introduced, and a function modeling algorithm based on universal triple I fuzzy reasoning is proposed. First, a product function granular model based on quotient space theory is built, and its function granular representation and computing rules are defined. Second, to derive the function granular model quickly from function requirements, a function modeling method based on universal triple I fuzzy reasoning is put forward: the small-distance-activating method is proposed as the kernel of the fuzzy reasoning, and the conversion of function requirements into fuzzy ones, the fuzzy computing methods, and the fuzzy reasoning strategy are investigated, yielding the function modeling algorithm. Finally, the validity of the function granular model and the modeling algorithm is verified. With this method, a reasonable function granular model can be derived quickly from function requirements, and the fuzzy character of conceptual design is handled well, which greatly improves conceptual design.

  6. Extension of the SAEM algorithm for nonlinear mixed models with 2 levels of random effects.

    Science.gov (United States)

    Panhard, Xavière; Samson, Adeline

    2009-01-01

    This article focuses on parameter estimation of multilevel nonlinear mixed-effects models (MNLMEMs). These models are used to analyze data presenting multiple hierarchical levels of grouping (cluster data, clinical trials with several observation periods, ...). The variability of the individual parameters of the regression function is thus decomposed into between-subject variability and higher levels of variability (e.g. within-subject variability). We propose maximum likelihood estimation of the parameters of those MNLMEMs with two levels of random effects, using an extension of the stochastic approximation version of the expectation-maximization (SAEM) algorithm combined with Markov chain Monte Carlo. The extended SAEM algorithm is split into an explicit direct expectation-maximization (EM) part and a stochastic EM part. Compared to the original algorithm, additional sufficient statistics have to be approximated by relying on the conditional distribution of the second level of random effects. This estimation method is evaluated on pharmacokinetic crossover simulated trials mimicking theophylline concentration data. Results obtained on those data sets with either the SAEM algorithm or the first-order conditional estimates (FOCE) algorithm (implemented in the nlme function of the R software) are compared: the biases and root mean square errors of almost all the SAEM estimates are smaller than the FOCE ones. Finally, we apply the extended SAEM algorithm to analyze the pharmacokinetic interaction of tenofovir on atazanavir, a novel protease inhibitor, from the Agence Nationale de Recherche sur le Sida 107-Puzzle 2 study. A significant decrease of the area under the curve of atazanavir is found in patients receiving both treatments.

  7. A simple algorithm to estimate genetic variance in an animal threshold model using Bayesian inference

    Directory of Open Access Journals (Sweden)

    Heringstad Bjørg

    2010-07-01

    Full Text Available Abstract Background In the genetic analysis of binary traits with one observation per animal, animal threshold models frequently give biased heritability estimates. In some cases, this problem can be circumvented by fitting sire- or sire-dam models. However, these models are not appropriate in cases where individual records exist on parents. Therefore, the aim of our study was to develop a new Gibbs sampling algorithm for a proper estimation of genetic (co)variance components within an animal threshold model framework. Methods In the proposed algorithm, individuals are classified as either "informative" or "non-informative" with respect to genetic (co)variance components. The "non-informative" individuals are characterized by their Mendelian sampling deviations (deviation from the mid-parent mean) being completely confounded with a single residual on the underlying liability scale. For threshold models, residual variance on the underlying scale is not identifiable. Hence, the variance of fully confounded Mendelian sampling deviations cannot be identified either, but can be inferred from the between-family variation. In the new algorithm, breeding values are sampled as in a standard animal model using the full relationship matrix, but genetic (co)variance components are inferred from the sampled breeding values and the relationships between "informative" individuals (usually parents only). The latter is analogous to a sire-dam model (in cases with no individual records on the parents). Results When applied to simulated data sets, the standard animal threshold model failed to produce useful results since samples of genetic variance always drifted towards infinity, while the new algorithm produced proper parameter estimates essentially identical to the results from a sire-dam model (given the fact that no individual records exist for the parents). Furthermore, the new algorithm showed much faster Markov chain mixing properties for genetic parameters (similar to

  8. A Novel OBDD-Based Reliability Evaluation Algorithm for Wireless Sensor Networks on the Multicast Model

    Directory of Open Access Journals (Sweden)

    Zongshuai Yan

    2015-01-01

    Full Text Available The two-terminal reliability calculation for wireless sensor networks (WSNs) is a #P-hard problem. The reliability calculation of WSNs on the multicast model entails an even worse combinatorial explosion of node states than the calculation on the unicast model, yet many real WSNs require the multicast model to deliver information. This research first provides a formal definition of the WSN on the multicast model. Next, a symbolic OBDD_Multicast algorithm is proposed to evaluate the reliability of WSNs on the multicast model. Furthermore, the OBDD_Multicast construction avoids the problem of invalid expansion, reducing the number of subnetworks by identifying the redundant paths of two adjacent nodes and s-t unconnected paths. Experiments show that OBDD_Multicast both reduces the complexity of WSN reliability analysis and has a lower running time than Xing's OBDD (ordered binary decision diagram)-based algorithm.

  9. Evaluation of Genetic Algorithm Concepts Using Model Problems. Part 2; Multi-Objective Optimization

    Science.gov (United States)

    Holst, Terry L.; Pulliam, Thomas H.

    2003-01-01

    A genetic algorithm approach suitable for solving multi-objective optimization problems is described and evaluated using a series of simple model problems. Several new features, including a binning selection algorithm and a gene-space transformation procedure, are included. The genetic algorithm is suitable for finding Pareto optimal solutions in search spaces that are defined by any number of genes and that contain any number of local extrema. Results indicate that the genetic algorithm optimization approach is flexible in application and extremely reliable, providing optimal results for all optimization problems attempted. The binning algorithm generally provides Pareto front quality enhancements and moderate convergence efficiency improvements for most of the model problems. The gene-space transformation procedure provides a large convergence efficiency enhancement for problems with non-convoluted Pareto fronts and a degradation in efficiency for problems with convoluted Pareto fronts. The most difficult problems (multi-mode search spaces with a large number of genes and convoluted Pareto fronts) require a large number of function evaluations for GA convergence, but always converge.
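
    The notion of Pareto optimality underlying such algorithms can be made concrete with a small non-dominated-filter sketch (minimization in every objective assumed; the objective values are invented):

```python
import numpy as np

def pareto_front(points):
    """Indices of non-dominated points, assuming minimization of every
    objective: a point is dominated if some other point is no worse in all
    objectives and strictly better in at least one."""
    pts = np.asarray(points, float)
    keep = []
    for i, p in enumerate(pts):
        dominated = np.any(np.all(pts <= p, axis=1) & np.any(pts < p, axis=1))
        if not dominated:
            keep.append(i)
    return keep

# Four candidate designs with two objectives each (e.g., cost vs. drag)
objs = [(1.0, 5.0), (2.0, 3.0), (3.0, 4.0), (4.0, 1.0)]
front = pareto_front(objs)
```

    Here (3.0, 4.0) is dominated by (2.0, 3.0), so the front keeps the other three points; a multi-objective GA applies exactly this filter when ranking its population.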

  10. Semi-Implicit Algorithm for Elastoplastic Damage Models Involving Energy Integration

    Directory of Open Access Journals (Sweden)

    Ji Zhang

    2016-01-01

    Full Text Available This study develops a semi-implicit constitutive integration algorithm for a class of elastoplastic damage models in which the calculation of damage energy release rates involves the integration of free energy. The constitutive equations with energy integration are split into an elastic predictor, a plastic corrector, and a damage corrector. The plastic corrector is solved with an improved form of semi-implicit spectral return mapping, characterized by a constant flow direction and plastic moduli calculated at initial yield, enforcement of consistency at the end, and a coordinate-independent formulation with an orthogonally similar stress tensor. The tangent stiffness consistent with the updating algorithm is derived. The algorithm is implemented with a recently proposed elastoplastic damage model for concrete, and several typical mechanical tests of reinforced concrete components are simulated. The semi-implicit algorithm proves to strike a balance between the accuracy, stability, and efficiency of the implicit and explicit algorithms and to calculate free energy accurately with small time steps.

  11. Validation and application of modeling algorithms for the design of molecularly imprinted polymers.

    Science.gov (United States)

    Liu, Bing; Ou, Lulu; Zhang, Fuyuan; Zhang, Zhijun; Li, Hongying; Zhu, Mengyu; Wang, Shuo

    2014-12-01

    In this study, four different semiempirical algorithms (modified neglect of diatomic overlap, a reparameterization of Austin Model 1, complete neglect of differential overlap, and typed neglect of differential overlap) were applied to the energy optimization of templates, monomers, and template-monomer complexes of imprinted polymers. For phosmet-, estrone-, and metolcarb-imprinted polymers, the binding energies of the template-monomer complexes were calculated and the docking configurations were assessed at different template/monomer molar ratios. Two of the algorithms were found to be unsuitable for calculating the binding energy in the template-monomer complex system. For the other two algorithms, the optimum molar ratios of template to monomer were consistent with the experimental results; these two algorithms were therefore selected and applied to the preparation of enrofloxacin-imprinted polymers. Using different molar ratios of template and monomer, we prepared imprinted and nonimprinted polymers and evaluated their adsorption of the template. The experimental results were in good agreement with the modeling results, verifying that semiempirical algorithms are feasible tools for designing the preparation of imprinted polymers.

  12. Estimating the ratios of the stationary distribution values for Markov chains modeling evolutionary algorithms.

    Science.gov (United States)

    Mitavskiy, Boris; Cannings, Chris

    2009-01-01

    The evolutionary algorithm stochastic process is well-known to be Markovian. These have been under investigation in much of the theoretical evolutionary computing research. When the mutation rate is positive, the Markov chain modeling of an evolutionary algorithm is irreducible and, therefore, has a unique stationary distribution. Rather little is known about the stationary distribution. In fact, the only quantitative facts established so far tell us that the stationary distributions of Markov chains modeling evolutionary algorithms concentrate on uniform populations (i.e., those populations consisting of a repeated copy of the same individual). At the same time, knowing the stationary distribution may provide some information about the expected time it takes for the algorithm to reach a certain solution, assessment of the biases due to recombination and selection, and is of importance in population genetics to assess what is called a "genetic load" (see the introduction for more details). In the recent joint works of the first author, some bounds have been established on the rates at which the stationary distribution concentrates on the uniform populations. The primary tool used in these papers is the "quotient construction" method. It turns out that the quotient construction method can be exploited to derive much more informative bounds on ratios of the stationary distribution values of various subsets of the state space. In fact, some of the bounds obtained in the current work are expressed in terms of the parameters involved in all the three main stages of an evolutionary algorithm: namely, selection, recombination, and mutation.
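
    For a chain small enough to write down, the stationary distribution and the ratios of its values can be computed directly; the two-state transition matrix below is an invented toy abstraction (state 0 standing for a uniform population, with a small escape probability mimicking a low mutation rate), not a chain from the paper:

```python
import numpy as np

def stationary_distribution(P, iters=10_000):
    """Stationary distribution of an irreducible, aperiodic Markov chain,
    computed by power iteration on the row-stochastic transition matrix P."""
    pi = np.full(P.shape[0], 1.0 / P.shape[0])
    for _ in range(iters):
        pi = pi @ P
    return pi

# Toy 2-state "EA" chain: state 0 = uniform population, state 1 = mixed.
P = np.array([[0.95, 0.05],
              [0.40, 0.60]])
pi = stationary_distribution(P)
ratio = pi[0] / pi[1]   # the kind of stationary-value ratio the paper bounds
```

    For this chain, detailed balance of the boundary flow (pi[0] * 0.05 = pi[1] * 0.40) gives pi = (8/9, 1/9), so the chain concentrates on the "uniform" state by a ratio of 8.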

  13. Locally linear manifold model for gap-filling algorithms of hyperspectral imagery: Proposed algorithms and a comparative study

    Science.gov (United States)

    Suliman, Suha Ibrahim

    The Landsat 7 Enhanced Thematic Mapper Plus (ETM+) Scan Line Corrector (SLC), the device that compensates for the satellite's forward motion, failed in May 2003, resulting in a loss of about 22% of the data. To improve the reconstruction of Landsat 7 SLC-off images, a Locally Linear Manifold (LLM) model is proposed for filling gaps in hyperspectral imagery. In this approach, each spectral band is modeled as a non-linear, locally affine manifold that can be learned from the matching bands at different time instances. Moreover, each band is divided into small overlapping spatial patches, and each patch is treated as a linear combination (approximately on an affine space) of a set of corresponding patches from the same location that are adjacent in time or from the same season of the year. Fill patches are selected from Landsat 5 Thematic Mapper (TM) products of the years 1984 through 2011, which have spatial and radiometric resolution similar to Landsat 7 products. The gap-filling process then finds a feasible point on the learned manifold to approximate the missing pixels. The proposed LLM framework is compared to existing single-source (Average and Inverse Distance Weight (IDW)) and multi-source (Local Linear Histogram Matching (LLHM) and Adaptive Window Linear Histogram Matching (AWLHM)) gap-filling methodologies. We analyze the effectiveness of the proposed LLM approach through simulation examples with known ground truth, and show that the LLM-model-driven approach outperforms all the existing recovery methods considered in this study, providing more accurately reconstructed images even over heterogeneous landscapes. Moreover, it is relatively simple to realize algorithmically, and it needs much less computing time than the state-of-the-art AWLHM approach.
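
    The core patch-reconstruction idea (express a damaged patch as a least-squares combination of co-located reference patches, fitting the weights on the valid pixels only) can be sketched as follows; the tiny 4-pixel patches are invented:

```python
import numpy as np

def fill_patch(refs, target, gap_mask):
    """Reconstruct masked pixels of `target` as a least-squares linear
    combination of co-located reference patches (the locally linear idea)."""
    refs = np.asarray(refs, float)            # shape: (n_refs, patch_len)
    known = ~gap_mask
    A = refs[:, known].T                      # fit the weights on valid pixels
    w, *_ = np.linalg.lstsq(A, target[known], rcond=None)
    filled = target.copy()
    filled[gap_mask] = refs[:, gap_mask].T @ w   # apply the weights in the gap
    return filled

r1 = np.array([1.0, 2.0, 3.0, 4.0])           # reference patch from another date
r2 = np.array([4.0, 3.0, 2.0, 1.0])           # reference patch from another date
target = 0.5 * r1 + 0.5 * r2                  # true patch: [2.5, 2.5, 2.5, 2.5]
target[2] = np.nan                            # simulate the SLC-off gap
mask = np.isnan(target)
filled = fill_patch([r1, r2], target, mask)
```

    Restricting the combination weights (for example, to an affine or convex set) is what turns this plain least-squares step into the constrained manifold projection the LLM framework uses.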

  14. Predicting Modeling Method of Ship Radiated Noise Based on Genetic Algorithm

    Directory of Open Access Journals (Sweden)

    Guohui Li

    2016-01-01

    Full Text Available Because the forming mechanism of underwater acoustic signals is complex, it is difficult to establish an accurate predicting model. In this paper, we propose a nonlinear predicting modeling method for ship radiated noise based on a genetic algorithm, taking three types of ship radiated noise as real underwater acoustic signals. First, a basic model framework is chosen. Second, each candidate model is genetically encoded. Third, a model evaluation standard is established. Fourth, the genetic operations of crossover, reproduction, and mutation are designed. Finally, a prediction model of the real underwater acoustic signal is established by the genetic algorithm. Satisfactory results are obtained when the root mean square error and signal error ratio of the predicting model are calculated. The results show that the proposed method can establish an accurate predicting model with high prediction accuracy and may play an important role in further processing of underwater acoustic signals, such as noise reduction, feature extraction, and classification.

  15. Digitalized accurate modeling of SPCB with multi-spiral surface based on CPC algorithm

    Science.gov (United States)

    Huang, Yanhua; Gu, Lizhi

    2015-09-01

    Existing methods for modeling multi-spiral surface geometry include spatial analytic geometry algorithms, graphical methods, and interpolation and approximation algorithms. These methods, however, suffer from shortcomings such as a large amount of calculation, complex processes, and visible errors, which have considerably restricted the design and manufacture of premium, high-precision products with spiral surfaces. This paper introduces the concepts of spatially parallel coupling with a multi-spiral surface and of the spatially parallel coupling body. The typical geometric and topological features of each spiral surface forming the multi-spiral surface body are determined by using the extraction principle of the datum point cluster, the algorithm of coupling point clusters with singular points removed, and the "spatially parallel coupling" principle based on non-uniform B-splines for each spiral surface. The orientation and quantitative relationships of the datum point cluster and the coupling point cluster in Euclidean space are determined accurately and described digitally, with coupling coalescence of the surfaces with multi-coupling point clusters performed in the Pro/E environment. Digitally accurate modeling of a spatially parallel coupling body with a multi-spiral surface is thus realized. Smoothing and fairing are applied to the end section area of a three-blade end-milling cutter using the principle of spatially parallel coupling with a multi-spiral surface, and the resulting entity model is machined in a four-axis machining center. The algorithm is verified and then applied effectively to the transition area among the multi-spiral surfaces. The proposed model and algorithms may be used in the design and manufacture of multi-spiral-surface body products, as well as in solving essentially the problems of considerable modeling errors in computer graphics and

  16. Statistical Algorithms for Models in State Space Using SsfPack 2.2

    NARCIS (Netherlands)

    Koopman, S.J.M.; Shephard, N.; Doornik, J.A.

    1998-01-01

    This paper discusses and documents the algorithms of SsfPack 2.2. SsfPack is a suite of C routines for carrying out computations involving the statistical analysis of univariate and multivariate models in state space form. The emphasis is on documenting the link we have made to the Ox computing envi

  18. Dynamic Vehicle Routing for Robotic Networks: Models, Fundamental Limitations and Algorithms

    Science.gov (United States)

    2010-04-16

Presentation by Francesco Bullo (Center for Control, Dynamical Systems and Computation, UCSB), given 16 April 2010 at ARL. Recoverable content from the slide extraction: gossip partitioning policies (sampling and analysis, presented at the Allerton and Control conferences, Hollywood, CA, October 2009) and partitioning results submitted to SIAM Review, January 2010.

  19. Special issue on algorithms and models for the web graph (preface)

    NARCIS (Netherlands)

    Avrachenkov, Konstatin; Donato, Debora; Litvak, Nelly

    2009-01-01

    This special issue of Internet Mathematics is dedicated to the 6th International Workshop on Algorithms and Models for the Web Graph (WAW 2009), held at Barcelona, Spain, on February 12–13, 2009. The workshop has reported state-of-the-art achievements in the analysis of the World Wide Web and online

  20. Metaheuristic Algorithm for Solving Biobjective Possibility Planning Model of Location-Allocation in Disaster Relief Logistics

    Directory of Open Access Journals (Sweden)

    Farnaz Barzinpour

    2014-01-01

    Full Text Available Thousands of victims and millions of affected people are hurt by natural disasters every year. Therefore, it is essential to prepare proper response programs that consider early activities of disaster management. In this paper, a multiobjective model for distribution centers which are located and allocated periodically to the damaged areas in order to distribute relief commodities is offered. The main objectives of this model are minimizing the total costs and maximizing the least rate of the satisfaction in the sense of being fair while distributing the items. The model simultaneously determines the location of relief distribution centers and the allocation of affected areas to relief distribution centers. Furthermore, an efficient solution approach based on genetic algorithm has been developed in order to solve the proposed mathematical model. The results of genetic algorithm are compared with the results provided by simulated annealing algorithm and LINGO software. The computational results show that the proposed genetic algorithm provides relatively good solutions in a reasonable time.
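As a rough illustration of the genetic-algorithm side of such a model, the sketch below solves a plain single-objective, uncapacitated location-allocation instance with hypothetical opening costs and distances; the paper's biobjective possibilistic model, fairness objective, and simulated annealing comparison are not reproduced here.

```python
import random

# Hypothetical data: 4 candidate distribution centres, 6 affected areas.
FIXED = [50, 60, 55, 70]          # opening cost per candidate centre
DIST = [                          # DIST[area][centre] = distribution cost
    [2, 9, 8, 5], [7, 3, 6, 9], [4, 8, 2, 7],
    [9, 2, 7, 4], [3, 7, 9, 2], [8, 4, 3, 6],
]

def cost(sol, fixed, dist):
    """Opening costs plus each area's cost to its nearest open centre."""
    if not any(sol):
        return float("inf")       # at least one centre must be open
    open_cost = sum(f for f, s in zip(fixed, sol) if s)
    assign = sum(min(row[j] for j, s in enumerate(sol) if s) for row in dist)
    return open_cost + assign

def ga_locate(fixed, dist, pop_size=20, gens=60, pmut=0.1, seed=1):
    """Genetic algorithm over binary open/closed vectors."""
    rng = random.Random(seed)
    n = len(fixed)
    fit = lambda s: cost(s, fixed, dist)
    pop = [[rng.randint(0, 1) for _ in range(n)] for _ in range(pop_size)]
    best = min(pop, key=fit)
    for _ in range(gens):
        nxt = [best[:]]                            # elitism: keep best so far
        while len(nxt) < pop_size:
            p1 = min(rng.sample(pop, 3), key=fit)  # tournament selection
            p2 = min(rng.sample(pop, 3), key=fit)
            cut = rng.randrange(1, n)              # one-point crossover
            child = p1[:cut] + p2[cut:]
            for i in range(n):                     # bit-flip mutation
                if rng.random() < pmut:
                    child[i] = 1 - child[i]
            nxt.append(child)
        pop = nxt
        best = min(pop, key=fit)
    return best
```

In the full model the chromosome would also encode area-to-centre allocations and period indices, and fitness would combine cost and the minimum satisfaction rate.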

  1. FastGGM: An Efficient Algorithm for the Inference of Gaussian Graphical Model in Biological Networks.

    Directory of Open Access Journals (Sweden)

    Ting Wang

    2016-02-01

Full Text Available Biological networks provide additional information for the analysis of human diseases, beyond the traditional analysis that focuses on single variables. The Gaussian graphical model (GGM), a probability model that characterizes the conditional dependence structure of a set of random variables by a graph, has wide applications in the analysis of biological networks, such as inferring interactions or comparing differential networks. However, existing approaches are either not statistically rigorous or are inefficient for high-dimensional data that include tens of thousands of variables. In this study, we propose an efficient algorithm to implement the estimation of a GGM and obtain a p-value and confidence interval for each edge in the graph, based on a recent proposal by Ren et al., 2015. Through simulation studies, we demonstrate that the algorithm is faster by several orders of magnitude than the currently implemented algorithm for Ren et al. without losing any accuracy. Then, we apply our algorithm to two real data sets: transcriptomic data from a study of childhood asthma and proteomic data from a study of Alzheimer's disease. We estimate the global gene or protein interaction networks for the disease and healthy samples. The resulting networks reveal interesting interactions, and the differential networks between cases and controls show functional relevance to the diseases. In conclusion, we provide a computationally fast algorithm to implement a statistically sound procedure for constructing a Gaussian graphical model and making inferences with high-dimensional biological data. The algorithm has been implemented in an R package named "FastGGM".
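The quantity a GGM associates with each edge can be illustrated in the low-dimensional case, where the precision matrix is obtained by directly inverting the sample covariance. This sketch is not the FastGGM procedure (which handles high-dimensional data and supplies p-values and confidence intervals), only the underlying partial-correlation idea.

```python
import numpy as np

def partial_correlations(X):
    """Edge weights of a Gaussian graphical model from data matrix X (n x p).

    With precision matrix Omega = Sigma^{-1}, the partial correlation is
    rho_ij = -Omega_ij / sqrt(Omega_ii * Omega_jj); rho_ij = 0 means i and j
    are conditionally independent given all remaining variables (no edge).
    """
    omega = np.linalg.inv(np.cov(X, rowvar=False))
    d = np.sqrt(np.diag(omega))
    rho = -omega / np.outer(d, d)
    np.fill_diagonal(rho, 1.0)
    return rho
```

On data generated from a chain x1 -> x2 -> x3, the partial correlation between x1 and x3 is near zero even though their marginal correlation is large, which is exactly the distinction a GGM encodes.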

  2. Modelling of an hydraulic excavator using simplified refined instrumental variable(SRIV)algorithm

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

Instead of establishing mathematical hydraulic system models from physical laws, which usually involves complex modelling processes and yields low reliability and practicality due to large uncertainties, a novel modelling method for the highly nonlinear system of a hydraulic excavator is presented. Based on data collected in experiments driving the excavator's arms, a data-based excavator dynamic model is established using Simplified Refined Instrumental Variable (SRIV) identification and estimation algorithms. The validity of the proposed data-based model is demonstrated indirectly by the performance of computer simulations and motion-control experiments on the real machine.
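As background for what such identification algorithms estimate, the sketch below fits a discrete-time ARX model by ordinary least squares on simulated data. The actual SRIV method additionally prefilters the regressors and iterates with instrumental variables to reduce noise-induced bias; that refinement is not shown here, and the system coefficients are hypothetical.

```python
import numpy as np

def fit_arx(u, y, na=1, nb=1):
    """Least-squares fit of the ARX model
        y[k] = -a1*y[k-1] - ... - a_na*y[k-na] + b1*u[k-1] + ... + b_nb*u[k-nb]
    from input sequence u and output sequence y. Returns (a, b)."""
    n = max(na, nb)
    rows, targets = [], []
    for k in range(n, len(y)):
        row = [-y[k - i] for i in range(1, na + 1)]   # autoregressive part
        row += [u[k - i] for i in range(1, nb + 1)]   # exogenous-input part
        rows.append(row)
        targets.append(y[k])
    theta, *_ = np.linalg.lstsq(np.array(rows), np.array(targets), rcond=None)
    return theta[:na], theta[na:]

# Simulate the noise-free system y[k] = 0.5*y[k-1] + u[k-1], i.e. a1 = -0.5, b1 = 1.
rng = np.random.default_rng(1)
u = rng.standard_normal(200)
y = np.zeros(200)
for k in range(1, 200):
    y[k] = 0.5 * y[k - 1] + u[k - 1]
a, b = fit_arx(u, y)
```

With noise on the output, plain least squares becomes biased; that bias is precisely what the instrumental-variable iteration in SRIV is designed to remove.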

  3. [Study on the Application of NAS-Based Algorithm in the NIR Model Optimization].

    Science.gov (United States)

    Geng, Ying; Xiang, Bing-ren; He, Lan

    2015-10-01

In this paper, the net analysis signal (NAS) concept was introduced to the analysis of multi-component Ginkgo biloba leaf extracts. A NAS algorithm was utilized for the preprocessing of spectra, and NAS-based two-dimensional correlation analysis was used to optimize NIR model building. Simultaneous quantitative models for three flavonol aglycones, quercetin, kaempferol, and isorhamnetin, were established. The NAS vectors calculated using two algorithms, introduced from Lorber and from Goicoechea and Olivieri (HLA/GO), were applied in the development of calibration models, and the reconstructed spectra were used as input for PLS modeling. For the first time, NAS-based two-dimensional correlation spectroscopy was used for wavenumber selection; the regions appearing on the main diagonal were selected as useful regions for model building. The results implied that the two NAS-based preprocessing methods were successfully used for the analysis of quercetin, kaempferol, and isorhamnetin, with a decrease in the number of factors and an improvement in model robustness. The NAS-based algorithm proved to be a useful tool for the preprocessing of spectra and for the optimization of model calibration. This research shows the practical value of NIRS in the analysis of complex multi-component phytochemical medicines with unknown interferences.

  4. Application of stochastic weighted algorithms to a multidimensional silica particle model

    Energy Technology Data Exchange (ETDEWEB)

    Menz, William J. [Department of Chemical Engineering and Biotechnology, University of Cambridge, New Museums Site, Pembroke Street, Cambridge CB2 3RA (United Kingdom); Patterson, Robert I.A.; Wagner, Wolfgang [Weierstrass Institute for Applied Analysis and Stochastics, Mohrenstrasse 39, Berlin 10117 (Germany); Kraft, Markus, E-mail: mk306@cam.ac.uk [Department of Chemical Engineering and Biotechnology, University of Cambridge, New Museums Site, Pembroke Street, Cambridge CB2 3RA (United Kingdom)

    2013-09-01

    Highlights: •Stochastic weighted algorithms (SWAs) are developed for a detailed silica model. •An implementation of SWAs with the transition kernel is presented. •The SWAs’ solutions converge to the direct simulation algorithm’s (DSA) solution. •The efficiency of SWAs is evaluated for this multidimensional particle model. •It is shown that SWAs can be used for coagulation problems in industrial systems. -- Abstract: This paper presents a detailed study of the numerical behaviour of stochastic weighted algorithms (SWAs) using the transition regime coagulation kernel and a multidimensional silica particle model. The implementation in the SWAs of the transition regime coagulation kernel and associated majorant rates is described. The silica particle model of Shekar et al. [S. Shekar, A.J. Smith, W.J. Menz, M. Sander, M. Kraft, A multidimensional population balance model to describe the aerosol synthesis of silica nanoparticles, Journal of Aerosol Science 44 (2012) 83–98] was used in conjunction with this coagulation kernel to study the convergence properties of SWAs with a multidimensional particle model. High precision solutions were calculated with two SWAs and also with the established direct simulation algorithm. These solutions, which were generated using large number of computational particles, showed close agreement. It was thus demonstrated that SWAs can be successfully used with complex coagulation kernels and high dimensional particle models to simulate real-world systems.

  5. Algorithm-structured computer arrays and networks architectures and processes for images, percepts, models, information

    CERN Document Server

    Uhr, Leonard

    1984-01-01

    Computer Science and Applied Mathematics: Algorithm-Structured Computer Arrays and Networks: Architectures and Processes for Images, Percepts, Models, Information examines the parallel-array, pipeline, and other network multi-computers.This book describes and explores arrays and networks, those built, being designed, or proposed. The problems of developing higher-level languages for systems and designing algorithm, program, data flow, and computer structure are also discussed. This text likewise describes several sequences of successively more general attempts to combine the power of arrays wi

  6. Modeling Signal Transduction Networks: A comparison of two Stochastic Kinetic Simulation Algorithms

    Energy Technology Data Exchange (ETDEWEB)

    Pettigrew, Michel F.; Resat, Haluk

    2005-09-15

Simulations of a scalable four-compartment reaction model based on the well-known epidermal growth factor receptor (EGFR) signal transduction system are used to compare two stochastic algorithms, StochSim and the Gibson-Gillespie. It is concluded that the Gibson-Gillespie is the algorithm of choice for most realistic cases, with the possible exception of signal transduction networks characterized by a moderate number (< 100) of complex types, each with a very small population but with a high degree of connectivity amongst the complex types. Keywords: Signal transduction networks, Stochastic simulation, StochSim, Gillespie
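For reference, the direct-method stochastic simulation underlying the Gillespie family of algorithms can be sketched for a single decay reaction A -> 0. The rate constant and counts are hypothetical, and the Gibson-Gillespie (next-reaction) variant adds an indexed priority queue of tentative reaction times, which is omitted here.

```python
import random

def gillespie_decay(n0, c, t_end, seed=0):
    """Gillespie direct-method SSA for the reaction A -> 0 with propensity c*A.

    Starting from n0 molecules, returns the molecule count at time t_end.
    """
    rng = random.Random(seed)
    t, n = 0.0, n0
    while n > 0:
        a = c * n                    # total propensity of the single channel
        t += rng.expovariate(a)      # exponentially distributed waiting time
        if t > t_end:
            break                    # next firing would occur after t_end
        n -= 1                       # fire the reaction
    return n
```

Averaged over many runs, the count decays as n0 * exp(-c * t); a single trajectory fluctuates around that mean, which is the point of stochastic kinetic simulation.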

  7. Numerical study of variational data assimilation algorithms based on decomposition methods in atmospheric chemistry models

    Science.gov (United States)

    Penenko, Alexey; Antokhin, Pavel

    2016-11-01

    The performance of a variational data assimilation algorithm for a transport and transformation model of atmospheric chemical composition is studied numerically in the case where the emission inventories are missing while there are additional in situ indirect concentration measurements. The algorithm is based on decomposition and splitting methods with a direct solution of the data assimilation problems at the splitting stages. This design allows avoiding iterative processes and working in real-time. In numerical experiments we study the sensitivity of data assimilation to measurement data quantity and quality.

  8. Extraction of battery parameters of the equivalent circuit model using a multi-objective genetic algorithm

    Science.gov (United States)

    Brand, Jonathan; Zhang, Zheming; Agarwal, Ramesh K.

    2014-02-01

A simple but reasonably accurate battery model is required for simulating the performance of electrical systems that employ a battery, for example an electric vehicle, as well as for investigating their potential as an energy storage device. In this paper, a relatively simple equivalent-circuit-based model is employed for modeling the performance of a battery. A computer code utilizing a multi-objective genetic algorithm is developed for the purpose of extracting the battery performance parameters. The code is applied to several existing industrial batteries as well as to two recently proposed high-performance batteries which are currently in an early research and development stage. The results demonstrate that with the optimally extracted performance parameters, the equivalent-circuit-based battery model can accurately predict the performance of various batteries of different sizes, capacities, and materials. Several test cases demonstrate that the multi-objective genetic algorithm can serve as a robust and reliable tool for extracting the battery performance parameters.
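As a minimal illustration of equivalent-circuit parameter extraction, the sketch below fits the simplest (Rint) circuit, V = OCV - I*R, to hypothetical current-voltage data by linear least squares. The paper's richer equivalent circuit and its multi-objective genetic algorithm are not reproduced; this only shows what "extracting the parameters" means for the simplest member of that model family.

```python
import numpy as np

def extract_params(current, voltage):
    """Least-squares extraction of the open-circuit voltage OCV and internal
    resistance R of the Rint equivalent circuit V = OCV - I*R from
    measured (current, voltage) pairs."""
    # Design matrix columns: [1, -I], so the solution vector is [OCV, R].
    A = np.column_stack([np.ones_like(current), -current])
    (ocv, r), *_ = np.linalg.lstsq(A, voltage, rcond=None)
    return ocv, r

# Hypothetical noise-free discharge data from a 4.1 V cell with 50 mOhm resistance.
I = np.linspace(0.0, 5.0, 20)
V = 4.1 - 0.05 * I
ocv, r = extract_params(I, V)
```

Once the circuit gains RC branches for transient response, the fit is no longer linear in the parameters, which is where a genetic algorithm becomes attractive.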

  9. Location Model and Optimization of Seaborne Petroleum Logistics Distribution Center Based on Genetic Algorithm

    Directory of Open Access Journals (Sweden)

    Chu-Liangyong

    2013-06-01

Full Text Available The network of Chinese Waterborne Petroleum Logistics (CWPL) is so complex that reasonably locating and choosing a Chinese Waterborne Petroleum Logistics Distribution Center (CWPLDC) has both theoretical value and practical significance. In this study, the network structure of CWPL distribution is described and a corresponding mathematical model for locating the CWPLDC is established, which is a nonlinear mixed-integer model. In view of the nonlinear programming characteristics of the model, a genetic algorithm is put forward as the solution strategy; strategies for hybrid coding, constraint elimination, the fitness function, and the genetic operators are given for the algorithm. The results indicate that the model is effective and reliable. The method is also applicable to other types of large-scale logistics distribution center optimization.

  10. Nonequilibrium behaviors of the three-dimensional Heisenberg model in the Swendsen-Wang algorithm.

    Science.gov (United States)

    Nonomura, Yoshihiko; Tomita, Yusuke

    2016-01-01

    Recently, it was shown [Y. Nonomura, J. Phys. Soc. Jpn. 83, 113001 (2014)JUPSAU0031-901510.7566/JPSJ.83.113001] that the nonequilibrium critical relaxation of the two-dimensional (2D) Ising model from a perfectly ordered state in the Wolff algorithm is described by stretched-exponential decay, and a universal scaling scheme was found to connect nonequilibrium and equilibrium behaviors. In the present study we extend these findings to vector spin models, and the 3D Heisenberg model could be a typical example. To evaluate the critical temperature and critical exponents precisely using the above scaling scheme, we calculate nonequilibrium ordering from the perfectly disordered state in the Swendsen-Wang algorithm, and we find that the critical ordering process is described by stretched-exponential growth with a comparable exponent to that of the 3D XY model. The critical exponents evaluated in the present study are consistent with those in previous studies.
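The stretched-exponential law referred to above, m(t) = exp(-(t/tau)^beta), can be fitted to relaxation or ordering data by double-log linearisation. The data below are synthetic and the fitting method is a generic sketch, not the authors' nonequilibrium scaling analysis.

```python
import numpy as np

def fit_stretched_exponential(t, m):
    """Fit m(t) = exp(-(t / tau)**beta) for 0 < m < 1.

    Taking logs twice gives log(-log m) = beta*log(t) - beta*log(tau),
    a straight line in log(t), so an ordinary linear fit recovers both
    the stretching exponent beta and the relaxation time tau."""
    y = np.log(-np.log(m))
    beta, intercept = np.polyfit(np.log(t), y, 1)
    tau = np.exp(-intercept / beta)
    return beta, tau

# Synthetic relaxation curve with beta = 0.6, tau = 10 (hypothetical values).
t = np.linspace(1.0, 50.0, 100)
m = np.exp(-((t / 10.0) ** 0.6))
beta, tau = fit_stretched_exponential(t, m)
```

On real Monte Carlo time series the fit would be restricted to the critical window, since off-critical data cross over to pure exponential (beta = 1) behaviour.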

  11. Generic Energy Matching Model and Figure of Matching Algorithm for Combined Renewable Energy Systems

    Directory of Open Access Journals (Sweden)

    J.C. Brezet

    2009-08-01

Full Text Available In this paper the Energy Matching Model and Figure of Matching Algorithm, originally dedicated only to photovoltaic (PV) systems [1], are extended towards a model and algorithm suitable for combined systems that integrate two or more renewable energy sources into one. The systems under investigation range from mobile portable devices up to the large renewable energy system conceivably to be applied at the Afsluitdijk (Closure Dike) in the north of the Netherlands. The Afsluitdijk is the major dam in the Netherlands, damming off the Zuiderzee, a salt-water inlet of the North Sea, and turning it into the fresh-water lake IJsselmeer. The energy chain of power supplies based on a combination of renewable energy sources can be modeled using one generic Energy Matching Model as a starting point.

  12. A Worm Algorithm for the Lattice CP(N-1) Model arXiv

    CERN Document Server

    Rindlisbacher, Tobias

    The CP(N-1) model in 2D is an interesting toy model for 4D QCD as it possesses confinement, asymptotic freedom and a non-trivial vacuum structure. Due to the lower dimensionality and the absence of fermions, the computational cost for simulating 2D CP(N-1) on the lattice is much lower than the one for simulating 4D QCD. However to our knowledge, no efficient algorithm for simulating the lattice CP(N-1) model has been tested so far, which also works at finite density. To this end we propose and test a new type of worm algorithm which is appropriate to simulate the lattice CP(N-1) model in a dual, flux-variables based representation, in which the introduction of a chemical potential does not give rise to any complications.

  13. Synthetic Optimization Model and Algorithm for Railway Freight Center Station Location and Wagon Flow Organization Problem

    Directory of Open Access Journals (Sweden)

    Xing-cai Liu

    2014-01-01

Full Text Available The location of railway freight center stations and the organization of wagon flows in railway transport are interconnected, and each is complicated in a large-scale rail network. In this paper, a two-stage method is proposed to optimize railway freight center station location and wagon flow organization together. The location model is presented with the objective of minimizing operation cost and fixed construction cost. Then, a second model of wagon flow organization is proposed to decide the optimal train services between different freight center stations; the station locations are the output of the first model. A heuristic algorithm that combines tabu search (TS) with an adaptive clonal selection algorithm (ACSA) is proposed to solve the two models. The numerical results show that the proposed solution method is effective.

  15. The Bilevel Design Problem for Communication Networks on Trains: Model, Algorithm, and Verification

    Directory of Open Access Journals (Sweden)

    Yin Tian

    2014-01-01

Full Text Available This paper proposes a novel method to solve the problem of train communication network design. First, we put forward a general description of the problem. Then, taking advantage of bilevel programming theory, we create the cost-reliability-delay (CRD) model, which consists of two parts: the physical-topology part aims at obtaining networks with maximum reliability under a cost constraint, while the logical-topology part focuses on the communication paths yielding minimum delay based on the physical topology delivered from the upper level. We also suggest a method to solve the CRD model that combines a genetic algorithm with the Floyd-Warshall algorithm. Finally, we use a practical example to verify the accuracy and effectiveness of the CRD model and further apply the novel method to a train with six carriages.
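The lower-level ingredient named here, the Floyd-Warshall algorithm, computes all-pairs shortest (here, minimum-delay) paths. A generic sketch on a hypothetical three-node graph:

```python
def floyd_warshall(w):
    """All-pairs shortest path distances.

    w[i][j] is the direct edge weight (delay), float('inf') if no edge,
    and 0 on the diagonal. Returns the matrix of shortest distances."""
    n = len(w)
    d = [row[:] for row in w]          # work on a copy
    for k in range(n):                 # allow k as an intermediate node
        for i in range(n):
            for j in range(n):
                if d[i][k] + d[k][j] < d[i][j]:
                    d[i][j] = d[i][k] + d[k][j]
    return d

INF = float("inf")
delays = [                             # hypothetical link delays
    [0.0, 3.0, INF],
    [3.0, 0.0, 1.0],
    [INF, 1.0, 0.0],
]
shortest = floyd_warshall(delays)      # shortest[0][2] becomes 4.0 via node 1
```

In the bilevel setting, the genetic algorithm proposes a physical topology (the weight matrix) and Floyd-Warshall evaluates the resulting communication delays for the logical level.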

  16. Generic Energy Matching Model and Figure of Matching Algorithm for Combined Renewable Energy Systems

    Directory of Open Access Journals (Sweden)

    S. Y. Kan

    2009-08-01

Full Text Available In this paper the Energy Matching Model and Figure of Matching Algorithm, originally dedicated only to photovoltaic (PV) systems [1], are extended towards a model and algorithm suitable for combined systems that integrate two or more renewable energy sources into one. The systems under investigation range from mobile portable devices up to the large renewable energy system conceivably to be applied at the Afsluitdijk (Closure Dike) in the north of the Netherlands. The Afsluitdijk is the major dam in the Netherlands, damming off the Zuiderzee, a salt-water inlet of the North Sea, and turning it into the fresh-water lake IJsselmeer. The energy chain of power supplies based on a combination of renewable energy sources can be modeled using one generic Energy Matching Model as a starting point.

  17. [A Hyperspectral Imagery Anomaly Detection Algorithm Based on Gauss-Markov Model].

    Science.gov (United States)

    Gao, Kun; Liu, Ying; Wang, Li-jing; Zhu, Zhen-yu; Cheng, Hao-bo

    2015-10-01

With the development of spectral imaging technology, hyperspectral anomaly detection is increasingly used in remote sensing imagery processing. The traditional RX anomaly detection algorithm neglects the spatial correlation of images and does not effectively reduce the data dimension, which costs too much processing time and shows low validity on hyperspectral data. Hyperspectral images follow a Gauss-Markov Random Field (GMRF) in the spatial and spectral dimensions. The inverse of the covariance matrix can be calculated directly from the Gauss-Markov parameters, which avoids the heavy computation on the full hyperspectral data. This paper proposes an improved RX anomaly detection algorithm based on a three-dimensional GMRF. The hyperspectral imagery data are simulated with the GMRF model, and the GMRF parameters are estimated with the approximated maximum likelihood method. The detection operator is constructed with the estimated GMRF parameters. The pixel under test is taken as the centre of a local optimization window, called the GMRF detection window. The degree of abnormality is calculated with the mean vector and inverse covariance matrix, both computed within the window, and the image is processed pixel by pixel as the GMRF window moves. The traditional RX detection algorithm, a regional hypothesis detection algorithm based on GMRF, and the algorithm proposed in this paper are simulated with AVIRIS hyperspectral data. Simulation results show that the proposed anomaly detection method improves detection efficiency and reduces the false alarm rate. Operation-time statistics of the three algorithms in the same computing environment show that the proposed algorithm improves the operation time by 45.2%, demonstrating good computational efficiency.
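For context, the classical (global) RX detector that the paper improves on scores each pixel by its Mahalanobis distance from the scene background. The sketch below uses the empirical covariance on synthetic data, whereas the proposed method instead builds the covariance inverse from local GMRF parameters inside a moving window.

```python
import numpy as np

def rx_detector(cube):
    """Global RX anomaly scores for a hyperspectral cube (rows, cols, bands).

    Each pixel spectrum x is scored by the squared Mahalanobis distance
    (x - mu)^T Sigma^{-1} (x - mu) from the scene mean mu."""
    h, w, b = cube.shape
    X = cube.reshape(-1, b)
    mu = X.mean(axis=0)
    cov_inv = np.linalg.inv(np.cov(X, rowvar=False))
    diff = X - mu
    # Batched quadratic form for all pixels at once.
    scores = np.einsum("ij,jk,ik->i", diff, cov_inv, diff)
    return scores.reshape(h, w)

# Synthetic 20x20 scene with 5 bands and one planted anomalous pixel.
rng = np.random.default_rng(0)
cube = rng.standard_normal((20, 20, 5))
cube[7, 7, :] += 10.0
scores = rx_detector(cube)
```

Replacing the single global `cov_inv` with one computed per window is what makes the windowed GMRF variant sensitive to local background statistics.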

  18. New porcine test-model reveals remarkable differences between algorithms for spectrophotometrical haemoglobin saturation measurements with VLS

    DEFF Research Database (Denmark)

    Gade, John; Greisen, Gorm

    2016-01-01

    The study created an 'ex vivo' model to test different algorithms for measurements of mucosal haemoglobin saturation with visible light spectrophotometry (VLS). The model allowed comparison between algorithms, but it also allowed comparison with co-oximetry using a 'gold standard' method. This has......  -32.8 to  +29.9 percentage points and from  -5.0 to  +9.2 percentage points, respectively. CONCLUSION: the algorithms showed remarkable in-between differences when tested on raw-spectra from an 'ex vivo' model. All algorithms had bias, more marked at high oxygenation than low oxygenation. Three...

  19. Modelling soil water retention using support vector machines with genetic algorithm optimisation.

    Science.gov (United States)

    Lamorski, Krzysztof; Sławiński, Cezary; Moreno, Felix; Barna, Gyöngyi; Skierucha, Wojciech; Arrue, José L

    2014-01-01

This work presents point pedotransfer function (PTF) models of the soil water retention curve. The developed models allow estimation of the soil water content for the specified soil water potentials -0.98, -3.10, -9.81, -31.02, -491.66, and -1554.78 kPa, based on the following soil characteristics: soil granulometric composition, total porosity, and bulk density. Support Vector Machine (SVM) methodology was used for model development, and a new methodology for the elaboration of retention function models is proposed. As an alternative to previous attempts known from the literature, the ν-SVM method was used for model development and the results were compared with the formerly used C-SVM method. For the search of model parameters, genetic algorithms were used as the optimisation framework. A new form of the objective function used in the parameter search is proposed, which allowed the development of models with better prediction capabilities; this new objective function avoids the overestimation of models that is typically encountered when root mean squared error is used as the objective function. The elaborated models showed good agreement with measured soil water retention data, with coefficients of determination in the range 0.67-0.92. The studies demonstrated the usability of ν-SVM methodology together with genetic algorithm optimisation for retention modelling, which gave better-performing models than the other tested approaches.

  20. Comparison of the Noise Robustness of FVC Retrieval Algorithms Based on Linear Mixture Models

    OpenAIRE

    Hiroki Yoshioka; Kenta Obata

    2011-01-01

The fraction of vegetation cover (FVC) is often estimated by unmixing a linear mixture model (LMM) to assess the horizontal spread of vegetation within a pixel based on a remotely sensed reflectance spectrum. The LMM-based algorithm produces results that can vary to a certain degree, depending on the model assumptions. For example, the robustness of the results depends on the presence of errors in the measured reflectance spectra. The objective of this study was to derive a factor that could ...