WorldWideScience

Sample records for event simulation system

  1. Modeling and simulation of discrete event systems

    CERN Document Server

    Choi, Byoung Kyu

    2013-01-01

Computer modeling and simulation (M&S) allows engineers to study and analyze complex systems. Discrete-event system (DES) M&S is used in modern management, industrial engineering, computer science, and the military. As computer speeds and memory capacities increase, DES-M&S tools become more powerful and more widely used in solving real-life problems. Based on over 20 years of evolution within a classroom environment, as well as on decades-long experience in developing simulation-based solutions for high-tech industries, Modeling and Simulation of Discrete-Event Systems is the only book on ...

  2. Synchronous Parallel System for Emulation and Discrete Event Simulation

    Science.gov (United States)

    Steinman, Jeffrey S. (Inventor)

    2001-01-01

    A synchronous parallel system for emulation and discrete event simulation having parallel nodes responds to received messages at each node by generating event objects having individual time stamps, stores only the changes to the state variables of the simulation object attributable to the event object and produces corresponding messages. The system refrains from transmitting the messages and changing the state variables while it determines whether the changes are superseded, and then stores the unchanged state variables in the event object for later restoral to the simulation object if called for. This determination preferably includes sensing the time stamp of each new event object and determining which new event object has the earliest time stamp as the local event horizon, determining the earliest local event horizon of the nodes as the global event horizon, and ignoring events whose time stamps are less than the global event horizon. Host processing between the system and external terminals enables such a terminal to query, monitor, command or participate with a simulation object during the simulation process.
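The horizon computation described in this abstract can be sketched in a few lines. The following Python is an illustrative reading of the abstract only, not the patented implementation; the representation of nodes as lists of `(time_stamp, payload)` event objects is an assumption:

```python
def local_event_horizon(pending):
    """Earliest time stamp among a node's newly generated event objects."""
    return min(t for t, _ in pending)

def global_event_horizon(nodes):
    """Earliest local event horizon across all nodes. Events processed
    at or before this virtual time can no longer be superseded by a
    message still in flight, so their state changes are safe to commit."""
    return min(local_event_horizon(p) for p in nodes if p)

# Three nodes, each holding (time_stamp, payload) event objects:
nodes = [[(5.0, "a"), (9.0, "b")], [(3.0, "c")], [(7.0, "d")]]
print(global_event_horizon(nodes))  # 3.0
```

Events stamped earlier than this global horizon can be discarded or committed, which is what lets the system defer message transmission and state changes until they are known not to be superseded.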

  3. Synchronous Parallel Emulation and Discrete Event Simulation System with Self-Contained Simulation Objects and Active Event Objects

    Science.gov (United States)

    Steinman, Jeffrey S. (Inventor)

    1998-01-01

The present invention is embodied in a method of performing object-oriented simulation and a system having inter-connected processor nodes operating in parallel to simulate mutual interactions of a set of discrete simulation objects distributed among the nodes as a sequence of discrete events changing state variables of respective simulation objects so as to generate new event-defining messages addressed to respective ones of the nodes. The object-oriented simulation is performed at each one of the nodes by assigning passive self-contained simulation objects to each one of the nodes, responding to messages received at one node by generating corresponding active event objects having user-defined inherent capabilities and individual time stamps and corresponding to respective events affecting one of the passive self-contained simulation objects of the one node, restricting the respective passive self-contained simulation objects to only providing and receiving information from the respective active event objects, requesting information and changing variables within a passive self-contained simulation object by the active event object, and producing corresponding messages specifying events resulting therefrom by the active event objects.

  4. Discrete-Event Simulation

    Directory of Open Access Journals (Sweden)

    Prateek Sharma

    2015-04-01

Abstract Simulation can be regarded as the emulation of the behavior of a real-world system over an interval of time. The process of simulation relies upon generating the history of a system and then analyzing that history to predict outcomes and improve the working of real systems. Simulations come in many kinds, but the focus here is on one of the most important: Discrete-Event Simulation, which models the system as a discrete sequence of events in time. This paper introduces Discrete-Event Simulation and analyzes how it benefits real-world systems.
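The mechanics behind such a simulation are compact: a clock, a time-ordered event queue, and handlers that update state and schedule further events. A minimal sketch in Python; the arrival/departure handlers and the fixed 2.0-time-unit service time are illustrative assumptions, not taken from the paper:

```python
import heapq

def simulate(initial_events, handlers, horizon):
    """Minimal discrete-event loop: repeatedly pop the earliest event,
    advance the clock to its time stamp, and let its handler update
    state and schedule follow-up events."""
    queue = list(initial_events)      # (time, kind) pairs
    heapq.heapify(queue)
    clock, state = 0.0, {"served": 0}
    while queue:
        time, kind = heapq.heappop(queue)
        if time > horizon:
            break
        clock = time
        for follow_up in handlers[kind](state, clock):
            heapq.heappush(queue, follow_up)
    return clock, state

def arrival(state, now):              # each arrival books a departure
    return [(now + 2.0, "departure")]

def departure(state, now):
    state["served"] += 1
    return []

handlers = {"arrival": arrival, "departure": departure}
clock, state = simulate([(1.0, "arrival"), (3.0, "arrival")], handlers, 10.0)
print(clock, state)   # 5.0 {'served': 2}
```

The generated "history of the system" the abstract refers to is exactly the sequence of popped events; statistics such as `served` are accumulated as that history unfolds.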

  5. Discrete-Event Simulation

    OpenAIRE

    Prateek Sharma

    2015-01-01

Abstract Simulation can be regarded as the emulation of the behavior of a real-world system over an interval of time. The process of simulation relies upon generating the history of a system and then analyzing that history to predict outcomes and improve the working of real systems. Simulations come in many kinds, but the focus here is on one of the most important: Discrete-Event Simulation, which models the system as a discrete sequence of ev...

  6. Event-by-event simulation of quantum phenomena

    NARCIS (Netherlands)

    De Raedt, Hans; Michielsen, Kristel

    A discrete-event simulation approach is reviewed that does not require the knowledge of the solution of the wave equation of the whole system, yet reproduces the statistical distributions of wave theory by generating detection events one-by-one. The simulation approach is illustrated by applications

  7. A Numerical Approach for Hybrid Simulation of Power System Dynamics Considering Extreme Icing Events

    DEFF Research Database (Denmark)

    Chen, Lizheng; Zhang, Hengxu; Wu, Qiuwei

    2017-01-01

A numerical simulation scheme integrating icing weather events with power system dynamics is proposed to extend power system numerical simulation. A technique is developed to efficiently simulate the interaction of the slow dynamics of weather events and the fast dynamics of power systems. An extended package for PSS...

  8. Out-of-order parallel discrete event simulation for electronic system-level design

    CERN Document Server

    Chen, Weiwei

    2014-01-01

This book offers readers a set of new approaches, tools, and techniques for facing the challenges of parallelization in the design of embedded systems. It provides an advanced parallel simulation infrastructure for efficient and effective system-level model validation and development, so as to build better products in less time. Since parallel discrete event simulation (PDES) has the potential to exploit the underlying parallel computational capability of today's multi-core simulation hosts, the author begins by reviewing the parallelization of discrete event simulation, identifying ...

  9. On constructing optimistic simulation algorithms for the discrete event system specification

    International Nuclear Information System (INIS)

    Nutaro, James J.

    2008-01-01

This article describes a Time Warp simulation algorithm for discrete event models that are described in terms of the Discrete Event System Specification (DEVS). The article shows how the total state transition and total output function of a DEVS atomic model can be transformed into an event processing procedure for a logical process. A specific Time Warp algorithm is constructed around this logical process, and it is shown that the algorithm correctly simulates a DEVS coupled model that consists entirely of interacting atomic models. The simulation algorithm is presented abstractly; it is intended to provide a basis for implementing efficient and scalable parallel algorithms that correctly simulate DEVS models.
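The core Time Warp idea this article builds on, processing events optimistically while saving state so a straggler can be undone, can be sketched as follows. This is an illustrative toy in which each event simply adds a delta to a scalar state; it is not the article's DEVS construction:

```python
class LogicalProcess:
    """Toy Time Warp logical process: events are processed in arrival
    order, optimistically; an event stamped earlier than local virtual
    time (a straggler) triggers a rollback, after which the undone
    events are re-executed in correct time order."""
    def __init__(self):
        self.lvt = 0.0        # local virtual time
        self.state = 0
        self.events = []      # processed (time, delta), sorted by time
        self.snapshots = []   # (lvt, state) saved before each event

    def handle(self, time, delta):
        redo = []
        while time < self.lvt:            # straggler: undo later events
            redo.append(self.events.pop())
            self.lvt, self.state = self.snapshots.pop()
        self.snapshots.append((self.lvt, self.state))
        self.state += delta
        self.lvt = time
        self.events.append((time, delta))
        for t, d in reversed(redo):       # re-execute rolled-back events
            self.handle(t, d)
```

For example, handling events at times 1.0 and 3.0 and then a straggler at 2.0 rolls the process back to time 1.0, applies the straggler, and re-executes the time-3.0 event, leaving the same state as if the events had arrived in time order.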

  10. Discrete event simulation versus conventional system reliability analysis approaches

    DEFF Research Database (Denmark)

    Kozine, Igor

    2010-01-01

    Discrete Event Simulation (DES) environments are rapidly developing and appear to be promising tools for building reliability and risk analysis models of safety-critical systems and human operators. If properly developed, they are an alternative to the conventional human reliability analysis models...... and systems analysis methods such as fault and event trees and Bayesian networks. As one part, the paper describes briefly the author’s experience in applying DES models to the analysis of safety-critical systems in different domains. The other part of the paper is devoted to comparing conventional approaches...

  11. Nuclear facility safeguards systems modeling using discrete event simulation

    International Nuclear Information System (INIS)

    Engi, D.

    1977-01-01

    The threat of theft or dispersal of special nuclear material at a nuclear facility is treated by studying the temporal relationships between adversaries having authorized access to the facility (insiders) and safeguards system events by using a GASP IV discrete event simulation. The safeguards system events--detection, assessment, delay, communications, and neutralization--are modeled for the general insider adversary strategy which includes degradation of the safeguards system elements followed by an attempt to steal or disperse special nuclear material. The performance measure used in the analysis is the estimated probability of safeguards system success in countering the adversary based upon a predetermined set of adversary actions. An exemplary problem which includes generated results is presented for a hypothetical nuclear facility. The results illustrate representative information that could be utilized by safeguards decision-makers

  12. Synchronization Techniques in Parallel Discrete Event Simulation

    OpenAIRE

    Lindén, Jonatan

    2018-01-01

    Discrete event simulation is an important tool for evaluating system models in many fields of science and engineering. To improve the performance of large-scale discrete event simulations, several techniques to parallelize discrete event simulation have been developed. In parallel discrete event simulation, the work of a single discrete event simulation is distributed over multiple processing elements. A key challenge in parallel discrete event simulation is to ensure that causally dependent ...

  13. Synchronization Of Parallel Discrete Event Simulations

    Science.gov (United States)

    Steinman, Jeffrey S.

    1992-01-01

Adaptive, parallel, discrete-event-simulation-synchronization algorithm, Breathing Time Buckets, developed in Synchronous Parallel Environment for Emulation and Discrete Event Simulation (SPEEDES) operating system. Algorithm allows parallel simulations to process events optimistically in fluctuating time cycles that naturally adapt while simulation in progress. Combines best of optimistic and conservative synchronization strategies while avoiding major disadvantages. Well suited for modeling communication networks, for large-scale war games, for simulated flights of aircraft, for simulations of computer equipment, for mathematical modeling, for interactive engineering simulations, and for depictions of flows of information.

  14. A novel approach for modelling complex maintenance systems using discrete event simulation

    International Nuclear Information System (INIS)

    Alrabghi, Abdullah; Tiwari, Ashutosh

    2016-01-01

    Existing approaches for modelling maintenance rely on oversimplified assumptions which prevent them from reflecting the complexity found in industrial systems. In this paper, we propose a novel approach that enables the modelling of non-identical multi-unit systems without restrictive assumptions on the number of units or their maintenance characteristics. Modelling complex interactions between maintenance strategies and their effects on assets in the system is achieved by accessing event queues in Discrete Event Simulation (DES). The approach utilises the wide success DES has achieved in manufacturing by allowing integration with models that are closely related to maintenance such as production and spare parts systems. Additional advantages of using DES include rapid modelling and visual interactive simulation. The proposed approach is demonstrated in a simulation based optimisation study of a published case. The current research is one of the first to optimise maintenance strategies simultaneously with their parameters while considering production dynamics and spare parts management. The findings of this research provide insights for non-conflicting objectives in maintenance systems. In addition, the proposed approach can be used to facilitate the simulation and optimisation of industrial maintenance systems. - Highlights: • This research is one of the first to optimise maintenance strategies simultaneously. • New insights for non-conflicting objectives in maintenance systems. • The approach can be used to optimise industrial maintenance systems.

  15. Simulation of interim spent fuel storage system with discrete event model

    International Nuclear Information System (INIS)

    Yoon, Wan Ki; Song, Ki Chan; Lee, Jae Sol; Park, Hyun Soo

    1989-01-01

This paper describes dynamic simulation of the spent fuel storage system, which is described by statistical discrete event models. It visualizes the flow and queues of the system over time, assesses the operational performance of the system activities and establishes the system components and streams. It gives information on system organization and operation policy with reference to the design. The system was tested and analyzed over a number of critical parameters to establish the optimal system. Workforce schedules and resources with long processing times dominate the process. A combination of two workforce shifts a day and two cooling pits gives the optimal solution for the storage system. Discrete system simulation is a useful tool for obtaining information on the optimal design and operation of the storage system. (Author)

  16. Simulating events

    Energy Technology Data Exchange (ETDEWEB)

    Ferretti, C; Bruzzone, L [Techint Italimpianti, Milan (Italy)

    2000-06-01

The Petacalco Marine terminal on the Pacific coast, in the harbour of Lázaro Cárdenas (Michoacán), Mexico, provides coal to the thermoelectric power plant at Pdte Plutarco Elias Calles in the port area. The plant is being converted from oil to coal to generate 2100 MW of power. The article describes the layout of the terminal and the equipment employed in the unloading, coal stacking and coal handling areas and the receiving area at the power plant. The contractor, Techint Italimpianti, has developed a software system, MHATIS, for marine terminal management, which is nearly complete. The discrete event simulator with its graphic interface provides a real-time decision support system for simulating changes to the terminal operations and evaluating their impacts. The article describes how MHATIS is used. 7 figs.

  17. Program For Parallel Discrete-Event Simulation

    Science.gov (United States)

    Beckman, Brian C.; Blume, Leo R.; Geiselman, John S.; Presley, Matthew T.; Wedel, John J., Jr.; Bellenot, Steven F.; Diloreto, Michael; Hontalas, Philip J.; Reiher, Peter L.; Weiland, Frederick P.

    1991-01-01

    User does not have to add any special logic to aid in synchronization. Time Warp Operating System (TWOS) computer program is special-purpose operating system designed to support parallel discrete-event simulation. Complete implementation of Time Warp mechanism. Supports only simulations and other computations designed for virtual time. Time Warp Simulator (TWSIM) subdirectory contains sequential simulation engine interface-compatible with TWOS. TWOS and TWSIM written in, and support simulations in, C programming language.

  18. Discrete event simulation methods applied to advanced importance measures of repairable components in multistate network flow systems

    International Nuclear Information System (INIS)

    Huseby, Arne B.; Natvig, Bent

    2013-01-01

Discrete event models are frequently used in simulation studies to model and analyze pure jump processes. A discrete event model can be viewed as a system consisting of a collection of stochastic processes, where the states of the individual processes change as results of various kinds of events occurring at random points of time. We always assume that each event only affects one of the processes. Between these events the states of the processes are considered to be constant. In the present paper we use discrete event simulation in order to analyze a multistate network flow system of repairable components. In order to study how the different components contribute to the system, it is necessary to describe the often complicated interaction between component processes and processes at the system level. While analytical considerations may throw some light on this, a simulation study often allows the analyst to explore more details. By producing stable curve estimates for the development of the various processes, one gets a much better insight into how such systems develop over time. These methods are particularly useful in the study of advanced importance measures of repairable components. Such measures can be very complicated, and thus impossible to calculate analytically. By using discrete event simulations, however, this can be done in a very natural and intuitive way. In particular, significant differences between the Barlow–Proschan measure and the Natvig measure in multistate network flow systems can be explored.
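To give a flavor of such a simulation for a single repairable component, the sketch below simulates an alternating up/down process with exponential dwell times and estimates availability. This is an illustrative toy, not the paper's multistate network flow model; the parameter names and distributions are assumptions:

```python
import random

def estimate_availability(mean_up, mean_repair, horizon, seed=1):
    """Discrete-event simulation of one repairable component:
    alternate exponentially distributed up and repair periods and
    accumulate the fraction of simulated time spent up."""
    rng = random.Random(seed)
    clock, uptime, up = 0.0, 0.0, True
    while clock < horizon:
        dwell = rng.expovariate(1.0 / (mean_up if up else mean_repair))
        dwell = min(dwell, horizon - clock)   # truncate the final period
        if up:
            uptime += dwell
        clock += dwell
        up = not up                           # failure or repair event
    return uptime / horizon
```

The long-run availability of such a component is MTTF / (MTTF + MTTR); with a mean up time of 9 and a mean repair time of 1 the estimate should settle near 0.9 over a long horizon, which is the kind of stable curve estimate the paper exploits for importance measures.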

  19. Rare event simulation using Monte Carlo methods

    CERN Document Server

    Rubino, Gerardo

    2009-01-01

In a probabilistic model, a rare event is an event with a very small probability of occurrence. The forecasting of rare events is a formidable task but is important in many areas: for instance, a catastrophic failure in a transport system or in a nuclear power plant, or the failure of an information processing system in a bank, or in the communication network of a group of banks, leading to financial losses. Being able to evaluate the probability of rare events is therefore a critical issue. Monte Carlo methods, i.e. the simulation of the corresponding models, are used to analyze rare events. This book sets out to present the mathematical tools available for the efficient simulation of rare events. Importance sampling and splitting are presented along with an exposition of how to apply these tools to a variety of fields, ranging from performance and dependability evaluation of complex systems, typically in computer science or in telecommunications, to chemical reaction analysis in biology or particle transport in physics. ...

  20. Event-by-event simulation of quantum phenomena

    NARCIS (Netherlands)

    De Raedt, H.; Zhao, S.; Yuan, S.; Jin, F.; Michielsen, K.; Miyashita, S.

    We discuss recent progress in the development of simulation algorithms that do not rely on any concept of quantum theory but are nevertheless capable of reproducing the averages computed from quantum theory through an event-by-event simulation. The simulation approach is illustrated by applications

  1. Self-Adaptive Event-Driven Simulation of Multi-Scale Plasma Systems

    Science.gov (United States)

    Omelchenko, Yuri; Karimabadi, Homayoun

    2005-10-01

Multi-scale plasmas pose a formidable computational challenge. Explicit time-stepping models suffer from the global CFL restriction. Efficient application of adaptive mesh refinement (AMR) to systems with irregular dynamics (e.g. turbulence, diffusion-convection-reaction, particle acceleration, etc.) may be problematic. To address these issues, we developed an alternative approach to time stepping: self-adaptive discrete-event simulation (DES). DES has its origins in operations research, war games and telecommunications. We combine finite-difference and particle-in-cell techniques with this methodology under two caveats: (1) a local time increment, dt, for a discrete quantity f can be expressed in terms of a physically meaningful quantum value, df; (2) f is considered to be modified only when its change exceeds df. Event-driven time integration is self-adaptive in that it makes use of causality rules rather than parametric time dependencies. This technique enables asynchronous, flux-conservative update of the solution in accordance with local temporal scales, removes the curse of the global CFL condition, eliminates unnecessary computation in inactive spatial regions and results in robust, fast and parallelizable codes. It can be naturally combined with various mesh refinement techniques. We discuss applications of this novel technology to diffusion-convection-reaction systems and hybrid simulations of magnetosonic shocks.
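Caveats (1) and (2) amount to quantized, event-driven integration: the next update of f is scheduled for the moment its change would reach the quantum df. A schematic Python sketch for exponential decay df/dt = -rate * f, purely illustrative of the idea rather than of the authors' plasma codes:

```python
def quantized_decay(f0, rate, quantum, t_end):
    """Event-driven integration of df/dt = -rate * f: the state f is
    updated only when it would change by one quantum, so the local
    time increment dt = quantum / |df/dt| adapts to the local rate of
    change instead of obeying a global CFL-like time step."""
    t, f, updates = 0.0, f0, 0
    while abs(f) * rate > 0.0:
        dt = quantum / (abs(f) * rate)   # time until |change| == quantum
        if t + dt > t_end:
            break                        # next event lies beyond the horizon
        t += dt
        f -= quantum if f > 0 else -quantum
        updates += 1
    return t, f, updates
```

As f decays, each successive event is spaced further apart, so a nearly inactive region costs almost no updates; this is the "eliminates unnecessary computation in inactive spatial regions" property in one dimension.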

  2. Parallel discrete event simulation: A shared memory approach

    Science.gov (United States)

    Reed, Daniel A.; Malony, Allen D.; Mccredie, Bradley D.

    1987-01-01

With traditional event list techniques, evaluating a detailed discrete event simulation model can often require hours or even days of computation time. Parallel simulation mimics the interacting servers and queues of a real system by assigning each simulated entity to a processor. By eliminating the event list and maintaining only sufficient synchronization to ensure causality, parallel simulation can potentially provide speedups that are linear in the number of processors. A set of shared memory experiments is presented using the Chandy-Misra distributed simulation algorithm to simulate networks of queues. Parameters include queueing network topology and routing probabilities, number of processors, and assignment of network nodes to processors. These experiments show that Chandy-Misra distributed simulation is a questionable alternative to sequential simulation of most queueing network models.
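The Chandy-Misra algorithm used in these experiments is conservative: a logical process may only consume an event when every input channel guarantees that nothing earlier can still arrive. A stripped-down, single-process illustration (channel contents are assumed; the null messages that resolve the resulting deadlocks are omitted):

```python
def conservative_consume(channels):
    """Chandy-Misra style conservative processing for one logical
    process: while every input channel is non-empty, the event with
    the smallest head time stamp is provably safe to process. The
    process blocks as soon as any channel is empty, because that
    channel might still deliver an earlier event."""
    queues = [list(c) for c in channels]  # each sorted by time stamp
    processed = []
    while all(queues):
        i = min(range(len(queues)), key=lambda k: queues[k][0])
        processed.append(queues[i].pop(0))
    return processed

print(conservative_consume([[1, 4, 7], [2, 3]]))  # [1, 2, 3]
```

Note that events 4 and 7 remain unprocessed: the process blocks once the second channel empties. This blocking is exactly the deadlock that Chandy-Misra null messages exist to break, and it is one source of the disappointing speedups the experiments report.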

  3. Discrete-event simulation for the design and evaluation of physical protection systems

    International Nuclear Information System (INIS)

    Jordan, S.E.; Snell, M.K.; Madsen, M.M.; Smith, J.S.; Peters, B.A.

    1998-01-01

    This paper explores the use of discrete-event simulation for the design and control of physical protection systems for fixed-site facilities housing items of significant value. It begins by discussing several modeling and simulation activities currently performed in designing and analyzing these protection systems and then discusses capabilities that design/analysis tools should have. The remainder of the article then discusses in detail how some of these new capabilities have been implemented in software to achieve a prototype design and analysis tool. The simulation software technology provides a communications mechanism between a running simulation and one or more external programs. In the prototype security analysis tool, these capabilities are used to facilitate human-in-the-loop interaction and to support a real-time connection to a virtual reality (VR) model of the facility being analyzed. This simulation tool can be used for both training (in real-time mode) and facility analysis and design (in fast mode)

  4. A hybrid load flow and event driven simulation approach to multi-state system reliability evaluation

    International Nuclear Information System (INIS)

    George-Williams, Hindolo; Patelli, Edoardo

    2016-01-01

    Structural complexity of systems, coupled with their multi-state characteristics, renders their reliability and availability evaluation difficult. Notwithstanding the emergence of various techniques dedicated to complex multi-state system analysis, simulation remains the only approach applicable to realistic systems. However, most simulation algorithms are either system specific or limited to simple systems since they require enumerating all possible system states, defining the cut-sets associated with each state and monitoring their occurrence. In addition to being extremely tedious for large complex systems, state enumeration and cut-set definition require a detailed understanding of the system's failure mechanism. In this paper, a simple and generally applicable simulation approach, enhanced for multi-state systems of any topology is presented. Here, each component is defined as a Semi-Markov stochastic process and via discrete-event simulation, the operation of the system is mimicked. The principles of flow conservation are invoked to determine flow across the system for every performance level change of its components using the interior-point algorithm. This eliminates the need for cut-set definition and overcomes the limitations of existing techniques. The methodology can also be exploited to account for effects of transmission efficiency and loading restrictions of components on system reliability and performance. The principles and algorithms developed are applied to two numerical examples to demonstrate their applicability. - Highlights: • A discrete event simulation model based on load flow principles. • Model does not require system path or cut sets. • Applicable to binary and multi-state systems of any topology. • Supports multiple output systems with competing demand. • Model is intuitive and generally applicable.

  5. Discrete-event system simulation on small and medium enterprises productivity improvement

    Science.gov (United States)

    Sulistio, J.; Hidayah, N. A.

    2017-12-01

Small and medium industries in Indonesia are currently developing. The problem faced by SMEs is the difficulty of meeting the growing demand coming into the company. Therefore, SMEs need an analysis and evaluation of their production process in order to meet all orders. The purpose of this research is to increase the productivity of the SME production floor by applying discrete-event system simulation. This method is preferred because it can solve complex problems due to the dynamic and stochastic nature of the system. To increase the credibility of the simulation, the model was validated by comparing the averages of two trials, the variances of two trials and a chi-square test. Afterwards, the Bonferroni method was applied to develop several alternatives. The article concludes that the productivity of the SME production floor can be increased by up to 50% by adding capacity to the dyeing and drying machines.

  6. A PC-based discrete event simulation model of the civilian radioactive waste management system

    International Nuclear Information System (INIS)

    Airth, G.L.; Joy, D.S.; Nehls, J.W.

    1992-01-01

    This paper discusses a System Simulation Model which has been developed for the Department of Energy to simulate the movement of individual waste packages (spent fuel assemblies and fuel containers) through the Civilian Radioactive Waste Management System (CRWMS). A discrete event simulation language, GPSS/PC, which runs on an IBM/PC and operates under DOS 5.0, mathematically represents the movement and processing of radioactive waste packages through the CRWMS and the interaction of these packages with the equipment in the various facilities. The major features of the System Simulation Model are: the ability to reference characteristics of the different types of radioactive waste (age, burnup, etc.) in order to make operational and/or system design decisions, the ability to place stochastic variations on operational parameters such as processing time and equipment outages, and the ability to include a rigorous simulation of the transportation system. Output from the model includes the numbers, types, and characteristics of waste packages at selected points in the CRWMS and the extent to which various resources will be utilized in order to transport, process, and emplace the waste

  7. Simulating the influence of life trajectory events on transport mode behavior in an agent-based system

    NARCIS (Netherlands)

    Verhoeven, M.; Arentze, T.A.; Timmermans, H.J.P.; Waerden, van der P.J.H.J.

    2007-01-01

This paper describes the results of a study on the impact of lifecycle or life trajectory events on activity-travel decisions. This lifecycle trajectory of individual agents can be easily incorporated in an agent-based simulation system. This paper focuses on two lifecycle events, change in

  8. CONFIG - Adapting qualitative modeling and discrete event simulation for design of fault management systems

    Science.gov (United States)

    Malin, Jane T.; Basham, Bryan D.

    1989-01-01

    CONFIG is a modeling and simulation tool prototype for analyzing the normal and faulty qualitative behaviors of engineered systems. Qualitative modeling and discrete-event simulation have been adapted and integrated, to support early development, during system design, of software and procedures for management of failures, especially in diagnostic expert systems. Qualitative component models are defined in terms of normal and faulty modes and processes, which are defined by invocation statements and effect statements with time delays. System models are constructed graphically by using instances of components and relations from object-oriented hierarchical model libraries. Extension and reuse of CONFIG models and analysis capabilities in hybrid rule- and model-based expert fault-management support systems are discussed.

  9. Modeling energy market dynamics using discrete event system simulation

    International Nuclear Information System (INIS)

    Gutierrez-Alcaraz, G.; Sheble, G.B.

    2009-01-01

    This paper proposes the use of Discrete Event System Simulation to study the interactions among fuel and electricity markets and consumers, and the decision-making processes of fuel companies (FUELCOs), generation companies (GENCOs), and consumers in a simple artificial energy market. In reality, since markets can reach a stable equilibrium or fail, it is important to observe how they behave in a dynamic framework. We consider a Nash-Cournot model in which marketers are depicted as Nash-Cournot players that determine supply to meet end-use consumption. Detailed engineering considerations such as transportation network flows are omitted, because the focus is upon the selection and use of appropriate market models to provide answers to policy questions. (author)

  10. LCG MCDB - a Knowledgebase of Monte Carlo Simulated Events

    CERN Document Server

    Belov, S; Galkin, E; Gusev, A; Pokorski, Witold; Sherstnev, A V

    2008-01-01

In this paper we report on the LCG Monte Carlo Data Base (MCDB) and the software which has been developed to operate it. The main purpose of the LCG MCDB project is to provide a storage and documentation system for sophisticated event samples simulated for the LHC collaborations by experts. In many cases, modern Monte Carlo simulation of physical processes requires expert knowledge of Monte Carlo generators or a significant amount of CPU time to produce the events. MCDB is a knowledgebase intended mainly to accumulate simulated events of this type. The main motivation behind LCG MCDB is to make sophisticated MC event samples available to various physics groups. All the data in MCDB is accessible in several convenient ways. LCG MCDB is being developed within the CERN LCG Application Area Simulation project.

  11. Integrated hydraulic and organophosphate pesticide injection simulations for enhancing event detection in water distribution systems.

    Science.gov (United States)

    Schwartz, Rafi; Lahav, Ori; Ostfeld, Avi

    2014-10-15

As a complementary step towards solving the general event detection problem of water distribution systems, injection of the organophosphate pesticides chlorpyrifos (CP) and parathion (PA) was simulated at various locations within example networks, and hydraulic parameters were calculated over a 24-h duration. The uniqueness of this study is that the chemical reactions and byproducts of the contaminants' oxidation were also simulated, as well as other indicative water quality parameters such as alkalinity, acidity, pH and the total concentration of free chlorine species. The information on the change in water quality parameters induced by the contaminant injection may facilitate on-line detection of an actual event involving this specific substance and pave the way to the development of a generic methodology for detecting events involving the introduction of pesticides into water distribution systems. Simulation of the contaminant injection was performed at several nodes within two different networks. For each injection, concentrations of the relevant contaminants' mother and daughter species, free chlorine species and water quality parameters were simulated at nodes downstream of the injection location. The results indicate that injection of these substances can be detected under certain conditions by a very rapid drop in Cl2, functioning as the indicative parameter, as well as a drop in alkalinity concentration and a small decrease in pH, both functioning as supporting parameters, whose usage may reduce false positive alarms. Copyright © 2014 Elsevier Ltd. All rights reserved.

  12. A PC-based discrete event simulation model of the Civilian Radioactive Waste Management System

    International Nuclear Information System (INIS)

    Airth, G.L.; Joy, D.S.; Nehls, J.W.

    1991-01-01

    A System Simulation Model has been developed for the Department of Energy to simulate the movement of individual waste packages (spent fuel assemblies and fuel containers) through the Civilian Radioactive Waste Management System (CRWMS). A discrete event simulation language, GPSS/PC, which runs on an IBM PC under DOS 5.0, mathematically represents the movement and processing of radioactive waste packages through the CRWMS and the interaction of these packages with the equipment in the various facilities. The model can be used to quantify the impact of different operating schedules, operational rules, system configurations, and equipment reliability and availability considerations on the performance of the processes comprising the CRWMS, and to show how these factors combine to determine overall system performance for the purpose of making system design decisions. The major features of the System Simulation Model are: the ability to reference characteristics of the different types of radioactive waste (age, burnup, etc.) in order to make operational and/or system design decisions; the ability to place stochastic variations on operational parameters such as processing time and equipment outages; and the ability to include a rigorous simulation of the transportation system. Output from the model includes the numbers, types, and characteristics of waste packages at selected points in the CRWMS, and the extent to which various resources will be utilized to transport, process, and emplace the waste.

  13. Integrating physically based simulators with Event Detection Systems: Multi-site detection approach.

    Science.gov (United States)

    Housh, Mashor; Ohar, Ziv

    2017-03-01

    The Fault Detection (FD) problem in control theory concerns monitoring a system to identify when a fault has occurred. Two approaches can be distinguished: signal-processing-based FD and model-based FD. The former develops algorithms to infer faults directly from sensors' readings, while the latter uses a simulation model of the real system to analyze the discrepancy between sensors' readings and the values expected from the simulation model. Most contamination Event Detection Systems (EDSs) for water distribution systems have followed signal-processing-based FD, which relies on analyzing the signals from monitoring stations independently of each other, rather than evaluating all stations simultaneously within an integrated network. In this study, we show that a model-based EDS, which utilizes physically based water quality and hydraulic simulation models, can outperform a signal-processing-based EDS. We also show that the model-based EDS can facilitate the development of a Multi-Site EDS (MSEDS), which analyzes the data from all the monitoring stations simultaneously within an integrated network. The advantage of the joint analysis in the MSEDS is expressed in increased detection accuracy (more true positive alarms and fewer false alarms) and shorter detection time. Copyright © 2016 Elsevier Ltd. All rights reserved.
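    The distinction the abstract draws — per-station thresholds versus a joint network-wide statistic computed against model predictions — can be sketched with toy residuals. The function names, thresholds, and chlorine values below are illustrative assumptions:

    ```python
    def single_site_alarms(readings, predicted, thresh):
        """Signal-processing-style check: each station is judged on its
        own residual against a per-site threshold."""
        return [abs(r - p) > thresh for r, p in zip(readings, predicted)]

    def multi_site_alarm(readings, predicted, thresh):
        """Model-based joint check: a sum-of-squared-residuals statistic
        over all stations, so several small, consistent deviations can
        together trigger one network-level alarm."""
        score = sum((r - p) ** 2 for r, p in zip(readings, predicted))
        return score > thresh

    # Three stations: every per-site deviation is below its threshold,
    # but the joint statistic over the network crosses its threshold.
    obs  = [0.78, 0.76, 0.77]   # measured free chlorine (mg/L)
    pred = [0.90, 0.90, 0.90]   # physically based model prediction
    site_alarms = single_site_alarms(obs, pred, thresh=0.2)
    joint = multi_site_alarm(obs, pred, thresh=0.03)
    ```

    This is the essence of the claimed advantage: the joint statistic pools weak evidence across stations that independent per-site tests would each dismiss.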

  14. Event-based Simulation Model for Quantum Optics Experiments

    NARCIS (Netherlands)

    De Raedt, H.; Michielsen, K.; Jaeger, G; Khrennikov, A; Schlosshauer, M; Weihs, G

    2011-01-01

    We present a corpuscular simulation model of optical phenomena that does not require the knowledge of the solution of a wave equation of the whole system and reproduces the results of Maxwell's theory by generating detection events one-by-one. The event-based corpuscular model gives a unified

  15. DECISION WITH ARTIFICIAL NEURAL NETWORKS IN DISCRETE EVENT SIMULATION MODELS ON A TRAFFIC SYSTEM

    Directory of Open Access Journals (Sweden)

    Marília Gonçalves Dutra da Silva

    2016-04-01

    This work demonstrates a mechanism, applicable in the development of discrete-event simulation models, that performs decision operations through the implementation of an artificial neural network. Actions involving complex operations performed by a human agent in a process, for example, are often modeled in simplified form with the usual mechanisms of simulation software. A traffic system controlled by a traffic officer, with a flow of vehicles and pedestrians, was therefore chosen to demonstrate the proposed solution. Through a module built in the simulation software itself, it was possible to connect the intelligent decision algorithm to the simulation model. The results showed that the elaborated model responded as expected when subjected to actions requiring different decisions to maintain the operation of the system under changes in the flow of people and vehicles.
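    The wiring the abstract describes — a decision module consulted from inside the simulation's event loop — can be sketched minimally. Here a tiny hand-weighted perceptron stands in for the trained network, and the traffic dynamics, weights, and clearing rates are invented for illustration:

    ```python
    def officer_decision(vehicle_queue, pedestrian_queue,
                         w_v=1.0, w_p=1.5, bias=-2.0):
        """Tiny perceptron standing in for the trained network: returns
        True to give right of way to pedestrians, False for vehicles."""
        activation = w_p * pedestrian_queue - w_v * vehicle_queue + bias
        return activation > 0

    def simulate(arrivals):
        """arrivals: list of (n_vehicles, n_pedestrians) per time step.
        At each step the decision module picks which queue is served;
        the served queue clears up to 3 entities."""
        veh, ped, log = 0, 0, []
        for nv, np_ in arrivals:
            veh += nv
            ped += np_
            if officer_decision(veh, ped):
                ped = max(0, ped - 3)
                log.append("ped")
            else:
                veh = max(0, veh - 3)
                log.append("veh")
        return log

    log = simulate([(2, 0), (0, 4), (3, 1)])
    ```

    The point of the pattern is that the decision function is replaceable: the same hook that calls this perceptron could call a trained network without changing the event loop.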

  16. Discrete event simulation of Maglev transport considering traffic waves

    Directory of Open Access Journals (Sweden)

    Moo Hyun Cha

    2014-10-01

    A magnetically levitated vehicle (Maglev) system is under commercialization as a new transportation system in Korea. The Maglev is operated by an unmanned automatic control system, so the train operation plan should be carefully established and validated in advance. In general, statistically predicted traffic data are used when making a train operation plan. However, traffic waves often occur in real train service, and demand-driven simulation technology is required to review a train operation plan and service quality in their presence. We propose a method and model to simulate Maglev operation under continuous demand changes. For this purpose, we employed a discrete event model that is suitable for modeling the behavior of railway passenger transportation. We modeled the system hierarchically using the discrete event system specification (DEVS) formalism. In addition, through implementation and an experiment in the DEVSim++ simulation environment, we tested the feasibility of the proposed model. Our experimental results also verified that this demand-driven simulation technology can be used for a priori review of train operation plans and strategies.
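    A DEVS atomic model is defined by its external and internal transition functions, a time-advance function, and an output function. The skeleton below is a simplified illustration of that structure (it is not DEVSim++ code, and the station behavior, capacity, and dwell time are invented):

    ```python
    class StationDEVS:
        """Minimal DEVS-style atomic model of a station platform:
        passengers and trains arrive via external events; when a train
        is dwelling, the internal transition boards up to `capacity`
        waiting passengers and sends the train off."""
        def __init__(self, capacity=4, dwell=2.0):
            self.capacity = capacity
            self.dwell = dwell
            self.waiting = 0
            self.train_here = False

        def time_advance(self):
            # ta(): time until the next internal event
            return self.dwell if self.train_here else float("inf")

        def ext_transition(self, event):
            # delta_ext: react to passenger and train arrivals
            if event == "passenger":
                self.waiting += 1
            elif event == "train":
                self.train_here = True

        def int_transition(self):
            # delta_int + output: train departs with boarded passengers
            boarded = min(self.waiting, self.capacity)
            self.waiting -= boarded
            self.train_here = False
            return boarded

    s = StationDEVS()
    for _ in range(6):
        s.ext_transition("passenger")
    s.ext_transition("train")
    boarded = s.int_transition()   # 4 board, 2 keep waiting
    ```

    In a full DEVS coupled model, a coordinator would use `time_advance()` to schedule each atomic model's next internal event; here the passive state is signalled by an infinite time advance.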

  17. Simulation of thermal-neutron-induced single-event upset using particle and heavy-ion transport code system

    International Nuclear Information System (INIS)

    Arita, Yutaka; Kihara, Yuji; Mitsuhasi, Junichi; Niita, Koji; Takai, Mikio; Ogawa, Izumi; Kishimoto, Tadafumi; Yoshihara, Tsutomu

    2007-01-01

    The simulation of thermal-neutron-induced single-event upsets (SEUs) was performed on a 0.4-μm-design-rule 4 Mbit static random access memory (SRAM) using the particle and heavy-ion transport code system (PHITS). The SEU rates obtained in the simulation were in very good agreement with experimental results. PHITS is a useful tool for simulating SEUs in semiconductor devices. To further improve the accuracy of the simulation, additional methods for tallying the energy deposition are required in PHITS. (author)

  18. Parallel Stochastic discrete event simulation of calcium dynamics in neuron.

    Science.gov (United States)

    Ishlam Patoary, Mohammad Nazrul; Tropper, Carl; McDougal, Robert A; Zhongwei, Lin; Lytton, William W

    2017-09-26

    The intracellular calcium signaling pathways of a neuron depend on both biochemical reactions and diffusion. Some quasi-isolated compartments (e.g., spines) are so small, and calcium concentrations so low, that one extra molecule diffusing in by chance can make a nontrivial percentage difference in concentration. These rare events can affect the dynamics discretely in a way that cannot be captured by a deterministic simulation. Stochastic models of such a system provide a more detailed understanding than existing deterministic models because they capture behavior at the molecular level. Our research focuses on the development of a high-performance parallel discrete event simulation environment, Neuron Time Warp (NTW), intended for the parallel simulation of stochastic reaction-diffusion systems such as intracellular calcium signaling. NTW is integrated with NEURON, a simulator widely used within the neuroscience community. We simulate two models: a calcium buffer model and a calcium wave model. The calcium buffer model is employed to verify the correctness and performance of NTW by comparison with a serial deterministic simulation in NEURON. We also derived a discrete event calcium wave model from a deterministic model using the stochastic IP3R structure.
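    The stochastic, discrete-event treatment of a reaction system that the abstract contrasts with deterministic ODEs is commonly realized with Gillespie's stochastic simulation algorithm. The sketch below applies it to a calcium-buffer pair, Ca + B ⇌ CaB; the rate constants and counts are illustrative, not the paper's model:

    ```python
    import random

    def gillespie_buffer(ca, b, cab, k_on=0.01, k_off=0.1,
                         t_end=10.0, seed=1):
        """Gillespie SSA for Ca + B -> CaB (propensity k_on*Ca*B) and
        CaB -> Ca + B (propensity k_off*CaB). Each firing is a discrete
        event; molecule counts stay integers, so single-molecule
        fluctuations are represented exactly."""
        rng = random.Random(seed)
        t = 0.0
        while True:
            a_bind = k_on * ca * b
            a_unbind = k_off * cab
            a_total = a_bind + a_unbind
            if a_total == 0:
                break
            t += rng.expovariate(a_total)        # time to next event
            if t > t_end:
                break
            if rng.random() < a_bind / a_total:  # pick which event fires
                ca, b, cab = ca - 1, b - 1, cab + 1
            else:
                ca, b, cab = ca + 1, b + 1, cab - 1
        return ca, b, cab

    ca, b, cab = gillespie_buffer(ca=50, b=100, cab=0)
    ```

    A parallel engine such as NTW distributes compartments like this across processing elements and uses Time Warp rollback to keep their event timestamps causally consistent.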

  19. Power system restoration: planning and simulation

    Energy Technology Data Exchange (ETDEWEB)

    Hazarika, D. [Assam Engineering Coll., Dept. of Electrical Engineering, Assam (India); Sinha, A.K. [Indian Inst. of Technology, Dept. of Electrical Engineering, Kharagpur (India)

    2003-03-01

    This paper describes a restoration guidance simulator that allows a power system operator/planner to simulate and plan restoration events in an interactive mode. The simulator provides a list of restoration events prioritized according to restoration rules and a list of priority loads. It also provides, in an interactive mode, the list of events that become possible as the system grows during restoration. Further, each selected event is validated through a load flow and other analytical tools to show the consequences of implementing the planned event. (Author)

  20. Reproductive Health Services Discrete-Event Simulation

    OpenAIRE

    Lee, Sungjoo; Giles, Denise F.; Goldsman, David; Cook, Douglas A.; Mishra, Ninad; McCarthy, Brian

    2006-01-01

    Low-resource healthcare environments are often characterized by patient flow patterns with varying patient risks, extensive patient waiting times, uneven workload distributions, and inefficient service delivery. Models from industrial and systems engineering allow for a greater examination of processes by applying discrete-event computer simulation techniques to evaluate and optimize hospital performance.

  1. Discrete event simulation as an ergonomic tool to predict workload exposures during systems design

    NARCIS (Netherlands)

    Perez, J.; Looze, M.P. de; Bosch, T.; Neumann, W.P.

    2014-01-01

    This methodological paper presents a novel approach to predicting an operator's mechanical exposure and fatigue accumulation in discrete event simulations. A biomechanical model of work-cycle loading is combined with a discrete event simulation model which provides work cycle patterns over the shift

  2. Event-by-event simulation of quantum cryptography protocols

    NARCIS (Netherlands)

    Zhao, S.; Raedt, H. De

    We present a new approach to simulate quantum cryptography protocols using event-based processes. The method is validated by simulating the BB84 protocol and the Ekert protocol, both without and with the presence of an eavesdropper.

  3. Discrete event simulation tool for analysis of qualitative models of continuous processing systems

    Science.gov (United States)

    Malin, Jane T. (Inventor); Basham, Bryan D. (Inventor); Harris, Richard A. (Inventor)

    1990-01-01

    An artificial intelligence design and qualitative modeling tool is disclosed for creating computer models and simulating continuous activities, functions, and/or behavior using discrete event techniques. Conveniently, the tool is organized in four modules: a library design module, a model construction module, a simulation module, and an experimentation and analysis module. The library design module supports the building of library knowledge, including component classes and elements pertinent to a particular domain of continuous activities, functions, and behavior being modeled. The continuous behavior is defined discretely with respect to invocation statements, effect statements, and time delays. The functionality of the components is defined in terms of variable cluster instances, independent processes, and modes, further defined in terms of mode transition processes and mode-dependent processes. Model construction utilizes the hierarchy of libraries and connects them with appropriate relations. The simulation executes a specialized initialization routine and executes events, in a manner that includes selective inheritance of characteristics, through a time and event schema until the event queue in the simulator is emptied. The experimentation and analysis module supports analysis through the generation of appropriate log files and graphics and includes the ability to compare log files.

  4. Teleradiology system analysis using a discrete event-driven block-oriented network simulator

    Science.gov (United States)

    Stewart, Brent K.; Dwyer, Samuel J., III

    1992-07-01

    Performance evaluation and trade-off analysis are the central issues in the design of communication networks. Simulation plays an important role in computer-aided design and analysis of communication networks and related systems, allowing testing of numerous architectural configurations and fault scenarios. We are using the Block Oriented Network Simulator (BONeS, Comdisco, Foster City, CA) software package to perform discrete, event-driven Monte Carlo simulations for capacity planning, trade-off analysis, and evaluation of alternative architectures for a high-speed, high-resolution teleradiology project. A queuing network model of the teleradiology system has been devised, simulations executed, and results analyzed. The wide area network link uses a switched, dial-up N X 56 kbps inverse multiplexer where the number of digital voice-grade lines (N) can vary from one (DS-0) through 24 (DS-1). The stated goal of such a system is 200 films (2048 X 2048 X 12-bit) transferred between a remote and a local site in an eight-hour period with a mean delay time of less than five minutes. It is found that: (1) the DS-1 service limit is around 100 films per eight-hour period with a mean delay time of 412 +/- 39 seconds, short of the goal stipulated above; (2) compressed video teleconferencing can be run simultaneously with image data transfer over the DS-1 wide area network link without impacting the performance of the described teleradiology system; (3) there is little sense in upgrading to a higher-bandwidth WAN link like DS-2 or DS-3 for the current system; and (4) the goal of transmitting 200 films in an eight-hour period with a mean delay time of less than five minutes can be achieved simply if the laser printer interface is upgraded from the current DR-11W interface to a much faster SCSI interface.
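    The raw serialization numbers behind this study are easy to reproduce. The back-of-envelope sketch below ignores protocol overhead, compression, and queueing, so it gives only a lower bound on delay; the gap between its raw per-film time (about 37 s on DS-1) and the simulated 412 s mean delay is consistent with finding (4), that bottlenecks other than WAN bandwidth (such as the printer interface) dominate:

    ```python
    # Raw serialization time of one uncompressed 2048 x 2048 x 12-bit
    # film over an N x 56 kbps link (no overhead, compression, queueing).
    bits_per_film = 2048 * 2048 * 12      # = 50,331,648 bits

    def film_time_s(n_lines):
        link_bps = n_lines * 56_000       # N digital voice-grade lines
        return bits_per_film / link_bps

    t_ds0 = film_time_s(1)                # single DS-0 line: ~15 min
    t_ds1 = film_time_s(24)               # full DS-1: ~37 s
    films_per_8h_raw = int(8 * 3600 / t_ds1)   # upper bound, link only
    ```

    That the link-only bound is far above the simulated 100-film service limit is exactly why the queueing-network simulation, not bandwidth arithmetic, was needed to size the system.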

  5. Studies on switch-based event building systems in RD13

    International Nuclear Information System (INIS)

    Bee, C.P.; Eshghi, S.; Jones, R.

    1996-01-01

    One of the goals of the RD13 project at CERN is to investigate the feasibility of a parallel event building system for detectors at the LHC. Studies were performed by building a prototype based on the HiPPI standard and by modeling this prototype and extended architectures with MODSIM II. The prototype used commercially available VME-HiPPI interfaces and a HiPPI switch together with modular software. The setup was tested successfully as a parallel event building system in different configurations and with different data flow control schemes. The simulation program was used with realistic parameters from the prototype measurements to simulate large-scale event building systems, including a realistic setup of the ATLAS event building system. The influence of different parameters and the scaling behavior were investigated. The influence of realistic event size distributions was checked with data from off-line simulations. Different control schemes for destination assignment and traffic shaping were investigated, as well as a two-stage event building system. (author)

  6. Discrete Event Simulation Computers can be used to simulate the ...

    Indian Academy of Sciences (India)

    IAS Admin

    people who use computers every moment of their waking lives, others even ... How is discrete event simulation different from other kinds of simulation? ... time, energy consumption .... Schedule the CustomerDeparture event for this customer.
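    The fragments above mention scheduling a CustomerDeparture event, which is the core mechanism of discrete event simulation: a time-ordered future event list from which the clock jumps event to event. A minimal single-server sketch (the queue discipline and service time are illustrative assumptions):

    ```python
    import heapq

    def simulate_queue(arrival_times, service_time):
        """Single-server queue driven by a future-event list (a heap).
        Each Arrival event schedules the CustomerDeparture event for
        that customer; the clock advances only at event times."""
        fel = [(t, "Arrival", i) for i, t in enumerate(arrival_times)]
        heapq.heapify(fel)
        server_free_at = 0.0
        departures = {}
        while fel:
            t, kind, cust = heapq.heappop(fel)
            if kind == "Arrival":
                start = max(t, server_free_at)   # wait if server is busy
                server_free_at = start + service_time
                # schedule the CustomerDeparture event for this customer
                heapq.heappush(fel, (server_free_at, "Departure", cust))
            else:
                departures[cust] = t
        return departures

    dep = simulate_queue([0.0, 1.0, 2.0], service_time=2.0)
    ```

    Because time advances only at events, this runs in time proportional to the number of events, not the simulated duration — the property that distinguishes discrete event simulation from time-stepped simulation.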

  7. Event-driven simulation of neural population synchronization facilitated by electrical coupling.

    Science.gov (United States)

    Carrillo, Richard R; Ros, Eduardo; Barbour, Boris; Boucheny, Christian; Coenen, Olivier

    2007-02-01

    Most neural communication and processing tasks are driven by spikes, which has enabled the application of event-driven simulation schemes. However, the simulation of spiking neural networks based on complex models that cannot be reduced to analytical expressions (and thus require numerical calculation) is very time consuming. Here we briefly describe an event-driven simulation scheme that uses pre-calculated table-based neuron characterizations to avoid numerical calculations during a network simulation, allowing the simulation of large-scale neural systems. More concretely, we explain how electrical coupling can be simulated efficiently within this computation scheme, reproducing synchronization processes observed in detailed simulations of neural populations.
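    The table-based trick the abstract relies on can be shown with a deliberately simple leaky-integrate neuron: the membrane decay between events is pre-tabulated, so during the simulation each incoming spike only triggers a table lookup rather than an ODE solve. The time constant, grid, and update rule are illustrative assumptions:

    ```python
    import math

    # Pre-calculated characterization: decay factor exp(-dt/tau) tabulated
    # over a grid of inter-event intervals, computed once before the run.
    TAU = 20.0        # membrane time constant (ms), illustrative
    DT_GRID = 0.1     # table resolution (ms)
    DECAY_TABLE = [math.exp(-(i * DT_GRID) / TAU) for i in range(10001)]

    def decay(v, dt):
        """Look up the decay factor for interval dt (clipped to table)."""
        idx = min(int(round(dt / DT_GRID)), len(DECAY_TABLE) - 1)
        return v * DECAY_TABLE[idx]

    def run_spikes(spike_times, weight=1.0):
        """Event-driven update: state changes only when a spike arrives;
        between spikes the trajectory is never explicitly integrated."""
        v, t_last = 0.0, 0.0
        for t in spike_times:
            v = decay(v, t - t_last) + weight   # decay, then add the PSP
            t_last = t
        return v

    v = run_spikes([0.0, 20.0])   # two spikes one time constant apart
    ```

    With richer neuron models the tables characterize multi-dimensional state updates, but the principle is the same: shift the numerical work to a one-time pre-calculation so the event loop stays cheap.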

  8. Parallel discrete event simulation using shared memory

    Science.gov (United States)

    Reed, Daniel A.; Malony, Allen D.; Mccredie, Bradley D.

    1988-01-01

    With traditional event-list techniques, evaluating a detailed discrete-event simulation model can often require hours or even days of computation time. By eliminating the event list and maintaining only sufficient synchronization to ensure causality, parallel simulation can potentially provide speedups that are linear in the number of processors. A set of shared-memory experiments using the Chandy-Misra distributed-simulation algorithm to simulate networks of queues is presented. Parameters of the study include queueing network topology and routing probabilities, number of processors, and assignment of network nodes to processors. These experiments show that Chandy-Misra distributed simulation is a questionable alternative to sequential simulation of most queueing network models.

  9. Using a discrete-event simulation to balance ambulance availability and demand in static deployment systems.

    Science.gov (United States)

    Wu, Ching-Han; Hwang, Kevin P

    2009-12-01

    To improve ambulance response time, matching ambulance availability with the emergency demand is crucial. To maintain the standard of 90% of response times within 9 minutes, the authors introduce a discrete-event simulation method to estimate the threshold for expanding the ambulance fleet when demand increases and to find the optimal dispatching strategies when provisional events create temporary decreases in ambulance availability. The simulation model was developed with information from the literature. Although the development was theoretical, the model was validated on the emergency medical services (EMS) system of Tainan City. The data are divided: one part is for model development, and the other for validation. For increasing demand, the effect was modeled on response time when call arrival rates increased. For temporary availability decreases, the authors simulated all possible alternatives of ambulance deployment in accordance with the number of out-of-routine-duty ambulances and the durations of three types of mass gatherings: marathon races (06:00-10:00 hr), rock concerts (18:00-22:00 hr), and New Year's Eve parties (20:00-01:00 hr). Statistical analysis confirmed that the model reasonably represented the actual Tainan EMS system. The response-time standard could not be reached when the incremental ratio of call arrivals exceeded 56%, which is the threshold for the Tainan EMS system to expand its ambulance fleet. When provisional events created temporary availability decreases, the Tainan EMS system could spare at most two ambulances from the standard configuration, except between 20:00 and 01:00, when it could spare three. The model also demonstrated that the current Tainan EMS has two excess ambulances that could be dropped. The authors suggest dispatching strategies to minimize the response times in routine daily emergencies. Strategies of capacity management based on this model improved response times. 
The more ambulances that are out of routine duty

  10. Benchmarking Simulation of Long Term Station Blackout Events

    International Nuclear Information System (INIS)

    Kim, Sung Kyum; Lee, John C.; Fynan, Douglas A.; Lee, John C.

    2013-01-01

    The importance of passive cooling systems has come to the fore since recent station blackout (SBO) events. The turbine-driven auxiliary feedwater (TD-AFW) system is the only passive cooling system for steam generators (SGs) in current PWRs. During SBO events, all alternating current (AC) and direct current (DC) power is interrupted, and the water levels of the steam generators then become high. In this case, turbine blades could be degraded and could no longer cool down the SGs. To prevent this kind of degradation, an improved TD-AFW system should be installed in current PWRs, especially OPR 1000 plants. A long-term station blackout (LTSBO) scenario based on the improved TD-AFW system has been benchmarked as a reference input file. The following task is a safety analysis to find important parameters that cause the peak cladding temperature (PCT) to vary. This task has been initiated with the benchmarked input deck, applying it to the State-of-the-Art Reactor Consequence Analyses (SOARCA) Report. The point of the improved TD-AFW is to control the water level of the SG using an auxiliary battery charged by a generator connected to the auxiliary turbine. However, this battery could also be disconnected from the generator. To analyze the uncertainties associated with failure of the auxiliary battery, a simulation of the time-dependent failure of the TD-AFW has been performed. In addition to the cases simulated in the paper, some valves (e.g., the pressurizer safety valve), available during SBO events in the paper, could be important parameters for assessing uncertainties in the estimated PCTs. The results for these parameters, together with results for leakage of the RCP seals, will be included in a future study. After the simulation of several transient cases, the alternating conditional expectation (ACE) algorithm will be used to derive functional relationships between the PCT and several system parameters.

  11. Distributed simulation of large computer systems

    International Nuclear Information System (INIS)

    Marzolla, M.

    2001-01-01

    Sequential simulation of large complex physical systems is often a computationally expensive task. To speed up complex discrete-event simulations, the paradigm of Parallel and Distributed Discrete Event Simulation (PDES) has been developed since the late 1970s. The authors analyze the applicability of PDES to the modeling and analysis of large computer systems; such systems are increasingly common in the area of High Energy and Nuclear Physics, because many modern experiments make use of large 'compute farms'. Some feasibility tests have been performed on a prototype distributed simulator.

  12. Algorithm and simulation development in support of response strategies for contamination events in air and water systems.

    Energy Technology Data Exchange (ETDEWEB)

    Waanders, Bart Van Bloemen

    2006-01-01

    Chemical/Biological/Radiological (CBR) contamination events pose a considerable threat to our nation's infrastructure, especially in large internal facilities, external flows, and water distribution systems. Because physical security can be enforced only to a limited degree, deployment of early warning systems is being considered. However, to achieve reliable and efficient functionality, several complex questions must be answered: (1) where should sensors be placed, (2) how can sparse sensor information be used efficiently to determine the location of the original intrusion, (3) what are the model and data uncertainties, (4) how should these uncertainties be handled, and (5) how can our algorithms and forward simulations be improved sufficiently to achieve real-time performance? This report presents the results of a three-year algorithmic and application development effort to support the identification, mitigation, and risk assessment of CBR contamination events. The main thrust of this investigation was to develop (1) computationally efficient algorithms for strategically placing sensors, (2) identification of contamination events using sparse observations, (3) characterization of uncertainty through accurate demand forecasts and through investigating uncertain simulation model parameters, (4) risk assessment capabilities, and (5) reduced-order modeling methods. The development effort was focused on water distribution systems, large internal facilities, and outdoor areas.
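    Question (1), sensor placement, is often cast as a coverage problem. A standard greedy sketch is shown below — this is a generic textbook heuristic, not necessarily the algorithm developed in the report, and the toy network and scenario sets are invented:

    ```python
    def greedy_placement(coverage, budget):
        """coverage: dict mapping candidate sensor location -> set of
        contamination scenarios a sensor there would detect. Greedily
        pick the location that adds the most not-yet-covered scenarios,
        up to `budget` sensors."""
        chosen, covered = [], set()
        for _ in range(budget):
            best = max(coverage, key=lambda s: len(coverage[s] - covered))
            gain = coverage[best] - covered
            if not gain:
                break            # no location adds anything new
            chosen.append(best)
            covered |= coverage[best]
        return chosen, covered

    # Toy network: scenarios A-E observable from four candidate nodes.
    coverage = {
        "n1": {"A", "B"},
        "n2": {"B", "C", "D"},
        "n3": {"D", "E"},
        "n4": {"A"},
    }
    chosen, covered = greedy_placement(coverage, budget=2)
    ```

    Because scenario coverage is a submodular objective, this greedy choice carries a well-known (1 - 1/e) approximation guarantee, which is one reason coverage formulations are popular for sensor placement.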

  13. LHC@Home: A Volunteer computing system for Massive Numerical Simulations of Beam Dynamics and High Energy Physics Events

    CERN Document Server

    Giovannozzi, M; Høimyr, N; Jones, PL; Karneyeu, A; Marquina, MA; McIntosh, E; Segal, B; Skands, P; Grey, F; Lombraña González, D; Rivkin, L; Zacharov, I

    2012-01-01

    Recently, the LHC@home system has been revived at CERN. It is a volunteer computing system, based on BOINC, which boosts the CPU power available to institutional computer centres with the help of individuals who donate the CPU time of their PCs. Currently two projects are hosted on the system, namely SixTrack and Test4Theory. The first is aimed at performing beam dynamics simulations, while the latter deals with the simulation of high-energy events. In this paper the details of the global system, as well as a discussion of the capabilities of each project, will be presented.

  14. Corpuscular event-by-event simulation of quantum optics experiments : application to a quantum-controlled delayed-choice experiment

    NARCIS (Netherlands)

    De Raedt, Hans; Delina, M; Jin, Fengping; Michielsen, Kristel

    2012-01-01

    A corpuscular simulation model of optical phenomena that does not require knowledge of the solution of a wave equation of the whole system and reproduces the results of Maxwell's theory by generating detection events one by one is discussed. The event-based corpuscular model gives a unified

  15. Evolutionary paths, applications and future development of discrete event simulation systems; Simulazione a eventi discreti: nuove linee di sviluppo e applicazioni

    Energy Technology Data Exchange (ETDEWEB)

    Garetti, M. [Milan Politecnico, Milan (Italy). Dipt. di Economia e Produzione; Bartolotta, A.

    2000-10-01

    The state of the art of discrete event simulation tools is presented, with special reference to applications in the manufacturing systems area. After presenting the basics of discrete event computer simulation, the different steps to be followed for the successful use of simulation are defined and discussed. The evolution of software packages for discrete event simulation is also presented, highlighting the main technological changes. Finally, the future development lines of simulation are outlined. [Italian, translated] The state of the art of discrete event simulation is presented. After a brief description of the simulation technique and its evolution, with particular regard to the simulation of production systems, the phases of the procedure to be followed to conduct a simulation study and the possible approaches to model construction are described. Finally, the evolution of the main simulation software packages on the market is described.

  16. A Simbol-X Event Simulator

    International Nuclear Information System (INIS)

    Puccetti, S.; Giommi, P.; Fiore, F.

    2009-01-01

    The ASI Science Data Center (ASDC) has developed an X-ray event simulator to support users (and team members) in the simulation of data taken with the two cameras on board the Simbol-X X-Ray Telescope. The Simbol-X simulator is very fast and flexible compared to ray-tracing simulators. These properties make our simulator advantageous for supporting the user in planning proposals, comparing real data with theoretical expectations, and quickly detecting unexpected features. We present here the simulator outline and a few examples of simulated data.

  17. A Simbol-X Event Simulator

    Science.gov (United States)

    Puccetti, S.; Fiore, F.; Giommi, P.

    2009-05-01

    The ASI Science Data Center (ASDC) has developed an X-ray event simulator to support users (and team members) in the simulation of data taken with the two cameras on board the Simbol-X X-Ray Telescope. The Simbol-X simulator is very fast and flexible compared to ray-tracing simulators. These properties make our simulator advantageous for supporting the user in planning proposals, comparing real data with theoretical expectations, and quickly detecting unexpected features. We present here the simulator outline and a few examples of simulated data.

  18. Visual Data-Analytics of Large-Scale Parallel Discrete-Event Simulations

    Energy Technology Data Exchange (ETDEWEB)

    Ross, Caitlin; Carothers, Christopher D.; Mubarak, Misbah; Carns, Philip; Ross, Robert; Li, Jianping Kelvin; Ma, Kwan-Liu

    2016-11-13

    Parallel discrete-event simulation (PDES) is an important tool in the codesign of extreme-scale systems because PDES provides a cost-effective way to evaluate designs of high-performance computing systems. Optimistic synchronization algorithms for PDES, such as Time Warp, allow events to be processed without global synchronization among the processing elements. A rollback mechanism is provided when events are processed out of timestamp order. Although optimistic synchronization protocols enable the scalability of large-scale PDES, the performance of the simulations must be tuned to reduce the number of rollbacks and improve the simulation runtime. To enable efficient large-scale optimistic simulations, one has to gain insight into the factors that affect rollback behavior and simulation performance. We developed a tool for ROSS model developers that gives them detailed metrics on the performance of their large-scale optimistic simulations at varying levels of simulation granularity. Model developers can use this information to tune the parameters of optimistic simulations in order to achieve better runtime and fewer rollbacks. In this work, we instrument the ROSS optimistic PDES framework to gather detailed statistics about the simulation engine. We have also developed an interactive visualization interface that uses the data collected by the ROSS instrumentation to understand the underlying behavior of the simulation engine. The interface connects real time to virtual time in the simulation and provides the ability to view simulation data at different granularities. We demonstrate the usefulness of our framework by performing a visual analysis of the dragonfly network topology model provided by the CODES simulation framework built on top of ROSS. The instrumentation needs to minimize overhead in order to accurately collect data about the simulation performance. To ensure that the instrumentation does not introduce unnecessary overhead, we perform a

  19. Simulation of tokamak runaway-electron events

    International Nuclear Information System (INIS)

    Bolt, H.; Miyahara, A.; Miyake, M.; Yamamoto, T.

    1987-08-01

    High-energy runaway-electron events, which can occur in tokamaks when the plasma hits the first wall, are a critical issue for the materials selection of future devices. Runaway-electron events are simulated with an electron linear accelerator to better understand the observed runaway-electron damage to tokamak first wall materials and to take the runaway-electron issue into account in further materials development and selection. The electron linear accelerator produces beam energies of 20 to 30 MeV at an integrated power input of up to 1.3 kW. Graphite, SiC + 2% AlN, stainless steel, molybdenum and tungsten have been tested as bulk materials. To test the reliability of actively cooled systems under runaway-electron impact, layer systems of graphite fixed to metal substrates have been tested. The irradiation resulted in damage to the metal components but left graphite and SiC + 2% AlN without damage. Metal substrates of graphite-metal systems for actively cooled structures suffer severe damage unless thick graphite shielding is provided. (author)

  20. Hybrid modelling in discrete-event control system design

    NARCIS (Netherlands)

    Beek, van D.A.; Rooda, J.E.; Gordijn, S.H.F.; Borne, P.

    1996-01-01

    Simulation-based testing of discrete-event control systems can be advantageous. There is, however, a considerable difference between languages for real-time control and simulation languages. The Chi language, presented in this paper, is suited to specification and simulation of real-time control

  1. Use cases of discrete event simulation. Appliance and research

    Energy Technology Data Exchange (ETDEWEB)

    Bangsow, Steffen (ed.)

    2012-11-01

    Use Cases of Discrete Event Simulation. Includes case studies from important industries such as automotive, aerospace, robotics, and production. Written by leading experts in the field. Over the last decades, Discrete Event Simulation has conquered many different application areas. This trend is driven, on the one hand, by an ever wider use of this technology in different fields of science and, on the other hand, by an incredibly creative use of available software programs by dedicated experts. This book contains articles from scientists and experts from 10 countries. They illustrate the breadth of application of this technology and the quality of the problems solved using Discrete Event Simulation. Practical applications of simulation dominate in the present book. The book is aimed at researchers and students who deal with Discrete Event Simulation in their work and want to inform themselves about current applications. By focusing on discrete event simulation, the book can also serve as a source of inspiration for practitioners solving specific problems in their work. For decision makers who face the question of introducing discrete event simulation for planning support and optimization, the book provides orientation on which specific problems could be solved with the help of Discrete Event Simulation within their organization.

  2. Discrete Event Simulation for the Analysis of Artillery Fired Projectiles from Shore

    Science.gov (United States)

    2017-06-01

    model. 2.1 Discrete Event Simulation with Simkit Simkit is a library of classes and interfaces, written in Java, that support ease of implementation... Simkit allows simulation modelers to break complex systems into components through a framework of Listener Event Graph Objects (LEGOs), described in... Classes A disadvantage to using Java Enum Types is the inability to change the values of Enum Type parameters while conducting a designed experiment

  3. Discrete event simulation of the ATLAS second level trigger

    International Nuclear Information System (INIS)

    Vermeulen, J.C.; Dankers, R.J.; Hunt, S.; Harris, F.; Hortnagl, C.; Erasov, A.; Bogaerts, A.

    1998-01-01

    Discrete event simulation is applied for determining the computing and networking resources needed for the ATLAS second level trigger. This paper discusses the techniques used and some of the results obtained so far for well-defined laboratory configurations and for the full system.

  4. Discrete Event Simulation Method as a Tool for Improvement of Manufacturing Systems

    Directory of Open Access Journals (Sweden)

    Adrian Kampa

    2017-02-01

    Full Text Available The problem of production flow in manufacturing systems is analyzed. The machines can be operated by workers or by robots; since breakdowns and human factors destabilize the production processes, robots are often preferred. The problem is how to determine the real difference in work efficiency between humans and robots. We present an analysis of the production efficiency and reliability of press shop lines operated by human operators or industrial robots. This is a problem from the field of Operations Research for which the Discrete Event Simulation (DES) method has been used. Three models have been developed, covering the manufacturing line before and after robotization and taking into account stochastic parameters of availability and reliability of the machines, operators, and robots. We apply the OEE (Overall Equipment Effectiveness) indicator to present how the availability, reliability, and quality parameters influence the performance of the workstations, both in the short run and in the long run. In addition, the stability of the simulation model was analyzed. This approach enables a better representation of real manufacturing processes.
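
The OEE indicator used above is conventionally the product of availability, performance, and quality. A minimal sketch (the function name and the sample figures are invented for illustration):

```python
def oee(availability, performance, quality):
    """Overall Equipment Effectiveness: the product of the three factors,
    each expressed as a fraction in [0, 1]."""
    return availability * performance * quality

# e.g. a press line that is up 90% of the time, runs at 95% of its ideal
# speed, and yields 99% good parts:
print(oee(0.90, 0.95, 0.99))
```

Any one weak factor drags the product down, which is why the abstract examines availability, reliability, and quality together rather than in isolation.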

  5. Modeling a Million-Node Slim Fly Network Using Parallel Discrete-Event Simulation

    Energy Technology Data Exchange (ETDEWEB)

    Wolfe, Noah; Carothers, Christopher; Mubarak, Misbah; Ross, Robert; Carns, Philip

    2016-05-15

    As supercomputers close in on exascale performance, the increased number of processors and processing power translates to an increased demand on the underlying network interconnect. The Slim Fly network topology, a new low-diameter and low-latency interconnection network, is gaining interest as one possible solution for next-generation supercomputing interconnect systems. In this paper, we present a high-fidelity Slim Fly flit-level model leveraging the Rensselaer Optimistic Simulation System (ROSS) and Co-Design of Exascale Storage (CODES) frameworks. We validate our Slim Fly model against the Slim Fly model results of Kathareios et al. at moderately sized network scales. We further scale the model up to an unprecedented 1 million compute nodes; and through visualization of network simulation metrics such as link bandwidth, packet latency, and port occupancy, we gain insight into the network behavior at the million-node scale. We also show linear strong scaling of the Slim Fly model on an Intel cluster, achieving a peak event rate of 36 million events per second using 128 MPI tasks to process 7 billion events. Detailed analysis of the underlying discrete-event simulation performance shows that a million-node Slim Fly model simulation can execute in 198 seconds on the Intel cluster.

  6. Using Discrete Event Simulation for Programming Model Exploration at Extreme-Scale: Macroscale Components for the Structural Simulation Toolkit (SST).

    Energy Technology Data Exchange (ETDEWEB)

    Wilke, Jeremiah J [Sandia National Laboratories (SNL-CA), Livermore, CA (United States); Kenny, Joseph P. [Sandia National Laboratories (SNL-CA), Livermore, CA (United States)

    2015-02-01

    Discrete event simulation provides a powerful mechanism for designing and testing new extreme-scale programming models for high-performance computing. Rather than debug, run, and wait for results on an actual system, designs can first iterate through a simulator. This is particularly useful when test beds cannot be used, e.g., to explore hardware or scales that do not yet exist or are inaccessible. Here we detail the macroscale components of the Structural Simulation Toolkit (SST). Instead of depending on trace replay or state machines, the simulator is architected to execute real code on real software stacks. Our particular user-space threading framework allows massive scales to be simulated even on small clusters. The link between the discrete event core and the threading framework allows interesting performance metrics, such as call graphs, to be collected from a simulated run. Performance analysis via simulation can thus become an important phase in extreme-scale programming model and runtime system design via the SST macroscale components.

  7. ATLAS simulated black hole event

    CERN Multimedia

    Pequenão, J

    2008-01-01

    The simulated collision event shown is viewed along the beampipe. The event is one in which a microscopic black hole was produced in the collision of two protons (not shown). The microscopic black hole decayed immediately into many particles. The colors of the tracks show the different types of particles emerging from the collision (at the center).

  8. 3D Simulation of External Flooding Events for the RISMC Pathway

    International Nuclear Information System (INIS)

    Prescott, Steven; Mandelli, Diego; Sampath, Ramprasad; Smith, Curtis; Lin, Linyu

    2015-01-01

    Incorporating 3D simulations as part of the Risk-Informed Safety Margins Characterization (RISMC) Toolkit allows analysts to obtain a more complete picture of complex system behavior for events including external plant hazards. External events such as flooding have become more important recently; however, these can be analyzed with existing and validated simulated-physics toolkits. In this report, we describe these approaches specific to flooding-based analysis using an approach called Smoothed Particle Hydrodynamics. The theory, validation, and example applications of the 3D flooding simulation are described. Integrating these 3D simulation methods into computational risk analysis provides a spatial/visual aspect to the design, improves the realism of results, and can provide visual understanding to validate the analysis of flooding.

  9. 3D Simulation of External Flooding Events for the RISMC Pathway

    Energy Technology Data Exchange (ETDEWEB)

    Prescott, Steven [Idaho National Lab. (INL), Idaho Falls, ID (United States); Mandelli, Diego [Idaho National Lab. (INL), Idaho Falls, ID (United States); Sampath, Ramprasad [Idaho National Lab. (INL), Idaho Falls, ID (United States); Smith, Curtis [Idaho National Lab. (INL), Idaho Falls, ID (United States); Lin, Linyu [Idaho National Lab. (INL), Idaho Falls, ID (United States)

    2015-09-01

    Incorporating 3D simulations as part of the Risk-Informed Safety Margins Characterization (RISMC) Toolkit allows analysts to obtain a more complete picture of complex system behavior for events including external plant hazards. External events such as flooding have become more important recently; however, these can be analyzed with existing and validated simulated-physics toolkits. In this report, we describe these approaches specific to flooding-based analysis using an approach called Smoothed Particle Hydrodynamics. The theory, validation, and example applications of the 3D flooding simulation are described. Integrating these 3D simulation methods into computational risk analysis provides a spatial/visual aspect to the design, improves the realism of results, and can provide visual understanding to validate the analysis of flooding.

  10. PRODUCTION SYSTEM MODELING AND SIMULATION USING DEVS FORMALISM

    OpenAIRE

    Amaya Hurtado, Darío; Castillo Estepa, Ricardo Andrés; Avilés Montaño, Óscar Fernando; Ramos Sandoval, Olga Lucía

    2014-01-01

    This article presents the Discrete Event System Specification (DEVS) formalism, in its atomic and coupled configurations; it is used for discrete event systems modeling and simulation. Initially, this work describes the concepts of discrete event systems and their applicability. Then a comprehensive description of the structure of the DEVS formalism is presented, in order to model and simulate an industrial process, taking into account changes in parameters such as process service time, each...

  11. Parallel discrete-event simulation of FCFS stochastic queueing networks

    Science.gov (United States)

    Nicol, David M.

    1988-01-01

    Physical systems are inherently parallel. Intuition suggests that simulations of these systems may be amenable to parallel execution. The parallel execution of a discrete-event simulation requires careful synchronization of processes in order to ensure the execution's correctness; this synchronization can degrade performance. Largely negative results were recently reported in a study which used a well-known synchronization method on queueing network simulations. Discussed here is a synchronization method (appointments), which has proven itself to be effective on simulations of FCFS queueing networks. The key concept behind appointments is the provision of lookahead. Lookahead is a prediction of a processor's future behavior, based on an analysis of the processor's simulation state. It is shown how lookahead can be computed for FCFS queueing network simulations; performance data is given that demonstrates the method's effectiveness under moderate to heavy loads; and performance tradeoffs between the quality of lookahead and the cost of computing it are discussed.
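
One simple form of lookahead for a single FCFS server can be sketched as follows (a hypothetical helper; the appointment method in the abstract is more elaborate): since no customer can start service before the server is free and every service takes at least some minimum time, the server can promise downstream processors a lower bound on the timestamp of any departure it has not yet scheduled.

```python
def fcfs_lookahead(now, busy_until, min_service_time):
    """A lower bound on the departure time of any customer that has not
    yet arrived at this FCFS server: it cannot start service before the
    server is free, and its service takes at least min_service_time."""
    return max(now, busy_until) + min_service_time

# Idle server at t=5 (last job ended at t=3): nothing new departs before 7.
print(fcfs_lookahead(5.0, 3.0, 2.0))
# Server busy until t=4 at t=1: nothing new departs before 6.
print(fcfs_lookahead(1.0, 4.0, 2.0))
```

The wider this promised window, the less often neighboring processors must block waiting for each other, which is the performance tradeoff the abstract discusses.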

  12. Event-by-event simulation of single-neutron experiments to test uncertainty relations

    International Nuclear Information System (INIS)

    Raedt, H De; Michielsen, K

    2014-01-01

    Results from a discrete-event simulation of a recent single-neutron experiment that tests Ozawa's generalization of Heisenberg's uncertainty relation are presented. The event-based simulation algorithm reproduces the results of the quantum theoretical description of the experiment but does not require the knowledge of the solution of a wave equation, nor does it rely on detailed concepts of quantum theory. In particular, the data from these non-quantum simulations satisfy uncertainty relations derived in the context of quantum theory. (paper)

  13. Use Cases of Discrete Event Simulation Appliance and Research

    CERN Document Server

    2012-01-01

    Over the last decades, Discrete Event Simulation has conquered many different application areas. This trend is driven, on the one hand, by an ever wider use of this technology in different fields of science and, on the other hand, by an incredibly creative use of available software programs by dedicated experts. This book contains articles from scientists and experts from 10 countries. They illustrate the breadth of application of this technology and the quality of the problems solved using Discrete Event Simulation. Practical applications of simulation dominate in the present book.   The book is aimed at researchers and students who deal with Discrete Event Simulation in their work and want to inform themselves about current applications. By focusing on discrete event simulation, the book can also serve as a source of inspiration for practitioners solving specific problems in their work. Decision makers who deal with the question of the introduction of discrete event simulation for planning support and o...

  14. Event-by-event simulation of a quantum delayed-choice experiment

    NARCIS (Netherlands)

    Donker, Hylke C.; De Raedt, Hans; Michielsen, Kristel

    2014-01-01

    The quantum delayed-choice experiment of Tang et al. (2012) is simulated on the level of individual events without making reference to concepts of quantum theory or without solving a wave equation. The simulation results are in excellent agreement with the quantum theoretical predictions of this

  15. Analysis of manufacturing based on object oriented discrete event simulation

    Directory of Open Access Journals (Sweden)

    Eirik Borgen

    1990-01-01

    Full Text Available This paper describes SIMMEK, a computer-based tool for performing analysis of manufacturing systems, developed at the Production Engineering Laboratory, NTH-SINTEF. Its main use will be in the analysis of job-shop-type manufacturing, but certain facilities make it suitable for FMS as well as production line manufacturing. This type of simulation is very useful in the analysis of any type of change that occurs in a manufacturing system. These changes may be investments in new machines or equipment, a change in layout, a change in product mix, use of late shifts, etc. The effects these changes have on, for instance, the throughput, the amount of WIP, the costs or the net profit can be analysed. And this can be done before the changes are made, and without disturbing the real system. Unlike other tools for analysis of manufacturing systems, simulation takes into consideration uncertainty in arrival rates, process and operation times, and machine availability. It also shows the interaction effects that a job which is late at one machine has on the remaining machines in its route through the layout. It is these effects that cause every production plan not to be fulfilled completely. SIMMEK is based on discrete event simulation, and the modeling environment is object oriented. The object-oriented models are transformed by an object linker into data structures executable by the simulation kernel. The processes of the entity objects, i.e. the products, are broken down into events and put into an event list. The user-friendly graphical modeling environment makes it possible for end users to build models in a quick and reliable way, using terms from manufacturing. Various tests and a check of model logic are helpful functions when testing the validity of the models. Integration with software packages, with business graphics and statistical functions, is convenient in the result presentation phase.

  16. Desktop Modeling and Simulation: Parsimonious, yet Effective Discrete-Event Simulation Analysis

    Science.gov (United States)

    Bradley, James R.

    2012-01-01

    This paper evaluates how quickly students can be trained to construct useful discrete-event simulation models using Excel. The typical supply chain used by many large national retailers is described, and an Excel-based simulation model of it is constructed. The set of programming and simulation skills required for development of that model is then determined; we conclude that six hours of training are required to teach these skills to MBA students. The simulation presented here contains all the fundamental functionality of a simulation model, and so our result holds for any discrete-event simulation model. We argue, therefore, that industry workers with the same technical skill set as students who have completed one year in an MBA program can be quickly trained to construct simulation models. This result gives credence to the efficacy of Desktop Modeling and Simulation, whereby simulation analyses can be quickly developed, run, and analyzed with widely available software, namely Excel.
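
The fundamental functionality such a model contains — a time-ordered event list, arrival and departure events, and server state — can be sketched in a few lines (shown here in Python rather than Excel; the single-server FCFS queue with deterministic service times is an invented illustration, not the paper's retail model):

```python
import heapq

def simulate(arrival_times, service_time):
    """Minimal discrete-event simulation of a single FCFS server: events are
    popped from a time-ordered event list; arrivals may start service,
    departures free the server for the next waiting customer."""
    events = [(t, "arrival") for t in arrival_times]
    heapq.heapify(events)
    in_system = 0          # customers waiting or in service
    departures = []
    while events:
        t, kind = heapq.heappop(events)
        if kind == "arrival":
            in_system += 1
            if in_system == 1:  # server was idle: service starts now
                heapq.heappush(events, (t + service_time, "departure"))
        else:
            in_system -= 1
            departures.append(t)
            if in_system:       # next waiting customer starts service
                heapq.heappush(events, (t + service_time, "departure"))
    return departures

print(simulate([0.0, 1.0, 5.0], 2.0))  # [2.0, 4.0, 7.0]
```

The second customer arrives at t=1 but waits until t=2; the third arrives after the queue empties and is served immediately, which is exactly the event-interleaving logic a spreadsheet model must also capture.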

  17. An Advanced Simulation Framework for Parallel Discrete-Event Simulation

    Science.gov (United States)

    Li, P. P.; Tyrrell, R. Yeung D.; Adhami, N.; Li, T.; Henry, H.

    1994-01-01

    Discrete-event simulation (DEVS) users have long been faced with a three-way trade-off of balancing execution time, model fidelity, and the number of objects simulated. Because of the limits of computer processing power, the analyst is often forced to settle for less than the desired performance in one or more of these areas.

  18. Agent Based Simulation of Group Emotions Evolution and Strategy Intervention in Extreme Events

    Directory of Open Access Journals (Sweden)

    Bo Li

    2014-01-01

    Full Text Available The agent based simulation method has become a prominent approach in computational modeling and analysis of public emergency management in social science research. Group emotion evolution, information diffusion, and collective behavior selection make studies of extreme incidents a complex-system problem, which requires new methods for incident management and strategy evaluation. This paper studies group emotion evolution and intervention strategy effectiveness using the agent based simulation method. By employing a computational experimentation methodology, we construct the group emotion evolution as a complex system and test the effects of three strategies. In addition, an events-chain model is proposed to model the accumulated influence of temporally successive events. Each strategy is examined through three simulation experiments, including two constructed scenarios and a real case study. We show how various strategies could impact group emotion evolution in terms of complex emergence and emotion accumulation in extreme events. This paper also provides an effective method for using agent-based simulation to study complex collective behavior evolution in extreme incidents, emergency, and security study domains.

  19. Parallelized event chain algorithm for dense hard sphere and polymer systems

    International Nuclear Information System (INIS)

    Kampmann, Tobias A.; Boltz, Horst-Holger; Kierfeld, Jan

    2015-01-01

    We combine parallelization and cluster Monte Carlo for hard sphere systems and present a parallelized event chain algorithm for the hard disk system in two dimensions. For parallelization we use a spatial partitioning approach into simulation cells. We find that it is crucial for correctness to ensure detailed balance on the level of Monte Carlo sweeps by drawing the starting sphere of event chains within each simulation cell with replacement. We analyze the performance gains for the parallelized event chain and find a criterion for an optimal degree of parallelization. Because of the cluster nature of event chain moves, massive parallelization will not be optimal. Finally, we discuss first applications of the event chain algorithm to dense polymer systems, i.e., bundle-forming solutions of attractive semiflexible polymers.

  20. Manufacturing plant performance evaluation by discrete event simulation

    International Nuclear Information System (INIS)

    Rosli Darmawan; Mohd Rasid Osman; Rosnah Mohd Yusuff; Napsiah Ismail; Zulkiflie Leman

    2002-01-01

    A case study was conducted to evaluate the performance of a manufacturing plant using the discrete event simulation technique. The study was carried out on an animal feed production plant, the Sterifeed plant at the Malaysian Institute for Nuclear Technology Research (MINT), Selangor, Malaysia. The plant was modelled based on the actual manufacturing activities recorded by the operators. The simulation was carried out using discrete event simulation software. The model was validated by comparing the simulation results with the actual operational data of the plant. The simulation results show some weaknesses in the current plant design, and proposals were made to improve the plant performance. (Author)

  1. Synchronization of autonomous objects in discrete event simulation

    Science.gov (United States)

    Rogers, Ralph V.

    1990-01-01

    Autonomous objects in event-driven discrete event simulation offer the potential to combine the freedom of unrestricted movement and positional accuracy through Euclidean space of time-driven models with the computational efficiency of event-driven simulation. The principal challenge to autonomous object implementation is object synchronization. The concept of a spatial blackboard is offered as a potential methodology for synchronization. The issues facing implementation of a spatial blackboard are outlined and discussed.

  2. A Key Event Path Analysis Approach for Integrated Systems

    Directory of Open Access Journals (Sweden)

    Jingjing Liao

    2012-01-01

    Full Text Available By studying the key event paths of probabilistic event structure graphs (PESGs), a key event path analysis approach for integrated system models is proposed. According to translation rules derived from integrated system architecture descriptions, the corresponding PESGs are constructed from colored Petri net (CPN) models. Then the definitions of cycle event paths, sequence event paths, and key event paths are given. Afterwards, based on the statistical results from the simulation of the CPN models, key event paths are identified through sensitivity analysis. This approach focuses on the logical structures of CPN models, which is reliable and could be the basis of structured analysis for discrete event systems. An example of a radar model is given to characterize the application of this approach, and the results are trustworthy.

  3. Performance and cost evaluation of health information systems using micro-costing and discrete-event simulation.

    Science.gov (United States)

    Rejeb, Olfa; Pilet, Claire; Hamana, Sabri; Xie, Xiaolan; Durand, Thierry; Aloui, Saber; Doly, Anne; Biron, Pierre; Perrier, Lionel; Augusto, Vincent

    2018-06-01

    Innovation and health-care funding reforms have contributed to the deployment of Information and Communication Technology (ICT) to improve patient care. Many health-care organizations consider the application of ICT a crucial key to enhancing health-care management. The purpose of this paper is to provide a methodology to assess the organizational impact of a high-level Health Information System (HIS) on the patient pathway. We propose an integrated performance evaluation of HIS through the combination of formal modeling using Architecture of Integrated Information Systems (ARIS) models, a micro-costing approach for cost evaluation, and a Discrete-Event Simulation (DES) approach. The methodology is applied to the consultation process for cancer treatment. Simulation scenarios are established to draw conclusions about the impact of the HIS on the patient pathway. We demonstrate that although a high-level HIS lengthens the consultation, the occupation rate of oncologists is lower and the quality of service is higher (measured by the amount of information accessible during the consultation to formulate the diagnosis). The provided method also makes it possible to determine the most cost-effective ICT elements to improve the care process quality while minimizing costs. The methodology is flexible enough to be applied to other health-care systems.

  4. Immune system simulation online

    DEFF Research Database (Denmark)

    Rapin, Nicolas; Lund, Ole; Castiglione, Filippo

    2011-01-01

    MOTIVATION: The recognition of antigenic peptides is a major event of an immune response. In current mesoscopic-scale simulators of the immune system, this crucial step has been modeled in a very approximate way. RESULTS: We have equipped an agent-based model of the immune system with immuno

  5. Estimating rare events in biochemical systems using conditional sampling

    Science.gov (United States)

    Sundar, V. S.

    2017-01-01

    The paper focuses on the development of variance reduction strategies to estimate rare events in biochemical systems. Obtaining this probability using brute-force Monte Carlo simulations in conjunction with the stochastic simulation algorithm (Gillespie's method) is computationally prohibitive. To circumvent this, importance sampling tools such as the weighted stochastic simulation algorithm and the doubly weighted stochastic simulation algorithm have been proposed. However, these strategies require an additional step of determining the important region to sample from, which is not straightforward for most problems. In this paper, we apply the subset simulation method, developed as a variance reduction tool in the context of structural engineering, to the problem of rare event estimation in biochemical systems. The main idea is that the rare event probability is expressed as a product of more frequent conditional probabilities. These conditional probabilities are estimated with high accuracy using Monte Carlo simulations, specifically the Markov chain Monte Carlo method with the modified Metropolis-Hastings algorithm. Generating sample realizations of the state vector using the stochastic simulation algorithm is viewed as mapping the discrete-state continuous-time random process to the standard normal random variable vector. This viewpoint opens up the possibility of applying more sophisticated and efficient sampling schemes developed elsewhere to problems in stochastic chemical kinetics. The results obtained using the subset simulation method are compared with existing variance reduction strategies for a few benchmark problems, and a satisfactory improvement in computational time is demonstrated.
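
The core identity — a rare event probability written as a product of more frequent conditional probabilities — can be illustrated on a toy Gaussian tail problem (this is not the paper's biochemical setting, and exact inverse-CDF conditional sampling here stands in for the modified Metropolis-Hastings step, which real subset simulation needs because exact conditional sampling is rarely available):

```python
import random
from statistics import NormalDist

nd = NormalDist()
random.seed(0)

def cond_tail_prob(a, b, n=50_000):
    """Estimate P(X > b | X > a) for X ~ N(0, 1) by sampling the
    conditional distribution exactly through the inverse CDF."""
    lo = nd.cdf(a)
    hits = sum(nd.inv_cdf(random.uniform(lo, 1.0)) > b for _ in range(n))
    return hits / n

# The rare event {X > 3} as a product of more frequent conditionals:
# P(X > 3) = P(X > 1) * P(X > 2 | X > 1) * P(X > 3 | X > 2)
levels = [float("-inf"), 1.0, 2.0, 3.0]
p = 1.0
for a, b in zip(levels, levels[1:]):
    p *= cond_tail_prob(a, b)
print(p, 1.0 - nd.cdf(3.0))  # estimate vs. exact, both near 1.35e-3
```

Each conditional factor is of order 0.06 to 0.16 — cheap to estimate — whereas hitting the raw event directly would require on the order of a thousand samples per success.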

  6. Discrete event simulation in an artificial intelligence environment: Some examples

    International Nuclear Information System (INIS)

    Roberts, D.J.; Farish, T.

    1991-01-01

    Several Los Alamos National Laboratory (LANL) object-oriented discrete-event simulation efforts have been completed during the past three years. One of these systems has been put into production and has a growing customer base. Another (started two years earlier than the first project) was completed but has not yet been used. This paper will describe these simulation projects. Factors pertinent to the success of the first project, and to the failure of the second, will be discussed (success will be measured as the extent to which the simulation model was used as originally intended). 5 figs

  7. Discrete-Event Simulation with Agents for Modeling of Dynamic Asymmetric Threats in Maritime Security

    National Research Council Canada - National Science Library

    Ng, Chee W

    2007-01-01

    .... Discrete-event simulation (DES) was used to simulate a typical port-security, local, waterside-threat response model and to test the adaptive response of asymmetric threats in reaction to port-security procedures, while a multi-agent system (MAS...

  8. Quality Improvement With Discrete Event Simulation: A Primer for Radiologists.

    Science.gov (United States)

    Booker, Michael T; O'Connell, Ryan J; Desai, Bhushan; Duddalwar, Vinay A

    2016-04-01

    The application of simulation software in health care has transformed quality and process improvement. Specifically, software based on discrete-event simulation (DES) has shown the ability to improve radiology workflows and systems. Nevertheless, despite the successful application of DES in the medical literature, the power and value of simulation remain underutilized. For this reason, the basics of DES modeling are introduced, with specific attention to medical imaging. In an effort to provide readers with the tools necessary to begin their own DES analyses, the practical steps of choosing a software package and building a basic radiology model are discussed. In addition, three radiology system examples are presented, with accompanying DES models that assist in analysis and decision making. Through these simulations, we provide readers with an understanding of the theory, requirements, and benefits of implementing DES in their own radiology practices. Copyright © 2016 American College of Radiology. All rights reserved.

  9. Discretely Integrated Condition Event (DICE) Simulation for Pharmacoeconomics.

    Science.gov (United States)

    Caro, J Jaime

    2016-07-01

    Several decision-analytic modeling techniques are in use for pharmacoeconomic analyses. Discretely integrated condition event (DICE) simulation is proposed as a unifying approach that has been deliberately designed to meet the modeling requirements in a straightforward transparent way, without forcing assumptions (e.g., only one transition per cycle) or unnecessary complexity. At the core of DICE are conditions that represent aspects that persist over time. They have levels that can change and many may coexist. Events reflect instantaneous occurrences that may modify some conditions or the timing of other events. The conditions are discretely integrated with events by updating their levels at those times. Profiles of determinant values allow for differences among patients in the predictors of the disease course. Any number of valuations (e.g., utility, cost, willingness-to-pay) of conditions and events can be applied concurrently in a single run. A DICE model is conveniently specified in a series of tables that follow a consistent format and the simulation can be implemented fully in MS Excel, facilitating review and validation. DICE incorporates both state-transition (Markov) models and non-resource-constrained discrete event simulation in a single formulation; it can be executed as a cohort or a microsimulation; and deterministically or stochastically.
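
The condition/event mechanics described above can be sketched as a toy (a hypothetical disease model with invented condition names, event times, and valuations — not DICE's table-driven specification): conditions persist with levels, and events update those levels at the discrete times when they fire.

```python
# Conditions persist over time and carry levels; events occur at discrete
# times and are "discretely integrated" by updating condition levels.
conditions = {"alive": 1, "cost": 0.0, "utility": 1.0}
events = [(0.0, "start"), (1.5, "progression"), (4.0, "death")]

def execute(name, conditions):
    """Apply one event's effects to the persisting conditions."""
    if name == "progression":
        conditions["cost"] += 1000.0   # hypothetical valuation of the event
        conditions["utility"] = 0.7    # the new level persists until changed
    elif name == "death":
        conditions["alive"] = 0
        conditions["utility"] = 0.0

for t, name in sorted(events):         # process events in time order
    if not conditions["alive"]:
        break
    execute(name, conditions)

print(conditions)
```

In DICE proper, the event list, condition levels, and valuations would all come from the consistent-format tables the abstract describes, which is what makes an MS Excel implementation practical.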

  10. The null-event method in computer simulation

    International Nuclear Information System (INIS)

    Lin, S.L.

    1978-01-01

    The simulation of collisions of ions moving under the influence of an external field through a neutral gas at non-zero temperatures is discussed as an example of computer models of processes in which a probe particle undergoes a series of interactions with an ensemble of other particles, such that the frequency and outcome of the events depend on internal properties of the second particles. The introduction of null events removes the need for much complicated algebra, leads to a more efficient simulation, and reduces the likelihood of logical error. (Auth.)
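
The null-event trick can be sketched as follows (the rate function and constants are invented for illustration): candidate events are drawn at a constant majorant rate, and each is accepted as a real event with probability equal to the ratio of the true state-dependent rate to the majorant; rejections are null events that change nothing.

```python
import random

random.seed(1)
NU_MAX = 2.0  # constant majorant of the true, state-dependent event rate

def true_rate(v):
    """Hypothetical velocity-dependent collision rate, bounded by NU_MAX."""
    return min(abs(v), NU_MAX)

def next_real_collision(t, v):
    """Advance to the next *real* collision: candidate events fire at the
    constant rate NU_MAX; each is accepted with probability
    true_rate/NU_MAX, otherwise it is a null event and nothing happens."""
    while True:
        t += random.expovariate(NU_MAX)              # candidate event time
        if random.random() < true_rate(v) / NU_MAX:  # accept as real?
            return t
        # rejected: null event, keep sampling

# For v = 1.0 the true rate is 1.0, so inter-collision times average 1.0.
n = 20_000
mean = sum(next_real_collision(0.0, 1.0) for _ in range(n)) / n
print(mean)  # close to 1.0
```

Because the candidate rate is constant, no state-dependent waiting-time distribution ever has to be inverted — this is the "complicated algebra" the abstract says null events remove.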

  11. Manual for the Jet Event and Background Simulation Library

    Energy Technology Data Exchange (ETDEWEB)

    Heinz, M. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Soltz, R. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Angerami, A. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2017-09-11

Jets are the collimated streams of particles resulting from hard scattering in the initial state of high-energy collisions. In heavy-ion collisions, jets interact with the quark-gluon plasma (QGP) before freezeout, providing a probe into the internal structure and properties of the QGP. In order to study jets, background must be subtracted from the measured event, potentially introducing a bias. We aim to understand and quantify this subtraction bias. PYTHIA, a library to simulate pure jet events, is used to simulate a model for a signature with one pure jet (a photon) and one quenched jet, where all quenched particle momenta are reduced by a user-defined constant fraction. Background for the event is simulated using multiplicity values generated by the TRENTO initial state model of heavy-ion collisions fed into a thermal model consisting of a 3-dimensional Boltzmann distribution for particle types and momenta. Data from the simulated events is used to train a statistical model, which computes a posterior distribution of the quench factor for a data set. The model was tested first on pure jet events and then on full events including the background. This model will allow for a quantitative determination of biases induced by various methods of background subtraction.
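    The event construction described above can be illustrated with a short sketch (not the authors' code; the momenta, quench factor, and thermal parameters are invented): every particle momentum in one jet is reduced by a constant fraction, and a thermal background is drawn from a Boltzmann-like (exponential) momentum spectrum.

```python
import random

random.seed(7)

def quench(jet_momenta, quench_factor):
    """Reduce every particle momentum by a user-defined constant fraction."""
    return [p * (1.0 - quench_factor) for p in jet_momenta]

def thermal_background(n_particles, temperature):
    """Exponential (Boltzmann-like) momentum spectrum with mean = temperature."""
    return [random.expovariate(1.0 / temperature) for _ in range(n_particles)]

jet = [10.0, 7.5, 3.2]                       # illustrative momenta, GeV/c
quenched = quench(jet, quench_factor=0.2)
event = quenched + thermal_background(200, temperature=0.3)
print(sum(quenched) / sum(jet))              # fraction of jet momentum kept
```

    A background-subtraction method would then be judged by how well the quench factor can be recovered from `event` once the thermal particles are mixed in.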

  12. Discrete event simulation and the resultant data storage system response in the operational mission environment of Jupiter-Saturn /Voyager/ spacecraft

    Science.gov (United States)

    Mukhopadhyay, A. K.

    1978-01-01

The Data Storage Subsystem Simulator (DSSSIM), which simulates (in ground software) the occurrence of discrete events in the Voyager mission, is described. Functional requirements for Data Storage Subsystem (DSS) simulation are discussed, and discrete event simulation/DSSSIM processing is covered. Four types of outputs associated with a typical DSSSIM run are presented, and DSSSIM limitations and constraints are outlined.

  13. Nonstochastic Analysis of Manufacturing Systems Using Timed-Event Graphs

    DEFF Research Database (Denmark)

    Hulgaard, Henrik; Amon, Tod

    1996-01-01

Using automated methods to analyze the temporal behavior of manufacturing systems has proven to be essential and quite beneficial. Popular methodologies include queueing networks, Markov chains, simulation techniques, and discrete event systems (such as Petri nets). These methodologies are primarily...

  14. Parallel discrete event simulation

    NARCIS (Netherlands)

    Overeinder, B.J.; Hertzberger, L.O.; Sloot, P.M.A.; Withagen, W.J.

    1991-01-01

    In simulating applications for execution on specific computing systems, the simulation performance figures must be known in a short period of time. One basic approach to the problem of reducing the required simulation time is the exploitation of parallelism. However, in parallelizing the simulation

  15. Workflow in clinical trial sites & its association with near miss events for data quality: ethnographic, workflow & systems simulation.

    Science.gov (United States)

    de Carvalho, Elias Cesar Araujo; Batilana, Adelia Portero; Claudino, Wederson; Reis, Luiz Fernando Lima; Schmerling, Rafael A; Shah, Jatin; Pietrobon, Ricardo

    2012-01-01

With the exponential expansion of clinical trials conducted in BRIC (Brazil, Russia, India, and China) and VISTA (Vietnam, Indonesia, South Africa, Turkey, and Argentina) countries, corresponding gains in cost and enrolment efficiency quickly outpace the consonant metrics in traditional countries in North America and the European Union. However, questions still remain regarding the quality of data being collected in these countries. We used ethnographic, mapping and computer simulation studies to identify and address threats of near miss events for data quality in two cancer trial sites in Brazil. Two sites, in São Paulo and Rio de Janeiro, were evaluated using ethnographic observations of workflow during subject enrolment and data collection. Emerging themes related to threats of near miss events for data quality were derived from the observations; they were then transformed into workflows using UML-AD and modeled using System Dynamics. 139 tasks were observed and mapped through the ethnographic study. The UML-AD detected four major activities in the workflow: evaluation of potential research subjects prior to signature of informed consent, visits to obtain the subject's informed consent, regular data collection sessions following the study protocol, and closure of the study protocol for a given project. Field observations pointed to three major emerging themes: (a) lack of a standardized process for data registration at the source document, (b) multiplicity of data repositories and (c) scarcity of decision support systems at the point of research intervention. Simulation with the policy model demonstrated a reduction of the rework problem. Patterns of threats to data quality at the two sites were similar to the threats reported in the literature for American sites. Clinical trial site managers need to reorganize staff workflow by using information technology more efficiently, establish new standard procedures and manage professionals to reduce near miss events and save time and cost. Clinical trial

  16. Examining Passenger Flow Choke Points at Airports Using Discrete Event Simulation

    Science.gov (United States)

    Brown, Jeremy R.; Madhavan, Poomima

    2011-01-01

The movement of passengers through an airport quickly, safely, and efficiently is the main function of the various checkpoints (check-in, security, etc.) found in airports. Human error combined with other breakdowns in the complex system of the airport can disrupt passenger flow through the airport, leading to lengthy waiting times, missing luggage and missed flights. In this paper we present a model of passenger flow through an airport using discrete event simulation that will provide a closer look into the possible reasons for breakdowns and their implications for passenger flow. The simulation is based on data collected at Norfolk International Airport (ORF). The primary goal of this simulation is to present ways to optimize the work force to keep passenger flow smooth even during peak travel times and for emergency preparedness at ORF in case of adverse events. In this simulation we ran three different scenarios: real world, increased check-in stations, and multiple waiting lines. Increased check-in stations increased waiting time and instantaneous utilization, while the multiple waiting lines decreased both the waiting time and instantaneous utilization. This simulation was able to show how different changes affected the passenger flow through the airport.
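    A checkpoint like the ones modeled above can be sketched as a minimal discrete-event simulation (illustrative only, not the paper's model or data): passengers arrive at random, the earliest-free station serves each one, and waiting times are recorded.

```python
import heapq
import random

random.seed(42)

def simulate_checkpoint(n_passengers, n_stations, mean_interarrival, mean_service):
    """Average wait at a multi-station checkpoint with exponential times."""
    free_at = [0.0] * n_stations      # when each station next becomes free
    heapq.heapify(free_at)
    t, waits = 0.0, []
    for _ in range(n_passengers):
        t += random.expovariate(1.0 / mean_interarrival)   # next arrival
        earliest = heapq.heappop(free_at)                  # earliest-free station
        start = max(t, earliest)
        waits.append(start - t)
        heapq.heappush(free_at, start + random.expovariate(1.0 / mean_service))
    return sum(waits) / len(waits)

# With the same traffic, adding stations should not lengthen the average wait.
few = simulate_checkpoint(5000, 3, mean_interarrival=1.0, mean_service=2.5)
many = simulate_checkpoint(5000, 5, mean_interarrival=1.0, mean_service=2.5)
print(few, many)
```

    Scenario comparisons like the paper's (more stations, different queue layouts) amount to re-running such a model with changed parameters and contrasting the wait and utilization statistics.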

  17. Modeling and simulation of single-event effect in CMOS circuit

    International Nuclear Information System (INIS)

    Yue Suge; Zhang Xiaolin; Zhao Yuanfu; Liu Lin; Wang Hanning

    2015-01-01

This paper reviews the status of research in modeling and simulation of single-event effects (SEE) in digital devices and integrated circuits. After a brief historical overview of SEE simulation, different levels of SEE simulation are detailed: material-level physical simulation, where the two primary methods by which ionizing radiation releases charge in a semiconductor device (direct ionization and indirect ionization) are introduced; device-level simulation, where the main emerging physical phenomena affecting nanometer devices (bipolar transistor effect, charge sharing effect) and the methods envisaged for taking them into account are the focus; and circuit-level simulation, where the methods for predicting single-event response, covering the production and propagation of single-event transients (SETs) in sequential and combinatorial logic, are detailed, and soft error rate trends with scaling are also addressed. (review)

  18. Power System Event Ranking Using a New Linear Parameter-Varying Modeling with a Wide Area Measurement System-Based Approach

    Directory of Open Access Journals (Sweden)

    Mohammad Bagher Abolhasani Jabali

    2017-07-01

Detecting critical power system events for Dynamic Security Assessment (DSA) is required for reliability improvement. The approach proposed in this paper investigates the effects of events on dynamic behavior during the nonlinear system response, while common approaches use steady-state conditions after events. This paper presents some new and enhanced indices for event ranking based on time-domain simulation and polytopic linear parameter-varying (LPV) modeling of a power system. In the proposed approach, a polytopic LPV representation is generated via linearization about some points of the nonlinear dynamic behavior of the power system using wide-area measurement system (WAMS) concepts, and event ranking is then done based on the frequency response of the system models at the vertices. The nonlinear behavior of the system at the time of fault occurrence is therefore considered for event ranking. The proposed algorithm is applied to a power system using nonlinear simulation. The comparison of the results, especially under different fault conditions, shows the advantages of the proposed approach and indices.

  19. Disaster Response Modeling Through Discrete-Event Simulation

    Science.gov (United States)

    Wang, Jeffrey; Gilmer, Graham

    2012-01-01

    Organizations today are required to plan against a rapidly changing, high-cost environment. This is especially true for first responders to disasters and other incidents, where critical decisions must be made in a timely manner to save lives and resources. Discrete-event simulations enable organizations to make better decisions by visualizing complex processes and the impact of proposed changes before they are implemented. A discrete-event simulation using Simio software has been developed to effectively analyze and quantify the imagery capabilities of domestic aviation resources conducting relief missions. This approach has helped synthesize large amounts of data to better visualize process flows, manage resources, and pinpoint capability gaps and shortfalls in disaster response scenarios. Simulation outputs and results have supported decision makers in the understanding of high risk locations, key resource placement, and the effectiveness of proposed improvements.

  20. In situ simulation: Taking reported critical incidents and adverse events back to the clinic

    DEFF Research Database (Denmark)

    Juul, Jonas; Paltved, Charlotte; Krogh, Kristian

    2014-01-01

for content analysis [4] and thematic analysis [5]. Medical experts and simulation faculty will design scenarios for in situ simulation training based on the analysis. Short-term observations using time logs will be performed along with interviews with key informants at the departments. Video data will be collected [...] improve patient safety if coupled with training and organisational support [2]. Insight into the nature of reported critical incidents and adverse events can be used in writing in situ simulation scenarios and thus lead to interventions that enhance patient safety. The patient safety literature emphasises [...] well-developed non-technical skills in preventing medical errors [3]. Furthermore, critical incident and adverse event reporting systems comprise a knowledge base for gaining in-depth insights into patient safety issues. This study explores the use of critical incident and adverse event reports to inform

  1. Event-by-event simulation of Einstein-Podolsky-Rosen-Bohm experiments

    NARCIS (Netherlands)

    Zhao, Shuang; De Raedt, Hans; Michielsen, Kristel

    We construct an event-based computer simulation model of the Einstein-Podolsky-Rosen-Bohm experiments with photons. The algorithm is a one-to-one copy of the data gathering and analysis procedures used in real laboratory experiments. We consider two types of experiments, those with a source emitting

  2. Estimating ICU bed capacity using discrete event simulation.

    Science.gov (United States)

    Zhu, Zhecheng; Hen, Bee Hoon; Teow, Kiok Liang

    2012-01-01

The intensive care unit (ICU) in a hospital caters for critically ill patients. The number of ICU beds has a direct impact on many aspects of hospital performance. A lack of ICU beds may cause ambulance diversion and surgery cancellation, while an excess of ICU beds may cause a waste of resources. This paper aims to develop a discrete event simulation (DES) model to help healthcare service providers determine the proper ICU bed capacity that strikes a balance between service level and cost effectiveness. The DES model is developed to reflect the complex patient flow of the ICU system. Actual operational data, including emergency arrivals, elective arrivals and length of stay, are directly fed into the DES model to capture the variations in the system. The DES model is validated by open-box and black-box tests. The validated model is used to test two what-if scenarios of interest to the healthcare service providers: the proper number of ICU beds in service to meet the target rejection rate, and the extra ICU beds in service needed to meet demand growth. A 12-month period of actual operational data was collected from an ICU department with 13 ICU beds in service. Comparison between the simulation results and the actual situation shows that the DES model accurately captures the variations in the system, and the DES model is flexible enough to simulate various what-if scenarios. DES helps the healthcare service providers describe the current situation and simulate what-if scenarios for future planning.
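    The bed-capacity question has the shape of a loss system, which a very small discrete-event sketch can illustrate (parameters here are invented, not the hospital's data): patients arrive at random, occupy a bed for a random length of stay, and are rejected when every bed is full.

```python
import heapq
import random

random.seed(0)

def rejection_rate(n_beds, n_patients, mean_interarrival, mean_los):
    """Fraction of arrivals rejected when all beds are occupied."""
    discharge = []          # min-heap of discharge times for occupied beds
    t, rejected = 0.0, 0
    for _ in range(n_patients):
        t += random.expovariate(1.0 / mean_interarrival)
        while discharge and discharge[0] <= t:
            heapq.heappop(discharge)         # free beds whose stay has ended
        if len(discharge) < n_beds:
            heapq.heappush(discharge, t + random.expovariate(1.0 / mean_los))
        else:
            rejected += 1                     # no bed: diversion/cancellation
    return rejected / n_patients

# More beds lower the rejection rate, at the cost of idle capacity.
results = {beds: rejection_rate(beds, 20000, 1.0, 10.0) for beds in (10, 13, 16)}
print(results)
```

    Sweeping `n_beds` until the simulated rejection rate drops below a target is exactly the kind of what-if question the paper's DES model answers, with real arrival and length-of-stay data in place of the exponential assumptions used here.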

  3. The cost of conservative synchronization in parallel discrete event simulations

    Science.gov (United States)

    Nicol, David M.

    1990-01-01

The performance of a synchronous conservative parallel discrete-event simulation protocol is analyzed. The class of simulation models considered is oriented around a physical domain and possesses a limited ability to predict future behavior. A stochastic model is used to show that as the volume of simulation activity in the model increases relative to a fixed architecture, the complexity of the average per-event overhead due to synchronization, event list manipulation, lookahead calculations, and processor idle time approaches the complexity of the average per-event overhead of a serial simulation. The method is therefore within a constant factor of optimal. The analysis demonstrates that on large problems--those for which parallel processing is ideally suited--there is often enough parallel workload so that processors are not usually idle. The viability of the method is also demonstrated empirically, showing how good performance is achieved on large problems using a thirty-two node Intel iPSC/2 distributed memory multiprocessor.

  4. Modeling extreme (Carrington-type) space weather events using three-dimensional MHD code simulations

    Science.gov (United States)

    Ngwira, C. M.; Pulkkinen, A. A.; Kuznetsova, M. M.; Glocer, A.

    2013-12-01

There is growing concern over possible severe societal consequences related to adverse space weather impacts on man-made technological infrastructure and systems. In the last two decades, significant progress has been made towards the modeling of space weather events. Three-dimensional (3-D) global magnetohydrodynamics (MHD) models have been at the forefront of this transition, and have played a critical role in advancing our understanding of space weather. However, the modeling of extreme space weather events is still a major challenge even for existing global MHD models. In this study, we introduce a specially adapted University of Michigan 3-D global MHD model for simulating extreme space weather events that have a ground footprint comparable to (or larger than) that of the Carrington superstorm. Results are presented for an initial simulation run with "very extreme" constructed/idealized solar wind boundary conditions driving the magnetosphere. In particular, we describe the reaction of the magnetosphere-ionosphere system and the associated ground induced geoelectric field to such extreme driving conditions. We also discuss the results and what they might mean for the accuracy of the simulations. The model is further tested using input data for an observed space weather event to verify the MHD model's consistency and to draw guidance for future work. This extreme space weather MHD model is designed specifically for practical application to the modeling of extreme geomagnetically induced electric fields, which can drive large currents in earth conductors such as power transmission grids.

  5. Modeling Temporal Processes in Early Spacecraft Design: Application of Discrete-Event Simulations for Darpa's F6 Program

    Science.gov (United States)

    Dubos, Gregory F.; Cornford, Steven

    2012-01-01

While the ability to model the state of a space system over time is essential during spacecraft operations, the use of time-based simulations remains rare in preliminary design. The absence of the time dimension in most traditional early design tools can however become a hurdle when designing complex systems whose development and operations can be disrupted by various events, such as delays or failures. As the value delivered by a space system is highly affected by such events, exploring the trade space for designs that yield the maximum value calls for the explicit modeling of time. This paper discusses the use of discrete-event models to simulate spacecraft development schedule as well as operational scenarios and on-orbit resources in the presence of uncertainty. It illustrates how such simulations can be utilized to support trade studies, through the example of a tool developed for DARPA's F6 program to assist the design of "fractionated spacecraft".
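    The schedule-risk side of such a model can be sketched as a Monte Carlo over uncertain task durations and disruption events (illustrative only, not the F6 tool; the durations, delay probability, and threshold are invented):

```python
import random

random.seed(3)

def one_run():
    """One sampled development schedule, in months."""
    # random.triangular(low, high, mode) -- assumed task duration spreads.
    build = random.triangular(10, 14, 12)
    test = random.triangular(3, 7, 4)
    delay = 6 if random.random() < 0.2 else 0   # 20% chance of a supplier delay
    return build + test + delay

runs = [one_run() for _ in range(10000)]
late = sum(r > 20 for r in runs) / len(runs)
print(round(late, 2))  # estimated probability of exceeding 20 months
```

    Comparing this tail probability across candidate architectures is one way such time-explicit simulations feed trade studies: a design that is cheaper deterministically may lose once delay and failure events are priced in.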

  6. Asynchronous sampled-data approach for event-triggered systems

    Science.gov (United States)

    Mahmoud, Magdi S.; Memon, Azhar M.

    2017-11-01

    While aperiodically triggered network control systems save a considerable amount of communication bandwidth, they also pose challenges such as coupling between control and event-condition design, optimisation of the available resources such as control, communication and computation power, and time-delays due to computation and communication network. With this motivation, the paper presents separate designs of control and event-triggering mechanism, thus simplifying the overall analysis, asynchronous linear quadratic Gaussian controller which tackles delays and aperiodic nature of transmissions, and a novel event mechanism which compares the cost of the aperiodic system against a reference periodic implementation. The proposed scheme is simulated on a linearised wind turbine model for pitch angle control and the results show significant improvement against the periodic counterpart.

  7. Statistical and Probabilistic Extensions to Ground Operations' Discrete Event Simulation Modeling

    Science.gov (United States)

    Trocine, Linda; Cummings, Nicholas H.; Bazzana, Ashley M.; Rychlik, Nathan; LeCroy, Kenneth L.; Cates, Grant R.

    2010-01-01

NASA's human exploration initiatives will invest in technologies, public/private partnerships, and infrastructure, paving the way for the expansion of human civilization into the solar system and beyond. As it has been for the past half century, the Kennedy Space Center will be the embarkation point for humankind's journey into the cosmos. Functioning as a next generation space launch complex, Kennedy's launch pads, integration facilities, processing areas, launch and recovery ranges will bustle with the activities of the world's space transportation providers. In developing this complex, KSC teams work through the potential operational scenarios: conducting trade studies, planning and budgeting for expensive and limited resources, and simulating alternative operational schemes. Numerous tools, among them discrete event simulation (DES), were matured during the Constellation Program to conduct such analyses with the purpose of optimizing the launch complex for maximum efficiency, safety, and flexibility while minimizing life cycle costs. Discrete event simulation is a computer-based modeling technique for complex and dynamic systems where the state of the system changes at discrete points in time and whose inputs may include random variables. DES is used to assess timelines and throughput, and to support operability studies and contingency analyses. It is applicable to any space launch campaign and informs decision-makers of the effects of varying numbers of expensive resources and the impact of off-nominal scenarios on measures of performance. In order to develop representative DES models, methods were adopted, exploited, or created to extend traditional uses of DES. The Delphi method was adopted and utilized for task duration estimation. DES software was exploited for probabilistic event variation. A roll-up process was developed and used to reuse models and model elements in other, less-detailed models. The DES team continues to innovate and expand

  8. Analysis hierarchical model for discrete event systems

    Science.gov (United States)

    Ciortea, E. M.

    2015-11-01

This paper presents a hierarchical model based on discrete event networks for robotic systems. In the hierarchical approach, the Petri net is analysed as a network spanning the highest conceptual level down to the lowest level of local control, and extended Petri nets are used for the modelling and control of complex robotic systems. Such a system is structured, controlled and analysed in this paper using the Visual Object Net++ package, which is relatively simple and easy to use, and the results are shown as representations that are easy to interpret. The hierarchical structure of the robotic system is implemented and analysed on computers using specialized programs. Implementing the hierarchical discrete event model as a real-time operating system on a computer network connected via a serial bus is possible, with each computer dedicated to the local Petri model of one subsystem of the global robotic system. Since Petri models run on general-purpose computers, the analysis, modelling and control of complex manufacturing systems can be achieved using Petri nets. Discrete event systems are a pragmatic tool for modelling industrial systems, and Petri nets suit our system because it is discrete-event in nature. To capture auxiliary times, the Petri model of the transport stream is divided into hierarchical levels whose sections are analysed successively. Simulating the proposed robotic system with timed Petri nets offers the opportunity to examine the robot's timing: from spot measurements of transport and transmission times of goods, graphics are obtained showing the average time for the transport activity, using the parameter sets of finished products individually.
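    The Petri-net semantics used throughout such models reduces to a small rule: a transition is enabled when every input place holds enough tokens, and firing it moves tokens from inputs to outputs. A minimal sketch (illustrative, not the Visual Object Net++ model; the robot-cell net is invented):

```python
def enabled(marking, transition):
    """A transition is enabled if every input place has enough tokens."""
    return all(marking[p] >= n for p, n in transition["in"].items())

def fire(marking, transition):
    """Firing consumes input tokens and produces output tokens."""
    m = dict(marking)
    for p, n in transition["in"].items():
        m[p] -= n
    for p, n in transition["out"].items():
        m[p] = m.get(p, 0) + n
    return m

# Toy robot cell: processing consumes a part and the robot, then
# produces a finished part and returns the robot.
t_process = {"in": {"part": 1, "robot": 1}, "out": {"done": 1, "robot": 1}}
marking = {"part": 2, "robot": 1, "done": 0}
while enabled(marking, t_process):
    marking = fire(marking, t_process)
print(marking)  # {'part': 0, 'robot': 1, 'done': 2}
```

    Hierarchical models stack such nets: a high-level transition at the conceptual layer stands in for a whole subnet at the local-control layer.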

  9. OVNI: a full-size real-time power system simulator

    Energy Technology Data Exchange (ETDEWEB)

    Marti, J. R.; Linares, L. R.; Rosales, R.; Dommel, H. W. [British Columbia Univ., Vancouver, BC (Canada)

    1997-12-31

The concept and work-in-progress to develop a computer-based power system simulator that would mimic as closely as possible the behaviour of an actual power system is described. The simulator, dubbed OVNI for Object Virtual Network Integrator, is capable of running continuously. It produces, at each discrete time instant, the correct voltages and currents in a power system. OVNI is being implemented using a network of off-the-shelf Pentium Pro 200 MHz workstations. The Ada 95 language is used to satisfy object-oriented requirements and provide the code with the reliability required for mission-critical applications. An important characteristic of OVNI is its fully graphical and integrated simulation environment. System events can be directly applied to the simulator and outputs probed as the simulator is running. Input events can originate from user action or directly through A/D boards. Output probes can also be directed to the screen as running plots, or forwarded through D/A boards. 6 refs., 6 figs.

  11. Three Dimensional Simulation of the Baneberry Nuclear Event

    Science.gov (United States)

    Lomov, Ilya N.; Antoun, Tarabay H.; Wagoner, Jeff; Rambo, John T.

    2004-07-01

    Baneberry, a 10-kiloton nuclear event, was detonated at a depth of 278 m at the Nevada Test Site on December 18, 1970. Shortly after detonation, radioactive gases emanating from the cavity were released into the atmosphere through a shock-induced fissure near surface ground zero. Extensive geophysical investigations, coupled with a series of 1D and 2D computational studies were used to reconstruct the sequence of events that led to the catastrophic failure. However, the geological profile of the Baneberry site is complex and inherently three-dimensional, which meant that some geological features had to be simplified or ignored in the 2D simulations. This left open the possibility that features unaccounted for in the 2D simulations could have had an important influence on the eventual containment failure of the Baneberry event. This paper presents results from a high-fidelity 3D Baneberry simulation based on the most accurate geologic and geophysical data available. The results are compared with available data, and contrasted against the results of the previous 2D computational studies.

  12. Enabling parallel simulation of large-scale HPC network systems

    International Nuclear Information System (INIS)

    Mubarak, Misbah; Carothers, Christopher D.; Ross, Robert B.; Carns, Philip

    2016-01-01

With the increasing complexity of today's high-performance computing (HPC) architectures, simulation has become an indispensable tool for exploring the design space of HPC systems—in particular, networks. In order to make effective design decisions, simulations of these systems must possess the following properties: (1) have high accuracy and fidelity, (2) produce results in a timely manner, and (3) be able to analyze a broad range of network workloads. Most state-of-the-art HPC network simulation frameworks, however, are constrained in one or more of these areas. In this work, we present a simulation framework for modeling two important classes of networks used in today's IBM and Cray supercomputers: torus and dragonfly networks. We use the Co-Design of Multi-layer Exascale Storage Architecture (CODES) simulation framework to simulate these network topologies at a flit-level detail using the Rensselaer Optimistic Simulation System (ROSS) for parallel discrete-event simulation. Our simulation framework meets all the requirements of a practical network simulation and can assist network designers in design space exploration. First, it uses validated and detailed flit-level network models to provide an accurate and high-fidelity network simulation. Second, instead of relying on serial time-stepped or traditional conservative discrete-event simulations that limit simulation scalability and efficiency, we use the optimistic event-scheduling capability of ROSS to achieve efficient and scalable HPC network simulations on today's high-performance cluster systems. Third, our models give network designers a choice in simulating a broad range of network workloads, including HPC application workloads using detailed network traces, an ability that is rarely offered in parallel with high-fidelity network simulations.

  13. Optimization of Operations Resources via Discrete Event Simulation Modeling

    Science.gov (United States)

    Joshi, B.; Morris, D.; White, N.; Unal, R.

    1996-01-01

    The resource levels required for operation and support of reusable launch vehicles are typically defined through discrete event simulation modeling. Minimizing these resources constitutes an optimization problem involving discrete variables and simulation. Conventional approaches to solve such optimization problems involving integer valued decision variables are the pattern search and statistical methods. However, in a simulation environment that is characterized by search spaces of unknown topology and stochastic measures, these optimization approaches often prove inadequate. In this paper, we have explored the applicability of genetic algorithms to the simulation domain. Genetic algorithms provide a robust search strategy that does not require continuity and differentiability of the problem domain. The genetic algorithm successfully minimized the operation and support activities for a space vehicle, through a discrete event simulation model. The practical issues associated with simulation optimization, such as stochastic variables and constraints, were also taken into consideration.
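    The pairing described above — a genetic algorithm driving a stochastic simulation over integer decision variables — can be sketched compactly (illustrative only, not the paper's model; the cost function stands in for a discrete-event simulation run):

```python
import random

random.seed(5)

def simulate_cost(resources):
    """Stand-in for one noisy simulation run: resource cost plus a
    penalty for work missed when resources are insufficient."""
    work_missed = max(0, 30 - sum(resources))
    return sum(resources) * 2 + work_missed * 10 + random.gauss(0, 1)

def crossover(a, b):
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

def mutate(c):
    c = list(c)
    i = random.randrange(len(c))
    c[i] = max(0, c[i] + random.choice((-1, 1)))   # stay non-negative integer
    return c

def genetic_search(n_vars, generations=60, pop_size=30):
    pop = [[random.randint(0, 20) for _ in range(n_vars)] for _ in range(pop_size)]
    for _ in range(generations):
        elite = sorted(pop, key=simulate_cost)[: pop_size // 2]   # selection
        pop = elite + [mutate(crossover(random.choice(elite), random.choice(elite)))
                       for _ in range(pop_size - len(elite))]
    return min(pop, key=simulate_cost)

best = genetic_search(3)
print(sum(best))  # converges near the break-even resource level
```

    Because fitness is evaluated by simulation, no gradient or continuity is required, which is exactly why the GA suits search spaces of unknown topology with stochastic measures; in practice each candidate would be scored by averaging several replicated simulation runs.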

  14. StochKit2: software for discrete stochastic simulation of biochemical systems with events.

    Science.gov (United States)

    Sanft, Kevin R; Wu, Sheng; Roh, Min; Fu, Jin; Lim, Rone Kwei; Petzold, Linda R

    2011-09-01

    StochKit2 is the first major upgrade of the popular StochKit stochastic simulation software package. StochKit2 provides highly efficient implementations of several variants of Gillespie's stochastic simulation algorithm (SSA), and tau-leaping with automatic step size selection. StochKit2 features include automatic selection of the optimal SSA method based on model properties, event handling, and automatic parallelism on multicore architectures. The underlying structure of the code has been completely updated to provide a flexible framework for extending its functionality. StochKit2 runs on Linux/Unix, Mac OS X and Windows. It is freely available under GPL version 3 and can be downloaded from http://sourceforge.net/projects/stochkit/. petzold@engineering.ucsb.edu.
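    The direct-method SSA that StochKit2 implements can be illustrated in miniature for a single reaction channel (this is a generic Gillespie sketch, not StochKit2 code; the model A → B and its rate constant are invented): each step draws an exponential waiting time from the total propensity, then fires a reaction chosen proportionally to its propensity — trivial here, since there is only one channel.

```python
import math
import random

random.seed(11)

def ssa_decay(a0, c, t_end):
    """Direct-method SSA for the single reaction A -> B with rate constant c."""
    t, a = 0.0, a0
    while a > 0:
        propensity = c * a                  # one channel, propensity c*a(t)
        t += random.expovariate(propensity)  # exponential waiting time
        if t > t_end:
            break
        a -= 1                               # fire A -> B
    return a

# Averaging trajectories approaches the analytic mean a0 * exp(-c * t).
remaining = [ssa_decay(1000, 0.1, 10.0) for _ in range(200)]
mean = sum(remaining) / len(remaining)
print(round(mean), round(1000 * math.exp(-0.1 * 10.0)))
```

    With several channels, the second random draw selects which reaction fires, weighted by each channel's share of the total propensity; tau-leaping accelerates this by firing many reactions per step with an automatically chosen step size.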

  15. A participative and facilitative conceptual modelling framework for discrete event simulation studies in healthcare

    OpenAIRE

    Kotiadis, Kathy; Tako, Antuela; Vasilakis, Christos

    2014-01-01

Existing approaches to conceptual modelling (CM) in discrete-event simulation do not formally support the participation of a group of stakeholders. Simulation in healthcare can benefit from stakeholder participation, as it makes it possible to share multiple views and tacit knowledge from different parts of the system. We put forward a framework tailored to healthcare that supports the interaction of simulation modellers with a group of stakeholders to arrive at a common conceptual model. The fra...

  16. Simulation Based Optimization for World Line Card Production System

    Directory of Open Access Journals (Sweden)

    Sinan APAK

    2012-07-01

    Full Text Available Simulation-based decision support systems are among the most commonly used tools for examining complex production systems. The simulation approach provides process modules that can be adjusted with certain parameters, using data relatively easily obtained from the production process. A simulation of the World Line Card production system was developed to evaluate the optimality of the existing production line, using a discrete event simulation model with a variety of alternative proposals. The current production system is analysed by a simulation model emphasizing the bottlenecks and the poorly utilized production line. Our analysis identified several improvements and efficient solutions for the existing system.

  17. Unified Modeling of Discrete Event and Control Systems Applied in Manufacturing

    Directory of Open Access Journals (Sweden)

    Amanda Arêas de Souza

    2015-05-01

    Full Text Available For the development of both a simulation model and a control system, it is necessary to build, in advance, a conceptual model. This is what is usually suggested by the methodologies applied in projects of this nature. Some conceptual modeling techniques allow for a better understanding of the simulation model, and a clear description of the logic of control systems. Therefore, this paper aims to present and evaluate conceptual languages for unified modeling of models of discrete event simulation and control systems applied in manufacturing. The results show that the IDEF-SIM language can be applied both in simulation systems and in process control.

  18. Discrete event simulation of crop operations in sweet pepper in support of work method innovation

    NARCIS (Netherlands)

    Ooster, van 't Bert; Aantjes, Wiger; Melamed, Z.

    2017-01-01

    Greenhouse Work Simulation, GWorkS, is a model that simulates crop operations in greenhouses for the purpose of analysing work methods. GWorkS is a discrete event model that approaches reality as a discrete stochastic dynamic system. GWorkS was developed and validated using cut-rose as a case

  19. Enhanced Discrete-Time Scheduler Engine for MBMS E-UMTS System Level Simulator

    DEFF Research Database (Denmark)

    Pratas, Nuno; Rodrigues, António

    2007-01-01

    In this paper the design of an E-UMTS system level simulator developed for the study of optimization methods for the MBMS is presented. The simulator uses a discrete event based philosophy, which captures the dynamic behavior of the Radio Network System. This dynamic behavior includes the user...... mobility, radio interfaces and the Radio Access Network. Emphasis is given to the enhancements developed for the simulator core, the Event Scheduler Engine. Two implementations for the Event Scheduler Engine are proposed, one optimized for single-core processors and the other for multi-core ones....
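    A heap-backed event scheduler engine of the single-core kind the abstract mentions can be sketched as follows; the event names and handler signature are invented for illustration, not taken from the simulator.

```python
import heapq
import itertools

class EventScheduler:
    """Minimal discrete-event scheduler: a priority queue ordered by time."""

    def __init__(self):
        self._queue = []
        self._counter = itertools.count()   # tie-breaker for simultaneous events

    def schedule(self, time, action):
        heapq.heappush(self._queue, (time, next(self._counter), action))

    def run(self):
        log = []
        while self._queue:
            time, _, action = heapq.heappop(self._queue)
            log.append((time, action.__name__))
            action(self, time)              # handlers may schedule follow-up events
        return log

def handover(sched, t):
    pass

def user_moves(sched, t):
    # A handler scheduling a follow-up event captures the dynamic behavior.
    sched.schedule(t + 1.0, handover)

sched = EventScheduler()
sched.schedule(2.0, user_moves)
sched.schedule(1.0, handover)
log = sched.run()
```

    The insertion counter guarantees a deterministic order for events with equal timestamps, which matters when comparing runs across scheduler implementations.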

  20. Event-by-Event Simulation of Induced Fission

    Energy Technology Data Exchange (ETDEWEB)

    Vogt, R; Randrup, J

    2007-12-13

    We are developing a novel code that treats induced fission by statistical (or Monte Carlo) simulation of individual decay chains. After its initial excitation, the fissionable compound nucleus may either deexcite by evaporation or undergo binary fission into a large number of fission channels, each with different energetics involving both energy dissipation and deformed scission prefragments. After separation and Coulomb acceleration, each fission fragment undergoes a succession of individual (neutron) evaporations, leading to two bound but still excited fission products (that may further decay electromagnetically and, ultimately, weakly), as well as typically several neutrons. (The inclusion of other possible ejectiles is planned.) This kind of approach makes it possible to study more detailed observables than could be addressed with previous treatments, which have tended to focus on average quantities. In particular, any type of correlation observable can readily be extracted from a generated set of events. With a view towards making the code practically useful in a variety of applications, emphasis is being put on making it numerically efficient so that large event samples can be generated quickly. In its present form, the code can generate one million full events in about 12 seconds on a MacBook laptop computer. The development of this qualitatively new tool is still at an early stage, and quantitative reproduction of existing data should not be expected until a number of detailed refinements have been implemented.
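    The evaporation cascade described above can be caricatured in a few lines. The constant separation energy, the exponential kinetic-energy spectrum, and all numerical values below are assumptions for illustration only, not the physics of the actual code.

```python
import random

random.seed(7)

SEPARATION_ENERGY = 6.0   # MeV per neutron (assumed constant; illustrative)
TEMPERATURE = 1.0         # MeV, kinetic-energy scale of the spectrum (assumed)

def evaporate(excitation_energy):
    """Emit neutrons one by one until the fragment cannot afford another."""
    neutrons = []
    while excitation_energy > SEPARATION_ENERGY:
        e_kin = random.expovariate(1.0 / TEMPERATURE)      # exponential spectrum
        if excitation_energy - SEPARATION_ENERGY - e_kin < 0.0:
            continue                                       # energetically forbidden; resample
        excitation_energy -= SEPARATION_ENERGY + e_kin
        neutrons.append(e_kin)
    return neutrons

# One decay chain per event; correlation observables (e.g. multiplicity
# vs. neutron energies) can then be read straight off the event sample.
events = [evaporate(20.0) for _ in range(10000)]
mean_multiplicity = sum(len(e) for e in events) / len(events)
```

    The point of the event-by-event approach is visible here: because every chain is stored, any correlation between multiplicity and emitted energies is available, not just the averages.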

  1. Event-by-Event Simulation of Induced Fission

    Science.gov (United States)

    Vogt, Ramona; Randrup, Jørgen

    2008-04-01

    We are developing a novel code that treats induced fission by statistical (or Monte Carlo) simulation of individual decay chains. After its initial excitation, the fissionable compound nucleus may either de-excite by evaporation or undergo binary fission into a large number of fission channels, each with different energetics involving both energy dissipation and deformed scission pre-fragments. After separation and Coulomb acceleration, each fission fragment undergoes a succession of individual (neutron) evaporations, leading to two bound but still excited fission products (that may further decay electromagnetically and, ultimately, weakly), as well as typically several neutrons. (The inclusion of other possible ejectiles is planned.) This kind of approach makes it possible to study more detailed observables than could be addressed with previous treatments, which have tended to focus on average quantities. In particular, any type of correlation observable can readily be extracted from a generated set of events. With a view towards making the code practically useful in a variety of applications, emphasis is being put on making it numerically efficient so that large event samples can be generated quickly. In its present form, the code can generate one million full events in about 12 seconds on a MacBook laptop computer. The development of this qualitatively new tool is still at an early stage, and quantitative reproduction of existing data should not be expected until a number of detailed refinements have been implemented.

  2. Event-by-Event Simulation of Induced Fission

    International Nuclear Information System (INIS)

    Vogt, Ramona; Randrup, Joergen

    2008-01-01

    We are developing a novel code that treats induced fission by statistical (or Monte Carlo) simulation of individual decay chains. After its initial excitation, the fissionable compound nucleus may either de-excite by evaporation or undergo binary fission into a large number of fission channels, each with different energetics involving both energy dissipation and deformed scission pre-fragments. After separation and Coulomb acceleration, each fission fragment undergoes a succession of individual (neutron) evaporations, leading to two bound but still excited fission products (that may further decay electromagnetically and, ultimately, weakly), as well as typically several neutrons. (The inclusion of other possible ejectiles is planned.) This kind of approach makes it possible to study more detailed observables than could be addressed with previous treatments, which have tended to focus on average quantities. In particular, any type of correlation observable can readily be extracted from a generated set of events. With a view towards making the code practically useful in a variety of applications, emphasis is being put on making it numerically efficient so that large event samples can be generated quickly. In its present form, the code can generate one million full events in about 12 seconds on a MacBook laptop computer. The development of this qualitatively new tool is still at an early stage, and quantitative reproduction of existing data should not be expected until a number of detailed refinements have been implemented.

  3. Event-by-Event Simulation of Induced Fission

    International Nuclear Information System (INIS)

    Vogt, R; Randrup, J

    2007-01-01

    We are developing a novel code that treats induced fission by statistical (or Monte Carlo) simulation of individual decay chains. After its initial excitation, the fissionable compound nucleus may either deexcite by evaporation or undergo binary fission into a large number of fission channels, each with different energetics involving both energy dissipation and deformed scission prefragments. After separation and Coulomb acceleration, each fission fragment undergoes a succession of individual (neutron) evaporations, leading to two bound but still excited fission products (that may further decay electromagnetically and, ultimately, weakly), as well as typically several neutrons. (The inclusion of other possible ejectiles is planned.) This kind of approach makes it possible to study more detailed observables than could be addressed with previous treatments, which have tended to focus on average quantities. In particular, any type of correlation observable can readily be extracted from a generated set of events. With a view towards making the code practically useful in a variety of applications, emphasis is being put on making it numerically efficient so that large event samples can be generated quickly. In its present form, the code can generate one million full events in about 12 seconds on a MacBook laptop computer. The development of this qualitatively new tool is still at an early stage, and quantitative reproduction of existing data should not be expected until a number of detailed refinements have been implemented.

  4. Simulating flaring events in complex active regions driven by observed magnetograms

    Science.gov (United States)

    Dimitropoulou, M.; Isliker, H.; Vlahos, L.; Georgoulis, M. K.

    2011-05-01

    Context. We interpret solar flares as events originating in active regions that have reached the self organized critical state, by using a refined cellular automaton model with initial conditions derived from observations. Aims: We investigate whether the system, with its imposed physical elements, reaches a self organized critical state and whether well-known statistical properties of flares, such as scaling laws observed in the distribution functions of characteristic parameters, are reproduced after this state has been reached. Methods: To investigate whether the distribution functions of total energy, peak energy and event duration follow the expected scaling laws, we first applied a nonlinear force-free extrapolation that reconstructs the three-dimensional magnetic fields from two-dimensional vector magnetograms. We then locate magnetic discontinuities exceeding a threshold in the Laplacian of the magnetic field. These discontinuities are relaxed in local diffusion events, implemented in the form of cellular automaton evolution rules. Subsequent loading and relaxation steps lead the system to self organized criticality, after which the statistical properties of the simulated events are examined. Physical requirements, such as the divergence-free condition for the magnetic field vector, are approximately imposed on all elements of the model. Results: Our results show that self organized criticality is indeed reached when applying specific loading and relaxation rules. Power-law indices obtained from the distribution functions of the modeled flaring events are in good agreement with observations. Single power laws (peak and total flare energy) are obtained, as are power laws with exponential cutoff and double power laws (flare duration). The results are also compared with observational X-ray data from the GOES satellite for our active-region sample. Conclusions: We conclude that well-known statistical properties of flares are reproduced after the system has
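    The load/relax cycle that drives such a cellular automaton to self-organized criticality can be illustrated with the classic sandpile model, a deliberately simplified stand-in for the magnetogram-driven automaton of the paper (the threshold rule, lattice size, and grain loading below are generic textbook choices, not the authors' rules).

```python
import random

random.seed(3)
N, THRESHOLD = 20, 4
grid = [[0] * N for _ in range(N)]

def relax():
    """Topple every over-threshold cell until the lattice is stable;
    return the avalanche size (number of topplings) as the event size."""
    size = 0
    unstable = [(i, j) for i in range(N) for j in range(N)
                if grid[i][j] >= THRESHOLD]
    while unstable:
        i, j = unstable.pop()
        if grid[i][j] < THRESHOLD:
            continue
        grid[i][j] -= 4
        size += 1
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ni, nj = i + di, j + dj
            if 0 <= ni < N and 0 <= nj < N:        # open boundaries lose grains
                grid[ni][nj] += 1
                if grid[ni][nj] >= THRESHOLD:
                    unstable.append((ni, nj))
    return size

event_sizes = []
for _ in range(20000):                             # loading step: add one grain
    i, j = random.randrange(N), random.randrange(N)
    grid[i][j] += 1
    s = relax()
    if s:
        event_sizes.append(s)
```

    Once the lattice saturates, the recorded event sizes span many scales, which is the qualitative signature behind the power-law distributions examined in the paper.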

  5. Integration of scheduling and discrete event simulation systems to improve production flow planning

    Science.gov (United States)

    Krenczyk, D.; Paprocka, I.; Kempa, W. M.; Grabowik, C.; Kalinowski, K.

    2016-08-01

    The increased availability of data and of computer-aided technologies such as MRP I/II, ERP and MES systems allows producers to be more adaptive to market dynamics and to improve production scheduling. Integration of production scheduling with computer modelling, simulation and visualization systems can be useful in the analysis of production system constraints related to the efficiency of manufacturing systems. An integration methodology based on a semi-automatic model generation method is proposed to eliminate problems associated with model complexity and the labour-intensive, time-consuming process of simulation model creation. Data mapping and data transformation techniques for the proposed method have been applied. The approach is illustrated with examples of practical implementation of the proposed method using the KbRS scheduling system and the Enterprise Dynamics simulation system.

  6. Corpuscular event-by-event simulation of quantum optics experiments: application to a quantum-controlled delayed-choice experiment

    International Nuclear Information System (INIS)

    De Raedt, Hans; Delina, M; Jin, Fengping; Michielsen, Kristel

    2012-01-01

    A corpuscular simulation model of optical phenomena that does not require knowledge of the solution of a wave equation of the whole system and reproduces the results of Maxwell's theory by generating detection events one by one is discussed. The event-based corpuscular model gives a unified description of multiple-beam fringes of a plane parallel plate and a single-photon Mach-Zehnder interferometer, Wheeler's delayed choice, photon tunneling, quantum eraser, two-beam interference, Einstein-Podolsky-Rosen-Bohm and Hanbury Brown-Twiss experiments. The approach is illustrated by applying it to a recent proposal for a quantum-controlled delayed-choice experiment, demonstrating that this thought experiment, too, can be understood in terms of particle processes only.

  7. Multi Agent System Based Wide Area Protection against Cascading Events

    DEFF Research Database (Denmark)

    Liu, Zhou; Chen, Zhe; Liu, Leo

    2012-01-01

    In this paper, a multi-agent system based wide area protection scheme is proposed in order to prevent long term voltage instability induced cascading events. The distributed relays and controllers work as a device agent which not only executes the normal function automatically but also can...... the effectiveness of proposed protection strategy. The simulation results indicate that the proposed multi agent control system can effectively coordinate the distributed relays and controllers to prevent the long term voltage instability induced cascading events....

  8. Running Parallel Discrete Event Simulators on Sierra

    Energy Technology Data Exchange (ETDEWEB)

    Barnes, P. D. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Jefferson, D. R. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2015-12-03

    In this proposal we consider porting the ROSS/Charm++ simulator and the discrete event models that run under its control so that they run on the Sierra architecture and make efficient use of the Volta GPUs.

  9. ANALYSIS OF INPATIENT HOSPITAL STAFF MENTAL WORKLOAD BY MEANS OF DISCRETE-EVENT SIMULATION

    Science.gov (United States)

    2016-03-24

    ANALYSIS OF INPATIENT HOSPITAL STAFF MENTAL WORKLOAD BY MEANS OF DISCRETE-EVENT SIMULATION (AFIT-ENV-MS-16-M-166; distribution unlimited). Erich W

  10. Manual for the Jet Event and Background Simulation Library(JEBSimLib)

    Energy Technology Data Exchange (ETDEWEB)

    Heinz, Matthias [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Soltz, Ron [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Angerami, Aaron [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2017-08-29

    Jets are the collimated streams of particles resulting from hard scattering in the initial state of high-energy collisions. In heavy-ion collisions, jets interact with the quark-gluon plasma (QGP) before freezeout, providing a probe into the internal structure and properties of the QGP. In order to study jets, background must be subtracted from the measured event, potentially introducing a bias. We aim to understand and quantify this subtraction bias. PYTHIA, a library to simulate pure jet events, is used to simulate a model for a signature with one pure jet (a photon) and one quenched jet, where all quenched particle momenta are reduced by a user-defined constant fraction. Background for the event is simulated using multiplicity values generated by the TRENTO initial state model of heavy-ion collisions fed into a thermal model consisting of a 3-dimensional Boltzmann distribution for particle types and momenta. Data from the simulated events is used to train a statistical model, which computes a posterior distribution of the quench factor for a data set. The model was tested first on pure jet events and then on full events including the background. This model will allow for a quantitative determination of biases induced by various methods of background subtraction.
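    The final statistical step, inferring a posterior over the quench factor, can be sketched with a simple grid posterior. The Gaussian smearing model and every number below are our assumptions for illustration, not the report's actual model.

```python
import math
import random

random.seed(5)
TRUE_Q, P_TRUE, SIGMA = 0.7, 100.0, 5.0

# Simulated "measured" quenched-jet momenta: quenched truth plus smearing
# (an assumed stand-in for the jet-plus-background reconstruction).
data = [TRUE_Q * P_TRUE + random.gauss(0.0, SIGMA) for _ in range(50)]

def log_likelihood(q):
    return sum(-0.5 * ((p - q * P_TRUE) / SIGMA) ** 2 for p in data)

qs = [0.5 + 0.01 * i for i in range(41)]           # grid over q in [0.5, 0.9]
logls = [log_likelihood(q) for q in qs]
m = max(logls)
weights = [math.exp(l - m) for l in logls]         # flat prior -> posterior shape
total = sum(weights)
posterior = [w / total for w in weights]
q_map = qs[posterior.index(max(posterior))]        # posterior mode
```

    Comparing the posterior obtained from pure jet events with the one from full events (signal plus subtracted background) is what exposes the subtraction bias the report targets.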

  11. Simulation and study of small numbers of random events

    Science.gov (United States)

    Shelton, R. D.

    1986-01-01

    Random events were simulated by computer and subjected to various statistical methods to extract important parameters. Various forms of curve fitting were explored, such as least squares, least distance from a line, and maximum likelihood. Problems considered were dead time, exponential decay, and spectrum extraction from cosmic ray data, using both binned data and data from individual events. Computer programs, mostly of an iterative nature, were developed to perform these simulations and extractions and are partially listed as appendices. The mathematical basis for the computer programs is given.
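    One of the extractions mentioned, fitting an exponential decay by maximum likelihood, reduces to a closed-form estimator; the decay constant and sample size below are assumed values for illustration. For $f(t) = \lambda e^{-\lambda t}$, setting $\mathrm{d}\log L/\mathrm{d}\lambda = n/\lambda - \sum t_i = 0$ gives $\hat\lambda = n / \sum t_i$.

```python
import random

random.seed(42)
TRUE_LAMBDA = 2.0

# Simulate a modest number of random decay times (unbinned, per-event data).
events = [random.expovariate(TRUE_LAMBDA) for _ in range(200)]

# Maximum-likelihood estimate: lambda_hat = n / sum(t_i), i.e. 1 / sample mean.
lam_hat = len(events) / sum(events)
```

    With small samples the estimator's spread is roughly $\lambda/\sqrt{n}$, which is why the document's interest in "small numbers of random events" makes the choice of method matter.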

  12. Dynamic information architecture system (DIAS) : multiple model simulation management

    International Nuclear Information System (INIS)

    Simunich, K. L.; Sydelko, P.; Dolph, J.; Christiansen, J.

    2002-01-01

    Dynamic Information Architecture System (DIAS) is a flexible, extensible, object-based framework for developing and maintaining complex multidisciplinary simulations of a wide variety of application contexts. The modeling domain of a specific DIAS-based simulation is determined by (1) software Entity (domain-specific) objects that represent the real-world entities that comprise the problem space (atmosphere, watershed, human), and (2) simulation models and other data processing applications that express the dynamic behaviors of the domain entities. In DIAS, models communicate only with Entity objects, never with each other. Each Entity object has a number of Parameter and Aspect (of behavior) objects associated with it. The Parameter objects contain the state properties of the Entity object. The Aspect objects represent the behaviors of the Entity object and how it interacts with other objects. DIAS extends the "Object" paradigm by abstracting the object's dynamic behaviors, separating the "WHAT" from the "HOW." DIAS object class definitions contain an abstract description of the various aspects of the object's behavior (the WHAT), but no implementation details (the HOW). Separate DIAS models/applications carry the implementation of object behaviors (the HOW). Any model deemed appropriate, including existing legacy-type models written in other languages, can drive entity object behavior. The DIAS design promotes plug-and-play of alternative models, with minimal recoding of existing applications. The DIAS Context Builder object builds a construct, or scenario, for the simulation, based on developer specification and user inputs. Because DIAS is a discrete event simulation system, there is a Simulation Manager object through which all events are processed. Any class that registers to receive events must implement an event handler (method) to process the event during execution. Event handlers can schedule other events; create or remove Entities from the

  13. Event Index - a LHCb Event Search System

    CERN Document Server

    INSPIRE-00392208; Kazeev, Nikita; Redkin, Artem

    2015-12-23

    LHC experiments generate up to $10^{12}$ events per year. This paper describes Event Index - an event search system. Event Index's primary function is quickly selecting subsets of events from a combination of conditions, such as the estimated decay channel or stripping lines output. Event Index is essentially Apache Lucene optimized for read-only indexes distributed over independent shards on independent nodes.
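    The select-by-conditions operation at the heart of such a system can be illustrated with a toy inverted index, a stand-in for the Lucene machinery; the field names and values below are invented for illustration.

```python
from collections import defaultdict

# Inverted index: (field, value) -> set of matching event ids (a "postings" set).
index = defaultdict(set)

def add_event(event_id, fields):
    for key, value in fields.items():
        index[(key, value)].add(event_id)

def select(**conditions):
    """Return event ids matching ALL conditions: intersect the postings sets."""
    postings = [index[item] for item in conditions.items()]
    if not postings:
        return set()
    result = set(postings[0])
    for p in postings[1:]:
        result &= p
    return result

# Invented example records, loosely shaped like decay-channel/stripping fields.
add_event(1, {"channel": "B0->K*mumu", "stripping": "lineA"})
add_event(2, {"channel": "B0->K*mumu", "stripping": "lineB"})
add_event(3, {"channel": "Bs->Jpsiphi", "stripping": "lineA"})

hits = select(channel="B0->K*mumu", stripping="lineA")
```

    Sharding, as in the paper, amounts to running this same query independently on each node's index and unioning the results.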

  14. Design and simulation for real-time distributed processing systems

    International Nuclear Information System (INIS)

    Legrand, I.C.; Gellrich, A.; Gensah, U.; Leich, H.; Wegner, P.

    1996-01-01

    The aim of this work is to provide a proper framework for the simulation and optimization of the event building, the on-line third-level trigger, and the complete event reconstruction processor farm for the future HERA-B experiment. A discrete event, process-oriented simulation developed in concurrent μC++ is used to model the farm nodes, running under multi-tasking constraints, and the different types of switching elements and digital signal processors interconnected to distribute the data through the system. An adequate graphic interface to the simulation, which allows features to be monitored on-line and trace files to be analyzed, provides a powerful development tool for evaluating and designing parallel processing architectures. Control software and data flow protocols for event building and dynamic processor allocation are presented for two architectural models. (author)

  15. Simulation of Flash-Flood-Producing Storm Events in Saudi Arabia Using the Weather Research and Forecasting Model

    KAUST Repository

    Deng, Liping

    2015-05-01

    The challenges of monitoring and forecasting flash-flood-producing storm events in data-sparse and arid regions are explored using the Weather Research and Forecasting (WRF) Model (version 3.5) in conjunction with a range of available satellite, in situ, and reanalysis data. Here, we focus on characterizing the initial synoptic features and examining the impact of model parameterization and resolution on the reproduction of a number of flood-producing rainfall events that occurred over the western Saudi Arabian city of Jeddah. Analysis from the European Centre for Medium-Range Weather Forecasts (ECMWF) interim reanalysis (ERA-Interim) data suggests that mesoscale convective systems associated with strong moisture convergence ahead of a trough were the major initial features for the occurrence of these intense rain events. The WRF Model was able to simulate the heavy rainfall, with driving convective processes well characterized by a high-resolution cloud-resolving model. The use of higher (1 km vs 5 km) resolution along the Jeddah coastline favors the simulation of local convective systems and adds value to the simulation of heavy rainfall, especially for deep-convection-related extreme values. At the 5-km resolution, corresponding to an intermediate study domain, simulation without a cumulus scheme led to the formation of deeper convective systems and enhanced rainfall around Jeddah, illustrating the need for careful model scheme selection in this transition resolution. In analysis of multiple nested WRF simulations (25, 5, and 1 km), localized volume and intensity of heavy rainfall together with the duration of rainstorms within the Jeddah catchment area were captured reasonably well, although there was evidence of some displacements of rainstorm events.

  16. Event-based scenario manager for multibody dynamics simulation of heavy load lifting operations in shipyards

    Directory of Open Access Journals (Sweden)

    Sol Ha

    2016-01-01

    Full Text Available This paper suggests an event-based scenario manager capable of creating and editing a scenario for shipbuilding process simulation based on multibody dynamics. To configure various situations in shipyards and connect easily with multibody dynamics, the proposed method has two main concepts: an Actor and an Action List. The Actor represents the atomic unit of action in the multibody dynamics and can be connected to a specific component of the dynamics kernel such as the body and joint. The user can make up a scenario by combining the actors. The Action List contains information for arranging and executing the actors. Since the shipbuilding process is a kind of event-based sequence, all simulation models were configured using the Discrete EVent System Specification (DEVS) formalism. The proposed method was applied to simulations of various operations in shipyards such as lifting and erection of a block and heavy load lifting operations using multiple cranes.

  17. Evaluation and simulation of event building techniques for a detector at the LHC

    CERN Document Server

    Spiwoks, R

    1995-01-01

    The main objectives of future experiments at the Large Hadron Collider are the search for the Higgs boson (or bosons), the verification of the Standard Model and the search beyond the Standard Model in a new energy range up to a few TeV. These experiments will have to cope with unprecedentedly high data rates and will need event building systems which can offer a bandwidth of 1 to 100 GB/s and which can assemble events from 100 to 1000 readout memories at rates of 1 to 100 kHz. This work investigates the feasibility of parallel event building systems using commercially available high-speed interconnects and switches. Studies are performed by building a small-scale prototype and by modelling this prototype and realistic architectures with discrete-event simulations. The prototype is based on the HiPPI standard and uses commercially available VME-HiPPI interfaces and a HiPPI switch together with modular and scalable software. The setup operates successfully as a parallel event building system of limited size in...

  18. Transient simulation for a real-time operator advisor expert system

    International Nuclear Information System (INIS)

    Jakubowski, T.; Hajek, B.K.; Miller, D.W.; Bhatnagar, R.

    1990-01-01

    An Operator Advisor (OA) consisting of four integrated expert systems has been under development at The Ohio State University since 1985. The OA, designed for a General Electric BWR-6 plant, has used the Perry Nuclear Power Plant's full-scope simulator near Cleveland, Ohio (USA) as the reference plant. The primary goal of this development has been to provide a single system which not only performs monitoring and diagnosis functions, but also provides fault mitigation procedures to the operator, monitors the performance of these procedures and provides backup procedures should the initial ones fail. To test the system off-line, a transient event simulation methodology has been developed. The simulator employs event scenarios from the Perry simulator. Scenarios are selected to test both inter- and intra-modular system behavior and response and to verify the consistency and accuracy of the knowledge base. This paper describes the OA architecture and design and the transient simulation. A discussion of a sample scenario test, including the rationale for scenario selection, is included as an example. The results of the testing demonstrate the value of off-line transient simulations in that they verify system operation and help to identify characteristics requiring further improvements

  19. Simulation of quantum computation : A deterministic event-based approach

    NARCIS (Netherlands)

    Michielsen, K; De Raedt, K; De Raedt, H

    We demonstrate that locally connected networks of machines that have primitive learning capabilities can be used to perform a deterministic, event-based simulation of quantum computation. We present simulation results for basic quantum operations such as the Hadamard and the controlled-NOT gate, and

  20. Simulation of Quantum Computation : A Deterministic Event-Based Approach

    NARCIS (Netherlands)

    Michielsen, K.; Raedt, K. De; Raedt, H. De

    2005-01-01

    We demonstrate that locally connected networks of machines that have primitive learning capabilities can be used to perform a deterministic, event-based simulation of quantum computation. We present simulation results for basic quantum operations such as the Hadamard and the controlled-NOT gate, and

  1. Simulating large-scale pedestrian movement using CA and event driven model: Methodology and case study

    Science.gov (United States)

    Li, Jun; Fu, Siyao; He, Haibo; Jia, Hongfei; Li, Yanzhong; Guo, Yi

    2015-11-01

    Large-scale regional evacuation is an important part of a national security emergency response plan, and the emergency evacuation of large commercial shopping areas, as typical service systems, is a topical research problem. A systematic methodology based on Cellular Automata with a Dynamic Floor Field and an event-driven model has been proposed, and the methodology has been examined in the context of a case study involving evacuation within a commercial shopping mall. Pedestrian movement is based on the Cellular Automata and the event-driven model. In this paper, the event-driven model is adopted to simulate pedestrian movement patterns; the simulation process is divided into a normal situation and an emergency evacuation. The model is composed of four layers: an environment layer, a customer layer, a clerk layer and a trajectory layer. For the simulation of pedestrians' movement routes, the model takes into account the purchase intention of customers and the density of pedestrians. Based on the evacuation model of Cellular Automata with a Dynamic Floor Field and the event-driven model, we can reflect the behavior characteristics of customers and clerks in normal and emergency evacuation situations. The distribution of individual evacuation times as a function of initial position and the dynamics of the evacuation process are studied. Our results indicate that the evacuation model using the combination of Cellular Automata with a Dynamic Floor Field and event-driven scheduling can be used to simulate the evacuation of pedestrian flows in indoor areas with complicated surroundings and to investigate the layout of a shopping mall.
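    The static part of such a floor-field CA can be sketched as a BFS distance-to-exit field plus greedy, conflict-avoiding steps. The dynamic floor field and the purchase-intention term of the paper are omitted here, and the grid size, exit position and pedestrian positions are invented for illustration.

```python
from collections import deque

W, H = 10, 6
EXIT = (0, 0)

# Static floor field: BFS distance from the exit over an obstacle-free grid.
field = {EXIT: 0}
queue = deque([EXIT])
while queue:
    x, y = queue.popleft()
    for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
        if 0 <= nx < W and 0 <= ny < H and (nx, ny) not in field:
            field[(nx, ny)] = field[(x, y)] + 1
            queue.append((nx, ny))

pedestrians = {(9, 5), (5, 3), (2, 1)}
steps = 0
while pedestrians:
    steps += 1
    moved = set()
    # Pedestrians closest to the exit move first; each steps to the free
    # neighboring cell with the lowest floor-field value (or stays put).
    for p in sorted(pedestrians, key=lambda c: field[c]):
        options = [n for n in ((p[0] + 1, p[1]), (p[0] - 1, p[1]),
                               (p[0], p[1] + 1), (p[0], p[1] - 1))
                   if n in field and n not in moved and n not in pedestrians]
        target = min(options + [p], key=lambda c: field[c])
        if target == EXIT:
            continue                      # reached the exit: evacuated
        moved.add(target)
    pedestrians = moved
```

    On this open grid the BFS field equals Manhattan distance, so the evacuation time is set by the farthest pedestrian; the `moved` set is the cell-exclusion rule that keeps two pedestrians from entering one cell in the same step.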

  2. A Madden-Julian oscillation event realistically simulated by a global cloud-resolving model.

    Science.gov (United States)

    Miura, Hiroaki; Satoh, Masaki; Nasuno, Tomoe; Noda, Akira T; Oouchi, Kazuyoshi

    2007-12-14

    A Madden-Julian Oscillation (MJO) is a massive weather event consisting of deep convection coupled with atmospheric circulation, moving slowly eastward over the Indian and Pacific Oceans. Despite its enormous influence on many weather and climate systems worldwide, it has proven very difficult to simulate an MJO because of assumptions about cumulus clouds in global meteorological models. Using a model that allows direct coupling of the atmospheric circulation and clouds, we successfully simulated the slow eastward migration of an MJO event. Topography, the zonal sea surface temperature gradient, and interplay between eastward- and westward-propagating signals controlled the timing of the eastward transition of the convective center. Our results demonstrate the potential for making month-long MJO predictions when global cloud-resolving models with realistic initial conditions are used.

  3. Synchronized Phasor Measurements of a Power System Event in Eastern Denmark

    DEFF Research Database (Denmark)

    Rasmussen, Joana; Jørgensen, Preben

    2003-01-01

    … The outage of the 400-kV tie-line weakened the Eastern Danish power system and excited power oscillations in the interconnected power systems. During this event, prototype Phasor Measurement Units (PMUs) gave the opportunity for real-time monitoring of positive-sequence voltage and current phasors using … satellite-based Global Positioning System (GPS). Comparisons between real-time recordings and results from dynamic simulations with PSS/E are presented. The main features from the simulation analysis are successfully verified by means of the corresponding synchronized phasor measurements. …

  4. Synchronized Phasor Measurements of a Power System Event in Eastern Denmark

    DEFF Research Database (Denmark)

    Rasmussen, Joana; Jørgensen, Preben

    2006-01-01

    … The outage of the 400-kV tie-line weakened the Eastern Danish power system and excited power oscillations in the interconnected power systems. During this event, prototype Phasor Measurement Units (PMUs) gave the opportunity for real-time monitoring of positive-sequence voltage and current phasors using … satellite-based Global Positioning System (GPS). Comparisons between real-time recordings and results from dynamic simulations with PSS/E are presented. The main features from the simulation analysis are successfully verified by means of the corresponding synchronized phasor measurements. …

  5. Simulation of Electrical Grid with OMNeT++ Open Source Discrete Event System Simulator

    Directory of Open Access Journals (Sweden)

    Sőrés Milán

    2016-12-01

    Full Text Available The simulation of electrical networks is very important before the development and servicing of electrical networks and grids can occur. Software packages exist that can simulate the behaviour of electrical grids under different operating conditions, but these simulation environments cannot be used in a single cloud-based project because they are not GNU-licensed software products. In this paper, an integrated framework is proposed that models and simulates communication networks. The design and operation of the simulation environment are investigated and a model of electrical components is proposed. After simulation, the simulation results were compared to manually computed results.

  6. The global event system

    International Nuclear Information System (INIS)

    Winans, J.

    1994-01-01

    The support for the global event system has been designed to allow an application developer to control the APS event generator and receiver boards. This is done by the use of four new record types. These records are customized and are only supported by the device support modules for the APS event generator and receiver boards. The use of the global event system and its associated records should not be confused with the vanilla EPICS events and the associated event records. They are very different.

  7. Dynamic information architecture system (DIAS) : multiple model simulation management.

    Energy Technology Data Exchange (ETDEWEB)

    Simunich, K. L.; Sydelko, P.; Dolph, J.; Christiansen, J.

    2002-05-13

    Dynamic Information Architecture System (DIAS) is a flexible, extensible, object-based framework for developing and maintaining complex multidisciplinary simulations of a wide variety of application contexts. The modeling domain of a specific DIAS-based simulation is determined by (1) software Entity (domain-specific) objects that represent the real-world entities comprising the problem space (atmosphere, watershed, human), and (2) simulation models and other data processing applications that express the dynamic behaviors of the domain entities. In DIAS, models communicate only with Entity objects, never with each other. Each Entity object has a number of Parameter and Aspect (of behavior) objects associated with it. The Parameter objects contain the state properties of the Entity object. The Aspect objects represent the behaviors of the Entity object and how it interacts with other objects. DIAS extends the "Object" paradigm by abstracting the object's dynamic behaviors, separating the "WHAT" from the "HOW." DIAS object class definitions contain an abstract description of the various aspects of the object's behavior (the WHAT) but no implementation details (the HOW). Separate DIAS models/applications carry the implementation of object behaviors (the HOW). Any model deemed appropriate, including existing legacy-type models written in other languages, can drive entity object behavior. The DIAS design promotes plug-and-play of alternative models, with minimal recoding of existing applications. The DIAS Context Builder object builds a construct, or scenario, for the simulation based on developer specification and user inputs. Because DIAS is a discrete event simulation system, there is a Simulation Manager object through which all events are processed. Any class that registers to receive events must implement an event handler (method) to process the event during execution. Event handlers

  8. MHD simulation of the Bastille day event

    Energy Technology Data Exchange (ETDEWEB)

    Linker, Jon, E-mail: linkerj@predsci.com; Torok, Tibor; Downs, Cooper; Lionello, Roberto; Titov, Viacheslav; Caplan, Ronald M.; Mikić, Zoran; Riley, Pete [Predictive Science Inc., 9990 Mesa Rim Road, Suite 170, San Diego CA, USA 92121 (United States)

    2016-03-25

    We describe a time-dependent, thermodynamic, three-dimensional MHD simulation of the July 14, 2000 coronal mass ejection (CME) and flare. The simulation starts with a background corona developed using an MDI-derived magnetic map for the boundary condition. Flux ropes using the modified Titov-Demoulin (TDm) model are used to energize the pre-event active region, which is then destabilized by photospheric flows that cancel flux near the polarity inversion line. More than 10³³ ergs are impulsively released in the simulated eruption, driving a CME at 1500 km/s, close to the observed speed of 1700 km/s. The post-flare emission in the simulation is morphologically similar to the observed post-flare loops. The resulting flux rope that propagates to 1 AU is similar in character to the flux rope observed at 1 AU, but the simulated ICME center passes 15° north of Earth.

  9. QUALITY THROUGH INTEGRATION OF PRODUCTION AND SHOP FLOOR MANAGEMENT BY DISCRETE EVENT SIMULATION

    Directory of Open Access Journals (Sweden)

    Zoran Mirović

    2007-06-01

    Full Text Available With the intention of integrating strategic and tactical decision making and developing the capability to reconfigure and synchronize plans and schedules within a very short cycle time, many firms have adopted ERP and Advanced Planning and Scheduling (APS) technologies. The final goal is a purposeful scheduling system that steers the current, high-priority needs of the shop floor in the right direction while remaining consistent with long-term production plans. The difference, and the power, of Discrete-Event Simulation (DES) is its ability to mimic dynamic manufacturing systems consisting of complex structures and many heterogeneous interacting components. This paper describes such an integrated system (ERP/APS/DES) and draws attention to the essential role of simulation-based scheduling within it.

  10. Advanced Reactor Passive System Reliability Demonstration Analysis for an External Event

    Directory of Open Access Journals (Sweden)

    Matthew Bucknor

    2017-03-01

    Full Text Available Many advanced reactor designs rely on passive systems to fulfill safety functions during accident sequences. These systems depend heavily on boundary conditions to induce a motive force, meaning the system can fail to operate as intended because of deviations in boundary conditions, rather than as the result of physical failures. Furthermore, passive systems may operate in intermediate or degraded modes. These factors make passive system operation difficult to characterize within a traditional probabilistic framework that only recognizes discrete operating modes and does not allow for the explicit consideration of time-dependent boundary conditions. Argonne National Laboratory has been examining various methodologies for assessing passive system reliability within a probabilistic risk assessment for a station blackout event at an advanced small modular reactor. This paper provides an overview of a passive system reliability demonstration analysis for an external event. Considering an earthquake with the possibility of site flooding, the analysis focuses on the behavior of the passive Reactor Cavity Cooling System following potential physical damage and system flooding. The assessment approach seeks to combine mechanistic and simulation-based methods to leverage the benefits of the simulation-based approach without the need to substantially deviate from conventional probabilistic risk assessment techniques. Although this study is presented as only an example analysis, the results appear to demonstrate a high level of reliability of the Reactor Cavity Cooling System (and the reactor system in general) for the postulated transient event.

  11. Advanced reactor passive system reliability demonstration analysis for an external event

    Energy Technology Data Exchange (ETDEWEB)

    Bucknor, Matthew; Grabaskas, David; Brunett, Acacia J.; Grelle, Austin [Argonne National Laboratory, Argonne (United States)

    2017-03-15

    Many advanced reactor designs rely on passive systems to fulfill safety functions during accident sequences. These systems depend heavily on boundary conditions to induce a motive force, meaning the system can fail to operate as intended because of deviations in boundary conditions, rather than as the result of physical failures. Furthermore, passive systems may operate in intermediate or degraded modes. These factors make passive system operation difficult to characterize within a traditional probabilistic framework that only recognizes discrete operating modes and does not allow for the explicit consideration of time-dependent boundary conditions. Argonne National Laboratory has been examining various methodologies for assessing passive system reliability within a probabilistic risk assessment for a station blackout event at an advanced small modular reactor. This paper provides an overview of a passive system reliability demonstration analysis for an external event. Considering an earthquake with the possibility of site flooding, the analysis focuses on the behavior of the passive Reactor Cavity Cooling System following potential physical damage and system flooding. The assessment approach seeks to combine mechanistic and simulation-based methods to leverage the benefits of the simulation-based approach without the need to substantially deviate from conventional probabilistic risk assessment techniques. Although this study is presented as only an example analysis, the results appear to demonstrate a high level of reliability of the Reactor Cavity Cooling System (and the reactor system in general) for the postulated transient event.

  12. Advanced reactor passive system reliability demonstration analysis for an external event

    International Nuclear Information System (INIS)

    Bucknor, Matthew; Grabaskas, David; Brunett, Acacia J.; Grelle, Austin

    2017-01-01

    Many advanced reactor designs rely on passive systems to fulfill safety functions during accident sequences. These systems depend heavily on boundary conditions to induce a motive force, meaning the system can fail to operate as intended because of deviations in boundary conditions, rather than as the result of physical failures. Furthermore, passive systems may operate in intermediate or degraded modes. These factors make passive system operation difficult to characterize within a traditional probabilistic framework that only recognizes discrete operating modes and does not allow for the explicit consideration of time-dependent boundary conditions. Argonne National Laboratory has been examining various methodologies for assessing passive system reliability within a probabilistic risk assessment for a station blackout event at an advanced small modular reactor. This paper provides an overview of a passive system reliability demonstration analysis for an external event. Considering an earthquake with the possibility of site flooding, the analysis focuses on the behavior of the passive Reactor Cavity Cooling System following potential physical damage and system flooding. The assessment approach seeks to combine mechanistic and simulation-based methods to leverage the benefits of the simulation-based approach without the need to substantially deviate from conventional probabilistic risk assessment techniques. Although this study is presented as only an example analysis, the results appear to demonstrate a high level of reliability of the Reactor Cavity Cooling System (and the reactor system in general) for the postulated transient event.

  13. Non-Lipschitz Dynamics Approach to Discrete Event Systems

    Science.gov (United States)

    Zak, M.; Meyers, R.

    1995-01-01

    This paper presents and discusses a mathematical formalism for simulation of discrete event dynamics (DED) - a special type of 'man-made' system designed to aid specific areas of information processing. A main objective is to demonstrate that the mathematical formalism for DED can be based upon the terminal model of Newtonian dynamics which allows one to relax Lipschitz conditions at some discrete points.

  14. Rare event simulation in radiation transport

    International Nuclear Information System (INIS)

    Kollman, C.

    1993-10-01

    This dissertation studies methods for estimating extremely small probabilities by Monte Carlo simulation. Problems in radiation transport typically involve estimating very rare events or the expected value of a random variable that is, with overwhelming probability, equal to zero. These problems often have high-dimensional state spaces and irregular geometries, so analytic solutions are not possible. Monte Carlo simulation must be used to estimate the radiation dosage transported to a particular location. If the area is well shielded, the probability of any one particular particle getting through is very small. Because of the large number of particles involved, even a tiny fraction penetrating the shield may represent an unacceptable level of radiation. It therefore becomes critical to be able to accurately estimate this extremely small probability. Importance sampling is a well-known technique for improving the efficiency of rare event calculations. Here, a new set of probabilities is used in the simulation runs, and the results are multiplied by the likelihood ratio between the true and simulated probabilities so as to keep the estimator unbiased. The variance of the resulting estimator is very sensitive to which new set of transition probabilities is chosen. It is shown that a zero-variance estimator does exist, but that its computation requires exact knowledge of the solution. A simple random walk with an associated killing model for the scatter of neutrons is introduced. Large deviation results for optimal importance sampling in random walks are extended to the case where killing is present. An adaptive "learning" algorithm for implementing importance sampling is given for more general Markov chain models of neutron scatter. For finite state spaces this algorithm is shown to give, with probability one, a sequence of estimates converging exponentially fast to the true solution.
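
    The likelihood-ratio reweighting described above can be illustrated on a toy problem: estimating a Gaussian tail probability by exponential tilting. This is a generic importance-sampling sketch under assumed parameters, not the dissertation's neutron-transport algorithm; the function name and defaults are hypothetical.

```python
import math
import random

def tail_probability(threshold, n, shift=0.0, seed=1):
    """Estimate p = P(Z > threshold) for Z ~ N(0, 1).  With shift = 0 this
    is naive Monte Carlo; with shift > 0 samples are drawn from the tilted
    density N(shift, 1) and each hit is weighted by the likelihood ratio
    phi(x) / phi(x - shift) = exp(-shift*x + shift**2/2), which keeps the
    estimator unbiased."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        x = rng.gauss(shift, 1.0)          # sample from the tilted density
        if x > threshold:
            total += math.exp(-shift * x + 0.5 * shift * shift)
    return total / n

# Tilting the sampler to the rare region: P(Z > 4) is about 3.2e-5, so
# naive sampling sees almost no hits, while shift=4 makes roughly half
# the samples land past the threshold.
p_is = tail_probability(4.0, 50_000, shift=4.0)
```

    The sensitivity to the tilt mentioned in the abstract is easy to reproduce: shifts far from the threshold inflate the weight variance dramatically.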

  15. Using Discrete Event Simulation to Model Integrated Commodities Consumption for a Launch Campaign of the Space Launch System

    Science.gov (United States)

    Leonard, Daniel; Parsons, Jeremy W.; Cates, Grant

    2014-01-01

    In May 2013, NASA's GSDO Program requested a study to develop a discrete event simulation (DES) model that analyzes the launch campaign process of the Space Launch System (SLS) from an integrated commodities perspective. The scope of the study includes launch countdown and scrub turnaround and focuses on four core launch commodities: hydrogen, oxygen, nitrogen, and helium. Previously, the commodities were only analyzed individually and deterministically for their launch support capability, but this study was the first to integrate them to examine the impact of their interactions on a launch campaign as well as the effects of process variability on commodity availability. The study produced a validated DES model with Rockwell Arena that showed that Kennedy Space Center's ground systems were capable of supporting a 48-hour scrub turnaround for the SLS. The model will be maintained and updated to provide commodity consumption analysis of future ground system and SLS configurations.

  16. DEVS representation of dynamical systems - Event-based intelligent control. [Discrete Event System Specification

    Science.gov (United States)

    Zeigler, Bernard P.

    1989-01-01

    It is shown how systems can be advantageously represented as discrete-event models by using DEVS (discrete-event system specification), a set-theoretic formalism. Such DEVS models provide a basis for the design of event-based logic control. In this control paradigm, the controller expects to receive confirming sensor responses to its control commands within definite time windows determined by its DEVS model of the system under control. The event-based control paradigm is applied in advanced robotics and intelligent automation, showing how classical process control can be readily interfaced with rule-based symbolic reasoning systems.
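
    The time-window idea in event-based logic control can be sketched in a few lines: the controller issues a command and expects a confirming sensor event within a definite window. This is a hypothetical illustration of the paradigm, not the DEVS formalism itself; all names and values are invented for the example.

```python
import heapq

def check_confirmation(command_time, sensor_delay, window):
    """Minimal event-based supervision: a control command opens a time
    window of length `window`; a confirming sensor event must arrive
    inside it, otherwise the controller flags a fault.  sensor_delay=None
    models a lost confirmation."""
    events = [(command_time, "command")]
    if sensor_delay is not None:
        events.append((command_time + sensor_delay, "sensor"))
    heapq.heapify(events)                  # process events in time order
    deadline = None
    while events:
        t, kind = heapq.heappop(events)
        if kind == "command":
            deadline = t + window          # DEVS model fixes this window
        elif kind == "sensor":
            return "ok" if t <= deadline else "fault"
    return "fault"                         # window expired, no confirmation
```

    A rule-based layer, as described in the abstract, would react to the returned "fault" symbol rather than to raw sensor values.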

  17. System modeling and simulation at EBR-II

    International Nuclear Information System (INIS)

    Dean, E.M.; Lehto, W.K.; Larson, H.A.

    1986-01-01

    The codes being developed and verified using EBR-II data are NATDEMO, DSNP, and CSYRED. NATDEMO is a variation of the Westinghouse DEMO code coupled to the NATCON code, previously used to simulate perturbations of reactor flow and inlet temperature and loss-of-flow transients leading to natural convection in EBR-II. CSYRED uses the Continuous System Modeling Program (CSMP) to simulate the EBR-II core, including power, temperature, control-rod movement reactivity effects, and flow, and is used primarily to model reactivity-induced power transients. The Dynamic Simulator for Nuclear Power Plants (DSNP) allows whole-plant, thermal-hydraulic simulation using specific component and system models called from libraries. It has been used to simulate flow coastdown transients, reactivity insertion events, and balance-of-plant perturbations.

  18. Simulation of Random Events for Air Traffic Applications

    Directory of Open Access Journals (Sweden)

    Stéphane Puechmorel

    2018-05-01

    Full Text Available Resilience to uncertainties must be ensured in air traffic management. Unexpected events can either be disruptive, like thunderstorms or the famous volcanic ash cloud resulting from the Eyjafjallajökull eruption in Iceland, or simply due to imprecise measurements or incomplete knowledge of the environment. While human operators are able to cope with such situations, this is generally not the case for automated decision support tools. Important examples originate from the numerous attempts made to design algorithms able to solve conflicts between aircraft occurring during flights. The STARGATE (STochastic AppRoach for naviGATion functions in uncertain Environment) project was initiated in order to study the feasibility of inherently robust automated planning algorithms that will not fail when subjected to random perturbations. A mandatory first step is the ability to simulate the usual stochastic phenomena impairing the system: delays due to airport platforms or air traffic control (ATC) and uncertainties in wind velocity. The work presented here details algorithms suitable for this simulation task.

  19. Wire chamber requirements and tracking simulation studies for tracking systems at the superconducting super collider

    International Nuclear Information System (INIS)

    Hanson, G.G.; Niczyporuk, B.B.; Palounek, A.P.T.

    1989-02-01

    Limitations placed on wire chambers by radiation damage and rate requirements in the SSC environment are reviewed. Possible conceptual designs for wire chamber tracking systems which meet these requirements are discussed. Computer simulation studies of tracking in such systems are presented. Simulations of events from interesting physics at the SSC, including hits from minimum bias background events, are examined. Results of some preliminary pattern recognition studies are given. Such computer simulation studies are necessary to determine the feasibility of wire chamber tracking systems for complex events in a high-rate environment such as the SSC. 11 refs., 9 figs., 1 tab

  20. Human errors during the simulations of an SGTR scenario: Application of the HERA system

    International Nuclear Information System (INIS)

    Jung, Won Dea; Whaley, April M.; Hallbert, Bruce P.

    2009-01-01

    Due to the need for data for Human Reliability Analysis (HRA), a number of data collection efforts have been undertaken in several different organizations. As a part of this effort, a human error analysis that focused on a set of simulator records of a Steam Generator Tube Rupture (SGTR) scenario was performed using the Human Event Repository and Analysis (HERA) system. This paper summarizes the process and results of the HERA analysis, including discussions of the usability of the HERA system for a human error analysis of simulator data. Five simulator records of an SGTR scenario were analyzed with the HERA analysis process in order to scrutinize the causes and mechanisms of the human-related events. From this study, the authors confirmed that HERA is a serviceable system that can qualitatively analyze human performance from simulator data. It was possible to identify the human-related events in the simulator data that affected system safety not only negatively but also positively. It was also possible to scrutinize the Performance Shaping Factors (PSFs) and the relevant contributory factors for each identified human event.

  1. Application of discrete event simulation to MRS design

    International Nuclear Information System (INIS)

    Bali, M.; Standley, W.

    1993-01-01

    The application of discrete event simulation to Monitored Retrievable Storage (MRS) material handling operations supported the MRS conceptual design effort and established a set of tools for use during MRS detail design and license application. The effort to develop a design analysis tool to support the MRS project started in 1991. The MRS simulation has so far identified potential savings and suggested methods of improving operations to enhance throughput. Early on, simulation aided the MRS conceptual design effort through the investigation of alternative cask handling operations and the sizing and sharing of expensive equipment. The simulation also helped analyze the operability of the current MRS design under various waste acceptance scenarios. Throughout the simulation effort, model development and experimentation resulted in early identification and resolution of several design and operational issues.

  2. Using system dynamics simulation for assessment of hydropower system safety

    Science.gov (United States)

    King, L. M.; Simonovic, S. P.; Hartford, D. N. D.

    2017-08-01

    Hydropower infrastructure systems are complex, high consequence structures which must be operated safely to avoid catastrophic impacts to human life, the environment, and the economy. Dam safety practitioners must have an in-depth understanding of how these systems function under various operating conditions in order to ensure the appropriate measures are taken to reduce system vulnerability. Simulation of system operating conditions allows modelers to investigate system performance from the beginning of an undesirable event to full system recovery. System dynamics simulation facilitates the modeling of dynamic interactions among complex arrangements of system components, providing outputs of system performance that can be used to quantify safety. This paper presents the framework for a modeling approach that can be used to simulate a range of potential operating conditions for a hydropower infrastructure system. Details of the generic hydropower infrastructure system simulation model are provided. A case study is used to evaluate system outcomes in response to a particular earthquake scenario, with two system safety performance measures shown. Results indicate that the simulation model is able to estimate potential measures of system safety which relate to flow conveyance and flow retention. A comparison of operational and upgrade strategies is shown to demonstrate the utility of the model for comparing various operational response strategies, capital upgrade alternatives, and maintenance regimes. Results show that seismic upgrades to the spillway gates provide the largest improvement in system performance for the system and scenario of interest.
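
    A minimal stock-and-flow sketch in the spirit of system dynamics illustrates how such a model can track the two safety measures named above, flow conveyance and flow retention. This is not the paper's hydropower model; the function, parameters, and units are all hypothetical.

```python
def simulate_reservoir(inflow, capacity=100.0, gate_rate=20.0,
                       level0=50.0, dt=1.0):
    """Stock-and-flow sketch: the reservoir level (the stock) integrates
    inflow minus spillway discharge; discharge is limited by gate capacity,
    and any excess over the crest spills uncontrolled.  Returns the level
    history and the total uncontrolled spill."""
    level, history, spilled = level0, [], 0.0
    for q_in in inflow:
        q_out = min(gate_rate, level / dt)  # gates cannot release more than is stored
        level += dt * (q_in - q_out)        # integrate the stock
        if level > capacity:                # uncontrolled spill over the crest
            spilled += level - capacity
            level = capacity
        history.append(level)
    return history, spilled
```

    Comparing runs with a reduced `gate_rate` (modeling seismic damage to the spillway gates) against the baseline mirrors the kind of upgrade comparison described in the record.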

  3. Discrete events simulation of a route with traffic lights through automated control in real time

    Directory of Open Access Journals (Sweden)

    Rodrigo César Teixeira Baptista

    2013-03-01

    Full Text Available This paper presents the real-time integration and communication of a discrete event simulation model with an automatic control system. The simulation model of an intersection of roads with traffic lights was built in the Arena environment. Integration and communication were carried out over a network, and the control system was operated by a programmable logic controller. Scenarios were simulated for free, regular, and congested traffic situations. The results show the average number of vehicles that entered the system and that were retained, as well as the average total crossing time of vehicles on the road. In general, the model allowed evaluating the traffic behavior on each road and the controller commands for activating and deactivating the traffic lights.
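
    The event-driven core of such a model can be sketched with a priority queue of "arrive", "switch", and "depart" events for a single approach. This is a generic illustration, not the paper's Arena/PLC setup; all names and timings are hypothetical.

```python
import heapq

def simulate_signal(arrivals, green=10.0, red=10.0, service=2.0,
                    horizon=3600.0):
    """Event-driven model of one approach to a signalised junction:
    'arrive', 'switch' and 'depart' events are processed in time order;
    one queued vehicle crosses every `service` seconds while the light is
    green (the light starts red and switches green at t=0).  Returns
    (per-vehicle crossing delays, vehicles still retained in the queue)."""
    events = [(float(t), "arrive", float(t)) for t in arrivals]
    events.append((0.0, "switch", None))
    heapq.heapify(events)
    queue, delays = [], []
    is_green, server_free = False, True

    def try_depart(now):
        nonlocal server_free
        if is_green and server_free and queue:
            server_free = False
            heapq.heappush(events, (now + service, "depart", queue.pop(0)))

    while events:
        now, kind, data = heapq.heappop(events)
        if kind == "arrive":
            queue.append(data)             # data = the vehicle's arrival time
            try_depart(now)
        elif kind == "switch":
            is_green = not is_green
            nxt = now + (green if is_green else red)
            if nxt <= horizon:             # stop scheduling at the horizon
                heapq.heappush(events, (nxt, "switch", None))
            try_depart(now)
        else:                              # depart
            delays.append(now - data)
            server_free = True
            try_depart(now)
    return delays, queue
```

    The two reported measures fall out directly: the length of `delays` counts vehicles that crossed, and `queue` holds the retained ones.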

  4. A model management system for combat simulation

    OpenAIRE

    Dolk, Daniel R.

    1986-01-01

    The design and implementation of a model management system to support combat modeling is discussed. Structured modeling is introduced as a formalism for representing mathematical models. A relational information resource dictionary system is developed which can accommodate structured models. An implementation is described. Structured modeling is then compared to Jackson System Development (JSD) as a methodology for facilitating discrete event simulation. JSD is currently better at representin...

  5. The use of discrete-event simulation modelling to improve radiation therapy planning processes.

    Science.gov (United States)

    Werker, Greg; Sauré, Antoine; French, John; Shechter, Steven

    2009-07-01

    The planning portion of the radiation therapy treatment process at the British Columbia Cancer Agency is efficient but nevertheless contains room for improvement. The purpose of this study is to show how a discrete-event simulation (DES) model can be used to represent this complex process and to suggest improvements that may reduce the planning time and ultimately reduce overall waiting times. A simulation model of the radiation therapy (RT) planning process was constructed using the Arena simulation software, representing the complexities of the system. Several types of inputs feed into the model; these inputs come from historical data, a staff survey, and interviews with planners. The simulation model was validated against historical data and then used to test various scenarios to identify and quantify potential improvements to the RT planning process. Simulation modelling is an attractive tool for describing complex systems, and can be used to identify improvements to the processes involved. It is possible to use this technique in the area of radiation therapy planning with the intent of reducing process times and subsequent delays for patient treatment. In this particular system, reducing the variability and length of oncologist-related delays contributes most to improving the planning time.

  6. Discrete event simulation: Modeling simultaneous complications and outcomes

    NARCIS (Netherlands)

    Quik, E.H.; Feenstra, T.L.; Krabbe, P.F.M.

    2012-01-01

    OBJECTIVES: To present an effective and elegant model approach to deal with specific characteristics of complex modeling. METHODS: A discrete event simulation (DES) model with multiple complications and multiple outcomes that each can occur simultaneously was developed. In this DES model parameters,

  7. Concept of operator support system based on cognitive simulation

    International Nuclear Information System (INIS)

    Sasou, Kunihide; Takano, Kenichi

    1999-01-01

    Hazardous technologies such as chemical plants, nuclear power plants, etc. have introduced multi-layered defenses to prevent accidents. One of those defenses is experienced operators in control rooms. Once an abnormal condition occurs, they are the front-line people who cope with it. Therefore, operators' quick recognition of plant conditions and fast decision making on responses are quite important for trouble shooting. In order to help operators deal with abnormalities in process plants, many efforts were made to develop operator support systems from the early 1980s (IAEA, 1993). However, the boom in developing operator support systems has slumped due to the limitations of knowledge engineering, artificial intelligence, etc. (Yamamoto, 1998). These limitations also biased the focus of system development toward abnormality detection, root cause diagnosis, etc. (Hajek, Hashemi, Sharma and Chandrasekaran, 1986). Information or guidance about future plant behavior and strategies/tactics to deal with abnormal events is important and helpful for operators, but research and development of such systems made a belated start. Before developing these kinds of systems, it is essential to understand how operators deal with abnormalities. CRIEPI has been conducting a project to develop a computer system that simulates the behavior of operators dealing with abnormal operating conditions in a nuclear power plant. This project had two stages. In the first stage, the authors developed a prototype system that simulates the behavior of a team facing abnormal events in a very simplified power plant (Sasou, Takano and Yoshimura, 1995). In the second stage, the authors applied the simulation technique developed in the first stage to construct a system that simulates a team's behavior in a nuclear power plant. This paper briefly summarizes the simulation system developed in the second stage, the main mechanism for the simulation, and the concept of an operator support system based on this

  8. Developing a discrete event simulation model for university student shuttle buses

    Science.gov (United States)

    Zulkepli, Jafri; Khalid, Ruzelan; Nawawi, Mohd Kamal Mohd; Hamid, Muhammad Hafizan

    2017-11-01

    Providing shuttle buses for university students to attend their classes is crucial, especially when the number of students is large and the distances between their classes and residential halls are long. These factors, combined with the non-optimal current bus services, typically make the students wait longer, which eventually gives them cause to complain. To considerably reduce the waiting time, it is thus important to provide the optimal number of buses to transport students from location to location and effective route schedules that fulfil the students' demand in the relevant time ranges. The optimal bus number and schedules are to be determined and tested using a flexible decision platform. This paper thus models the current student shuttle bus services at a university using a Discrete Event Simulation approach. The model can flexibly simulate any changes configured to the current system and report their effects on the performance measures. How the model was conceptualized and formulated for future system configurations is the main focus of this paper.

  9. Integral-based event triggering controller design for stochastic LTI systems via convex optimisation

    Science.gov (United States)

    Mousavi, S. H.; Marquez, H. J.

    2016-07-01

    The presence of measurement noise in event-based systems can lower system efficiency both in terms of data exchange rate and performance. In this paper, an integral-based event-triggered control system is proposed for LTI systems with stochastic measurement noise. We show that the new mechanism is robust against noise, effectively reduces the flow of communication between plant and controller, and improves output performance. Using a Lyapunov approach, stability in the mean-square sense is proved. A simulated example illustrates the properties of our approach.
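
    The flavor of an integral-type trigger can be sketched with a toy scalar plant (the gain, threshold and dynamics below are illustrative assumptions, not the paper's design): rather than transmitting whenever the instantaneous error is large, the sensor transmits only when the accumulated squared error since the last transmission exceeds a budget, which makes the trigger less reactive to short noise spikes.

```python
def integral_trigger_run(duration=200.0, dt=0.01, threshold=0.05):
    """Event-triggered loop for the scalar plant x' = -x + u with
    u = -0.5 * x_sent, where x_sent is the last transmitted sample.
    A new sample is sent only when the integral of the squared gap
    between the true state and the held sample exceeds `threshold`."""
    x, x_sent = 1.0, 1.0
    integ, transmissions = 0.0, 0
    for _ in range(int(duration / dt)):
        gap = x - x_sent            # error introduced by holding the old sample
        integ += gap * gap * dt     # integral-type trigger condition
        if integ > threshold:       # budget exceeded: transmit a fresh sample
            x_sent, integ = x, 0.0
            transmissions += 1
        u = -0.5 * x_sent           # controller acts on the held sample
        x += (-x + u) * dt          # Euler step of the plant
    return transmissions, x

sent, x_final = integral_trigger_run()  # far fewer transmissions than time steps
```

    The state still converges while the channel stays mostly idle, which is the qualitative behavior the paper establishes for the noisy LTI case.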

  10. Vaporization studies of plasma interactive materials in simulated plasma disruption events

    International Nuclear Information System (INIS)

    Stone, C.A. IV; Croessmann, C.D.; Whitley, J.B.

    1988-03-01

    The melting and vaporization that occur when plasma facing materials are subjected to a plasma disruption will severely limit component lifetime and plasma performance. A series of high heat flux experiments was performed on a group of fusion reactor candidate materials to model material erosion which occurs during plasma disruption events. The Electron Beam Test System was used to simulate single disruption and multiple disruption phenomena. Samples of aluminum, nickel, copper, molybdenum, and 304 stainless steel were subjected to a variety of heat loads, ranging from 100 to 400 msec pulses of 8 to 18 kW/cm². It was found that the initial surface temperature of a material strongly influences the vaporization process and that multiple disruptions do not scale linearly with respect to single disruption events. 2 refs., 9 figs., 5 tabs

  11. Coupled atmosphere-ocean-wave simulations of a storm event over the Gulf of Lion and Balearic Sea

    Science.gov (United States)

    Renault, Lionel; Chiggiato, Jacopo; Warner, John C.; Gomez, Marta; Vizoso, Guillermo; Tintore, Joaquin

    2012-01-01

    The coastal areas of the North-Western Mediterranean Sea are among the most challenging places for ocean forecasting. This region is exposed to severe storm events of short duration. During these events, significant air-sea interactions, strong winds and a large sea-state can have catastrophic consequences in coastal areas. To investigate these air-sea interactions and the oceanic response to such events, we implemented the Coupled Ocean-Atmosphere-Wave-Sediment Transport Modeling System to simulate a severe storm that occurred in the Mediterranean Sea in May 2010. During this event, wind speed reached up to 25 m/s, inducing significant sea surface cooling (up to 2°C) over the Gulf of Lion (GoL) and along the storm track, and generating surface waves with a significant height of 6 m. It is shown that the event, associated with a cyclogenesis between the Balearic Islands and the GoL, is relatively well reproduced by the coupled system. A surface heat budget analysis showed that ocean vertical mixing was a major contributor to the cooling tendency along the storm track and in the GoL, where turbulent heat fluxes also played an important role. Sensitivity experiments on the ocean-atmosphere coupling suggested that the coupled system is sensitive to the momentum flux parameterization as well as to air-sea and air-wave coupling. Comparisons with available atmospheric and oceanic observations showed that the fully coupled system provides the most skillful simulation, illustrating the benefit of using a fully coupled ocean-atmosphere-wave model for the assessment of such storm events.

  12. NEVESIM: event-driven neural simulation framework with a Python interface.

    Science.gov (United States)

    Pecevski, Dejan; Kappel, David; Jonke, Zeno

    2014-01-01

    NEVESIM is a software package for event-driven simulation of networks of spiking neurons with a fast simulation core in C++, and a scripting user interface in the Python programming language. It supports simulation of heterogeneous networks with different types of neurons and synapses, and can be easily extended by the user with new neuron and synapse types. To enable heterogeneous networks and extensibility, NEVESIM is designed to decouple the simulation logic of communicating events (spikes) between the neurons at a network level from the implementation of the internal dynamics of individual neurons. In this paper we will present the simulation framework of NEVESIM, its concepts and features, as well as some aspects of the object-oriented design approaches and simulation strategies that were utilized to efficiently implement the concepts and functionalities of the framework. We will also give an overview of the Python user interface, its basic commands and constructs, and also discuss the benefits of integrating NEVESIM with Python. One of the valuable capabilities of the simulator is to simulate exactly and efficiently networks of stochastic spiking neurons from the recently developed theoretical framework of neural sampling. This functionality was implemented as an extension on top of the basic NEVESIM framework. Altogether, the intended purpose of the NEVESIM framework is to provide a basis for further extensions that support simulation of various neural network models incorporating different neuron and synapse types that can potentially also use different simulation strategies.
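
    The core idea of event-driven simulation can be sketched independently of NEVESIM's actual API (plain Python here, with made-up weights, delays and a leak-free neuron model, none of which come from the package): the simulator touches a neuron only when a spike event reaches it, pulling events in time order from a priority queue instead of sweeping all neurons on a fixed grid.

```python
import heapq

def run_events(conns, delays, external, threshold=1.0, t_end=50.0):
    """Minimal event-driven spiking network. conns[i] lists (target, weight)
    synapses out of neuron i; external lists (time, neuron, weight) input
    events. Membrane state changes only when an event is delivered."""
    v = [0.0] * len(conns)             # membrane potentials (no leak, for brevity)
    queue = list(external)
    heapq.heapify(queue)               # pending events, ordered by time
    out = []
    while queue:
        t, j, w = heapq.heappop(queue)
        if t > t_end:
            break
        v[j] += w
        if v[j] >= threshold:          # threshold crossing: neuron j spikes
            v[j] = 0.0
            out.append((t, j))
            for k, wk in conns[j]:     # schedule delayed spike deliveries
                heapq.heappush(queue, (t + delays[j], k, wk))
    return out

# two neurons in a loop; three external kicks make neuron 0 fire first
spikes = run_events(conns=[[(1, 1.0)], [(0, 1.0)]], delays=[1.0, 1.0],
                    external=[(0.0, 0, 0.4), (0.1, 0, 0.4), (0.2, 0, 0.4)])
```

    A time-stepped simulator would advance every neuron at every tick; here the cost scales with the number of spike events, which is the efficiency argument behind event-driven simulation cores.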

  13. Distributed event-triggered consensus tracking of second-order multi-agent systems with a virtual leader

    International Nuclear Information System (INIS)

    Cao Jie; Wu Zhi-Hai; Peng Li

    2016-01-01

    This paper investigates the consensus tracking problems of second-order multi-agent systems with a virtual leader via event-triggered control. A novel distributed event-triggered transmission scheme is proposed, which is intermittently examined at constant sampling instants. Only partial neighbor information and local measurements are required for event detection. Then the corresponding event-triggered consensus tracking protocol is presented to guarantee second-order multi-agent systems to achieve consensus tracking. Numerical simulations are given to illustrate the effectiveness of the proposed strategy. (paper)

  14. A verilog simulation of the CDF DAQ system

    Energy Technology Data Exchange (ETDEWEB)

    Schurecht, K.; Harris, R. (Fermi National Accelerator Lab., Batavia, IL (United States)); Sinervo, P.; Grindley, R. (Toronto Univ., ON (Canada). Dept. of Physics)

    1991-11-01

    A behavioral simulation of the CDF data acquisition system was written in the Verilog modeling language in order to investigate the effects of various improvements to the existing system. This system is modeled as five separate components that communicate with each other via Fastbus interrupt messages. One component of the system, the CDF event builder, is modeled in substantially greater detail due to its complex structure. This simulation has been verified by comparing its performance with that of the existing DAQ system. Possible improvements to the existing systems were studied using the simulation, and the optimal upgrade path for the system was chosen on the basis of these studies. The overall throughput of the modified system is estimated to be double that of the existing setup. Details of this modeling effort will be discussed, including a comparison of the modeled and actual performance of the existing system.

  15. A verilog simulation of the CDF DAQ system

    International Nuclear Information System (INIS)

    Schurecht, K.; Harris, R.; Sinervo, P.; Grindley, R.

    1991-11-01

    A behavioral simulation of the CDF data acquisition system was written in the Verilog modeling language in order to investigate the effects of various improvements to the existing system. This system is modeled as five separate components that communicate with each other via Fastbus interrupt messages. One component of the system, the CDF event builder, is modeled in substantially greater detail due to its complex structure. This simulation has been verified by comparing its performance with that of the existing DAQ system. Possible improvements to the existing systems were studied using the simulation, and the optimal upgrade path for the system was chosen on the basis of these studies. The overall throughput of the modified system is estimated to be double that of the existing setup. Details of this modeling effort will be discussed, including a comparison of the modeled and actual performance of the existing system

  16. Modeling extreme "Carrington-type" space weather events using three-dimensional global MHD simulations

    Science.gov (United States)

    Ngwira, Chigomezyo M.; Pulkkinen, Antti; Kuznetsova, Maria M.; Glocer, Alex

    2014-06-01

    There is a growing concern over possible severe societal consequences related to adverse space weather impacts on man-made technological infrastructure. In the last two decades, significant progress has been made toward the first-principles modeling of space weather events, and three-dimensional (3-D) global magnetohydrodynamics (MHD) models have been at the forefront of this transition, thereby playing a critical role in advancing our understanding of space weather. However, the modeling of extreme space weather events is still a major challenge, even for modern global MHD models. In this study, we introduce a specially adapted University of Michigan 3-D global MHD model for simulating extreme space weather events with a Dst footprint comparable to the Carrington superstorm of September 1859, based on the estimate by Tsurutani et al. (2003). Results are presented for a simulation run with "very extreme" constructed/idealized solar wind boundary conditions driving the magnetosphere. In particular, we describe the reaction of the magnetosphere-ionosphere system and the associated induced geoelectric field on the ground to such extreme driving conditions. The model setup is further tested using input data for an observed space weather event, the Halloween storm of October 2003, to verify the MHD model's consistency and to draw additional guidance for future work. This extreme space weather MHD model setup is designed specifically for practical application to the modeling of extreme geomagnetically induced electric fields, which can drive large currents in ground-based conductor systems such as power transmission grids. Therefore, our ultimate goal is to explore the level of geoelectric fields that can be induced by an assumed storm of the reported magnitude, i.e., Dst ≈ -1600 nT.

  17. Discrete-event simulation of nuclear-waste transport in geologic sites subject to disruptive events. Final report

    International Nuclear Information System (INIS)

    Aggarwal, S.; Ryland, S.; Peck, R.

    1980-01-01

    This report outlines a methodology to study the effects of disruptive events on nuclear waste material in stable geologic sites. The methodology is based upon developing a discrete events model that can be simulated on the computer. This methodology allows a natural development of simulation models that use computer resources in an efficient manner. Accurate modeling in this area depends in large part upon accurate modeling of ion transport behavior in the storage media. Unfortunately, developments in this area are not at a stage where there is any consensus on proper models for such transport. Consequently, our work is directed primarily towards showing how disruptive events can be properly incorporated in such a model, rather than as a predictive tool at this stage. When and if proper geologic parameters can be determined, then it would be possible to use this as a predictive model. Assumptions and their bases are discussed, and the mathematical and computer model are described

  18. Discrete simulation system based on artificial intelligence methods

    Energy Technology Data Exchange (ETDEWEB)

    Futo, I; Szeredi, J

    1982-01-01

    A discrete event simulation system based on the AI language Prolog is presented. The system, called t-Prolog, extends the traditional possibilities of simulation languages toward automatic problem solving by using backtracking in time and automatic model modification depending on logical deductions. As t-Prolog is an interactive tool, the user has the possibility to interrupt the simulation run to modify the model or to force it to return to a previous state to try possible alternatives. It admits the construction of goal-oriented or goal-seeking models with variable structure. Models are defined in a restricted version of the first-order predicate calculus using Horn clauses. 21 references.

  19. Evaluating TCMS Train-to-Ground communication performances based on the LTE technology and discrete event simulations

    DEFF Research Database (Denmark)

    Bouaziz, Maha; Yan, Ying; Kassab, Mohamed

    2018-01-01

    The paper evaluates TCMS Train-to-Ground communication performances based on the LTE (Long Term Evolution) network as an alternative communication technology, instead of GSM-R (Global System for Mobile communications-Railway), because of some capacity and capability limits. In a first step, a pure simulation is used to evaluate the network load for a high-speed scenario, when the LTE network is shared between the train and different passengers. The simulation is based on the discrete-event network simulator Riverbed Modeler. A second step focusses on a co-simulation testbed, to evaluate performances with real traffic based on Hardware-In-The-Loop and OpenAirInterface modules. Preliminary simulation and co-simulation results show that LTE provides good performance for the TCMS traffic exchange in terms of packet delay and data integrity.

  20. Event-by-event simulation of quantum phenomena: Application to Einstein-Podolsky-Rosen-Bohm experiments

    NARCIS (Netherlands)

    De Raedt, H.; De Raedt, K.; Michielsen, K.; Keimpema, K.; Miyashita, S.

    We review the data gathering and analysis procedure used in real Einstein-Podolsky-Rosen-Bohm experiments with photons and we illustrate the procedure by analyzing experimental data. Based on this analysis, we construct event-based computer simulation models in which every essential element in the

  1. An Efficient Simulation Method for Rare Events

    KAUST Repository

    Rached, Nadhir B.

    2015-01-07

    Estimating the probability that a sum of random variables (RVs) exceeds a given threshold is a well-known challenging problem. Closed-form expressions for the sum distribution do not generally exist, which has led to an increasing interest in simulation approaches. A crude Monte Carlo (MC) simulation is the standard technique for the estimation of this type of probability. However, this approach is computationally expensive, especially when dealing with rare events. Variance reduction techniques are alternative approaches that can improve the computational efficiency of naive MC simulations. We propose an Importance Sampling (IS) simulation technique based on the well-known hazard rate twisting approach, that presents the advantage of being asymptotically optimal for any arbitrary RVs. The wide scope of applicability of the proposed method is mainly due to our particular way of selecting the twisting parameter. It is worth observing that this interesting feature is rarely satisfied by variance reduction algorithms whose performances were only proven under some restrictive assumptions. It comes along with a good efficiency, illustrated by some selected simulation results comparing the performance of our method with that of an algorithm based on a conditional MC technique.
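
    For the special case of i.i.d. Exp(1) variables, the idea can be sketched with classical exponential twisting, a close relative of the hazard-rate twisting described above (the rule below for picking the twisting parameter, placing the twisted mean at the threshold, is a standard heuristic and an illustrative assumption, not the authors' selection):

```python
import math
import random

def pr_sum_exceeds(n=5, gamma=30.0, samples=20000, seed=7):
    """Estimate P(X1 + ... + Xn > gamma) for i.i.d. Exp(1) RVs.
    Sampling each Xi from the twisted density Exp(1 - theta) makes the
    rare event common; multiplying by the likelihood ratio
    exp(-theta * S) / (1 - theta)**n keeps the estimator unbiased."""
    theta = 1.0 - n / gamma            # twisted mean n/(1-theta) equals gamma
    rng = random.Random(seed)
    total = 0.0
    for _ in range(samples):
        s = sum(rng.expovariate(1.0 - theta) for _ in range(n))
        if s > gamma:
            total += math.exp(-theta * s) / (1.0 - theta) ** n
    return total / samples
```

    A crude MC run would need billions of samples to see even a handful of hits for a probability this small; the twisted estimator stabilizes with a few thousand. For Exp(1) summands the exact value is the Gamma(n, 1) tail, which provides a convenient correctness check.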

  2. An Efficient Simulation Method for Rare Events

    KAUST Repository

    Rached, Nadhir B.; Benkhelifa, Fatma; Kammoun, Abla; Alouini, Mohamed-Slim; Tempone, Raul

    2015-01-01

    Estimating the probability that a sum of random variables (RVs) exceeds a given threshold is a well-known challenging problem. Closed-form expressions for the sum distribution do not generally exist, which has led to an increasing interest in simulation approaches. A crude Monte Carlo (MC) simulation is the standard technique for the estimation of this type of probability. However, this approach is computationally expensive, especially when dealing with rare events. Variance reduction techniques are alternative approaches that can improve the computational efficiency of naive MC simulations. We propose an Importance Sampling (IS) simulation technique based on the well-known hazard rate twisting approach, that presents the advantage of being asymptotically optimal for any arbitrary RVs. The wide scope of applicability of the proposed method is mainly due to our particular way of selecting the twisting parameter. It is worth observing that this interesting feature is rarely satisfied by variance reduction algorithms whose performances were only proven under some restrictive assumptions. It comes along with a good efficiency, illustrated by some selected simulation results comparing the performance of our method with that of an algorithm based on a conditional MC technique.

  3. Discrete Event Simulation Model of the Polaris 2.1 Gamma Ray Imaging Radiation Detection Device

    Science.gov (United States)

    2016-06-01

    Approved for public release; distribution is unlimited. Master's thesis by Andres T., June 2016. The platform Simkit was utilized to create a discrete event simulation (DES) model of the Polaris 2.1 Gamma Ray Imaging Radiation Detection Device.

  4. A study for the sequence of events (SOE) system on the nuclear power plant

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Byung Chae; Jeon, Jong Sun; Lee, Sun Sung; Lee, Kyung Ho; Lee, Byung Ju; Sohn, Kwang Young [Korea Atomic Energy Research Institute, Taejon (Korea, Republic of)

    1996-06-01

    It is important to identify where and why an event or a trip occurs in a Nuclear Power Plant (NPP) and to provide a proper resolution for such situations. In order to analyze the root cause of trouble after events or trips occur, the Sequence of Events (SOE) system has been adopted in Korean NPPs to acquire sequential information on where and when an event or a trip takes place. The SOE system of the UCN 3 and 4 plants, which is included in the Plant Data Acquisition System (PDAS), shares the 3205 computer and system software with the PDAS. Sharing the computer hardware and software, however, requires a more complicated process for providing the event or trip signals due to the inherent characteristics of the shared system. Moreover, there is a high potential for collisions between the synchronization signals and the data transmitted to the Plant Computer System (PCS) when the synchronization signals are sent from the PCS to the three SOE processors. When such a collision happens the SOE system breaks down, and it is then impossible to analyze the sequence of events or trips. An independent SOE system composed of a single processor is proposed in this paper. To begin with, analyses of the hardware and software of the SOE and PDAS systems of UCN 3 and 4 were performed to identify the problems and possible resolutions. In order to test the new SOE system, a VMEbus, a VM30 CPU, a change-of-status I/O card and the OS-9 operating system were adopted, and the test system was verified through simulation: simulated event signals were given to the test system as inputs, and the outputs were monitored on a PC to verify whether the sequential event logging function works correctly. In conclusion, this report is expected to provide the technical background for the improvement and replacement of the NPP PDAS and SOE systems in the future. 18 tabs., 33 figs., 26 refs. (Author)

  5. A study for the sequence of events (SOE) system on the nuclear power plant

    International Nuclear Information System (INIS)

    Lee, Byung Chae; Jeon, Jong Sun; Lee, Sun Sung; Lee, Kyung Ho; Lee, Byung Ju; Sohn, Kwang Young

    1996-06-01

    It is important to identify where and why an event or a trip occurs in a Nuclear Power Plant (NPP) and to provide a proper resolution for such situations. In order to analyze the root cause of trouble after events or trips occur, the Sequence of Events (SOE) system has been adopted in Korean NPPs to acquire sequential information on where and when an event or a trip takes place. The SOE system of the UCN 3 and 4 plants, which is included in the Plant Data Acquisition System (PDAS), shares the 3205 computer and system software with the PDAS. Sharing the computer hardware and software, however, requires a more complicated process for providing the event or trip signals due to the inherent characteristics of the shared system. Moreover, there is a high potential for collisions between the synchronization signals and the data transmitted to the Plant Computer System (PCS) when the synchronization signals are sent from the PCS to the three SOE processors. When such a collision happens the SOE system breaks down, and it is then impossible to analyze the sequence of events or trips. An independent SOE system composed of a single processor is proposed in this paper. To begin with, analyses of the hardware and software of the SOE and PDAS systems of UCN 3 and 4 were performed to identify the problems and possible resolutions. In order to test the new SOE system, a VMEbus, a VM30 CPU, a change-of-status I/O card and the OS-9 operating system were adopted, and the test system was verified through simulation: simulated event signals were given to the test system as inputs, and the outputs were monitored on a PC to verify whether the sequential event logging function works correctly. In conclusion, this report is expected to provide the technical background for the improvement and replacement of the NPP PDAS and SOE systems in the future. 18 tabs., 33 figs., 26 refs. (Author)

  6. SPEEDES - A multiple-synchronization environment for parallel discrete-event simulation

    Science.gov (United States)

    Steinman, Jeff S.

    1992-01-01

    Synchronous Parallel Environment for Emulation and Discrete-Event Simulation (SPEEDES) is a unified parallel simulation environment. It supports multiple-synchronization protocols without requiring users to recompile their code. When a SPEEDES simulation runs on one node, all the extra parallel overhead is removed automatically at run time. When the same executable runs in parallel, the user preselects the synchronization algorithm from a list of options. SPEEDES currently runs on UNIX networks and on the California Institute of Technology/Jet Propulsion Laboratory Mark III Hypercube. SPEEDES also supports interactive simulations. Featured in the SPEEDES environment is a new parallel synchronization approach called Breathing Time Buckets. This algorithm uses some of the conservative techniques found in Time Bucket synchronization, along with the optimism that characterizes the Time Warp approach. A mathematical model derived from first principles predicts the performance of Breathing Time Buckets. Along with the Breathing Time Buckets algorithm, this paper discusses the rules for processing events in SPEEDES, describes the implementation of various other synchronization protocols supported by SPEEDES, describes some new ones for the future, discusses interactive simulations, and then gives some performance results.

  7. Discrete Event Supervisory Control Applied to Propulsion Systems

    Science.gov (United States)

    Litt, Jonathan S.; Shah, Neerav

    2005-01-01

    The theory of discrete event supervisory (DES) control was applied to the optimal control of a twin-engine aircraft propulsion system and demonstrated in a simulation. The supervisory control, which is implemented as a finite-state automaton, oversees the behavior of a system and manages it in such a way that it maximizes a performance criterion, similar to a traditional optimal control problem. DES controllers can be nested such that a high-level controller supervises multiple lower level controllers. This structure can be expanded to control huge, complex systems, providing optimal performance and increasing autonomy with each additional level. The DES control strategy for propulsion systems was validated using a distributed testbed consisting of multiple computers--each representing a module of the overall propulsion system--to simulate real-time hardware-in-the-loop testing. In the first experiment, DES control was applied to the operation of a nonlinear simulation of a turbofan engine (running in closed loop using its own feedback controller) to minimize engine structural damage caused by a combination of thermal and structural loads. This enables increased on-wing time for the engine through better management of the engine-component life usage. Thus, the engine-level DES acts as a life-extending controller through its interaction with and manipulation of the engine's operation.
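
    The supervisory pattern, a finite-state automaton that observes plant events and disables controllable ones that would violate a specification, can be sketched abstractly (the event names, the damage counter and its limit are hypothetical illustrations, not the testbed's interface):

```python
class Supervisor:
    """Tiny DES-style supervisor: tracks accumulated damage and disables
    the controllable 'high_thrust' event once a damage budget is spent,
    until a 'maintain' event resets the budget."""
    def __init__(self, damage_limit=3):
        self.damage = 0
        self.damage_limit = damage_limit

    def enabled(self, event):
        if event == "high_thrust":               # the only controllable event
            return self.damage < self.damage_limit
        return True                              # other events always pass

    def observe(self, event):
        if event == "high_thrust":
            self.damage += 1                     # thermal/structural bookkeeping
        elif event == "maintain":
            self.damage = 0

def run(requests, sup):
    """Execute only the requested events the supervisor enables."""
    executed = []
    for ev in requests:
        if sup.enabled(ev):
            sup.observe(ev)
            executed.append(ev)
    return executed

trace = run(["high_thrust"] * 5 + ["maintain", "high_thrust"], Supervisor())
# the 4th and 5th thrust requests are blocked until maintenance resets damage
```

    Nesting is direct: a higher-level automaton can treat each lower-level supervisor's decisions as its own plant events, which mirrors the layered structure the article describes.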

  8. Powering stochastic reliability models by discrete event simulation

    DEFF Research Database (Denmark)

    Kozine, Igor; Wang, Xiaoyun

    2012-01-01

    it difficult to find a solution to the problem. The power of modern computers and recent developments in discrete-event simulation (DES) software enable to diminish some of the drawbacks of stochastic models. In this paper we describe the insights we have gained based on using both Markov and DES models...
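
    A concrete instance of the trade-off is a single repairable component with exponential up- and down-times (the MTBF/MTTR figures below are illustrative assumptions): the Markov answer is the closed form MTBF / (MTBF + MTTR), and a short discrete-event run reproduces it while remaining trivial to extend with non-exponential distributions, where the Markov model no longer applies.

```python
import random

def availability(mtbf=100.0, mttr=5.0, horizon=1_000_000.0, seed=3):
    """Alternating-renewal model of one repairable component: exponential
    time-to-failure (mean mtbf) and time-to-repair (mean mttr), simulated
    as a sequence of failure and repair-completion events on a timeline."""
    rng = random.Random(seed)
    t, up_time, up = 0.0, 0.0, True
    while t < horizon:
        dwell = rng.expovariate(1.0 / (mtbf if up else mttr))
        dwell = min(dwell, horizon - t)   # truncate the final interval
        if up:
            up_time += dwell
        t += dwell
        up = not up                       # failure <-> repair completion
    return up_time / horizon

est = availability()  # should sit near the analytic 100 / 105 ≈ 0.952
```

    Swapping `expovariate` for, say, a Weibull repair time changes one line of the DES model but would invalidate the Markov closed form entirely.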

  9. Optimized Parallel Discrete Event Simulation (PDES) for High Performance Computing (HPC) Clusters

    National Research Council Canada - National Science Library

    Abu-Ghazaleh, Nael

    2005-01-01

    The aim of this project was to study the communication subsystem performance of state of the art optimistic simulator Synchronous Parallel Environment for Emulation and Discrete-Event Simulation (SPEEDES...

  10. Rare Event Simulation in Radiation Transport

    Science.gov (United States)

    Kollman, Craig

    This dissertation studies methods for estimating extremely small probabilities by Monte Carlo simulation. Problems in radiation transport typically involve estimating very rare events or the expected value of a random variable which is with overwhelming probability equal to zero. These problems often have high dimensional state spaces and irregular geometries so that analytic solutions are not possible. Monte Carlo simulation must be used to estimate the radiation dosage being transported to a particular location. If the area is well shielded the probability of any one particular particle getting through is very small. Because of the large number of particles involved, even a tiny fraction penetrating the shield may represent an unacceptable level of radiation. It therefore becomes critical to be able to accurately estimate this extremely small probability. Importance sampling is a well known technique for improving the efficiency of rare event calculations. Here, a new set of probabilities is used in the simulation runs. The results are multiplied by the likelihood ratio between the true and simulated probabilities so as to keep our estimator unbiased. The variance of the resulting estimator is very sensitive to which new set of transition probabilities are chosen. It is shown that a zero variance estimator does exist, but that its computation requires exact knowledge of the solution. A simple random walk with an associated killing model for the scatter of neutrons is introduced. Large deviation results for optimal importance sampling in random walks are extended to the case where killing is present. An adaptive "learning" algorithm for implementing importance sampling is given for more general Markov chain models of neutron scatter. For finite state spaces this algorithm is shown to give, with probability one, a sequence of estimates converging exponentially fast to the true solution. 
In the final chapter, an attempt to generalize this algorithm to a continuous
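
    The shielding setting can be caricatured as a gambler's-ruin random walk, which is enough to show why importance sampling matters (the step probabilities, boundaries and drift reversal below are illustrative assumptions, not the dissertation's model):

```python
import random

def penetration_prob(runs=5000, seed=11):
    """A particle starts 3 steps above an absorbing boundary and must
    climb 15 more to penetrate, moving up with true probability p = 0.4.
    The simulation instead uses upward probability q = 0.6 (reversed
    drift) and multiplies in the likelihood ratio along the path."""
    p, q = 0.4, 0.6
    rng = random.Random(seed)
    total = 0.0
    for _ in range(runs):
        pos, weight = 3, 1.0
        while 0 < pos < 18:
            if rng.random() < q:             # biased: drift toward penetration
                pos += 1
                weight *= p / q              # down-weight the favored move
            else:
                pos -= 1
                weight *= (1 - p) / (1 - q)  # up-weight the disfavored move
        if pos == 18:                        # penetrated the shield
            total += weight
    return total / runs
```

    Reversing the drift makes penetration common in the simulated world; the likelihood ratio restores unbiasedness, and here it is even constant, (p/q)^15, on every penetrating path, so the estimator's variance is tiny compared with crude MC.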

  11. Revised licensee event report system

    International Nuclear Information System (INIS)

    Mays, G.T.; Poore, W.P.

    1985-01-01

    Licensee Event Reports (LERs) provide the basis for evaluating and assessing operating experience information from nuclear power plants. The reporting requirements for submitting LERs to the Nuclear Regulatory Commission have been revised. Effective Jan. 1, 1984, all events were to be submitted in accordance with 10 CFR 50.73 of the Code of Federal Regulations. Report NUREG-1022, Licensee Event Report System-Description of System and Guidelines for Reporting, describes the guidelines on reportability of events. This article summarizes the reporting requirements as presented in NUREG-1022, highlights differences in data reported between the revised and previous LER systems, and presents results from a preliminary assessment of LERs submitted under the revised LER reporting system

  12. The Advanced Photon Source event system

    International Nuclear Information System (INIS)

    Lenkszus, F.R.; Laird, R.

    1995-01-01

    The Advanced Photon Source, like many other facilities, requires a means of transmitting timing information to distributed control system I/O controllers. The APS event system provides the means of distributing medium resolution/accuracy timing events throughout the facility. It consists of VME event generators and event receivers which are interconnected with 100 Mbit/sec fiber optic links at distances of up to 650 m in either a star or a daisy-chain configuration. The system's event throughput rate is 10 Mevents/sec with a peak-to-peak timing jitter down to 100 ns, depending on the source of the event. It is integrated into the EPICS-based APS control system through record and device support. Event generators broadcast timing events over fiber optic links to event receivers which are programmed to decode specific events. Event generators generate events in response to external inputs, from internal programmable event sequence RAMs, and from VME bus writes. The event receivers can be programmed to generate both pulse and set/reset level outputs to synchronize hardware, and to generate interrupts to initiate EPICS record processing. In addition, each event receiver contains a time stamp counter which is used to provide synchronized time stamps to EPICS records

  13. Modelling and real-time simulation of continuous-discrete systems in mechatronics

    Energy Technology Data Exchange (ETDEWEB)

    Lindow, H. [Rostocker, Magdeburg (Germany)

    1996-12-31

    This work presents a methodology for simulation and modelling of systems with continuous-discrete dynamics. It derives hybrid discrete event models from Lagrange's equations of motion. The method combines continuous mechanical, electrical and thermodynamical submodels on the one hand with discrete event models on the other hand into a hybrid discrete event model. This straightforward software development avoids numeric overhead.

  14. Event Registration System for INR Linac

    International Nuclear Information System (INIS)

    Grekhov, O.V.; Drugakov, A.N.; Kiselev, Yu.V.

    2006-01-01

    The software of the Event Registration System for linear accelerators is described. This system allows receiving information on changes in the operating modes of the accelerator and supervising hundreds of key parameters of its various systems. The Event Registration System consists of sources and listeners of events. The sources of events are subroutines built into the existing Linac ACS. The listeners of events are the Supervisor and Client ERS software, which are used to warn the operator about changes in the controlled parameters of the accelerator

  15. A conceptual modeling framework for discrete event simulation using hierarchical control structures

    Science.gov (United States)

    Furian, N.; O’Sullivan, M.; Walker, C.; Vössner, S.; Neubacher, D.

    2015-01-01

    Conceptual Modeling (CM) is a fundamental step in a simulation project. Nevertheless, it is only recently that structured approaches towards the definition and formulation of conceptual models have gained importance in the Discrete Event Simulation (DES) community. As a consequence, frameworks and guidelines for applying CM to DES have emerged and discussion of CM for DES is increasing. However, both the organization of model-components and the identification of behavior and system control from standard CM approaches have shortcomings that limit CM’s applicability to DES. Therefore, we discuss the different aspects of previous CM frameworks and identify their limitations. Further, we present the Hierarchical Control Conceptual Modeling framework that pays more attention to the identification of a model’s system behavior, control policies and dispatching routines and their structured representation within a conceptual model. The framework guides the user step-by-step through the modeling process and is illustrated by a worked example. PMID:26778940

  16. A conceptual modeling framework for discrete event simulation using hierarchical control structures.

    Science.gov (United States)

    Furian, N; O'Sullivan, M; Walker, C; Vössner, S; Neubacher, D

    2015-08-01

    Conceptual Modeling (CM) is a fundamental step in a simulation project. Nevertheless, it is only recently that structured approaches towards the definition and formulation of conceptual models have gained importance in the Discrete Event Simulation (DES) community. As a consequence, frameworks and guidelines for applying CM to DES have emerged and discussion of CM for DES is increasing. However, both the organization of model-components and the identification of behavior and system control from standard CM approaches have shortcomings that limit CM's applicability to DES. Therefore, we discuss the different aspects of previous CM frameworks and identify their limitations. Further, we present the Hierarchical Control Conceptual Modeling framework that pays more attention to the identification of a model's system behavior, control policies and dispatching routines and their structured representation within a conceptual model. The framework guides the user step-by-step through the modeling process and is illustrated by a worked example.

  17. Simulation and modeling of data acquisition systems for future high energy physics experiments

    International Nuclear Information System (INIS)

    Booth, A.; Black, D.; Walsh, D.; Bowden, M.; Barsotti, E.

    1991-01-01

    With the ever-increasing complexity of detectors and their associated data acquisition (DAQ) systems, it is important to bring together a set of tools to enable system designers, both hardware and software, to understand the behavioral aspects of the system as a whole, as well as the interaction between different functional units within the system. For complex systems, human intuition is inadequate since there are simply too many variables for system designers to begin to predict how varying any subset of them affects the total system. On the other hand, exact analysis, even to the extent of investing in disposable hardware prototypes, is much too time consuming and costly. Simulation bridges the gap between physical intuition and exact analysis by providing a learning vehicle in which the effects of varying many parameters can be analyzed and understood. Simulation techniques are being used in the development of the Scalable Parallel Open Architecture Data Acquisition System at Fermilab in which several sophisticated tools have been brought together to provide an integrated systems engineering environment specifically aimed at designing DAQ systems. Also presented are results of simulation experiments in which the effects of varying trigger rates, event sizes and event distribution over processors, are clearly seen in terms of throughput and buffer usage in an event-building switch.
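
    The kind of throughput/buffer-usage experiment described can be mimicked with a toy event-driven model of a single event-builder buffer; the Poisson arrivals, exponential build times, and the rates used below are illustrative stand-ins, not Fermilab's actual DAQ parameters:

```python
import heapq
import random

def peak_occupancy(trigger_rate, drain_rate, n_events, seed=7):
    """Toy single-buffer model of an event-building stage: Poisson event
    arrivals at `trigger_rate`, exponential (FIFO) build times at
    `drain_rate`. Returns the peak number of events resident in the
    buffer over the run."""
    rng = random.Random(seed)
    events, t = [], 0.0
    for _ in range(n_events):
        t += rng.expovariate(trigger_rate)
        heapq.heappush(events, (t, "arrive"))  # "arrive" sorts before "depart"
    occupancy = peak = 0
    busy_until = 0.0  # time the builder finishes its current backlog
    while events:
        t, kind = heapq.heappop(events)
        if kind == "arrive":
            occupancy += 1
            peak = max(peak, occupancy)
            busy_until = max(busy_until, t) + rng.expovariate(drain_rate)
            heapq.heappush(events, (busy_until, "depart"))
        else:
            occupancy -= 1
    return peak

# Raising the trigger rate toward the drain rate inflates the peak backlog:
print(peak_occupancy(5.0, 10.0, 2000), peak_occupancy(9.5, 10.0, 2000))
```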

  18. Rare event simulation in finite-infinite dimensional space

    International Nuclear Information System (INIS)

    Au, Siu-Kui; Patelli, Edoardo

    2016-01-01

    Modern engineering systems are becoming increasingly complex. Assessing their risk by simulation is intimately related to the efficient generation of rare failure events. Subset Simulation is an advanced Monte Carlo method for risk assessment and it has been applied in different disciplines. Pivotal to its success is the efficient generation of conditional failure samples, which is generally non-trivial. Conventionally an independent-component Markov Chain Monte Carlo (MCMC) algorithm is used, which is applicable to high dimensional problems (i.e., a large number of random variables) without suffering from the ‘curse of dimension’. Experience suggests that the algorithm may perform even better for high dimensional problems. Motivated by this, for any given problem we construct an equivalent problem where each random variable is represented by an arbitrary (hence possibly infinite) number of ‘hidden’ variables. We study analytically the limiting behavior of the algorithm as the number of hidden variables increases indefinitely. This leads to a new algorithm that is more generic and offers greater flexibility and control. It coincides with an algorithm recently suggested by independent researchers, where a joint Gaussian distribution is imposed between the current sample and the candidate. The present work provides theoretical reasoning and insights into the algorithm.
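
    The joint-Gaussian proposal mentioned at the end of the abstract has a simple component-wise form in standard-normal space. The sketch below shows only that proposal step (the Subset Simulation accept/reject against the failure domain is omitted), and the correlation parameter `rho` is a tuning choice, not a value from the paper:

```python
import math
import random

def candidate(x, rho, rng):
    """Propose a candidate jointly Gaussian with the current
    standard-normal sample x, component-wise, with correlation rho.
    With rho -> 1 the chain moves conservatively; rho -> 0 recovers
    independent resampling."""
    return [rho * xi + math.sqrt(1.0 - rho * rho) * rng.gauss(0.0, 1.0)
            for xi in x]

rng = random.Random(0)
x = [0.0] * 4
y = candidate(x, 0.8, rng)
print(len(y))  # 4
```

    In the full algorithm, the candidate replaces the current sample only if it still lies in the conditional failure domain; otherwise the chain repeats the current sample.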

  19. Monte Carlo generator ELRADGEN 2.0 for simulation of radiative events in elastic ep-scattering of polarized particles

    Science.gov (United States)

    Akushevich, I.; Filoti, O. F.; Ilyichev, A.; Shumeiko, N.

    2012-07-01

    The structure and algorithms of the Monte Carlo generator ELRADGEN 2.0 designed to simulate radiative events in polarized ep-scattering are presented. The full set of analytical expressions for the QED radiative corrections is presented and discussed in detail. Algorithmic improvements implemented to provide faster simulation of hard real photon events are described. Numerical tests show high quality of generation of photonic variables and radiatively corrected cross section. The comparison of the elastic radiative tail simulated within the kinematical conditions of the BLAST experiment at MIT BATES shows a good agreement with experimental data. Catalogue identifier: AELO_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AELO_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC license, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 1299 No. of bytes in distributed program, including test data, etc.: 11 348 Distribution format: tar.gz Programming language: FORTRAN 77 Computer: All Operating system: Any RAM: 1 MB Classification: 11.2, 11.4 Nature of problem: Simulation of radiative events in polarized ep-scattering. Solution method: Monte Carlo simulation according to the distributions of the real photon kinematic variables that are calculated by the covariant method of QED radiative correction estimation. The approach provides rather fast and accurate generation. Running time: The simulation of 10^8 radiative events for itest:=1 takes up to 52 seconds on a Pentium(R) Dual-Core 2.00 GHz processor.

  20. Discrete-Event Simulation in Chemical Engineering.

    Science.gov (United States)

    Schultheisz, Daniel; Sommerfeld, Jude T.

    1988-01-01

    Gives examples, descriptions, and uses for various types of simulation systems, including the Flowtran, Process, Aspen Plus, Design II, GPSS, Simula, and Simscript. Explains similarities in simulators, terminology, and a batch chemical process. Tables and diagrams are included. (RT)
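
    All the discrete-event simulators surveyed share the same core mechanism: a simulation clock plus a time-ordered future event list. A minimal sketch of that mechanism (illustrative, not the internals of GPSS or Simscript):

```python
import heapq

class Simulator:
    """Minimal discrete-event core: the clock jumps from one scheduled
    event to the next; each event's action may schedule further events."""
    def __init__(self):
        self.clock = 0.0
        self._events = []
        self._n = 0  # tie-breaker keeps same-time events in FIFO order

    def schedule(self, delay, action):
        heapq.heappush(self._events, (self.clock + delay, self._n, action))
        self._n += 1

    def run(self):
        while self._events:
            self.clock, _, action = heapq.heappop(self._events)
            action()

# Toy batch chemical process: a reactor charges, then finishes reacting.
sim, log = Simulator(), []
sim.schedule(0.0, lambda: (log.append("charge"),
                           sim.schedule(2.0, lambda: log.append("react done"))))
sim.run()
print(log, sim.clock)  # ['charge', 'react done'] 2.0
```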

  1. Behavior coordination of mobile robotics using supervisory control of fuzzy discrete event systems.

    Science.gov (United States)

    Jayasiri, Awantha; Mann, George K I; Gosine, Raymond G

    2011-10-01

    In order to incorporate the uncertainty and impreciseness present in real-world event-driven asynchronous systems, fuzzy discrete event systems (DESs) (FDESs) have been proposed as an extension to crisp DESs. In this paper, first, we propose an extension to the supervisory control theory of FDES by redefining fuzzy controllable and uncontrollable events. The proposed supervisor is capable of enabling feasible uncontrollable and controllable events with different possibilities. Then, the extended supervisory control framework of FDES is employed to model and control several navigational tasks of a mobile robot using the behavior-based approach. The robot has limited sensory capabilities, and the navigations have been performed in several unmodeled environments. The reactive and deliberative behaviors of the mobile robotic system are weighted through fuzzy uncontrollable and controllable events, respectively. By employing the proposed supervisory controller, a command-fusion-type behavior coordination is achieved. The observability of fuzzy events is incorporated to represent the sensory imprecision. As a systematic analysis of the system, a fuzzy-state-based controllability measure is introduced. The approach is implemented in both simulation and real time. A performance evaluation is performed to quantitatively estimate the validity of the proposed approach over its counterparts.
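
    The weighting of behaviors through fuzzy events can be illustrated with a minimal min-composition of possibility degrees, a standard fuzzy-DES operation; the event names and degrees below are illustrative, not the paper's actual model:

```python
def supervise(event_possibilities, control_policy):
    """Enable each fuzzy event with degree min(plant possibility,
    supervisor policy degree). Events absent from the policy are treated
    as fully enabled (degree 1.0), standing in for uncontrollable
    reactive behaviors."""
    return {event: min(p, control_policy.get(event, 1.0))
            for event, p in event_possibilities.items()}

# Reactive behavior (obstacle avoidance) passes through unrestricted;
# deliberative behavior (goal seeking) is weighted by the supervisor.
enabled = supervise({"avoid_obstacle": 0.9, "go_to_goal": 0.6},
                    {"go_to_goal": 0.8})
print(enabled)  # {'avoid_obstacle': 0.9, 'go_to_goal': 0.6}
```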

  2. Adaptive Event-Triggered Control Based on Heuristic Dynamic Programming for Nonlinear Discrete-Time Systems.

    Science.gov (United States)

    Dong, Lu; Zhong, Xiangnan; Sun, Changyin; He, Haibo

    2017-07-01

    This paper presents the design of a novel adaptive event-triggered control method based on the heuristic dynamic programming (HDP) technique for nonlinear discrete-time systems with unknown system dynamics. In the proposed method, the control law is only updated when the event-triggered condition is violated. Compared with the periodic updates in the traditional adaptive dynamic programming (ADP) control, the proposed method can reduce the computation and transmission cost. An actor-critic framework is used to learn the optimal event-triggered control law and the value function. Furthermore, a model network is designed to estimate the system state vector. The main contribution of this paper is to design a new trigger threshold for discrete-time systems. A detailed Lyapunov stability analysis shows that our proposed event-triggered controller can asymptotically stabilize the discrete-time systems. Finally, we test our method on two different discrete-time systems, and the simulation results are included.
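
    The trigger logic described, recomputing the control law only when the gap between the current state and the last sampled state exceeds a threshold, can be sketched as follows; the scalar linear plant, fixed gain, and fixed threshold are illustrative stand-ins for the paper's HDP-learned law and derived trigger threshold:

```python
def event_triggered_run(x0, gain, threshold, steps):
    """Run a toy closed loop in which the control is held between
    triggering events and updated only when the event-triggered
    condition is violated. Returns the final state and the number of
    control updates actually performed."""
    x = x_held = x0
    u = -gain * x_held
    updates = 0
    for _ in range(steps):
        if abs(x - x_held) > threshold:  # trigger condition violated
            x_held = x
            u = -gain * x_held           # recompute the control law
            updates += 1
        x = 0.9 * x + u                  # illustrative plant step
    return x, updates

x_final, updates = event_triggered_run(1.0, 0.5, 0.05, 50)
print(round(abs(x_final), 3), updates)
```

    The point of the scheme is visible in the counts: `updates` stays well below the 50 periodic updates a conventional controller would perform, while the state remains near the origin.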

  3. The devil is in the details: Comparisons of episodic simulations of positive and negative future events.

    Science.gov (United States)

    Puig, Vannia A; Szpunar, Karl K

    2017-08-01

    Over the past decade, psychologists have devoted considerable attention to episodic simulation-the ability to imagine specific hypothetical events. Perhaps one of the most consistent patterns of data to emerge from this literature is that positive simulations of the future are rated as more detailed than negative simulations of the future, a pattern of results that is commonly interpreted as evidence for a positivity bias in future thinking. In the present article, we demonstrate across two experiments that negative future events are consistently simulated in more detail than positive future events when frequency of prior thinking is taken into account as a possible confounding variable and when level of detail associated with simulated events is assessed using an objective scoring criterion. Our findings are interpreted in the context of the mobilization-minimization hypothesis of event cognition that suggests people are especially likely to devote cognitive resources to processing negative scenarios. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  4. A Framework for the Optimization of Discrete-Event Simulation Models

    Science.gov (United States)

    Joshi, B. D.; Unal, R.; White, N. H.; Morris, W. D.

    1996-01-01

    With the growing use of computer modeling and simulation, in all aspects of engineering, the scope of traditional optimization has to be extended to include simulation models. Some unique aspects have to be addressed while optimizing via stochastic simulation models. The optimization procedure has to explicitly account for the randomness inherent in the stochastic measures predicted by the model. This paper outlines a general purpose framework for optimization of terminating discrete-event simulation models. The methodology combines a chance constraint approach for problem formulation, together with standard statistical estimation and analysis techniques. The applicability of the optimization framework is illustrated by minimizing the operation and support resources of a launch vehicle, through a simulation model.
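
    A chance-constraint check of the kind described can be sketched with independent replications of a terminating simulation; the toy Gaussian "resource usage" model, threshold, and probability level below are illustrative, not the launch-vehicle model of the paper:

```python
import random

def meets_chance_constraint(sim, target, prob, n_reps, seed=3):
    """Run n_reps independent replications of a stochastic simulation
    `sim` and test whether the simulated resource requirement stays at
    or below `target` in at least a fraction `prob` of them."""
    rng = random.Random(seed)
    hits = sum(1 for _ in range(n_reps) if sim(rng) <= target)
    return hits / n_reps >= prob

# Toy stand-in for the simulation model: resource usage ~ N(10, 2).
toy_model = lambda rng: rng.gauss(10.0, 2.0)
print(meets_chance_constraint(toy_model, 14.0, 0.9, 1000))
```

    An optimizer would then search over decision variables (here frozen inside `toy_model`) for the cheapest configuration that still satisfies the constraint.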

  5. Application of Discrete Event Simulation in Mine Production Forecast

    African Journals Online (AJOL)

    Application of Discrete Event Simulation in Mine Production Forecast. Felix Adaania Kaba, Victor Amoako Temeng, Peter Arroja Eshun. Abstract. Mine production forecast is pertinent to mining as it serves production goals for a production period. Perseus Mining Ghana Limited (PMGL), Ayanfuri, deterministically forecasts ...

  6. How to apply the Score-Function method to standard discrete event simulation tools in order to optimise a set of system parameters simultaneously: A Job-Shop example will be discussed

    DEFF Research Database (Denmark)

    Nielsen, Erland Hejn

    2000-01-01

    During the last 1-2 decades, simulation optimisation of discrete event dynamic systems (DEDS) has made considerable theoretical progress with respect to computational efficiency. The score-function (SF) method and the infinitesimal perturbation analysis (IPA) are two candidates belonging to this ...

  7. The simulation library of the Belle II software system

    Science.gov (United States)

    Kim, D. Y.; Ritter, M.; Bilka, T.; Bobrov, A.; Casarosa, G.; Chilikin, K.; Ferber, T.; Godang, R.; Jaegle, I.; Kandra, J.; Kodys, P.; Kuhr, T.; Kvasnicka, P.; Nakayama, H.; Piilonen, L.; Pulvermacher, C.; Santelj, L.; Schwenker, B.; Sibidanov, A.; Soloviev, Y.; Starič, M.; Uglov, T.

    2017-10-01

    SuperKEKB, the next generation B factory, has been constructed in Japan as an upgrade of KEKB. This brand new e+ e- collider is expected to deliver a very large data set for the Belle II experiment, which will be 50 times larger than the previous Belle sample. Both the triggered physics event rate and the background event rate will increase by at least a factor of 10 compared with the previous ones, creating a challenging data-taking environment for the Belle II detector. The software system of the Belle II experiment is designed to execute this ambitious plan. A full detector simulation library, which is a part of the Belle II software system, is created based on Geant4 and has been tested thoroughly. Recently the library has been upgraded to Geant4 version 10.1. The library is behaving as expected and it is actively utilized in producing Monte Carlo data sets for various studies. In this paper, we will explain the structure of the simulation library and the various interfaces to other packages including geometry and beam background simulation.

  8. Uncertainties Related to Extreme Event Statistics of Sewer System Surcharge and Overflow

    DEFF Research Database (Denmark)

    Schaarup-Jensen, Kjeld; Johansen, C.; Thorndahl, Søren Liedtke

    2005-01-01

    Today it is common practice - in the major part of Europe - to base the design of sewer systems in urban areas on recommended minimum values of flooding frequencies related to either pipe top level, basement level in buildings or level of road surfaces. Thus storm water runoff in sewer systems is only proceeding in an acceptable manner if flooding of these levels has an average return period greater than a predefined value. This practice is also often used in functional analysis of existing sewer systems. Whether a sewer system can fulfil recommended flooding frequencies or not can only be verified by performing long term simulations - using a sewer flow simulation model - and drawing up extreme event statistics from the model simulations. In this context it is important to realize that uncertainties related to the input parameters of rainfall runoff models will give rise to uncertainties related...
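
    The extreme event statistic at the center of this practice is the average return period drawn from long-term simulation output; a minimal sketch with illustrative numbers:

```python
def return_period(exceedance_years, record_years):
    """Average return period T = (record length) / (number of
    exceedances), estimated from the surcharge/overflow events found in
    a long-term sewer flow simulation."""
    return record_years / len(exceedance_years)

# 4 simulated surcharges of a basement level in a 20-year rainfall series:
print(return_period([1983, 1987, 1994, 1999], 20))  # 5.0
```

    The design test is then simply whether this estimated return period exceeds the recommended minimum value for the level in question; input-parameter uncertainty propagates directly into the estimate.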

  9. Experimental verification of integrated pressure suppression systems in fusion reactors at in-vessel loss-of-coolant events

    International Nuclear Information System (INIS)

    Takase, K.; Akimoto, H.

    2001-01-01

    An integrated ICE (Ingress-of-Coolant Event) test facility was constructed to demonstrate that the ITER safety design approach and design parameters for the ICE events are adequate. Major objectives of the integrated ICE test facility are: to estimate the performance of an integrated pressure suppression system; to obtain the validation data for safety analysis codes; and to clarify the effects of two-phase pressure drop at a divertor and the direct-contact condensation in a suppression tank. The scaling factor between the test facility and ITER-FEAT is around 1/1600. The integrated ICE test facility simulates the ITER pressure suppression system and mainly consists of a plasma chamber, vacuum vessel, simulated divertor, relief pipe and suppression tank. From the experimental results it was found quantitatively that the ITER pressure suppression system is very effective in reducing the pressurization due to the ICE event. Furthermore, it was confirmed that the analytical results of the TRAC-PF1 code can reproduce the experimental results with high accuracy. (author)

  10. Connecting macroscopic observables and microscopic assembly events in amyloid formation using coarse grained simulations.

    Directory of Open Access Journals (Sweden)

    Noah S Bieler

    Full Text Available The pre-fibrillar stages of amyloid formation have been implicated in cellular toxicity, but have proved to be challenging to study directly in experiments and simulations. Rational strategies to suppress the formation of toxic amyloid oligomers require a better understanding of the mechanisms by which they are generated. We report Dynamical Monte Carlo simulations that allow us to study the early stages of amyloid formation. We use a generic, coarse-grained model of an amyloidogenic peptide that has two internal states: the first one representing the soluble random coil structure and the second one the β-sheet conformation. We find that this system exhibits a propensity towards fibrillar self-assembly following the formation of a critical nucleus. Our calculations establish connections between the early nucleation events and the kinetic information available in the later stages of the aggregation process that are commonly probed in experiments. We analyze the kinetic behaviour in our simulations within the framework of the theory of classical nucleated polymerisation, and are able to connect the structural events at the early stages in amyloid growth with the resulting macroscopic observables such as the effective nucleus size. Furthermore, the free-energy landscapes that emerge from these simulations allow us to identify pertinent properties of the monomeric state that could be targeted to suppress oligomer formation.

  11. ReDecay, a method to re-use the underlying events to speed up the simulation in LHCb

    CERN Multimedia

    Muller, Dominik

    2017-01-01

    With the steady increase in the precision of flavour physics measurements collected during LHC Run 2, the LHCb experiment requires simulated data samples of ever increasing magnitude to study the detector response in detail. However, relying on an increase of available computing power for the production of simulated events will not suffice to achieve this goal. The simulation of the detector response is the main contribution to the time needed to generate a sample, which scales linearly with the particle multiplicity of the event. Of the dozens of particles present in the simulation only a few, namely those participating in the studied signal decay, are of particular interest, while all remaining ones, the so-called underlying event, mainly affect the resolution and efficiencies of the detector. This talk presents a novel development for the LHCb simulation software which re-uses the underlying event from previously simulated events. This approach achieves an order of magnitude increase in speed and the same ...

  12. Comparative Study of Aircraft Boarding Strategies Using Cellular Discrete Event Simulation

    Directory of Open Access Journals (Sweden)

    Shafagh Jafer

    2017-11-01

    Full Text Available Time is crucial in the airline industry. Among all factors contributing to an aircraft's turnaround time, passenger boarding delay is the most challenging one. Airlines do not have control over the behavior of passengers; thus, they focus their efforts on reducing passenger boarding time through implementing efficient boarding strategies. In this work, we attempt to use cellular Discrete-Event System Specification (Cell-DEVS) modeling and simulation to provide a comprehensive evaluation of aircraft boarding strategies. We have developed a simulation benchmark consisting of eight boarding strategies: Back-to-Front, Window Middle Aisle, Random, Zone Rotate, Reverse Pyramid, Optimal, Optimal Practical, and Efficient. Our simulation models are scalable and adaptive, providing a powerful analysis apparatus for investigating any existing or yet to be discovered boarding strategy. We explain the details of our models and present the results both visually and numerically to evaluate the eight implemented boarding strategies. We also compare our results with other studies that have used different modeling techniques, reporting nearly identical performance results. The simulations revealed that Window Middle Aisle provides the least boarding delay, with a small fraction of time difference compared to the optimal strategy. The results of this work could greatly benefit the commercial airline industry by optimizing and reducing passenger boarding delays.

  13. Simulation and modeling of data acquisition systems for future high energy physics experiments

    International Nuclear Information System (INIS)

    Booth, A.; Black, D.; Walsh, D.; Bowden, M.; Barsotti, E.

    1990-01-01

    With the ever-increasing complexity of detectors and their associated data acquisition (DAQ) systems, it is important to bring together a set of tools to enable system designers, both hardware and software, to understand the behavioral aspects of the system as a whole, as well as the interaction between different functional units within the system. For complex systems, human intuition is inadequate since there are simply too many variables for system designers to begin to predict how varying any subset of them affects the total system. On the other hand, exact analysis, even to the extent of investing in disposable hardware prototypes, is much too time consuming and costly. Simulation bridges the gap between physical intuition and exact analysis by providing a learning vehicle in which the effects of varying many parameters can be analyzed and understood. Simulation techniques are being used in the development of the Scalable Parallel Open Architecture Data Acquisition System at Fermilab. This paper describes the work undertaken at Fermilab in which several sophisticated tools have been brought together to provide an integrated systems engineering environment specifically aimed at designing DAQ systems. Also presented are results of simulation experiments in which the effects of varying trigger rates, event sizes and event distribution over processors, are clearly seen in terms of throughput and buffer usage in an event-building switch.

  14. Advances in Discrete-Event Simulation for MSL Command Validation

    Science.gov (United States)

    Patrikalakis, Alexander; O'Reilly, Taifun

    2013-01-01

    In the last five years, the discrete event simulator, SEQuence GENerator (SEQGEN), developed at the Jet Propulsion Laboratory to plan deep-space missions, has greatly increased uplink operations capacity to deal with increasingly complicated missions. In this paper, we describe how the Mars Science Laboratory (MSL) project makes full use of an interpreted environment to simulate change in more than fifty thousand flight software parameters and conditional command sequences to predict the result of executing a conditional branch in a command sequence, and enable the ability to warn users whenever one or more simulated spacecraft states change in an unexpected manner. Using these new SEQGEN features, operators plan more activities in one sol than ever before.

  15. DESIGNING AN EVENT EXTRACTION SYSTEM

    Directory of Open Access Journals (Sweden)

    Botond BENEDEK

    2017-06-01

    Full Text Available In the Internet world, the amount of available information is enormous. To find specific information, tools were created that automatically crawl the existing web pages and update their databases with the latest information on the Internet. In order to systematize the search and obtain results in a concrete form, a further step is needed to process the information returned by the search engine and generate the response in a more organized form. Centralizing events of a certain type is useful first of all for creating a news service. With this system we pursue the extraction of knowledge - events - from Internet documents. The system will recognize events of a certain type (weather, sports, politics, text data mining, etc.) depending on how it is trained (the concepts it has in its dictionary). These events can be provided to the user, and the system can also extract the context in which the event occurred, to indicate the original form in which the event was embedded.
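
    The dictionary-driven recognition step described above can be sketched as follows; the concept dictionary and sentences are illustrative, not the system's trained vocabulary:

```python
def extract_events(text, concept_dict):
    """Toy dictionary-driven event recognition: sentences containing a
    trained concept term are returned as (concept, context) pairs, the
    context being the sentence in which the event was embedded."""
    events = []
    for sentence in text.split("."):
        for concept, terms in concept_dict.items():
            if any(term in sentence.lower() for term in terms):
                events.append((concept, sentence.strip()))
    return events

doc = "Heavy rain hit the coast. The match ended 2-1."
print(extract_events(doc, {"weather": ["rain", "storm"],
                           "sports": ["match", "score"]}))
# [('weather', 'Heavy rain hit the coast'), ('sports', 'The match ended 2-1')]
```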

  16. High-speed extended-term time-domain simulation for online cascading analysis of power system

    Science.gov (United States)

    Fu, Chuan

    A high-speed extended-term (HSET) time domain simulator (TDS), intended to become a part of an energy management system (EMS), has been newly developed for use in online extended-term dynamic cascading analysis of power systems. HSET-TDS includes the following attributes for providing situational awareness of high-consequence events: (i) online analysis, including n-1 and n-k events, (ii) ability to simulate both fast and slow dynamics for 1-3 hours in advance, (iii) inclusion of rigorous protection-system modeling, (iv) intelligence for corrective action ID, storage, and fast retrieval, and (v) high-speed execution. Very fast on-line computational capability is the most desired attribute of this simulator. Based on the process of solving the algebraic differential equations describing the dynamics of power systems, HSET-TDS seeks to develop computational efficiency at each of the following hierarchical levels: (i) hardware, (ii) strategies, (iii) integration methods, (iv) nonlinear solvers, and (v) linear solver libraries. This thesis first describes the Hammer-Hollingsworth 4 (HH4) implicit integration method. Like the trapezoidal rule, HH4 is symmetrically A-stable but it possesses greater high-order precision (h^4) than the trapezoidal rule. Such precision enables larger integration steps and therefore improves simulation efficiency for variable step size implementations. This thesis provides the underlying theory on which we advocate use of HH4 over other numerical integration methods for power system time-domain simulation. Second, motivated by the need to perform high speed extended-term time domain simulation (HSET-TDS) for on-line purposes, this thesis presents principles for designing numerical solvers of differential algebraic systems associated with power system time-domain simulation, including DAE construction strategies (Direct Solution Method), integration methods (HH4), nonlinear solvers (Very Dishonest Newton), and linear solvers (SuperLU). We have
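
    The implicit, symmetrically A-stable integration pattern the thesis builds on can be illustrated with the trapezoidal rule, used here as a stand-in since the HH4 stage equations are not given in the abstract; one step solved by Newton iteration on a scalar stiff test problem:

```python
import math

def trapezoidal_step(f, dfdx, x, h, tol=1e-12, max_iter=50):
    """One step of the implicit trapezoidal rule
    x_{n+1} = x_n + (h/2) * (f(x_n) + f(x_{n+1})),
    solved for x_{n+1} by Newton iteration from an explicit-Euler
    predictor. HH4 follows the same implicit pattern with
    higher-order (h^4) stages."""
    x_new = x + h * f(x)  # explicit-Euler predictor
    for _ in range(max_iter):
        g = x_new - x - 0.5 * h * (f(x) + f(x_new))  # residual
        if abs(g) < tol:
            break
        x_new -= g / (1.0 - 0.5 * h * dfdx(x_new))   # Newton update
    return x_new

# Stiff test problem x' = -50 x, exact solution exp(-50 t):
x, h = 1.0, 0.01
for _ in range(100):  # integrate to t = 1 with a step well above the
    x = trapezoidal_step(lambda v: -50.0 * v, lambda v: -50.0, x, h)
print(abs(x - math.exp(-50.0)) < 1e-3)  # True: A-stable, no blow-up
```

    An explicit method at this step size would be unstable (|1 - 50h| > 1 requires h < 0.04 merely for stability); the A-stable implicit step remains accurate, which is the property that enables large steps in extended-term simulation.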

  17. A View on Future Building System Modeling and Simulation

    Energy Technology Data Exchange (ETDEWEB)

    Wetter, Michael

    2011-04-01

    This chapter presents what a future environment for building system modeling and simulation may look like. As buildings continue to require increased performance and better comfort, their energy and control systems are becoming more integrated and complex. We therefore focus in this chapter on the modeling, simulation and analysis of building energy and control systems. Such systems can be classified as heterogeneous systems because they involve multiple domains, such as thermodynamics, fluid dynamics, heat and mass transfer, electrical systems, control systems and communication systems. Also, they typically involve multiple temporal and spatial scales, and their evolution can be described by coupled differential equations, discrete equations and events. Modeling and simulating such systems requires a higher level of abstraction and modularisation to manage the increased complexity compared to what is used in today's building simulation programs. Therefore, the trend towards more integrated building systems is likely to be a driving force for changing the status quo of today's building simulation programs. This chapter discusses evolving modeling requirements and outlines a path toward a future environment for modeling and simulation of heterogeneous building systems. A range of topics that would require many additional pages of discussion has been omitted. Examples include computational fluid dynamics for air and particle flow in and around buildings, people movement, daylight simulation, uncertainty propagation and optimisation methods for building design and controls. For different discussions and perspectives on the future of building modeling and simulation, we refer to Sahlin (2000), Augenbroe (2001) and Malkawi and Augenbroe (2004).

  18. Event-by-event simulation of quantum phenomena

    NARCIS (Netherlands)

    Raedt, H. De; Raedt, K. De; Michielsen, K.; Landau, DP; Lewis, SP; Schuttler, HB

    2006-01-01

    In various basic experiments in quantum physics, observations are recorded event-by-event. The final outcome of such experiments can be computed according to the rules of quantum theory but quantum theory does not describe single events. In this paper, we describe a simulation approach that does

  19. Numerical simulation of internal reconnection event in spherical tokamak

    International Nuclear Information System (INIS)

    Hayashi, Takaya; Mizuguchi, Naoki; Sato, Tetsuya

    1999-07-01

    Three-dimensional magnetohydrodynamic simulations are executed in a full toroidal geometry to clarify the physical mechanisms of the Internal Reconnection Event (IRE), which is observed in the spherical tokamak experiments. The simulation results reproduce several main properties of IRE. Comparison between the numerical results and experimental observation indicates fairly good agreements regarding nonlinear behavior, such as appearance of localized helical distortion, appearance of characteristic conical shape in the pressure profile during thermal quench, and subsequent appearance of the m=2/n=1 type helical distortion of the torus. (author)

  20. Design and simulation of a totally digital image system for medical image applications

    International Nuclear Information System (INIS)

    Archwamety, C.

    1987-01-01

    The Totally Digital Imaging System (TDIS) is based on system requirements information from the Radiology Department, University of Arizona Health Science Center. This dissertation presents the design of this complex system, the TDIS specification, the system performance requirements, and the evaluation of the system using computer-simulation programs. Discrete-event simulation models were developed for the TDIS subsystems, including an image network, imaging equipment, storage migration algorithm, data base archive system, and a control and management network. The simulation system uses empirical data generation and retrieval rates measured at the University Medical Center hospital. The entire TDIS system was simulated in Simscript II.5 using a VAX 8600 computer system. Simulation results show the fiber-optical-image network to be suitable; however, the optical-disk-storage system represents a performance bottleneck.

  1. A hadron-nucleus collision event generator for simulations at intermediate energies

    CERN Document Server

    Ackerstaff, K; Bollmann, R

    2002-01-01

    Several available codes for hadronic event generation and shower simulation are discussed and their predictions are compared to experimental data in order to obtain a satisfactory description of hadronic processes in Monte Carlo studies of detector systems for medium energy experiments. The most reasonable description is found for the intra-nuclear-cascade (INC) model of Bertini, which employs a microscopic description of the INC, taking into account elastic and inelastic pion-nucleon and nucleon-nucleon scattering. The isobar model of Sternheimer and Lindenbaum is used to simulate the inelastic elementary collisions inside the nucleus via formation and decay of the Δ33 resonance, which, however, limits the model at higher energies. To overcome this limitation, the INC model has been extended by using the resonance model of the HADRIN code, considering all resonances in elementary collisions contributing more than 2% to the total cross-section up to kinetic energies of 5 GeV. In addition, angular d...

  2. Markov modeling and discrete event simulation in health care: a systematic comparison.

    Science.gov (United States)

    Standfield, Lachlan; Comans, Tracy; Scuffham, Paul

    2014-04-01

    The aim of this study was to assess if the use of Markov modeling (MM) or discrete event simulation (DES) for cost-effectiveness analysis (CEA) may alter healthcare resource allocation decisions. A systematic literature search and review of empirical and non-empirical studies comparing MM and DES techniques used in the CEA of healthcare technologies was conducted. Twenty-two pertinent publications were identified. Two publications compared MM and DES models empirically, one presented a conceptual DES and MM, two described a DES consensus guideline, and seventeen drew comparisons between MM and DES through the authors' experience. The primary advantages described for DES over MM were the ability to model queuing for limited resources, capture individual patient histories, accommodate complexity and uncertainty, represent time flexibly, model competing risks, and accommodate multiple events simultaneously. The disadvantages of DES over MM were the potential for model overspecification, increased data requirements, specialized expensive software, and increased model development, validation, and computational time. Where individual patient history is an important driver of future events, an individual patient simulation technique like DES may be preferred over MM. Where supply shortages, subsequent queuing, and diversion of patients through other pathways in the healthcare system are likely to be drivers of cost-effectiveness, DES modeling methods may provide decision makers with more accurate information on which to base resource allocation decisions. Where these are not major features of the cost-effectiveness question, MM remains an efficient, easily validated, parsimonious, and accurate method of determining the cost-effectiveness of new healthcare interventions.
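    The queuing behaviour that the review credits to DES over MM can be sketched in a few lines of event-scheduling code. The model below, with hypothetical arrival and length-of-stay distributions (not taken from the study), simulates individual patients competing for a limited number of beds and returns their mean wait:

```python
import heapq
import random

def des_queue(n_patients=1000, n_beds=5, mean_arrival=1.0, mean_stay=4.0, seed=1):
    """Mean wait of patients competing for beds (hypothetical rates)."""
    rng = random.Random(seed)
    events = []                      # (time, kind, patient_id) min-heap
    t = 0.0
    for pid in range(n_patients):
        t += rng.expovariate(1.0 / mean_arrival)
        heapq.heappush(events, (t, "arrive", pid))
    free, queue, arrival_time, waits = n_beds, [], {}, []
    while events:
        now, kind, pid = heapq.heappop(events)
        if kind == "arrive":
            arrival_time[pid] = now
            queue.append(pid)
        else:                        # a discharge frees a bed
            free += 1
        while free > 0 and queue:    # admit waiting patients, oldest first
            nxt = queue.pop(0)
            waits.append(now - arrival_time[nxt])
            free -= 1
            heapq.heappush(events, (now + rng.expovariate(1.0 / mean_stay), "leave", nxt))
    return sum(waits) / len(waits)
```

    Because each patient is tracked individually, per-patient waits and histories fall out naturally, which is exactly what a cohort-level Markov model cannot represent without a combinatorial explosion of health states.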

  3. Event simulation for the WA80 experiment

    International Nuclear Information System (INIS)

    Sorensen, S.P.

    1986-01-01

    The HIJET and LUND event generators are compared. It is concluded that for detector construction and design of experimental setups, the differences between the two models are marginal. The coverage of the WA80 setup in pseudorapidity and energy is demonstrated. The performance of some of the WA80 detectors (zero-degree calorimeter, wall calorimeter, multiplicity array, and SAPHIR lead-glass detector) is evaluated based on calculations with the LUND or the HIJET codes combined with codes simulating the detector responses. 9 refs., 3 figs

  4. Inter-Event Time Definition Setting Procedure for Urban Drainage Systems

    Directory of Open Access Journals (Sweden)

    Jingul Joo

    2013-12-01

    Full Text Available Traditional inter-event time definition (IETD) estimation methodologies generally take into account only rainfall characteristics and not drainage basin characteristics. Therefore, they may not succeed in providing an appropriate value of IETD for any sort of application to the design of urban drainage system devices. To overcome this limitation, this study presents a method of IETD determination that considers basin characteristics. The suggested definition of IETD is the time period from the end of a rainfall event to the end of the direct runoff. The suggested method can identify the independent events that are suitable for the statistical analysis of the recorded rainfall. Using the suggested IETD, the IETD of the Joong-Rang drainage system was determined and the area-IETD relation curve was drawn. The resulting regression curve can be used to determine the IETD of ungauged urban drainage systems with areas ranging between 40 and 4400 ha. Using the regression curve, the IETDs and time distribution of the design rainfall for four drainage systems in Korea were determined and rainfall-runoff simulations were performed with the Storm Water Management Model (SWMM). The results were compared with those from Huff's method, which assumes a six-hour IETD. The peak flow rates obtained by the suggested method were 11%~15% greater than those obtained by Huff's method. The suggested IETD determination method can identify independent events that are suitable for the statistical analysis of recorded rainfall aimed at the design of urban drainage system devices.
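    The event-identification step described above can be sketched as follows. `independent_events` is a hypothetical helper, not the authors' code, that splits a rainfall record into independent events whenever the dry gap between wet periods reaches the chosen IETD:

```python
def independent_events(rain, ietd):
    """Group a rainfall record (depth per time step) into independent
    events: wet periods separated by a dry gap >= ietd time steps.
    Returns (start_index, last_wet_index, total_depth) per event.
    Hypothetical helper sketching the method described above."""
    events, start, last_wet, total = [], None, None, 0
    for i, depth in enumerate(rain):
        if depth <= 0:
            continue
        if start is None:                    # first wet step of the record
            start, last_wet, total = i, i, depth
        elif i - last_wet - 1 >= ietd:       # dry gap reached the IETD
            events.append((start, last_wet, total))
            start, last_wet, total = i, i, depth
        else:                                # same event continues
            last_wet, total = i, total + depth
    if start is not None:
        events.append((start, last_wet, total))
    return events
```

    With a larger IETD, nearby showers merge into a single event, which is why the choice of IETD directly changes the event statistics used for design rainfall.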

  5. Mixed-realism simulation of adverse event disclosure: an educational methodology and assessment instrument.

    Science.gov (United States)

    Matos, Francisco M; Raemer, Daniel B

    2013-04-01

    Physicians have an ethical duty to disclose adverse events to patients or families. Various strategies have been reported for teaching disclosure, but no instruments have been shown to be reliable for assessing them. The aims of this study were to report a structured method for teaching adverse event disclosure using mixed-realism simulation, develop and begin to validate an instrument for assessing performance, and describe the disclosure practice of anesthesiology trainees. Forty-two anesthesiology trainees participated in a 2-part exercise with mixed-realism simulation. The first part took place using a mannequin patient in a simulated operating room where trainees became enmeshed in a clinical episode that led to an adverse event, and the second part in a simulated postoperative care unit where the learner is asked to disclose to a standardized patient who systematically moves through epochs of grief response. Two raters scored subjects using an assessment instrument we developed that combines a 4-element behaviorally anchored rating scale (BARS) and a 5-stage objective rating scale. The performance scores for elements within the BARS and the 5-stage instrument showed excellent interrater reliability (Cohen's κ = 0.7), appropriate range (mean range for BARS, 4.20-4.47; mean range for 5-stage instrument, 3.73-4.46), and high internal consistency (P realism simulation that engages learners in an adverse event and allows them to practice disclosure to a structured range of patient responses. We have developed a reliable 2-part instrument with strong psychometric properties for assessing disclosure performance.

  6. Workshop on data acquisition and trigger system simulations for high energy physics

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1992-12-31

    This report discusses the following topics: DAQSIM: A data acquisition system simulation tool; Front end and DCC Simulations for the SDC Straw Tube System; Simulation of Non-Blocking Data Acquisition Architectures; Simulation Studies of the SDC Data Collection Chip; Correlation Studies of the Data Collection Circuit & The Design of a Queue for this Circuit; Fast Data Compression & Transmission from a Silicon Strip Wafer; Simulation of SCI Protocols in Modsim; Visual Design with vVHDL; Stochastic Simulation of Asynchronous Buffers; SDC Trigger Simulations; Trigger Rates, DAQ & Online Processing at the SSC; Planned Enhancements to MODSIM II & SIMOBJECT -- an Overview; DAGAR -- A synthesis system; Proposed Silicon Compiler for Physics Applications; Timed -- LOTOS in a PROLOG Environment: an Algebraic language for Simulation; Modeling and Simulation of an Event Builder for High Energy Physics Data Acquisition Systems; A Verilog Simulation for the CDF DAQ; Simulation to Design with Verilog; The DZero Data Acquisition System: Model and Measurements; DZero Trigger Level 1.5 Modeling; Strategies Optimizing Data Load in the DZero Triggers; Simulation of the DZero Level 2 Data Acquisition System; A Fast Method for Calculating DZero Level 1 Jet Trigger Properties and Physics Input to DAQ Studies.

  7. Workshop on data acquisition and trigger system simulations for high energy physics

    International Nuclear Information System (INIS)

    1992-01-01

    This report discusses the following topics: DAQSIM: A data acquisition system simulation tool; Front end and DCC Simulations for the SDC Straw Tube System; Simulation of Non-Blocking Data Acquisition Architectures; Simulation Studies of the SDC Data Collection Chip; Correlation Studies of the Data Collection Circuit & The Design of a Queue for this Circuit; Fast Data Compression & Transmission from a Silicon Strip Wafer; Simulation of SCI Protocols in Modsim; Visual Design with vVHDL; Stochastic Simulation of Asynchronous Buffers; SDC Trigger Simulations; Trigger Rates, DAQ & Online Processing at the SSC; Planned Enhancements to MODSIM II & SIMOBJECT -- an Overview; DAGAR -- A synthesis system; Proposed Silicon Compiler for Physics Applications; Timed -- LOTOS in a PROLOG Environment: an Algebraic language for Simulation; Modeling and Simulation of an Event Builder for High Energy Physics Data Acquisition Systems; A Verilog Simulation for the CDF DAQ; Simulation to Design with Verilog; The DZero Data Acquisition System: Model and Measurements; DZero Trigger Level 1.5 Modeling; Strategies Optimizing Data Load in the DZero Triggers; Simulation of the DZero Level 2 Data Acquisition System; A Fast Method for Calculating DZero Level 1 Jet Trigger Properties and Physics Input to DAQ Studies.

  8. ESSE: Engineering Super Simulation Emulation for Virtual Reality Systems Environment

    International Nuclear Information System (INIS)

    Suh, Kune Y.; Yeon, Choul W.

    2008-01-01

    The trademark 4 + D Technology TM based Engineering Super Simulation Emulation (ESSE) is introduced. ESSE, resorting to three-dimensional (3D) Virtual Reality (VR) technology, pledges to provide interactive real-time motion, sound, tactile and other forms of feedback in the man machine systems environment. In particular, the 3D Virtual Engineering Neo cybernetic Unit Soft Power (VENUS) adds a physics engine to the VR platform so as to materialize a physical atmosphere. A close cooperation system and prompt information share are crucial, thereby increasing the necessity of a centralized information system and electronic cooperation system. VENUS is further deemed to contribute towards public acceptance of nuclear power in general, and safety in particular. For instance, visualization of nuclear systems can familiarize the public, answering their questions and alleviating misunderstandings on nuclear power plants (NPPs) in general, and performance, security and safety in particular. An in-house flagship project, Systemic Three-dimensional Engine Platform Prototype Engineering (STEPPE), endeavors to develop the Systemic Three-dimensional Engine Platform (STEP) for a variety of VR applications. STEP is home to a level system providing the whole visible scene of virtual engineering of the man machine system environment. The system is linked with video monitoring that provides a 3D Computer Graphics (CG) visualization of major events. The database-linked system provides easy access to relevant blueprints. The character system enables the operators to access the virtual systems by using their virtual characters. Virtually Engineered NPP Informative

  9. Dermatopathology effects of simulated solar particle event radiation exposure in the porcine model.

    Science.gov (United States)

    Sanzari, Jenine K; Diffenderfer, Eric S; Hagan, Sarah; Billings, Paul C; Gridley, Daila S; Seykora, John T; Kennedy, Ann R; Cengel, Keith A

    2015-07-01

    The space environment exposes astronauts to risks of acute and chronic exposure to ionizing radiation. Of particular concern is possible exposure to ionizing radiation from a solar particle event (SPE). During an SPE, magnetic disturbances in specific regions of the Sun result in the release of intense bursts of ionizing radiation, primarily consisting of protons that have a highly variable energy spectrum. Thus, SPEs can lead to significant total body radiation exposures to astronauts in space vehicles and especially while performing extravehicular activities. Simulated energy profiles suggest that SPE radiation exposures are likely to be highest in the skin. In the current report, we have used our established miniature pig model system to evaluate the skin toxicity of simulated SPE radiation exposures that closely resemble the energy and fluence profile of the September 1989 SPE, using either conventional radiation (electrons) or proton simulated SPE radiation. Exposure of animals to electron or proton radiation led to dose-dependent increases in epidermal pigmentation, the presence of necrotic keratinocytes at the dermal-epidermal boundary and pigment incontinence, manifested by the presence of melanophages in the dermis upon histological examination. We also observed epidermal hyperplasia and a reduction in vascular density at 30 days following exposure to electron or proton simulated SPE radiation. These results suggest that the doses of electron or proton simulated SPE radiation result in significant skin toxicity that is quantitatively and qualitatively similar. Radiation-induced skin damage is often one of the first clinical signs of both acute and non-acute radiation injury, where infection may occur if not treated. In this report, histopathology analyses of acute radiation-induced skin injury are discussed. Copyright © 2015 The Committee on Space Research (COSPAR). Published by Elsevier Ltd. All rights reserved.

  10. A code for simulation of human failure events in nuclear power plants: SIMPROC

    International Nuclear Information System (INIS)

    Gil, Jesus; Fernandez, Ivan; Murcia, Santiago; Gomez, Javier; Marrao, Hugo; Queral, Cesar; Exposito, Antonio; Rodriguez, Gabriel; Ibanez, Luisa; Hortal, Javier; Izquierdo, Jose M.; Sanchez, Miguel; Melendez, Enrique

    2011-01-01

    Over the past years, many Nuclear Power Plant organizations have performed Probabilistic Safety Assessments to identify and understand key plant vulnerabilities. As part of enhancing PSA quality, Human Reliability Analysis is essential to make a realistic evaluation of safety and of the facility's potential weaknesses. Moreover, it has to be noted that HRA continues to be a large source of uncertainty in PSAs. Within their current joint collaborative activities, Indizen, Universidad Politecnica de Madrid and Consejo de Seguridad Nuclear have developed the so-called SIMulator of PROCedures (SIMPROC), a tool that aims to simulate events related to human actions and is able to interact with a plant simulation model. The tool helps the analyst to quantify the importance of human actions in the final plant state. Among others, the main goal of SIMPROC is to check the Emergency Operating Procedures being used by the operating crew in order to lead the plant to a safe shutdown state. Currently SIMPROC is coupled with the SCAIS software package, but the tool is flexible enough to be linked to other plant simulation codes. SIMPROC-SCAIS applications are shown in the present article to illustrate the tool's performance. The applications were developed in the framework of the Nuclear Energy Agency project on Safety Margin Assessment and Applications (SM2A). First, an introductory example was performed to obtain the damage domain boundary of a selected sequence from a SBLOCA. Secondly, the damage domain area of a selected sequence from a loss of Component Cooling Water with a subsequent seal LOCA was calculated. SIMPROC simulates the corresponding human actions in both cases. The results achieved show how the system can be adapted to a wide range of purposes such as Dynamic Event Tree delineation, Emergency Operating Procedures and damage domain search.

  11. Inter-Enterprise Planning of Manufacturing Systems Applying Simulation with IPR Protection

    Science.gov (United States)

    Mertins, Kai; Rabe, Markus

    Discrete Event Simulation is a well-proved method to analyse the dynamic behaviour of manufacturing systems. However, simulation application is still poor for external supply chains or virtual enterprises, encompassing several legal entities. Most conventional simulation systems provide no means to protect intellectual property rights (IPR), nor methods to support cross-enterprise teamwork. This paper describes a solution to keep enterprise models private, but still provide their functionality for cross-enterprise evaluation purposes. Applying the new modelling system, the inter-enterprise business process is specified by the user, including a specification of the objects exchanged between the local models. The required environment for a distributed simulation is generated automatically. The mechanisms have been tested with a large supply chain model.

  12. Device simulation of charge collection and single-event upset

    International Nuclear Information System (INIS)

    Dodd, P.E.

    1996-01-01

    In this paper the author reviews the current status of device simulation of ionizing-radiation-induced charge collection and single-event upset (SEU), with an emphasis on significant results of recent years. The author presents an overview of device-modeling techniques applicable to the SEU problem and the unique challenges this task presents to the device modeler. He examines unloaded simulations of radiation-induced charge collection in simple p/n diodes, SEU in dynamic random access memories (DRAM's), and SEU in static random access memories (SRAM's). The author concludes with a few thoughts on future issues likely to confront the SEU device modeler

  13. Developing Flexible Discrete Event Simulation Models in an Uncertain Policy Environment

    Science.gov (United States)

    Miranda, David J.; Fayez, Sam; Steele, Martin J.

    2011-01-01

    On February 1st, 2010, U.S. President Barack Obama submitted to Congress his proposed budget request for Fiscal Year 2011. This budget included significant changes to the National Aeronautics and Space Administration (NASA), including the proposed cancellation of the Constellation Program. This change proved to be controversial, and Congressional approval of the program's official cancellation would take many months to complete. During this same period an end-to-end discrete event simulation (DES) model of Constellation operations was being built through the joint efforts of Productivity Apex Inc. (PAI) and Science Applications International Corporation (SAIC) teams under the guidance of NASA. The uncertainty regarding the Constellation program presented a major challenge to the DES team: continue the development of this program-of-record simulation while at the same time remaining prepared for possible changes to the program. This required the team to rethink how it would develop its model and make it flexible enough to support possible future vehicles while remaining specific enough to support the program-of-record. This challenge was compounded by the fact that the model was being developed through the traditional DES process-orientation, which lacks the flexibility of object-oriented approaches. The team met this challenge through significant pre-planning that led to the "modularization" of the model's structure by identifying what was generic, finding natural logic break points, and standardizing the interlogic numbering system. The outcome of this work was a model that not only was ready to be easily modified to support future rocket programs, but was also extremely structured and organized in a way that facilitated rapid verification. This paper discusses in detail the process the team followed to build this model and the many advantages this method provides builders of traditional process-oriented discrete

  14. Comparative Effectiveness of Tacrolimus-Based Steroid Sparing versus Steroid Maintenance Regimens in Kidney Transplantation: Results from Discrete Event Simulation.

    Science.gov (United States)

    Desai, Vibha C A; Ferrand, Yann; Cavanaugh, Teresa M; Kelton, Christina M L; Caro, J Jaime; Goebel, Jens; Heaton, Pamela C

    2017-10-01

    Corticosteroids used as immunosuppressants to prevent acute rejection (AR) and graft loss (GL) following kidney transplantation are associated with serious cardiovascular and other adverse events. Evidence from short-term randomized controlled trials suggests that many patients on a tacrolimus-based immunosuppressant regimen can withdraw from steroids without increased AR or GL risk. To measure the long-term tradeoff between GL and adverse events for a heterogeneous-risk population and determine the optimal timing of steroid withdrawal. A discrete event simulation was developed including, as events, AR, GL, myocardial infarction (MI), stroke, cytomegalovirus, and new onset diabetes mellitus (NODM), among others. Data from the United States Renal Data System were used to estimate event-specific parametric regressions, which accounted for steroid-sparing regimen (avoidance, early 7-d withdrawal, 6-mo withdrawal, 12-mo withdrawal, and maintenance) as well as patients' demographics, immunologic risks, and comorbidities. Regression-equation results were used to derive individual time-to-event Weibull distributions, used, in turn, to simulate the course of patients over 20 y. Patients on steroid avoidance or an early-withdrawal regimen were more likely to experience AR (45.9% to 55.0% v. 33.6%, P events and other outcomes with no worsening of AR or GL rates compared with steroid maintenance.
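    The competing-risks step described above, drawing an individual Weibull time for each candidate event and taking the earliest, can be sketched as follows. The event names and (shape, scale) parameters below are illustrative assumptions, not the United States Renal Data System estimates used in the study:

```python
import random

def simulate_patient(hazards, horizon=20.0, seed=None):
    """Course of one simulated patient over `horizon` years. `hazards`
    maps event name -> (shape, scale) of a Weibull time-to-event; the
    parameters used here are illustrative only."""
    rng = random.Random(seed)
    history, t = [], 0.0
    while t < horizon:
        # draw a candidate time for every competing event, keep the earliest
        draws = {name: rng.weibullvariate(scale, shape)
                 for name, (shape, scale) in hazards.items()}
        event, dt = min(draws.items(), key=lambda kv: kv[1])
        t += dt
        if t >= horizon:
            break
        history.append((round(t, 2), event))
        if event == "graft_loss":            # absorbing event ends the course
            break
    return history
```

    Running this over many sampled patients, with regimen- and covariate-specific parameters, is the essence of the individual-level discrete event simulation the abstract describes.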

  15. Discrete-event simulation of coordinated multi-point joint transmission in LTE-Advanced with constrained backhaul

    DEFF Research Database (Denmark)

    Artuso, Matteo; Christiansen, Henrik Lehrmann

    2014-01-01

    Inter-cell interference in LTE-Advanced can be mitigated using coordinated multi-point (CoMP) techniques with joint transmission of user data. However, this requires tight coordination of the eNodeBs, using the X2 interface. In this paper we use discrete-event simulation to evaluate the latency requirements for the X2 interface and investigate the consequences of a constrained backhaul. Our simulation results show a gain of the system throughput of up to 120% compared to the case without CoMP for low-latency backhaul. With X2 latencies above 5 ms CoMP is no longer a benefit to the network.

  16. Event-triggered decentralized adaptive fault-tolerant control of uncertain interconnected nonlinear systems with actuator failures.

    Science.gov (United States)

    Choi, Yun Ho; Yoo, Sung Jin

    2018-06-01

    This paper investigates the event-triggered decentralized adaptive tracking problem of a class of uncertain interconnected nonlinear systems with unexpected actuator failures. It is assumed that local control signals are transmitted to local actuators with time-varying faults whenever predefined conditions for triggering events are satisfied. Compared with the existing control-input-based event-triggering strategy for adaptive control of uncertain nonlinear systems, the aim of this paper is to propose a tracking-error-based event-triggering strategy in the decentralized adaptive fault-tolerant tracking framework. The proposed approach can relax drastic changes in control inputs caused by actuator faults in the existing triggering strategy. The stability of the proposed event-triggering control system is analyzed in the Lyapunov sense. Finally, simulation comparisons of the proposed and existing approaches are provided to show the effectiveness of the proposed theoretical result in the presence of actuator faults. Copyright © 2018 ISA. Published by Elsevier Ltd. All rights reserved.
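    The tracking-error-based triggering idea can be illustrated on a scalar plant. The sketch below uses assumed gains and threshold and omits the adaptive fault-tolerant machinery of the paper; it simply recomputes the control input only when the tracking error has drifted past a threshold since the last event:

```python
def simulate_event_triggered(steps=200, dt=0.05, threshold=0.05):
    """Scalar plant x' = a*x + u tracking reference r. The feedback is
    recomputed only at trigger events; a, k, r, threshold are illustrative."""
    a, k, r = 0.5, 4.0, 1.0
    x, u, last_err = 0.0, 0.0, None
    triggers = 0
    for _ in range(steps):
        err = x - r
        # event condition: error drifted past the threshold since last update
        if last_err is None or abs(err - last_err) > threshold:
            u = -k * err - a * r         # feedback plus feedforward (-a*r)
            last_err = err
            triggers += 1
        x += (a * x + u) * dt            # plant integrates the held input
    return x, triggers
```

    The state still converges to a neighborhood of the reference while the controller updates far less often than once per step, which is the resource saving that motivates event-triggered control.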

  17. Two-dimensional numerical simulation of the effect of single event burnout for n-channel VDMOSFET

    International Nuclear Information System (INIS)

    Guo Hongxia; Chen Yusheng; Wang Wei; Zhao Jinlong; Zhang Yimen; Zhou Hui

    2004-01-01

    The 2D MEDICI simulator is used to investigate the effect of Single Event Burnout (SEB) for n-channel power VDMOSFETs. The simulation results are consistent with previously published experimental results. The simulation results are of great interest for a better understanding of the occurrence of such events. The effects of the minority carrier lifetime in the base region, the base width and the emitter doping density on SEB susceptibility are verified. Some hardening solutions to SEB are provided. The work shows that the 2D simulator MEDICI is a useful tool for burnout prediction and for the evaluation of hardening solutions. (authors)

  18. Comparison of discrete event simulation tools in an academic environment

    Directory of Open Access Journals (Sweden)

    Mario Jadrić

    2014-12-01

    Full Text Available A new research model for simulation software evaluation is proposed, consisting of three main categories of criteria: the modeling capabilities and the simulation capabilities of the explored tools, and the tools' input/output analysis possibilities, all with respective sub-criteria. Using the presented model, two discrete event simulation tools are evaluated in detail using a task-centred scenario. Both tools (Arena and ExtendSim) were used for teaching discrete event simulation in preceding academic years. With the aim to inspect their effectiveness and to help us determine which tool is more suitable for students, i.e. academic purposes, we used a simple simulation model of entities competing for limited resources. The main goal was to measure subjective (primarily attitude) and objective indicators while using the tools when the same simulation scenario is given. The subjects were first-year students of Master studies in Information Management at the Faculty of Economics in Split taking a course in Business Process Simulations (BPS). In a controlled environment, a computer lab, two groups of students were given detailed, step-by-step instructions for building models using both tools, first using ExtendSim then Arena or vice versa. Subjective indicators (students' attitudes) were collected using an online survey completed immediately upon building each model. Subjective indicators primarily include students' personal estimations of Arena and ExtendSim capabilities/features for model building, model simulation and result analysis. Objective indicators were measured using specialised software that logs information on the user's behavior while performing a particular task on the computer, such as the distance crossed by the mouse during model building, the number of mouse clicks, usage of the mouse wheel and the speed achieved. The results indicate that ExtendSim is clearly preferred to Arena with regard to subjective indicators, while the objective indicators are

  19. An Event-Driven Hybrid Molecular Dynamics and Direct Simulation Monte Carlo Algorithm

    Energy Technology Data Exchange (ETDEWEB)

    Donev, A; Garcia, A L; Alder, B J

    2007-07-30

    A novel algorithm is developed for the simulation of polymer chains suspended in a solvent. The polymers are represented as chains of hard spheres tethered by square wells and interact with the solvent particles with hard core potentials. The algorithm uses event-driven molecular dynamics (MD) for the simulation of the polymer chain and the interactions between the chain beads and the surrounding solvent particles. The interactions between the solvent particles themselves are not treated deterministically as in event-driven algorithms, rather, the momentum and energy exchange in the solvent is determined stochastically using the Direct Simulation Monte Carlo (DSMC) method. The coupling between the solvent and the solute is consistently represented at the particle level, however, unlike full MD simulations of both the solvent and the solute, the spatial structure of the solvent is ignored. The algorithm is described in detail and applied to the study of the dynamics of a polymer chain tethered to a hard wall subjected to uniform shear. The algorithm closely reproduces full MD simulations with two orders of magnitude greater efficiency. Results do not confirm the existence of periodic (cycling) motion of the polymer chain.
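    The elementary event-prediction step of event-driven molecular dynamics, computing the exact time at which two hard spheres next collide, can be sketched as follows (this is the generic hard-sphere formula, not the paper's full hybrid MD/DSMC scheme):

```python
import math

def collision_time(r1, v1, r2, v2, sigma):
    """Time until two hard spheres (contact distance sigma) collide,
    or None if they never do; assumes they do not already overlap."""
    dr = [q - p for p, q in zip(r1, r2)]      # relative position
    dv = [q - p for p, q in zip(v1, v2)]      # relative velocity
    b = sum(x * v for x, v in zip(dr, dv))
    if b >= 0:                                # moving apart: no event
        return None
    dv2 = sum(v * v for v in dv)
    disc = b * b - dv2 * (sum(x * x for x in dr) - sigma * sigma)
    if disc < 0:                              # glancing miss
        return None
    return (-b - math.sqrt(disc)) / dv2
```

    An event-driven simulation keeps the earliest of all such predicted times in a priority queue, advances the system to that instant exactly, processes the collision, and recomputes only the affected predictions.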

  20. Distributed Event-Triggered Control of Multiagent Systems with Time-Varying Topology

    Directory of Open Access Journals (Sweden)

    Jingwei Ma

    2014-01-01

    Full Text Available This paper studies the consensus of first-order discrete-time multiagent systems, where the interaction topology is time-varying. The event-triggered control is used to update the control input of each agent, and the event-triggering condition is designed based on the combination of the relative states of each agent to its neighbors. By applying the common Lyapunov function method, a sufficient condition for consensus, which is expressed as a group of linear matrix inequalities, is obtained and the feasibility of these linear matrix inequalities is further analyzed. Simulation examples are provided to explain the effectiveness of the theoretical results.
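    A minimal version of event-triggered consensus can be sketched for a fixed undirected topology (the paper treats the harder time-varying case). In this simplified state-based condition, an assumption rather than the paper's combination of relative states, each agent rebroadcasts its state only when it has drifted past a threshold since its last broadcast:

```python
def consensus(x0, neighbors, steps=100, eps=0.2, gain=0.2):
    """First-order agents with x[i] += gain * sum(xhat[j] - xhat[i]).
    Agent i rebroadcasts (an event) when its state drifts more than eps
    from its last broadcast value xhat[i]. `neighbors` must be an
    undirected adjacency list, so the average of x is preserved.
    All parameters are illustrative."""
    x, xhat, triggers = list(x0), list(x0), 0
    for _ in range(steps):
        for i in range(len(x)):
            if abs(x[i] - xhat[i]) > eps:    # event: broadcast fresh state
                xhat[i] = x[i]
                triggers += 1
        u = [gain * sum(xhat[j] - xhat[i] for j in nbrs)
             for i, nbrs in enumerate(neighbors)]
        x = [xi + ui for xi, ui in zip(x, u)]
    return x, triggers
```

    Because the control law is antisymmetric over undirected edges, the state average is invariant, and the agents converge to a neighborhood of it whose size is set by the triggering threshold.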

  1. World, We Have Problems: Simulation for Large Complex, Risky Projects, and Events

    Science.gov (United States)

    Elfrey, Priscilla

    2010-01-01

    Prior to a spacewalk during the NASA STS-129 mission in November 2009, Columbia Broadcasting System (CBS) correspondent William Harwood reported that astronauts "were awakened again", as they had been the day previously. Fearing something not properly connected was causing a leak, the crew, both on the ground and in space, stopped and checked everything. The alarm proved false. The crew did complete its work ahead of schedule, but the incident reminds us that correctly connecting hundreds and thousands of entities, subsystems and systems, finding leaks, loosening stuck valves, and adding replacements to very large complex systems over time does not occur magically. Everywhere, major projects present similar pressures. Lives are at risk. Responsibility is heavy. Large natural and human-created disasters introduce parallel difficulties as people work across the boundaries of their countries, disciplines, languages, and cultures, with known immediate dangers as well as the unexpected. NASA has long accepted that when humans have to go where humans cannot go, simulation is the sole solution. The Agency uses simulation to achieve consensus, reduce ambiguity and uncertainty, understand problems, make decisions, support design, do planning and troubleshooting, as well as for operations, training, testing, and evaluation. Simulation is at the heart of all such complex systems, products, projects, programs, and events. Difficult, hazardous short and, especially, long-term activities have a persistent need for simulation from the first insight into a possibly workable idea or answer until the final report, perhaps beyond our lifetime, is put in the archive. With simulation we create a common mental model, try out breakdowns of machinery or teamwork, and find opportunity for improvement. Lifecycle simulation proves to be increasingly important as risks and consequences intensify. Across the world, disasters are increasing.
We anticipate more of them, as the results of global warming

  2. Efficient rare-event simulation for multiple jump events in regularly varying random walks and compound Poisson processes

    NARCIS (Netherlands)

    B. Chen (Bohan); J. Blanchet; C.H. Rhee (Chang-Han); A.P. Zwart (Bert)

    2017-01-01

We propose a class of strongly efficient rare-event simulation estimators for random walks and compound Poisson processes with a regularly varying increment/jump-size distribution in a general large-deviations regime. Our estimator is based on an importance sampling strategy that hinges
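The importance-sampling idea this abstract builds on, drawing samples from a heavier-tailed proposal distribution and reweighting by the likelihood ratio, can be sketched in a deliberately simple setting: a single Pareto tail probability rather than the multiple-jump events the paper treats. All distributions and parameters below are illustrative, not the paper's estimator.

```python
import random


def pareto_pdf(x, alpha, xm=1.0):
    """Density of a Pareto(alpha) distribution with scale xm, for x >= xm."""
    return alpha * xm**alpha / x ** (alpha + 1)


def is_tail_estimate(b, alpha=2.0, alpha_proposal=0.5, n=100_000, seed=42):
    """Importance-sampling estimate of P(X > b) for X ~ Pareto(alpha).

    Samples are drawn from a heavier-tailed Pareto(alpha_proposal), so the
    rare event {X > b} is hit often, and each hit is reweighted by the
    likelihood ratio f/g to keep the estimator unbiased.
    """
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        u = rng.random()
        x = u ** (-1.0 / alpha_proposal)  # inverse-CDF sample from the proposal
        if x > b:
            total += pareto_pdf(x, alpha) / pareto_pdf(x, alpha_proposal)
    return total / n
```

For Pareto(2) the exact tail is P(X > b) = b**-2, so the estimate can be checked directly; the likelihood ratio is bounded on the rare event, which is what keeps the relative error small.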

  3. Can discrete event simulation be of use in modelling major depression?

    Science.gov (United States)

    Le Lay, Agathe; Despiegel, Nicolas; François, Clément; Duru, Gérard

    2006-12-05

Depression is among the major contributors to worldwide disease burden and adequate modelling requires a framework designed to depict real world disease progression as well as its economic implications as closely as possible. In light of the specific characteristics associated with depression (multiple episodes at varying intervals, impact of disease history on course of illness, sociodemographic factors), our aim was to clarify to what extent "Discrete Event Simulation" (DES) models provide methodological benefits in depicting disease evolution. We conducted a comprehensive review of published Markov models in depression and identified potential limits to their methodology. A model based on DES principles was developed to investigate the benefits and drawbacks of this simulation method compared with Markov modelling techniques. The major drawback to Markov models is that they may not be suitable for tracking patients' disease history properly, unless the analyst defines multiple health states, which may lead to intractable situations. They are also too rigid to take into consideration multiple patient-specific sociodemographic characteristics in a single model. To do so would also require defining multiple health states, which would render the analysis entirely too complex. We show that DES resolves these weaknesses and that its flexibility allows patients with differing attributes to move from one event to another in sequential order while simultaneously taking into account important risk factors such as age, gender, disease history and patients' attitude towards treatment, together with any disease-related events (adverse events, suicide attempts, etc.). DES modelling appears to be an accurate, flexible and comprehensive means of depicting disease progression compared with conventional simulation methodologies. Its use in analysing recurrent and chronic diseases appears particularly useful compared with Markov processes.
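As a hedged illustration of the DES mechanics the abstract describes, patients with individual histories moving from event to event in time order, the toy model below uses a priority queue of relapse events in which each past episode raises that patient's future relapse rate. All rates are invented for illustration and are not taken from the paper.

```python
import heapq
import random


def simulate_patients(n_patients=100, horizon=10.0, base_rate=0.3, seed=1):
    """Toy discrete-event simulation of recurrent depressive episodes.

    Each patient has exponentially distributed inter-episode times; the
    disease-history effect is modeled by increasing a patient's relapse
    rate with every episode (illustrative numbers only).
    """
    rng = random.Random(seed)
    episodes = [0] * n_patients
    events = []  # priority queue of (event_time, patient_id)
    for pid in range(n_patients):
        heapq.heappush(events, (rng.expovariate(base_rate), pid))
    while events:
        t, pid = heapq.heappop(events)
        if t > horizon:
            continue  # event falls past the simulation horizon; discard
        episodes[pid] += 1
        # disease history raises the hazard of the next episode
        rate = base_rate * (1.0 + 0.2 * episodes[pid])
        heapq.heappush(events, (t + rng.expovariate(rate), pid))
    return episodes
```

Because events are processed in strict time order, patient-specific attributes (here just episode count) can influence every subsequent transition, which is exactly the flexibility the abstract contrasts with fixed-state Markov models.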

  4. Using soft systems methodology to develop a simulation of out-patient services.

    Science.gov (United States)

    Lehaney, B; Paul, R J

    1994-10-01

    Discrete event simulation is an approach to modelling a system in the form of a set of mathematical equations and logical relationships, usually used for complex problems, which are difficult to address by using analytical or numerical methods. Managing out-patient services is such a problem. However, simulation is not in itself a systemic approach, in that it provides no methodology by which system boundaries and system activities may be identified. The investigation considers the use of soft systems methodology as an aid to drawing system boundaries and identifying system activities, for the purpose of simulating the outpatients' department at a local hospital. The long term aims are to examine the effects that the participative nature of soft systems methodology has on the acceptability of the simulation model, and to provide analysts and managers with a process that may assist in planning strategies for health care.

  5. Molecular Simulation of Reacting Systems; TOPICAL

    International Nuclear Information System (INIS)

    THOMPSON, AIDAN P.

    2002-01-01

    The final report for a Laboratory Directed Research and Development project entitled, Molecular Simulation of Reacting Systems is presented. It describes efforts to incorporate chemical reaction events into the LAMMPS massively parallel molecular dynamics code. This was accomplished using a scheme in which several classes of reactions are allowed to occur in a probabilistic fashion at specified times during the MD simulation. Three classes of reaction were implemented: addition, chain transfer and scission. A fully parallel implementation was achieved using a checkerboarding scheme, which avoids conflicts due to reactions occurring on neighboring processors. The observed chemical evolution is independent of the number of processors used. The code was applied to two test applications: irreversible linear polymerization and thermal degradation chemistry

  6. Event storm detection and identification in communication systems

    International Nuclear Information System (INIS)

    Albaghdadi, Mouayad; Briley, Bruce; Evens, Martha

    2006-01-01

    Event storms are the manifestation of an important class of abnormal behaviors in communication systems. They occur when a large number of nodes throughout the system generate a set of events within a small period of time. It is essential for network management systems to detect every event storm and identify its cause, in order to prevent and repair potential system faults. This paper presents a set of techniques for the effective detection and identification of event storms in communication systems. First, we introduce a new algorithm to synchronize events to a single node in the system. Second, the system's event log is modeled as a normally distributed random process. This is achieved by using data analysis techniques to explore and then model the statistical behavior of the event log. Third, event storm detection is proposed using a simple test statistic combined with an exponential smoothing technique to overcome the non-stationary behavior of event logs. Fourth, the system is divided into non-overlapping regions to locate the main contributing regions of a storm. We show that this technique provides us with a method for event storm identification. Finally, experimental results from a commercially deployed multimedia communication system that uses these techniques demonstrate their effectiveness
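A minimal sketch of the detection step described above, a simple test statistic combined with exponential smoothing so the baseline tracks a non-stationary event log, might look as follows. The smoothing constant and threshold are illustrative, not the paper's values.

```python
def detect_storms(counts, alpha=0.2, k=3.0):
    """Flag time bins whose event count exceeds an EWMA baseline by k sigmas.

    counts : per-bin event counts from the log
    alpha  : exponential smoothing constant (tracks non-stationary baseline)
    k      : number of standard deviations for the test statistic
    """
    mean, var = float(counts[0]), 1.0
    storms = []
    for i, c in enumerate(counts[1:], start=1):
        if c > mean + k * var ** 0.5:
            storms.append(i)  # test statistic exceeded: candidate storm bin
        # exponentially smoothed updates of the running mean and variance
        diff = c - mean
        mean += alpha * diff
        var = (1 - alpha) * (var + alpha * diff * diff)
    return storms
```

Updating the mean and variance after the test (rather than before) keeps a sudden storm from immediately inflating its own detection threshold.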

  7. Discrete-Event Simulation Unmasks the Quantum Cheshire Cat

    Science.gov (United States)

    Michielsen, Kristel; Lippert, Thomas; Raedt, Hans De

    2017-05-01

It is shown that discrete-event simulation accurately reproduces the experimental data of a single-neutron interferometry experiment [T. Denkmayr et al., Nat. Commun. 5, 4492 (2014)] and provides a logically consistent, paradox-free, cause-and-effect explanation of the quantum Cheshire cat effect without invoking the notion that the neutron and its magnetic moment separate. Describing the experimental neutron data using weak-measurement theory is shown to be useless for unravelling the quantum Cheshire cat effect.

  8. Discrete event model-based simulation for train movement on a single-line railway

    International Nuclear Information System (INIS)

    Xu Xiao-Ming; Li Ke-Ping; Yang Li-Xing

    2014-01-01

The aim of this paper is to present a discrete event model-based approach to simulate train movement that takes the energy-saving factor into account. We conduct extensive case studies to show the dynamic characteristics of the traffic flow and demonstrate the effectiveness of the proposed approach. The simulation results indicate that the proposed discrete event model-based simulation approach is suitable for characterizing the movements of a group of trains on a single railway line with fewer iterations and less CPU time. Additionally, some other qualitative and quantitative characteristics are investigated. In particular, because of the cumulative influence of the preceding trains, the following trains must be accelerated or braked frequently to control the headway distance, leading to higher energy consumption. (general)

  9. Interferences and events on epistemic shifts in physics through computer simulations

    CERN Document Server

    Warnke, Martin

    2017-01-01

    Computer simulations are omnipresent media in today's knowledge production. For scientific endeavors such as the detection of gravitational waves and the exploration of subatomic worlds, simulations are essential; however, the epistemic status of computer simulations is rather controversial as they are neither just theory nor just experiment. Therefore, computer simulations have challenged well-established insights and common scientific practices as well as our very understanding of knowledge. This volume contributes to the ongoing discussion on the epistemic position of computer simulations in a variety of physical disciplines, such as quantum optics, quantum mechanics, and computational physics. Originating from an interdisciplinary event, it shows that accounts of contemporary physics can constructively interfere with media theory, philosophy, and the history of science.

  10. Fast simulation of the trigger system of the ATLAS detector at LHC

    International Nuclear Information System (INIS)

    Epp, B.; Ghete, V.M.; Kuhn, D.; Zhang, Y.J.

    2004-01-01

The trigger system of the ATLAS detector aims to maximize the physics coverage and to be open to new and possibly unforeseen physics signatures. It is a multi-level system, composed of a hardware trigger at level-1 followed by the high-level trigger (level-2 and event-filter). In order to understand its performance, to optimize it and to reduce its total cost, the trigger system requires a detailed simulation which is time- and resource-consuming. An alternative to the full detector simulation is a so-called 'fast simulation', which starts the analysis from particle level and replaces the full detector simulation and the detailed particle tracking with parametrized distributions obtained from the full simulation and/or a simplified detector geometry. The fast simulation offers a less precise description of trigger performance, but it is faster and less resource-consuming. (author)
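The essence of such a fast simulation, replacing detailed particle tracking with a parametrized detector response, can be sketched as a simple Gaussian smearing of true particle quantities. This is purely illustrative; the actual ATLAS parametrizations are far more detailed and depend on the detector region.

```python
import random


def fast_sim_smear(true_pts, resolution=0.1, seed=3):
    """Parametrized 'fast simulation' sketch: smear true transverse momenta.

    Instead of simulating every detector interaction, each true pT is
    multiplied by a Gaussian response factor whose width plays the role of
    a resolution parametrization extracted from full simulation.
    """
    rng = random.Random(seed)
    return [pt * (1.0 + rng.gauss(0.0, resolution)) for pt in true_pts]
```

A trigger decision can then be evaluated on the smeared quantities at a tiny fraction of the cost of full tracking, at the price of the reduced precision the abstract mentions.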

  11. Rare event simulation for dynamic fault trees

    NARCIS (Netherlands)

    Ruijters, Enno Jozef Johannes; Reijsbergen, D.P.; de Boer, Pieter-Tjerk; Stoelinga, Mariëlle Ida Antoinette

    2017-01-01

    Fault trees (FT) are a popular industrial method for reliability engineering, for which Monte Carlo simulation is an important technique to estimate common dependability metrics, such as the system reliability and availability. A severe drawback of Monte Carlo simulation is that the number of

  12. Rare Event Simulation for Dynamic Fault Trees

    NARCIS (Netherlands)

    Ruijters, Enno Jozef Johannes; Reijsbergen, D.P.; de Boer, Pieter-Tjerk; Stoelinga, Mariëlle Ida Antoinette; Tonetta, Stefano; Schoitsch, Erwin; Bitsch, Friedemann

    2017-01-01

    Fault trees (FT) are a popular industrial method for reliability engineering, for which Monte Carlo simulation is an important technique to estimate common dependability metrics, such as the system reliability and availability. A severe drawback of Monte Carlo simulation is that the number of

  13. A community-based event delivery protocol in publish/subscribe systems for delay tolerant sensor networks.

    Science.gov (United States)

    Liu, Nianbo; Liu, Ming; Zhu, Jinqi; Gong, Haigang

    2009-01-01

The basic operation of a Delay Tolerant Sensor Network (DTSN) is to finish pervasive data gathering in networks with intermittent connectivity, while the publish/subscribe (Pub/Sub for short) paradigm is used to deliver events from a source to interested clients in an asynchronous way. Recently, extension of Pub/Sub systems in DTSNs has become a promising research topic. However, due to the unique frequent partitioning characteristic of DTSNs, extension of a Pub/Sub system in a DTSN is a considerably difficult and challenging problem, and there are no good solutions to this problem in published works. To adapt Pub/Sub systems to DTSNs, we propose CED, a community-based event delivery protocol. In our design, event delivery is based on several unchanged communities, which are formed by sensor nodes in the network according to their connectivity. CED consists of two components: event delivery and queue management. In event delivery, events in a community are delivered to mobile subscribers once a subscriber comes into the community, for improving the data delivery ratio. The queue management employs both the event successful delivery time and the event survival time to decide whether an event should be delivered or dropped for minimizing the transmission overhead. The effectiveness of CED is demonstrated through comprehensive simulation studies.
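The queue-management rule described above, dropping an event once its survival time has elapsed or it has already been delivered, can be sketched as follows. The field names are invented for illustration; the real CED protocol additionally uses the event successful delivery time in its decision.

```python
from dataclasses import dataclass


@dataclass
class Event:
    """Minimal event record for the queue-management sketch (fields invented)."""
    created: float        # time the event entered the queue
    survival_time: float  # how long the event remains worth delivering
    delivered: bool = False


def prune_queue(queue, now):
    """Keep only events that are undelivered and still within survival time."""
    return [e for e in queue if not e.delivered and now - e.created <= e.survival_time]
```

Pruning expired and delivered events before each community contact is what bounds the transmission overhead the abstract refers to.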

  14. Simulating single-event burnout of n-channel power MOSFET's

    International Nuclear Information System (INIS)

    Johnson, G.H.; Hohl, J.H.; Schrimpf, R.D.; Galloway, K.F.

    1993-01-01

    Heavy ions are ubiquitous in a space environment. Single-event burnout of power MOSFET's is a sudden catastrophic failure mechanism that is initiated by the passage of a heavy ion through the device structure. The passage of the heavy ion generates a current filament that locally turns on a parasitic n-p-n transistor inherent to the power MOSFET. Subsequent high currents and high voltage in the device induce second breakdown of the parasitic bipolar transistor and hence meltdown of the device. This paper presents a model that can be used for simulating the burnout mechanism in order to gain insight into the significant device parameters that most influence the single-event burnout susceptibility of n-channel power MOSFET's

15. Non-fragile l2-l∞ control for discrete-time stochastic nonlinear systems under event-triggered protocols

    Science.gov (United States)

    Sun, Ying; Ding, Derui; Zhang, Sunjie; Wei, Guoliang; Liu, Hongjian

    2018-07-01

In this paper, the non-fragile l2-l∞ control problem is investigated for a class of discrete-time stochastic nonlinear systems under event-triggered communication protocols, which determine whether the measurement output should be transmitted to the controller or not. The main purpose of the addressed problem is to design an event-based output feedback controller subject to gain variations guaranteeing the prescribed disturbance attenuation level described by the l2-l∞ performance index. By utilizing the Lyapunov stability theory combined with the S-procedure, a sufficient condition is established to guarantee both the exponential mean-square stability and the l2-l∞ performance for the closed-loop system. In addition, with the help of the orthogonal decomposition, the desired controller parameter is obtained in terms of the solution to certain linear matrix inequalities. Finally, a simulation example is exploited to demonstrate the effectiveness of the proposed event-based controller design scheme.
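A common form of event-triggered protocol, consistent with the description above, transmits a measurement only when it deviates sufficiently from the last transmitted one. The scalar sketch below uses a generic quadratic triggering condition; the threshold and the condition itself are illustrative, not the paper's exact scheme.

```python
def event_triggered_stream(measurements, sigma=0.1):
    """Return the measurements an event-triggered protocol would transmit.

    A sample y is sent to the controller only when its squared deviation
    from the last transmitted value exceeds sigma * y**2 (a generic
    relative triggering condition; sigma is an illustrative threshold).
    """
    sent = []
    last = None
    for y in measurements:
        if last is None or (y - last) ** 2 > sigma * y ** 2:
            sent.append(y)
            last = y  # controller's copy is updated only on transmission
    return sent
```

Between transmissions the controller keeps using the last received value, which is why the analysis must guarantee performance despite the withheld samples.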

  16. RANA, a real-time multi-agent system simulator

    DEFF Research Database (Denmark)

    Jørgensen, Søren Vissing; Demazeau, Yves; Hallam, John

    2016-01-01

for individualisation and abstraction while retaining efficiency. Events are managed by the C++ simulator core. Full run state can be recorded for post-processed visualisation or analysis. The new tool is demonstrated in three different cases: a mining robot simulation, which is purely action based; an agent-based setup that verifies the high precision exhibited by RANA's simulation core; and a state-based firefly-like agent simulation that models real-time responses to fellow agents' signals, in which event propagation and reception affect the result of the simulation....

  17. Simulation and event reconstruction inside the PandaRoot framework

    International Nuclear Information System (INIS)

    Spataro, S

    2008-01-01

The PANDA detector will be located at the future GSI accelerator FAIR. Its primary objective is the investigation of the strong interaction with anti-proton beams, with incoming anti-proton momenta of up to 15 GeV/c. The PANDA offline simulation framework is called 'PandaRoot', as it is based upon the ROOT 5.14 package. It is characterized by high versatility: it allows one to perform simulation and analysis, and to run different event generators (EvtGen, Pluto, UrQmd) and different transport models (Geant3, Geant4, Fluka) with the same code, so that results can be compared simply by changing a few macro lines without recompiling at all. Moreover, auto-configuration scripts allow the full framework to be installed easily on different Linux distributions and with different compilers (the framework has been installed and tested on more than 10 Linux platforms) without further manipulation. The final data are in a tree format, easily accessible and readable through simple clicks in the ROOT browser. The presentation reports on the current status of the computing development inside the PandaRoot framework, in terms of detector implementation and event reconstruction

  18. Quantitative Simulation of QARBM Challenge Events During Radiation Belt Enhancements

    Science.gov (United States)

    Li, W.; Ma, Q.; Thorne, R. M.; Bortnik, J.; Chu, X.

    2017-12-01

    Various physical processes are known to affect energetic electron dynamics in the Earth's radiation belts, but their quantitative effects at different times and locations in space need further investigation. This presentation focuses on discussing the quantitative roles of various physical processes that affect Earth's radiation belt electron dynamics during radiation belt enhancement challenge events (storm-time vs. non-storm-time) selected by the GEM Quantitative Assessment of Radiation Belt Modeling (QARBM) focus group. We construct realistic global distributions of whistler-mode chorus waves, adopt various versions of radial diffusion models (statistical and event-specific), and use the global evolution of other potentially important plasma waves including plasmaspheric hiss, magnetosonic waves, and electromagnetic ion cyclotron waves from all available multi-satellite measurements. These state-of-the-art wave properties and distributions on a global scale are used to calculate diffusion coefficients, that are then adopted as inputs to simulate the dynamical electron evolution using a 3D diffusion simulation during the storm-time and the non-storm-time acceleration events respectively. We explore the similarities and differences in the dominant physical processes that cause radiation belt electron dynamics during the storm-time and non-storm-time acceleration events. The quantitative role of each physical process is determined by comparing against the Van Allen Probes electron observations at different energies, pitch angles, and L-MLT regions. This quantitative comparison further indicates instances when quasilinear theory is sufficient to explain the observed electron dynamics or when nonlinear interaction is required to reproduce the energetic electron evolution observed by the Van Allen Probes.

  19. Can discrete event simulation be of use in modelling major depression?

    Directory of Open Access Journals (Sweden)

    François Clément

    2006-12-01

Full Text Available Abstract Background Depression is among the major contributors to worldwide disease burden and adequate modelling requires a framework designed to depict real world disease progression as well as its economic implications as closely as possible. Objectives In light of the specific characteristics associated with depression (multiple episodes at varying intervals, impact of disease history on course of illness, sociodemographic factors), our aim was to clarify to what extent "Discrete Event Simulation" (DES) models provide methodological benefits in depicting disease evolution. Methods We conducted a comprehensive review of published Markov models in depression and identified potential limits to their methodology. A model based on DES principles was developed to investigate the benefits and drawbacks of this simulation method compared with Markov modelling techniques. Results The major drawback to Markov models is that they may not be suitable for tracking patients' disease history properly, unless the analyst defines multiple health states, which may lead to intractable situations. They are also too rigid to take into consideration multiple patient-specific sociodemographic characteristics in a single model. To do so would also require defining multiple health states, which would render the analysis entirely too complex. We show that DES resolves these weaknesses and that its flexibility allows patients with differing attributes to move from one event to another in sequential order while simultaneously taking into account important risk factors such as age, gender, disease history and patients' attitude towards treatment, together with any disease-related events (adverse events, suicide attempts, etc.). Conclusion DES modelling appears to be an accurate, flexible and comprehensive means of depicting disease progression compared with conventional simulation methodologies. Its use in analysing recurrent and chronic diseases appears particularly useful

  20. Simulation modeling and analysis with Arena

    CERN Document Server

    Altiok, Tayfur

    2007-01-01

Simulation Modeling and Analysis with Arena is a highly readable textbook which treats the essentials of the Monte Carlo discrete-event simulation methodology, and does so in the context of the popular Arena simulation environment. It treats simulation modeling as an in-vitro laboratory that facilitates the understanding of complex systems and experimentation with what-if scenarios in order to estimate their performance metrics. The book contains chapters on the simulation modeling methodology and the underpinnings of discrete-event systems, as well as the relevant underlying probability, statistics, stochastic processes, input analysis, model validation and output analysis. All simulation-related concepts are illustrated in numerous Arena examples, encompassing production lines, manufacturing and inventory systems, transportation systems, and computer information systems in networked settings.· Introduces the concept of discrete event Monte Carlo simulation, the most commonly used methodology for modeli...

  1. The Skateboard Factory: a teaching case on discrete-event simulation

    Directory of Open Access Journals (Sweden)

    Marco Aurélio de Mesquita

Full Text Available Abstract Real-life applications during the teaching process are a desirable practice in simulation education. However, access to real cases makes such a practice difficult to implement, especially when classes are large. This paper presents a teaching case for a computer simulation course in a production engineering undergraduate program. The motivation for the teaching case was to provide students with a realistic manufacturing case to stimulate the learning of simulation concepts and methods in the context of industrial engineering. The case considers a virtual factory of skateboards, whose operations include parts manufacturing, final assembly and storage of raw materials, work-in-process and finished products. Students should model and simulate the factory, under push and pull production strategies, using any simulation software available in the laboratory. The teaching case, applied in the last two years, contributed to motivate and consolidate the students’ learning of discrete-event simulation. It proved to be a feasible alternative to the previous practice of letting students freely choose a case for their final project, while keeping the essence of the project-based learning approach.

  2. Simulation based analysis and an application to an offshore oil and gas production system of the Natvig measures of component importance in repairable systems

    International Nuclear Information System (INIS)

    Natvig, Bent; Eide, Kristina A.; Gasemyr, Jorund; Huseby, Arne B.; Isaksen, Stefan L.

    2009-01-01

In the present paper the Natvig measures of component importance for repairable systems, and their extended versions, are analyzed for two three-component systems and a bridge system. The measures are also applied to an offshore oil and gas production system. According to the extended version of the Natvig measure, a component is important if both by failing it strongly reduces the expected system uptime and by being repaired it strongly reduces the expected system downtime. The results include a study of how different distributions affect the ranking of the components. All numerical results are computed using discrete event simulation. In a companion paper [Huseby AB, Eide KA, Isaksen SL, Natvig B, Gasemyr, J. Advanced discrete event simulation methods with application to importance measure estimation. 2009, submitted for publication] the advanced simulation methods needed in these calculations are described.

  3. A software Event Summation System for MDSplus

    International Nuclear Information System (INIS)

    Davis, W.M.; Mastrovito, D.M.; Roney, P.G.; Sichta, P.

    2008-01-01

The MDSplus data acquisition and management system uses software events for communication among interdependent processes anywhere on the network. Actions can then be triggered, such as running a data-acquisition routine or notifying analysis or display programs waiting for data. A small amount of data, such as a shot number, can be passed with these events. Since programs sometimes need more than one data set, we developed a system on NSTX to declare composite events using logical AND and OR operations. The system is written in the IDL language, so it can be run on Linux, Macintosh or Windows platforms. Like MDSplus, the Experimental Physics and Industrial Control System (EPICS) is a core component of the NSTX software environment. The Event Summation System provides an IDL-based interface to EPICS. This permits EPICS-aware processes to be synchronized with MDSplus-aware processes, to provide, for example, engineering operators with information about physics data acquisition and analysis. Reliability was a more important design consideration than performance for this system; the system's architecture includes features to support this. The system has run for weeks at a time without requiring manual intervention. Hundreds of incoming events per second can be handled reliably. All incoming and declared events are logged with a timestamp. The system can be configured easily through a single, easy-to-read text file
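The composite-event idea, declaring a new event only when a logical AND or OR of incoming events is satisfied, can be sketched independently of IDL/MDSplus as follows. The event names are invented for illustration.

```python
class CompositeEvent:
    """Fires once a logical combination of named incoming events has occurred.

    mode="AND": all listed events must be seen; mode="OR": any one suffices.
    """

    def __init__(self, names, mode="AND"):
        self.needed = set(names)
        self.seen = set()
        self.mode = mode

    def declare(self, name):
        """Record an incoming event; return True if the composite has fired."""
        if name in self.needed:
            self.seen.add(name)
        return self.fired()

    def fired(self):
        if self.mode == "AND":
            return self.seen == self.needed
        return bool(self.seen)  # OR: any needed event seen so far
```

A process waiting on, say, both a shot-complete event and an analysis-complete event would poll `declare` on each incoming event and act only when the composite fires.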

  4. DeMO: An Ontology for Discrete-event Modeling and Simulation

    Science.gov (United States)

    Silver, Gregory A; Miller, John A; Hybinette, Maria; Baramidze, Gregory; York, William S

    2011-01-01

    Several fields have created ontologies for their subdomains. For example, the biological sciences have developed extensive ontologies such as the Gene Ontology, which is considered a great success. Ontologies could provide similar advantages to the Modeling and Simulation community. They provide a way to establish common vocabularies and capture knowledge about a particular domain with community-wide agreement. Ontologies can support significantly improved (semantic) search and browsing, integration of heterogeneous information sources, and improved knowledge discovery capabilities. This paper discusses the design and development of an ontology for Modeling and Simulation called the Discrete-event Modeling Ontology (DeMO), and it presents prototype applications that demonstrate various uses and benefits that such an ontology may provide to the Modeling and Simulation community. PMID:22919114

  5. Analysis of the Steam Generator Tubes Rupture Initiating Event

    International Nuclear Information System (INIS)

    Trillo, A.; Minguez, E.; Munoz, R.; Melendez, E.; Sanchez-Perea, M.; Izquierd, J.M.

    1998-01-01

In PSA studies, Event Tree-Fault Tree techniques are used to analyse the consequences associated with the evolution of an initiating event. The Event Tree is built in the sequence identification stage, following the expected behaviour of the plant in a qualitative way. Computer simulation of the sequences is performed mainly to determine the time allowed for operator actions, and does not play a central role in ET validation. The simulation of the sequence evolution can instead be performed by using standard tools, helping the analyst obtain a more realistic ET. Long-existing methods and tools can be used to automate the construction of the event tree associated with a given initiator. These methods automatically construct the ET by simulating the plant behaviour following the initiator, allowing some of the systems to fail during the sequence evolution. Then, the sequences with and without the failure are followed. The outcome of all this is a Dynamic Event Tree. The work described here is the application of one such method to the particular case of the SGTR initiating event. The DYLAM scheduler, designed at the Ispra (Italy) JRC of the European Communities, is used to automatically drive the simulation of all the sequences constituting the Event Tree. As in the static Event Tree, each time a system is demanded, two branches are opened: one corresponding to the success and the other to the failure of the system. Both branches are followed by the plant simulator until a new system is demanded, and the process repeats. The plant simulation modelling allows the treatment of degraded sequences that enter the severe accident domain as well as of success sequences in which long-term cooling is started. (Author)
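The branching rule described above, opening a success branch and a failure branch each time a system is demanded, can be sketched as a sequence enumerator. In DYLAM the plant simulator advances between demands; that part is omitted here, and the system names are invented.

```python
def expand_sequences(demands, prefix=()):
    """Enumerate all success/failure sequences over a list of system demands.

    Each demand branches into 'success' and 'failure', as a dynamic event
    tree scheduler would drive them; the plant-simulation step between
    demands is deliberately left out of this sketch.
    """
    if not demands:
        return [prefix]  # a complete sequence (one ET branch)
    first, rest = demands[0], demands[1:]
    seqs = []
    for outcome in ("success", "failure"):
        seqs += expand_sequences(rest, prefix + ((first, outcome),))
    return seqs
```

With n demanded systems this produces the 2**n sequences of the static event tree; the dynamic variant prunes or reorders branches according to the simulated plant state at each demand.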

  6. SAFTAC, Monte-Carlo Fault Tree Simulation for System Design Performance and Optimization

    International Nuclear Information System (INIS)

    Crosetti, P.A.; Garcia de Viedma, L.

    1976-01-01

    1 - Description of problem or function: SAFTAC is a Monte Carlo fault tree simulation program that provides a systematic approach for analyzing system design, performing trade-off studies, and optimizing system changes or additions. 2 - Method of solution: SAFTAC assumes an exponential failure distribution for basic input events and a choice of either Gaussian distributed or constant repair times. The program views the system represented by the fault tree as a statistical assembly of independent basic input events, each characterized by an exponential failure distribution and, if used, a constant or normal repair distribution. 3 - Restrictions on the complexity of the problem: The program is dimensioned to handle 1100 basic input events and 1100 logical gates. It can be re-dimensioned to handle up to 2000 basic input events and 2000 logical gates within the existing core memory
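The sampling scheme SAFTAC assumes for basic input events, exponential failure times with constant repair times, can be illustrated by a Monte Carlo availability estimate for a single component. This is a sketch of the underlying sampling only, not of SAFTAC's fault-tree logic.

```python
import random


def simulate_availability(failure_rate, repair_time, horizon=10_000.0, seed=7):
    """Monte Carlo availability of one repairable component.

    Times to failure are exponential (rate = failure_rate); each repair
    takes a constant repair_time, matching SAFTAC's constant-repair option.
    Returns the fraction of the horizon the component spent up.
    """
    rng = random.Random(seed)
    t = up_time = 0.0
    while t < horizon:
        ttf = rng.expovariate(failure_rate)
        up_time += min(ttf, horizon - t)  # clip uptime at the horizon
        t += ttf + repair_time            # failure plus constant repair
    return up_time / horizon
```

The estimate should converge to the steady-state availability MTTF / (MTTF + MTTR); for failure rate 1 and repair time 0.25 that is 1 / 1.25 = 0.8.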

  7. Interactive Dynamic-System Simulation

    CERN Document Server

    Korn, Granino A

    2010-01-01

    Showing you how to use personal computers for modeling and simulation, Interactive Dynamic-System Simulation, Second Edition provides a practical tutorial on interactive dynamic-system modeling and simulation. It discusses how to effectively simulate dynamical systems, such as aerospace vehicles, power plants, chemical processes, control systems, and physiological systems. Written by a pioneer in simulation, the book introduces dynamic-system models and explains how software for solving differential equations works. After demonstrating real simulation programs with simple examples, the author

  8. Numerical simulations of an advection fog event over Shanghai Pudong International Airport with the WRF model

    Science.gov (United States)

    Lin, Caiyan; Zhang, Zhongfeng; Pu, Zhaoxia; Wang, Fengyun

    2017-10-01

    A series of numerical simulations is conducted to understand the formation, evolution, and dissipation of an advection fog event over Shanghai Pudong International Airport (ZSPD) with the Weather Research and Forecasting (WRF) model. Using the current operational settings at the Meteorological Center of East China Air Traffic Management Bureau, the WRF model successfully predicts the fog event at ZSPD. Additional numerical experiments are performed to examine the physical processes associated with the fog event. The results indicate that prediction of this particular fog event is sensitive to microphysical schemes for the time of fog dissipation but not for the time of fog onset. The simulated timing of the arrival and dissipation of the fog, as well as the cloud distribution, is substantially sensitive to the planetary boundary layer and radiation (both longwave and shortwave) processes. Moreover, varying forecast lead times also produces different simulation results for the fog event regarding its onset and duration, suggesting a trade-off between more accurate initial conditions and a proper forecast lead time that allows model physical processes to spin up adequately during the fog simulation. The overall outcomes from this study imply that the complexity of physical processes and their interactions within the WRF model during fog evolution and dissipation is a key area of future research.

  9. An Oracle-based Event Index for ATLAS

    CERN Document Server

    Gallas, Elizabeth; The ATLAS collaboration; Petrova, Petya Tsvetanova; Baranowski, Zbigniew; Canali, Luca; Formica, Andrea; Dumitru, Andrei

    2016-01-01

    The ATLAS EventIndex System has amassed a set of key quantities for a large number of ATLAS events into a Hadoop based infrastructure for the purpose of providing the experiment with a number of event-wise services. Collecting this data in one place provides the opportunity to investigate various storage formats and technologies, assess which best serve the various use cases, and consider what other benefits alternative storage systems provide. In this presentation we describe how the data are imported into an Oracle RDBMS, the services we have built based on this architecture, and our experience with it. We've indexed about 15 billion real data events and about 25 billion simulated events thus far and have designed the system to accommodate future data, which have expected rates of 5 and 20 billion events per year for real data and simulation, respectively. We have found this system offers outstanding performance for some fundamental use cases. In addition, profiting from the co-location of this data ...

  10. Discrete Event Modeling and Simulation-Driven Engineering for the ATLAS Data Acquisition Network

    CERN Document Server

    Bonaventura, Matias Alejandro; The ATLAS collaboration; Castro, Rodrigo Daniel

    2016-01-01

    We present an iterative and incremental development methodology for simulation models in network engineering projects. Driven by the DEVS (Discrete Event Systems Specification) formal framework for modeling and simulation, we assist network design, test, analysis and optimization processes. A practical application of the methodology is presented for a case study in the ATLAS particle physics detector, the largest scientific experiment built by man, where scientists around the globe search for answers about the origins of the universe. The ATLAS data network conveys real-time information produced by physics detectors as beams of particles collide. The produced sub-atomic evidence must be filtered and recorded for further offline scrutiny. Due to the criticality of the transported data, networks and applications undergo careful engineering processes with stringent quality of service requirements. A tight project schedule imposes time pressure on design decisions, while rapid technology evolution widens the palett...

  11. A Community-Based Event Delivery Protocol in Publish/Subscribe Systems for Delay Tolerant Sensor Networks

    Directory of Open Access Journals (Sweden)

    Haigang Gong

    2009-09-01

    The basic operation of a Delay Tolerant Sensor Network (DTSN is to finish pervasive data gathering in networks with intermittent connectivity, while the publish/subscribe (Pub/Sub for short) paradigm is used to deliver events from a source to interested clients in an asynchronous way. Recently, extension of Pub/Sub systems in DTSNs has become a promising research topic. However, due to the unique frequent partitioning characteristic of DTSNs, extension of a Pub/Sub system in a DTSN is a considerably difficult and challenging problem, and there are no good solutions to this problem in published works. To adapt Pub/Sub systems to DTSNs, we propose CED, a community-based event delivery protocol. In our design, event delivery is based on several unchanged communities, which are formed by sensor nodes in the network according to their connectivity. CED consists of two components: event delivery and queue management. In event delivery, events in a community are delivered to mobile subscribers once a subscriber enters the community, improving the data delivery ratio. The queue management employs both the event successful delivery time and the event survival time to decide whether an event should be delivered or dropped, minimizing the transmission overhead. The effectiveness of CED is demonstrated through comprehensive simulation studies.

  12. Test system to simulate transient overpower LMFBR cladding failure

    International Nuclear Information System (INIS)

    Barrus, H.G.; Feigenbutz, L.V.

    1981-01-01

    One of the HEDL programs has the objective to experimentally characterize fuel pin cladding failure due to cladding rupture or ripping. A new test system has been developed which simulates a transient mechanically-loaded fuel pin failure. In this new system the mechanical load is prototypic of a fuel pellet rapidly expanding against the cladding due to various causes such as fuel thermal expansion, fuel melting, and fuel swelling. This new test system is called the Fuel Cladding Mechanical Interaction Mandrel Loading Test (FCMI/MLT). The FCMI/MLT test system and the method used to rupture cladding specimens very rapidly to simulate a transient event are described. Also described is the automatic data acquisition and control system which is required to control the startup, operation and shutdown of the very fast tests, and needed to acquire and store large quantities of data in a short time

  13. Final Technical Report "Multiscale Simulation Algorithms for Biochemical Systems"

    Energy Technology Data Exchange (ETDEWEB)

    Petzold, Linda R.

    2012-10-25

    Biochemical systems are inherently multiscale and stochastic. In microscopic systems formed by living cells, the small numbers of reactant molecules can result in dynamical behavior that is discrete and stochastic rather than continuous and deterministic. An analysis tool that respects these dynamical characteristics is the stochastic simulation algorithm (SSA, Gillespie, 1976), a numerical simulation procedure that is essentially exact for chemical systems that are spatially homogeneous or well stirred. Despite recent improvements, as a procedure that simulates every reaction event, the SSA is necessarily inefficient for most realistic problems. There are two main reasons for this, both arising from the multiscale nature of the underlying problem: (1) stiffness, i.e. the presence of multiple timescales, the fastest of which are stable; and (2) the need to include in the simulation both species that are present in relatively small quantities and should be modeled by a discrete stochastic process, and species that are present in larger quantities and are more efficiently modeled by a deterministic differential equation (or at some scale in between). This project has focused on the development of fast and adaptive algorithms, and the fundamental theory upon which they must be based, for the multiscale simulation of biochemical systems. Areas addressed by this project include: (1) Theoretical and practical foundations for accelerated discrete stochastic simulation (tau-leaping); (2) Dealing with stiffness (fast reactions) in an efficient and well-justified manner in discrete stochastic simulation; (3) Development of adaptive multiscale algorithms for spatially homogeneous discrete stochastic simulation; (4) Development of high-performance SSA algorithms.
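
    As a concrete illustration of the SSA the project builds on, here is a minimal direct-method sketch; the example reaction system, rate constant, and species counts are illustrative, not taken from the report:

```python
import random

def gillespie_ssa(x, rates, stoich, t_end, seed=0):
    """Direct-method SSA (Gillespie 1976): simulate every reaction
    event of a well-stirred chemical system, one at a time."""
    rng = random.Random(seed)
    t = 0.0
    trajectory = [(t, list(x))]
    while True:
        a = [r(x) for r in rates]      # propensities in the current state
        a0 = sum(a)
        if a0 <= 0.0:                  # no reaction can fire any more
            break
        t += rng.expovariate(a0)       # exponential time to next reaction
        if t > t_end:
            break
        u = rng.random() * a0          # pick reaction j with prob. a_j / a0
        j, acc = 0, a[0]
        while acc < u:
            j += 1
            acc += a[j]
        for i, change in enumerate(stoich[j]):
            x[i] += change
        trajectory.append((t, list(x)))
    return trajectory

# Illustrative system (not from the report): irreversible
# isomerization A -> B with c = 0.5 and 100 initial A molecules.
traj = gillespie_ssa(x=[100, 0],
                     rates=[lambda s: 0.5 * s[0]],
                     stoich=[(-1, +1)],
                     t_end=50.0)
```

    Because each iteration simulates a single reaction event, the cost grows with the number of firings, which is exactly the inefficiency that tau-leaping and the project's hybrid methods aim to overcome.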

  14. Simulating Flaring Events via an Intelligent Cellular Automata Mechanism

    Science.gov (United States)

    Dimitropoulou, M.; Vlahos, L.; Isliker, H.; Georgoulis, M.

    2010-07-01

    We simulate flaring events through a Cellular Automaton (CA) model, in which, for the first time, we use observed vector magnetograms as initial conditions. After non-linear force free extrapolation of the magnetic field from the vector magnetograms, we identify magnetic discontinuities, using two alternative criteria: (1) the average magnetic field gradient, or (2) the normalized magnetic field curl (i.e. the current). Magnetic discontinuities are identified at the grid-sites where the magnetic field gradient or curl exceeds a specified threshold. We then relax the magnetic discontinuities according to the rules of Lu and Hamilton (1991) or Lu et al. (1993), i.e. we redistribute the magnetic field locally so that the discontinuities disappear. In order to simulate the flaring events, we consider several alternative scenarios with regard to: (1) The threshold above which magnetic discontinuities are identified (applying low, high, and height-dependent threshold values); (2) The driving process that occasionally causes new discontinuities (at randomly chosen grid sites, magnetic field increments are added that are perpendicular (or maybe also parallel) to the existing magnetic field). We address the question whether the coronal active region magnetic fields can indeed be considered to be in the state of self-organized criticality (SOC).
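
    A minimal sandpile-style sketch of such a drive-and-relax CA is below; the 2-D grid (rather than 3-D), grid size, instability threshold, and driving increments are illustrative choices, not values from the study, and the redistribution follows the commonly cited Lu & Hamilton form (the unstable site sheds 4/5 of its excess, each of its four neighbours receives 1/5):

```python
import random

def relax(B, zc):
    """Sweep the grid until no interior site exceeds the instability
    threshold zc; return the avalanche size (number of elementary
    relaxation events)."""
    n = len(B)
    size = 0
    changed = True
    while changed:
        changed = False
        for i in range(1, n - 1):
            for j in range(1, n - 1):
                dB = B[i][j] - (B[i-1][j] + B[i+1][j]
                                + B[i][j-1] + B[i][j+1]) / 4.0
                if abs(dB) > zc:
                    B[i][j] -= 0.8 * dB          # site sheds 4/5 of excess
                    for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                        B[i + di][j + dj] += 0.2 * dB
                    size += 1
                    changed = True
        for k in range(n):      # open boundaries: field leaves the system
            B[0][k] = B[n-1][k] = B[k][0] = B[k][n-1] = 0.0
    return size

def drive(n=16, steps=3000, zc=4.0, seed=3):
    """Slow random driving of a flat grid; every triggered avalanche
    is recorded as one 'flare'."""
    rng = random.Random(seed)
    B = [[0.0] * n for _ in range(n)]
    sizes = []
    for _ in range(steps):
        B[rng.randrange(1, n-1)][rng.randrange(1, n-1)] += rng.uniform(0.0, 2.0)
        s = relax(B, zc)
        if s:
            sizes.append(s)
    return sizes

flare_sizes = drive()
```

    In an SOC state the collected avalanche sizes follow a power-law-like distribution; the study's novelty is replacing the random initial field of such toy models with observed, extrapolated magnetograms.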

  15. submitter Simulation-Based Performance Analysis of the ALICE Mass Storage System

    CERN Document Server

    Vickovic, L; Celar, S

    2016-01-01

    CERN, the European Organization for Nuclear Research, is today, in the era of big data, one of the biggest data generators in the world. Especially interesting is the transient data storage system in the ALICE experiment. With the goal of optimizing its performance, this paper discusses a dynamic, discrete event simulation model of a disk-based Storage Area Network (SAN) and its usage for performance analyses. The storage system model is based on a modular, bottom-up approach, and the differences between measured and simulated values vary between 1.5% and 4%, depending on the simulated component. Once finished, the simulation model was used for detailed performance analyses. Among other findings, it showed that system performance can be seriously affected if the array stripe size is larger than the size of the cache on the individual disks in the array, which so far has been completely ignored in the literature.

  16. Simulation of the Tornado Event of 22 March, 2013 over ...

    Indian Academy of Sciences (India)

    An attempt has been made to simulate this rare event using the Weather Research and Forecasting (WRF) model. The model was run in a single domain at 9 km resolution for a period of 24 hrs, starting at 0000 UTC on 22 March, 2013. The meteorological conditions that led to form this tornado have been analyzed.

  17. Discrete event dynamic system (DES)-based modeling for dynamic material flow in the pyroprocess

    International Nuclear Information System (INIS)

    Lee, Hyo Jik; Kim, Kiho; Kim, Ho Dong; Lee, Han Soo

    2011-01-01

    A modeling and simulation methodology was proposed in order to implement the dynamic material flow of the pyroprocess. Since the static mass balance provides only limited information on the material flow, it is hard to predict dynamic behavior according to events. Therefore, a discrete event system (DES)-based model named PyroFlow was developed at the Korea Atomic Energy Research Institute (KAERI). PyroFlow is able to calculate the dynamic mass balance and also show various dynamic operational results in real time. By using PyroFlow, it is easy to rapidly predict unforeseeable results, such as throughput in a unit process, accumulated product in a buffer, and operation status. As preliminary simulations, bottleneck analyses of the pyroprocess were carried out, and they showed that the operation strategy influences the productivity of the pyroprocess.

  18. High performance discrete event simulations to evaluate complex industrial systems, the case of automatic

    NARCIS (Netherlands)

    Hoekstra, A.G.; Dorst, L.; Bergman, M.; Lagerberg, J.; Visser, A.; Yakali, H.; Groen, F.; Hertzberger, L.O.

    1997-01-01

    We have developed a Modelling and Simulation platform for technical evaluation of Electronic Toll Collection on Motor Highways. This platform is used in a project of the Dutch government to assess the technical feasibility of Toll Collection systems proposed by industry. Motivated by this work we

  19. Managing bottlenecks in manual automobile assembly systems using discrete event simulation

    Directory of Open Access Journals (Sweden)

    Dewa, M.

    2013-08-01

    Batch model lines are quite handy when the demand for each product is moderate. However, they are characterised by high work-in-progress inventories, lost production time when changing over models, and reduced flexibility when it comes to altering production rates as product demand changes. On the other hand, mixed model lines can offer reduced work-in-progress inventory and increased flexibility. The objective of this paper is to illustrate that a manual automobile assembling system can be optimised through managing bottlenecks by ensuring high workstation utilisation, reducing queue lengths before stations and reducing station downtime. A case study from the automobile industry is used for data collection. A model is developed through the use of simulation software. The model is then verified and validated before a detailed bottleneck analysis is conducted. An operational strategy is then proposed for optimal bottleneck management. Although the paper focuses on improving automobile assembly systems in batch mode, the methodology can also be applied in single model manual and automated production lines.
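
    The core of such a bottleneck analysis can be sketched with a toy event-driven tandem line; the three-station layout, service times, and arrival rate below are hypothetical and far simpler than a real assembly model built in commercial simulation software:

```python
import heapq
import random

def simulate_line(service_means, n_jobs, arrival_mean, seed=42):
    """Discrete event simulation of a tandem line: one server and an
    unbounded FIFO buffer per station.  Returns each station's
    utilization (busy time / makespan); the highest value flags the
    bottleneck candidate."""
    rng = random.Random(seed)
    n = len(service_means)
    events, seq, t = [], 0, 0.0
    for job in range(n_jobs):                  # exponential job arrivals
        t += rng.expovariate(1.0 / arrival_mean)
        heapq.heappush(events, (t, seq, 0, job)); seq += 1
    free_at = [0.0] * n                        # when each server next idles
    busy = [0.0] * n
    while events:
        t, _, s, job = heapq.heappop(events)   # job reaches station s
        start = max(t, free_at[s])             # queue if the server is busy
        service = rng.expovariate(1.0 / service_means[s])
        free_at[s] = start + service
        busy[s] += service
        if s + 1 < n:                          # pass the job downstream
            heapq.heappush(events, (free_at[s], seq, s + 1, job)); seq += 1
    makespan = max(free_at)
    return [b / makespan for b in busy]

# Hypothetical three-station line; the slow middle station should
# emerge as the bottleneck.
utilization = simulate_line([1.0, 2.0, 0.5], n_jobs=200, arrival_mean=3.0)
```

    Ranking stations by utilization (and, in a fuller model, by queue length and downtime) is the same diagnostic the paper applies before proposing its operational strategy.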

  20. Reduced herbivory during simulated ENSO rainy events increases native herbaceous plants in semiarid Chile

    NARCIS (Netherlands)

    Manrique, R.; Gutierrez, J.R.; Holmgren, M.; Squeo, F.A.

    2007-01-01

    El Niño Southern Oscillation (ENSO) events have profound consequences for the dynamics of terrestrial ecosystems. Since increased climate variability is expected to favour the invasive success of exotic species, we conducted a field experiment to study the effects that simulated rainy ENSO events in

  1. Discrete event systems diagnosis and diagnosability

    CERN Document Server

    Sayed-Mouchaweh, Moamar

    2014-01-01

    Discrete Event Systems: Diagnosis and Diagnosability addresses the problem of fault diagnosis of Discrete Event Systems (DES). This book provides the basic techniques and approaches necessary for the design of an efficient fault diagnosis system for a wide range of modern engineering applications. The different techniques and approaches are classified according to several criteria, such as: the modeling tools (Automata, Petri nets) that are used to construct the model; the information (qualitative, based on event occurrences and/or state outputs; quantitative, based on signal processing and data analysis) that is needed to analyze and achieve the diagnosis; and the decision structure (centralized, decentralized) that is required to achieve the diagnosis. The goal of this classification is to select the most efficient method to achieve the fault diagnosis according to the application constraints. This book focuses on the centralized and decentralized event based diagnosis approaches using formal language and automata as mode...

  2. Generalized Detectability for Discrete Event Systems

    Science.gov (United States)

    Shu, Shaolong; Lin, Feng

    2011-01-01

    In our previous work, we investigated detectability of discrete event systems, which is defined as the ability to determine the current and subsequent states of a system based on observation. For different applications, we defined four types of detectabilities: (weak) detectability, strong detectability, (weak) periodic detectability, and strong periodic detectability. In this paper, we extend our results in three aspects. (1) We extend detectability from deterministic systems to nondeterministic systems. Such a generalization is necessary because there are many systems that need to be modeled as nondeterministic discrete event systems. (2) We develop polynomial algorithms to check strong detectability. The previous algorithms are based on the observer, whose construction is of exponential complexity, while the new algorithms are based on a new automaton called the detector. (3) We extend detectability to D-detectability. While detectability requires determining the exact state of a system, D-detectability relaxes this requirement by asking only to distinguish certain pairs of states. With these extensions, the theory on detectability of discrete event systems becomes more applicable in solving many practical problems. PMID:21691432
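
    The state-estimation idea behind detectability can be sketched as follows; the toy automaton and event names are illustrative, and this is the naive subset-based estimate, not the paper's polynomial detector construction (which avoids the exponential observer blow-up):

```python
def unobservable_reach(states, delta, observable):
    """Close a set of states under transitions on unobservable events."""
    stack, reach = list(states), set(states)
    while stack:
        q = stack.pop()
        for (p, e), succs in delta.items():
            if p == q and e not in observable:
                for s in succs - reach:
                    reach.add(s)
                    stack.append(s)
    return reach

def state_estimate(delta, observable, init, observed_word):
    """Current-state estimate of a nondeterministic automaton after
    each observed event; detectability asks whether this estimate
    eventually shrinks to a single state and stays a singleton."""
    est = unobservable_reach(set(init), delta, observable)
    history = [frozenset(est)]
    for e in observed_word:
        nxt = set()
        for q in est:
            nxt |= delta.get((q, e), set())
        est = unobservable_reach(nxt, delta, observable)
        history.append(frozenset(est))
    return history

# Toy automaton: 'u' is unobservable, 'a' is observable.
delta = {(1, 'u'): {2}, (1, 'a'): {3}, (2, 'a'): {3}, (3, 'a'): {3}}
history = state_estimate(delta, observable={'a'}, init={1}, observed_word='aa')
# history: {1, 2} -> {3} -> {3}; the estimate becomes a singleton.
```

    Along this observation the estimate collapses to one state and stays there, which is the behaviour the detectability definitions formalize over all system trajectories.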

  3. Discrete event simulation for petroleum transfers involving harbors, refineries and pipelines

    Energy Technology Data Exchange (ETDEWEB)

    Martins, Marcella S.R.; Lueders, Ricardo; Delgado, Myriam R.B.S. [Universidade Tecnologica Federal do Parana (UTFPR), Curitiba, PR (Brazil)

    2009-07-01

    Nowadays a great effort has been spent by companies to improve their logistics in terms of programming of events that affect production and distribution of products. In this case, simulation can be a valuable tool for evaluating different behaviors. The objective of this work is to build a discrete event simulation model for scheduling of operational activities in complexes containing one harbor and two refineries interconnected by a pipeline infrastructure. The model was developed in the Arena package, based on three sub-models that control pier allocation, loading of tanks, and transfers to refineries through pipelines. Preliminary results obtained for a given control policy show that profit can be calculated by taking into account many parameters such as oil costs on ships, pier usage, over-stay of ships and interface costs. Such a problem has already been considered in the literature, but using different strategies. All these factors should be considered in a real-world operation where decision making tools are necessary to obtain high returns. (author)

  4. Simulated global-scale response of the climate system to Dansgaard/Oeschger and Heinrich events

    Energy Technology Data Exchange (ETDEWEB)

    Claussen, M. [Potsdam Institute for Climate Impact Research, P-O Box 601203, 14412 Potsdam (Germany); Institute of Physics, Potsdam University, P-O Box 601553, 14415 Potsdam (Germany); Ganopolski, A.; Brovkin, V.; Gerstengarbe, F.W.; Werner, P. [Potsdam Institute for Climate Impact Research, P-O Box 601203, 14412 Potsdam (Germany)

    2003-11-01

    By using an Earth system model of intermediate complexity we have studied the global-scale response of the glacial climate system during marine isotope stage (MIS) 3 to perturbations at high northern latitudes and the tropics. These perturbations include changes in inland-ice volume over North America, in freshwater flux into the northern North Atlantic and in surface temperatures of the tropical Pacific. The global pattern of temperature series resulting from an experiment in which perturbations of inland ice and freshwater budget are imposed at high northern latitudes only, agree with paleoclimatic reconstructions. In particular, a positive correlation of temperature variations near Greenland and variations in all regions of the Northern Hemisphere and some parts of the southern tropics is found. Over the southern oceans a weak negative correlation appears which is strongest at a time lag of approximately 500 years. Further experimentation with prescribed temperature anomalies applied to the tropical Pacific suggests that perturbation of tropical sea-surface temperatures and hence, the tropical water cycle, is unlikely to have triggered Dansgaard/Oeschger (D/O) events. However, together with random freshwater anomalies prescribed at high northern latitudes, tropical perturbations would be able to synchronize the occurrence of D/O events via the mechanism of stochastic resonance. (orig.)

  5. Nova Event Logging System

    International Nuclear Information System (INIS)

    Calliger, R.J.; Suski, G.J.

    1981-01-01

    Nova is a 200 terawatt, 10-beam High Energy Glass Laser currently under construction at LLNL. This facility, designed to demonstrate the feasibility of laser driven inertial confinement fusion, contains over 5000 elements requiring coordinated control, data acquisition, and analysis functions. The large amounts of data that will be generated must be maintained over the life of the facility. Often the most useful but inaccessible data is that related to time dependent events associated with, for example, operator actions or experiment activity. We have developed an Event Logging System to synchronously record, maintain, and analyze, in part, this data. We see the system as being particularly useful to the physics and engineering staffs of medium and large facilities in that it is entirely separate from experimental apparatus and control devices. The design criteria, implementation, use, and benefits of such a system will be discussed

  6. Numerical simulation system for environmental studies: SPEEDI-MP

    International Nuclear Information System (INIS)

    Nagai, Haruyasu; Chino, Masamichi; Terada, Hiroaki; Harayama, Takaya; Kobayashi, Takuya; Tsuduki, Katsunori; Kim, Keyong-Ok; Furuno, Akiko

    2006-09-01

    A numerical simulation system SPEEDI-MP has been developed to apply for various environmental studies. SPEEDI-MP consists of dynamical models and material transport models for the atmospheric, terrestrial, and oceanic environments, meteorological and geographical database for model inputs, and system utilities for file management, visualization, analysis, etc., using graphical user interfaces (GUIs). As a numerical simulation tool, a model coupling program (model coupler) has been developed. It controls parallel calculations of several models and data exchanges among them to realize the dynamical coupling of the models. A coupled model system for water circulation has been constructed with atmosphere, ocean, wave, hydrology, and land-surface models using the model coupler. System utility GUIs are based on the Web technology, allowing users to manipulate all the functions on the system using their own PCs via the internet. In this system, the source estimation function in the atmospheric transport model can be executed on the grid computer system. Performance tests of the coupled model system for water circulation were also carried out for the flood event at Saudi Arabia in January 2005 and the storm surge case by the hurricane KATRINA in August 2005. (author)

  7. Decentralized event-triggered consensus control strategy for leader-follower networked systems

    Science.gov (United States)

    Zhang, Shouxu; Xie, Duosi; Yan, Weisheng

    2017-08-01

    In this paper, the consensus problem of leader-follower networked systems is addressed. First, a centralized and a decentralized event-triggered control strategy are proposed, which make the control actuators of the followers update at aperiodic event intervals. In particular, the latter requires each follower to use only local information. After that, an improved triggering function that uses only the follower's own information and the neighbors' states at their latest event instants is developed to relax the requirement of the continuous state of the neighbors. In addition, the strategy requires neither the information of the topology nor the eigenvalues of the Laplacian matrix. And if a follower does not have a direct connection to the leader, the leader's information is not required either. It is analytically shown that by using the proposed strategy the leader-follower networked system is able to reach consensus without continuous communication among followers. Simulation examples are given to show the effectiveness of the proposed control strategy.
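
    A discrete-time sketch of the event-triggered idea is below; the fixed triggering threshold, gains, and three-follower network are illustrative simplifications of the paper's state-dependent triggering functions:

```python
def event_triggered_consensus(adj, leader_links, x0, x,
                              dt=0.01, steps=3000, threshold=0.02, gain=1.0):
    """Decentralized event-triggered leader-following consensus for
    single-integrator followers.  Control is computed only from
    broadcast states x_hat; follower i re-broadcasts (an 'event')
    when its true state drifts more than `threshold` from its last
    broadcast, so continuous communication is avoided."""
    n = len(x)
    x, x_hat = list(x), list(x)
    events = [1] * n                  # count the initial broadcasts
    for _ in range(steps):
        u = [-gain * (sum(adj[i][j] * (x_hat[i] - x_hat[j]) for j in range(n))
                      + leader_links[i] * (x_hat[i] - x0))
             for i in range(n)]
        for i in range(n):
            x[i] += dt * u[i]
            if abs(x[i] - x_hat[i]) > threshold:   # triggering condition
                x_hat[i] = x[i]                    # event: broadcast state
                events[i] += 1
    return x, events

# Three followers on a complete graph; only follower 0 sees the
# static leader at x0 = 1.0 (all values illustrative).
adj = [[0, 1, 1], [1, 0, 1], [1, 1, 0]]
final, events = event_triggered_consensus(adj, [1, 0, 0], 1.0, [0.0, 2.0, -1.0])
```

    The followers converge to a neighbourhood of the leader whose size scales with the threshold, while each follower broadcasts far fewer times than a time-triggered scheme sampling every step would.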

  8. Networked Estimation for Event-Based Sampling Systems with Packet Dropouts

    Directory of Open Access Journals (Sweden)

    Young Soo Suh

    2009-04-01

    This paper is concerned with a networked estimation problem in which sensor data are transmitted over the network. In the event-based sampling scheme known as level-crossing or send-on-delta (SOD), sensor data are transmitted to the estimator node if the difference between the current sensor value and the last transmitted one is greater than a given threshold. Event-based sampling has been shown to be more efficient than time-triggered sampling in some situations, especially in network bandwidth improvement. However, it cannot detect packet dropout situations because data transmission and reception do not use a periodical time-stamp mechanism as found in time-triggered sampling systems. Motivated by this issue, we propose a modified event-based sampling scheme called modified SOD, in which sensor data are sent when either the change of the sensor output exceeds a given threshold or more than a given interval has elapsed since the last transmission. Through simulation results, we show that the proposed modified SOD sampling significantly improves estimation performance when packet dropouts happen.
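
    The modified SOD rule can be sketched in a few lines; the threshold, heartbeat interval, and sensor trace below are illustrative:

```python
def modified_sod(samples, delta, max_interval):
    """Modified send-on-delta sampling: transmit sample k when the
    value has changed by more than `delta` since the last
    transmission, OR when `max_interval` steps have elapsed.  The
    periodic 'heartbeat' lets the estimator distinguish a dropped
    packet from a genuinely unchanged sensor value."""
    sent = []
    for k, y in enumerate(samples):
        if (not sent
                or abs(y - sent[-1][1]) > delta
                or k - sent[-1][0] >= max_interval):
            sent.append((k, y))
    return sent

# Illustrative trace: a step change at k = 3, then a flat signal.
readings = [0.0, 0.1, 0.2, 1.0, 1.05, 1.1, 1.1, 1.1, 1.1, 1.1]
transmissions = modified_sod(readings, delta=0.5, max_interval=4)
# transmissions == [(0, 0.0), (3, 1.0), (7, 1.1)]
```

    The transmission at k = 7 carries no new level-crossing information; it exists purely so that a silent channel longer than `max_interval` can be flagged as packet loss.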

  9. Event-Based Impulsive Control of Continuous-Time Dynamic Systems and Its Application to Synchronization of Memristive Neural Networks.

    Science.gov (United States)

    Zhu, Wei; Wang, Dandan; Liu, Lu; Feng, Gang

    2017-08-18

    This paper investigates exponential stabilization of continuous-time dynamic systems (CDSs) via event-based impulsive control (EIC) approaches, where the impulsive instants are determined by certain state-dependent triggering condition. The global exponential stability criteria via EIC are derived for nonlinear and linear CDSs, respectively. It is also shown that there is no Zeno-behavior for the concerned closed loop control system. In addition, the developed event-based impulsive scheme is applied to the synchronization problem of master and slave memristive neural networks. Furthermore, a self-triggered impulsive control scheme is developed to avoid continuous communication between the master system and slave system. Finally, two numerical simulation examples are presented to illustrate the effectiveness of the proposed event-based impulsive controllers.

  10. A coupled classification - evolutionary optimization model for contamination event detection in water distribution systems.

    Science.gov (United States)

    Oliker, Nurit; Ostfeld, Avi

    2014-03-15

    This study describes a decision support system that alerts for contamination events in water distribution systems. The developed model comprises a weighted support vector machine (SVM) for the detection of outliers, and a following sequence analysis for the classification of contamination events. The contribution of this study is an improvement of contamination event detection ability and a multi-dimensional analysis of the data, differing from the parallel one-dimensional analyses conducted so far. The multivariate analysis examines the relationships between water quality parameters and detects changes in their mutual patterns. The weights of the SVM model accomplish two goals: blurring the difference between the sizes of the two classes' data sets (as there are many more normal/regular than event-time measurements), and incorporating the time factor via a time decay coefficient, ascribing higher importance to recent observations when classifying a time step measurement. All model parameters were determined by data-driven optimization, so the calibration of the model was completely autonomous. The model was trained and tested on a real water distribution system (WDS) data set with randomly simulated events superimposed on the original measurements. The model is prominent in its ability to detect events that were only partly expressed in the data (i.e., affecting only some of the measured parameters). The model showed high accuracy and better detection ability as compared to previous modeling attempts at contamination event detection. Copyright © 2013 Elsevier Ltd. All rights reserved.
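
    The two-part weighting idea can be sketched independently of the SVM itself; the function below combines class balancing with an exponential time-decay coefficient, with all names and constants illustrative (such weights could be passed to a weighted classifier, e.g. scikit-learn's SVC via its `sample_weight` argument):

```python
import math

def contamination_sample_weights(labels, times, decay=0.01):
    """Per-sample weights combining the two goals described above:
    class balancing (normal readings vastly outnumber event readings)
    and a time-decay coefficient favouring recent observations.
    labels: 1 for event-time measurements, 0 for normal ones;
    times:  measurement timestamps; decay: illustrative constant."""
    n = len(labels)
    n_event = sum(1 for y in labels if y)
    n_normal = n - n_event
    t_max = max(times)
    weights = []
    for y, t in zip(labels, times):
        class_w = n / (2.0 * (n_event if y else n_normal))  # balance classes
        time_w = math.exp(-decay * (t_max - t))             # favour recency
        weights.append(class_w * time_w)
    return weights

# Three normal readings and one (recent) event reading.
w = contamination_sample_weights([0, 0, 0, 1], [0.0, 1.0, 2.0, 3.0])
```

    The rare event sample receives the largest weight both because its class is under-represented and because it is the most recent observation.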

  11. Explicit simulation of a midlatitude Mesoscale Convective System

    Energy Technology Data Exchange (ETDEWEB)

    Alexander, G.D.; Cotton, W.R. [Colorado State Univ., Fort Collins, CO (United States)

    1996-04-01

    We have explicitly simulated the mesoscale convective system (MCS) observed on 23-24 June 1985 during PRE-STORM, the Preliminary Regional Experiment for the Stormscale Operational and Research Meteorology Program. Stensrud and Maddox (1988), Johnson and Bartels (1992), and Bernstein and Johnson (1994) are among the researchers who have investigated various aspects of this MCS event. We have performed this MCS simulation (and a similar one of a tropical MCS; Alexander and Cotton 1994) in the spirit of the Global Energy and Water Cycle Experiment Cloud Systems Study (GCSS), in which cloud-resolving models are used to assist in the formulation and testing of cloud parameterization schemes for larger-scale models. In this paper, we describe (1) the nature of our 23-24 June MCS simulation and (2) our efforts to date in using our explicit MCS simulations to assist in the development of a GCM parameterization for mesoscale flow branches. The paper is organized as follows. First, we discuss the synoptic situation surrounding the 23-24 June PRE-STORM MCS, followed by a discussion of the model setup and the results of our simulation. We then discuss the use of our MCS simulations in developing a GCM parameterization for mesoscale flow branches and summarize our results.

  12. Analysis of convection-permitting simulations for capturing heavy rainfall events over Myanmar Region

    Science.gov (United States)

    Acierto, R. A. E.; Kawasaki, A.

    2017-12-01

    Perennial flooding due to heavy rainfall events has strong impacts on society and the economy. With the increasing pressures of rapid development and potential climate change impacts, Myanmar is experiencing a rapid increase in disaster risk. Heavy rainfall hazard assessment is key to quantifying such disaster risk under both current and future conditions. Downscaling using regional climate models (RCMs) such as the Weather Research and Forecasting (WRF) model has been used extensively for assessing such heavy rainfall events. However, the use of convective parameterizations can introduce large errors in simulating rainfall. Convection-permitting simulations deal with this problem by increasing the resolution of the RCM to 4 km. This study focuses on heavy rainfall events during the six-year (2010-2015) wet season from May to September in Myanmar. The investigation primarily uses rain gauge observations to evaluate heavy rainfall events downscaled to 4 km resolution with ERA-Interim boundary conditions and a 12 km-4 km one-way nesting method. The study aims to provide a basis for the production of high-resolution climate projections over Myanmar, in order to contribute to flood hazard and risk assessment.

  13. Event notification system with a PLC

    International Nuclear Information System (INIS)

    Kawase, M.; Yoshikawa, Hiroshi; Sakaki, Hironao; Takahashi, Hiroki; Sako, Hiroyuki; Kamiya, Junichiro; Takayanagi, Tomohiro

    2004-01-01

    When an interlock occurs in the equipment, it is required to notify the upper-level control system of the interlock, and to receive apparatus information from the upper-level control system, as quickly as possible. Apparatus using the FA-M3 can meet this requirement through its event notification function. This report describes the event notification system with a PLC-based kicker electromagnet power supply for the 3GeV RCS. (author)

  14. Simulation and verification of transient events in large wind power installations

    Energy Technology Data Exchange (ETDEWEB)

    Soerensen, P.; Hansen, A.D.; Christensen, P.; Meritz, M.; Bech, J.; Bak-Jensen, B.; Nielsen, H.

    2003-10-01

    Models for wind power installations excited by transient events have been developed and verified. A number of cases have been investigated, including comparisons of simulations of a three-phase short circuit, validation with measurements of tripping of single wind turbine, islanding of a group of two wind turbines, and voltage steps caused by tripping of wind turbines and by manual transformer tap-changing. A Benchmark model is also presented, enabling the reader to test own simulation results against results obtained with models developed in EMTDC and DIgSILENT. (au)

  15. Discrete event simulation for exploring strategies: an urban water management case.

    Science.gov (United States)

    Huang, Dong-Bin; Scholz, Roland W; Gujer, Willi; Chitwood, Derek E; Loukopoulos, Peter; Schertenleib, Roland; Siegrist, Hansruedi

    2007-02-01

    This paper presents a model structure aimed at offering an overview of the various elements of a strategy and exploring their multidimensional effects through time in an efficient way. It treats a strategy as a set of discrete events planned to achieve a certain strategic goal and develops a new form of causal networks as an interfacing component between decision makers and environment models, e.g., life cycle inventory and material flow models. The causal network receives a strategic plan as input in a discrete manner and then outputs the updated parameter sets to the subsequent environmental models. Accordingly, the potential dynamic evolution of environmental systems caused by various strategies can be stepwise simulated. It enables a way to incorporate discontinuous change in models for environmental strategy analysis, and enhances the interpretability and extendibility of a complex model by its cellular constructs. It is exemplified using an urban water management case in Kunming, a major city in Southwest China. By utilizing the presented method, the case study modeled the cross-scale interdependencies of the urban drainage system and regional water balance systems, and evaluated the effectiveness of various strategies for improving the situation of Dianchi Lake.
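The paper's core idea, treating a strategy as a set of discrete, time-stamped events that update the parameter sets handed to downstream environment models, can be sketched as follows (the parameter names and values are invented for illustration, not taken from the Kunming case):

```python
def run_strategy(initial, events, horizon):
    """Walk through a strategy expressed as discrete, time-stamped events
    (time, parameter, new_value) in time order, and record the parameter
    set passed to the downstream environment model after each event."""
    state = dict(initial)
    trajectory = [(0, dict(state))]
    for t, param, value in sorted(events):   # future-event list, time order
        if t > horizon:
            break
        state[param] = value
        trajectory.append((t, dict(state)))
    return trajectory

# Hypothetical urban-water strategy: expand sewer coverage in year 2,
# raise the treatment rate in year 5.
plan = [(5, "treatment_rate", 0.5), (2, "sewer_coverage", 0.4)]
traj = run_strategy({"treatment_rate": 0.3, "sewer_coverage": 0.2},
                    plan, horizon=10)
```

Each entry of `traj` is the stepwise-updated parameter set that a life cycle inventory or material flow model would consume next, which is what allows discontinuous change to enter the analysis.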

  16. Modeling Anti-Air Warfare With Discrete Event Simulation and Analyzing Naval Convoy Operations

    Science.gov (United States)

    2016-06-01

    Master's thesis by Ali E. Opcin, June 2016; thesis advisor: Arnold H. Buss. Naval Postgraduate School.

  17. Integrated training support system for PWR operator training simulator

    International Nuclear Information System (INIS)

    Sakaguchi, Junichi; Komatsu, Yasuki

    1999-01-01

    The importance of operator training using operator training simulators has been widely recognized. Since 1986, we have developed and provided many PWR simulators in Japan. We have also developed several training support systems connected with the simulator, and an integrated training support system, to improve training effectiveness and to reduce the instructors' workload. This paper describes the concept and effect of the integrated training support system and of the following sub-systems. PES (Performance Enhancement System) evaluates training performance automatically by analyzing plant parameters and operation data, reducing the variation in performance evaluations between instructors. PEL (Parameter and Event data Logging system), a subset of PES, provides data-logging functions. TPES (Team Performance Enhancement System) aims to improve trainees' ability to communicate with other operators: trainees hold conversations with virtual trainees played automatically by TPES, which afterwards displays advice for improvement. RVD (Reactor coolant system Visual Display) graphically displays the distributed thermal-hydraulic condition of the reactor coolant system in real time, helping trainees understand the internal plant condition in more detail. These sub-systems have been used in a training center, have contributed to the improvement of operator training, and have gained in popularity. (author)

  18. An intelligent environment for dynamic simulation program generation of nuclear reactor systems

    International Nuclear Information System (INIS)

    Ishizaka, Hiroaki; Gofuku, Akio; Yoshikawa, Hidekazu

    2004-01-01

    A graphical user interface system was developed for two dynamic simulation systems based on modular programming methods: MSS and DSNP. The following work was done in conjunction with the system development: (1) conversion of the module libraries of both DSNP and MSS, (2) extension of the DSNP pre-compiler, (3) a graphical interface for module integration, and (4) an automatic converter of simple language descriptions for DSNP, where (1) and (2) were made on an engineering workstation and (3) and (4) on Macintosh HyperCard. Using the graphical interface, a user can specify the structure of a simulation model, geometrical data, initial values of variables, etc., simply by handling modules as icons on the palette fields. The extended DSNP pre-compiler then generates the final dynamic simulation program automatically. The capability and effectiveness of the system were confirmed by a sample simulation of a PWR SBLOCA transient with a stuck-open PORV. (author)

  19. Event Based Simulator for Parallel Computing over the Wide Area Network for Real Time Visualization

    Science.gov (United States)

    Sundararajan, Elankovan; Harwood, Aaron; Kotagiri, Ramamohanarao; Satria Prabuwono, Anton

    As the computational requirements of applications in computational science continue to grow tremendously, the use of computational resources distributed across the Wide Area Network (WAN) becomes advantageous. However, not all applications can be executed over the WAN, because communication overhead can drastically slow down the computation. In this paper, we introduce an event-based simulator to investigate the performance of parallel algorithms executed over the WAN. The simulator, known as SIMPAR (SIMulator for PARallel computation), simulates the actual computations and communications involved in parallel computation over the WAN using time stamps. Real-time visualization requires a steady stream of processed data. Hence, SIMPAR may prove to be a valuable tool for investigating which types of applications and computing resources can provide an uninterrupted flow of processed data for real-time visualization. The results obtained from the simulation agree with the performance expected from the L-BSP model.
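The L-BSP comparison above rests on a BSP-style cost model, in which per-superstep timestamps combine computation, communication, and synchronization. A minimal sketch of such an estimate (this is the generic BSP form under assumed parameters, not SIMPAR's actual internal model; `g` is the per-word communication cost and `l` the barrier latency):

```python
def bsp_time(supersteps, g, l):
    """BSP-style makespan estimate: each superstep costs the slowest local
    computation, plus g times the largest per-processor message volume,
    plus the barrier/synchronization latency l.
    `supersteps` is a list of (work_per_proc, words_per_proc) pairs."""
    return sum(max(work) + g * max(words) + l for work, words in supersteps)

# Two supersteps on two processors; over a WAN, a large l would dominate.
est = bsp_time([([10, 12], [3, 1]), ([5, 5], [2, 2])], g=2, l=1)
```

Varying `l` in such a model shows directly why WAN latency can make otherwise well-balanced parallel algorithms unattractive.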

  20. Integrated computer control system CORBA-based simulator FY98 LDRD project final summary report

    International Nuclear Information System (INIS)

    Bryant, R M; Holloway, F W; Van Arsdall, P J.

    1999-01-01

    The CORBA-based Simulator was a Laboratory Directed Research and Development (LDRD) project that applied simulation techniques to explore critical questions about distributed control architecture. The simulator project used a three-prong approach comprised of a study of object-oriented distribution tools, computer network modeling, and simulation of key control system scenarios. This summary report highlights the findings of the team and provides the architectural context of the study. For the last several years LLNL has been developing the Integrated Computer Control System (ICCS), which is an abstract object-oriented software framework for constructing distributed systems. The framework is capable of implementing large event-driven control systems for mission-critical facilities such as the National Ignition Facility (NIF). Tools developed in this project were applied to the NIF example architecture in order to gain experience with a complex system and derive immediate benefits from this LDRD. The ICCS integrates data acquisition and control hardware with a supervisory system, and reduces the amount of new coding and testing necessary by providing prebuilt components that can be reused and extended to accommodate specific additional requirements. The framework integrates control point hardware with a supervisory system by providing the services needed for distributed control such as database persistence, system start-up and configuration, graphical user interface, status monitoring, event logging, scripting language, alert management, and access control. The design is interoperable among computers of different kinds and provides plug-in software connections by leveraging a common object request brokering architecture (CORBA) to transparently distribute software objects across the network of computers. Because object broker distribution applied to control systems is relatively new and its inherent performance is roughly threefold less than traditional point

  1. Performance and efficiency of geotextile-supported erosion control measures during simulated rainfall events

    Science.gov (United States)

    Obriejetan, Michael; Rauch, Hans Peter; Florineth, Florin

    2013-04-01

    Erosion control systems consisting of technical and biological components are widely accepted and proven to work well if installed properly with regard to site-specific parameters. A wide range of measures exists for this protection purpose, and new, in particular technical, solutions are constantly introduced to the market. Nevertheless, the vegetation aspects of erosion control measures are frequently disregarded, and they deserve closer consideration against the backdrop of developing and implementing adaptation strategies for an environment altered by climate change. Technical auxiliaries such as the geotextiles typically used for slope protection (nettings, blankets, turf reinforcement mats, etc.) address specific features, and given their structural and material diversity, differing effects on sediment yield, surface runoff and vegetation development seem evident. Nevertheless, there is a knowledge gap concerning the mutual interaction between technical and biological components, and specific comparable data on the erosion-reducing effects of technical-biological erosion protection systems are insufficient. In this context, an experimental arrangement was set up to study the correlated influences of geotextiles and vegetation and to determine their combined effects on surface runoff and soil loss during simulated heavy rainfall events. Sowing vessels serve as testing facilities; they are filled with topsoil, covered with various organic and synthetic geotextiles, and sown with a reliable, drought-resistant seed mixture. Regular vegetation monitoring as well as two rainfall simulation runs with four repetitions of each variant were conducted, using a portable rainfall simulator with a standardized rainfall intensity of 240 mm h-1 and a three-minute rainfall duration to stress these systems at different stages of plant development at an inclination of 30 degrees. First results show

  2. Non-axisymmetric simulation of the vertical displacement event in tokamaks

    International Nuclear Information System (INIS)

    Lim, Y.Y.; Lee, J.K.; Shin, K.J.; Hur, M.S.

    1999-01-01

    Tokamak plasmas with highly elongated cross sections are subject to a vertical displacement event (VDE). The nonlinear magnetohydrodynamic (MHD) evolutions of tokamak plasmas during the VDE are simulated by a three-dimensional MHD code as a combination of N=0 and N=1 components. The nonlinear evolution during the VDE is strongly affected by the relative amplitude of the N=1 to the N=0 modes. (author)

  3. Hybrid Network Simulation for the ATLAS Trigger and Data Acquisition (TDAQ) System

    CERN Document Server

    Bonaventura, Matias Alejandro; The ATLAS collaboration; Castro, Rodrigo Daniel; Foguelman, Daniel Jacob

    2015-01-01

    The poster shows the ongoing research of the ATLAS TDAQ group, in collaboration with the University of Buenos Aires, in the area of hybrid data network simulations. The Data Network and Processing Cluster filters data in real time, achieving a rejection factor on the order of 40000x under real-time latency constraints. The dataflow between the processing units (TPUs) and the Readout System (ROS) presents a “TCP Incast”-type network pathology which TCP cannot handle efficiently. A credit system is in place which limits the rate of queries and reduces latency. This large computer network and its complex dataflow have been modelled and simulated using PowerDEVS, a DEVS-based simulator. The simulation has been validated and used to produce what-if scenarios on the real network. Network Simulation with Hybrid Flows: speedups and accuracy, combined • For intensive network traffic, discrete-event simulation models (packet-level granularity) soon become prohibitive: too-high computing demands. • Fluid Flow simul...

  4. A Novel Idea for Optimizing Condition-Based Maintenance Using Genetic Algorithms and Continuous Event Simulation Techniques

    Directory of Open Access Journals (Sweden)

    Mansoor Ahmed Siddiqui

    2017-01-01

    Full Text Available Effective maintenance strategies are of utmost significance for systems engineering due to their direct linkage with the financial aspects and safety of plant operation. At a point where the state of a system, for instance its level of deterioration, can be constantly observed, a strategy based on condition-based maintenance (CBM) may be effected, wherein upkeep of the system is done progressively on the premise of the monitored state of the system. In this article, a multicomponent framework is considered that is continuously kept under observation. In order to decide an optimal deterioration stage for the said system, a Genetic Algorithm (GA) technique has been utilized that figures out when its preventive maintenance should be carried out. The system is configured as a multiobjective problem aimed at optimizing the two desired objectives, namely profitability and accessibility. To reflect reality, a prognostic model portraying the advancement of the deteriorating system has been employed, based on the utilization of continuous event simulation techniques. In this regard, Monte Carlo (MC) simulation has been shortlisted, as it can take into account a wide range of probable options that can help in reducing uncertainty. The inherent benefits proffered by the said simulation technique are fully utilized to capture various elements of a deteriorating system working under stressed environments. The proposed synergic model (GA and MC) is considered to be more effective due to the employment of a “drop-by-drop approach” that permits successful driving of the related search process toward the best optimal solutions.
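A toy version of the GA-plus-Monte-Carlo synergy described above might look like the following: the GA searches for a preventive-maintenance deterioration threshold, and each candidate's fitness is the expected maintenance cost estimated by Monte Carlo simulation of a randomly deteriorating component. All costs, degradation rates, and GA settings here are invented for illustration, not taken from the paper.

```python
import random

def mc_cost(threshold, n_runs=200, fail_level=1.0,
            pm_cost=1.0, cm_cost=5.0, seed=0):
    """Monte Carlo estimate of the expected cost per maintenance cycle for a
    given preventive-maintenance (PM) threshold. Degradation grows by random
    increments; crossing `threshold` triggers cheap PM, while crossing
    `fail_level` first triggers costly corrective maintenance (CM)."""
    rng = random.Random(seed)       # fixed seed: deterministic fitness
    total = 0.0
    for _ in range(n_runs):
        level = 0.0
        while True:
            level += rng.uniform(0.0, 0.1)
            if level >= fail_level:
                total += cm_cost    # component failed before PM
                break
            if level >= threshold:
                total += pm_cost    # PM performed in time
                break
    return total / n_runs

def ga_optimize(generations=30, pop_size=10, seed=1):
    """Toy GA: mutate candidate thresholds, keep the cheapest half."""
    rng = random.Random(seed)
    pop = [rng.uniform(0.1, 0.99) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=mc_cost)
        survivors = pop[: pop_size // 2]
        pop = survivors + [min(0.99, max(0.1, t + rng.gauss(0, 0.05)))
                           for t in survivors]
    return min(pop, key=mc_cost)

best = ga_optimize()
```

The GA drives candidates away from thresholds so high that random degradation jumps past them into failure, which is the search behaviour the "drop-by-drop" description refers to.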

  5. Physics Detector Simulation Facility Phase II system software description

    International Nuclear Information System (INIS)

    Scipioni, B.; Allen, J.; Chang, C.; Huang, J.; Liu, J.; Mestad, S.; Pan, J.; Marquez, M.; Estep, P.

    1993-05-01

    This paper presents the Physics Detector Simulation Facility (PDSF) Phase II system software. A key element in the design of a distributed computing environment for the PDSF has been the separation and distribution of the major functions. The facility has been designed to support batch and interactive processing, and to incorporate the file and tape storage systems. By distributing these functions, it is often possible to provide higher throughput and resource availability. Similarly, the design is intended to exploit event-level parallelism in an open distributed environment

  6. Reactor protection system software test-case selection based on input-profile considering concurrent events and uncertainties

    International Nuclear Information System (INIS)

    Khalaquzzaman, M.; Lee, Seung Jun; Cho, Jaehyun; Jung, Wondea

    2016-01-01

    Recently, the input-profile-based testing for safety critical software has been proposed for determining the number of test cases and quantifying the failure probability of the software. Input-profile of a reactor protection system (RPS) software is the input which causes activation of the system for emergency shutdown of a reactor. This paper presents a method to determine the input-profile of a RPS software which considers concurrent events/transients. A deviation of a process parameter value begins through an event and increases owing to the concurrent multi-events depending on the correlation of process parameters and severity of incidents. A case of reactor trip caused by feedwater loss and main steam line break is simulated and analyzed to determine the RPS software input-profile and estimate the number of test cases. The different sizes of the main steam line breaks (e.g., small, medium, large break) with total loss of feedwater supply are considered in constructing the input-profile. The uncertainties of the simulation related to the input-profile-based software testing are also included. Our study is expected to provide an option to determine test cases and quantification of RPS software failure probability. (author)

  7. Numerical Simulation of a Breaking Gravity Wave Event Over Greenland Observed During Fastex

    National Research Council Canada - National Science Library

    Doyle, James

    1997-01-01

    Measurements from the NOAA G4 research aircraft and high-resolution numerical simulations are used to study the evolution and dynamics of a large-amplitude gravity wave event over Greenland that took...

  8. Simulation of Greenhouse Climate Monitoring and Control with Wireless Sensor Network and Event-Based Control

    Science.gov (United States)

    Pawlowski, Andrzej; Guzman, Jose Luis; Rodríguez, Francisco; Berenguel, Manuel; Sánchez, José; Dormido, Sebastián

    2009-01-01

    Monitoring and control of the greenhouse environment play a decisive role in greenhouse production processes. Assurance of optimal climate conditions has a direct influence on crop growth performance, but it usually increases the required equipment cost. Traditionally, greenhouse installations have required a great effort to connect and distribute all the sensors and data acquisition systems. These installations need many data and power wires to be distributed along the greenhouses, making the system complex and expensive. For this reason, and others such as the unavailability of distributed actuators, usually only individual sensors are located at a fixed point selected as representative of the overall greenhouse dynamics. On the other hand, the actuation system in greenhouses is usually composed of mechanical devices controlled by relays, and it is desirable to reduce the number of commutations of the control signals from the security and economic points of view. Therefore, in order to face these drawbacks, this paper describes how greenhouse climate control can be represented as an event-based system in combination with wireless sensor networks, where low-frequency dynamic variables have to be controlled and control actions are mainly calculated in response to events produced by external disturbances. The proposed control system saves costs by minimizing wear and prolonging actuator life, while keeping promising performance results. Analysis and conclusions are given by means of simulation results. PMID:22389597
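The event-based idea above, acting only when a monitored variable drifts far enough from its last reported value instead of on every periodic sample, is essentially send-on-delta sampling. A minimal sketch (the temperature samples and the 0.5-degree deadband are invented):

```python
def send_on_delta(samples, delta):
    """Event-based (send-on-delta) sampling: a new event is generated only
    when the measurement drifts more than `delta` from the last reported
    value, reducing transmissions and actuator commutations."""
    events = []
    last = None
    for t, y in enumerate(samples):
        if last is None or abs(y - last) > delta:
            events.append((t, y))
            last = y
    return events

# Slowly drifting greenhouse temperature (hypothetical data, 1 sample/step)
temps = [20.0, 20.1, 20.3, 20.9, 21.0, 21.1, 21.8, 21.9]
evts = send_on_delta(temps, delta=0.5)
```

Here only 3 of the 8 samples generate events, which is exactly the wear- and bandwidth-saving effect the paper attributes to event-based control over wireless sensor networks.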

  9. Simulation of Greenhouse Climate Monitoring and Control with Wireless Sensor Network and Event-Based Control

    Directory of Open Access Journals (Sweden)

    Andrzej Pawlowski

    2009-01-01

    Full Text Available Monitoring and control of the greenhouse environment play a decisive role in greenhouse production processes. Assurance of optimal climate conditions has a direct influence on crop growth performance, but it usually increases the required equipment cost. Traditionally, greenhouse installations have required a great effort to connect and distribute all the sensors and data acquisition systems. These installations need many data and power wires to be distributed along the greenhouses, making the system complex and expensive. For this reason, and others such as the unavailability of distributed actuators, usually only individual sensors are located at a fixed point selected as representative of the overall greenhouse dynamics. On the other hand, the actuation system in greenhouses is usually composed of mechanical devices controlled by relays, and it is desirable to reduce the number of commutations of the control signals from the security and economic points of view. Therefore, in order to face these drawbacks, this paper describes how greenhouse climate control can be represented as an event-based system in combination with wireless sensor networks, where low-frequency dynamic variables have to be controlled and control actions are mainly calculated in response to events produced by external disturbances. The proposed control system saves costs by minimizing wear and prolonging actuator life, while keeping promising performance results. Analysis and conclusions are given by means of simulation results.

  10. Second-Order Multiagent Systems with Event-Driven Consensus Control

    Directory of Open Access Journals (Sweden)

    Jiangping Hu

    2013-01-01

    Full Text Available Event-driven control scheduling strategies for multiagent systems play a key role in the future use of embedded microprocessors with limited resources that gather information and actuate the agent control updates. In this paper, a distributed event-driven consensus problem is considered for a multi-agent system with second-order dynamics. First, two kinds of event-driven control laws are designed for leaderless and leader-follower systems, respectively. Then, the input-to-state stability of the closed-loop multi-agent system with the proposed event-driven consensus control is analyzed, and a lower bound on the inter-event times is ensured. Finally, some numerical examples are presented to validate the proposed event-driven consensus control.
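A numerical example in the spirit of the abstract can be sketched for two second-order (double-integrator) agents: each agent recomputes its consensus input only at its own event times, and holds it constant in between. The gains, trigger rule, and threshold below are invented for illustration and are not the paper's specific control laws.

```python
def event_consensus(x0, v0, steps=4000, dt=0.01, thresh=0.05):
    """Event-driven consensus sketch for two double-integrator agents:
    u_i = -(x_i - x_j) - (v_i - v_j) is recomputed only when agent i's
    state has drifted more than `thresh` since its last event; between
    events the control input is held constant (zero-order hold)."""
    x, v = list(x0), list(v0)
    u = [0.0, 0.0]
    last = [None, None]          # state at each agent's most recent event
    n_events = 0
    for _ in range(steps):
        for i in (0, 1):
            j = 1 - i            # the single neighbour
            if last[i] is None or (abs(x[i] - last[i][0])
                                   + abs(v[i] - last[i][1]) > thresh):
                u[i] = -(x[i] - x[j]) - (v[i] - v[j])   # event: update input
                last[i] = (x[i], v[i])
                n_events += 1
        for i in (0, 1):         # Euler step of the double integrator
            v[i] += dt * u[i]
            x[i] += dt * v[i]
    return x, v, n_events

xs, vs, n_events = event_consensus([0.0, 1.0], [0.0, 0.0])
```

The fixed-threshold trigger yields practical consensus (positions agree up to a small band) while updating the control far less often than once per integration step, which is the resource-saving point of event-driven scheduling.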

  11. Multiobjective optimization of availability and cost in repairable systems design via genetic algorithms and discrete event simulation

    Directory of Open Access Journals (Sweden)

    Isis Didier Lins

    2009-04-01

    Full Text Available This paper attempts to provide a more realistic approach to the characterization of system reliability when handling redundancy allocation problems: it considers repairable series-parallel systems comprised of components subjected to corrective maintenance actions, with failure-repair cycles modeled by renewal processes. A multiobjective optimization approach is applied, since increasing the number of redundancies enlarges not only system reliability but also its associated costs. A multiobjective genetic algorithm is then coupled with discrete event simulation, and its solutions present the compromise between system reliability and cost. Two examples are provided. In the first, the proposed algorithm is validated by comparison with results obtained from a system devised so as to allow for analytical solutions of the objective functions. The second analyzes a repairable system subjected to perfect repairs. Results from both examples show that the proposed method can be a valuable tool for the decision maker when choosing the system design.
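The reliability-versus-cost compromise that such a multiobjective GA returns is a Pareto front. The dominance filter at its heart can be sketched as follows (the availability/cost figures are invented design points, not results from the paper):

```python
def pareto_front(points):
    """Non-dominated (availability, cost) designs: a design is dropped if
    some other design has availability >= it AND cost <= it (with at least
    one comparison strict, which q != p guarantees here)."""
    front = []
    for p in points:
        dominated = any(q[0] >= p[0] and q[1] <= p[1] and q != p
                        for q in points)
        if not dominated:
            front.append(p)
    return front

# Hypothetical candidate designs: (system availability, total cost)
designs = [(0.99, 10), (0.95, 4), (0.90, 6), (0.99, 12)]
front = pareto_front(designs)
```

The decision maker then picks from `front` according to how much availability is worth per unit cost, which is exactly the trade-off the paper leaves open.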

  12. Fractional counts-the simulation of low probability events

    International Nuclear Information System (INIS)

    Coldwell, R.L.; Lasche, G.P.; Jadczyk, A.

    2001-01-01

    The code RobSim has been added to RobWin [1]. It simulates spectra resulting from gamma rays striking an array of detectors made up of different components. These arrays are frequently used to set coincidence and anti-coincidence windows that decide whether individual events are part of the signal. The first problem addressed is the construction of the detector. Then, owing to the statistical nature of the responses of the detector elements, there is a randomness in the response that can be taken into account by including fractional counts in the output spectrum. This somewhat complicates the error analysis, as Poisson statistics are no longer applicable.
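The error-analysis complication mentioned at the end, that Poisson sqrt(N) no longer applies once counts are fractional, is commonly handled by summing squared weights per bin. A small illustration (the weights are invented; this is the standard weighted-histogram estimate, not necessarily RobSim's exact treatment):

```python
import math

def bin_content_and_error(weights):
    """For a histogram bin filled with fractional (weighted) counts, the
    content is sum(w) and the usual variance estimate is sum(w**2), so the
    error is sqrt(sum(w**2)) rather than sqrt(N)."""
    return sum(weights), math.sqrt(sum(w * w for w in weights))

n_int, err_int = bin_content_and_error([1, 1, 1, 1])   # 4 unit counts
n_frac, err_frac = bin_content_and_error([0.5] * 8)    # same content, fractional
```

Both bins hold a content of 4, but the fractional-count bin carries a smaller error (sqrt(2) versus 2), showing why sqrt(N) would be wrong once fractional counts enter the spectrum.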

  13. Numerical simulation of a Mistral wind event occurring

    Science.gov (United States)

    Guenard, V.; Caccia, J. L.; Tedeschi, G.

    2003-04-01

    The experimental network of the ESCOMPTE field experiment (June-July 2001) is exploited to investigate the Mistral wind affecting the Marseille area (south of France). The Mistral is a northerly flow blowing along the Rhône valley toward the Mediterranean Sea, resulting from the dynamical low pressure generated in the wake of the Alpine ridge. It brings cold, dry air masses and clear-sky conditions over the south-eastern part of France. Up to now, few scientific studies have been carried out on the Mistral, especially on the evolution of its 3-D structure, so its mesoscale numerical simulation is still relevant. The non-hydrostatic RAMS model is used to better investigate this mesoscale phenomenon. Simulations at a 12 km horizontal resolution are compared to boundary-layer wind profilers and ground measurements. Preliminary results agree quite well with the Mistral statistical studies carried out by the operational service of Météo-France, and observed wind profiles are correctly reproduced by the numerical model RAMS, which appears to be an efficient tool for understanding the Mistral. Owing to the absence of diabatic effects in Mistral events, which otherwise complicate numerical simulations, the present work is a first step toward the validation of the RAMS model in that area. Further work will consist of studying the interaction of the Mistral wind with the land-sea breeze. RAMS simulations will also be combined with aerosol production and ocean circulation models to supply chemists and oceanographers with some answers for their studies.

  14. Multimodal interaction in the perception of impact events displayed via a multichannel audio and simulated structure-borne vibration

    Science.gov (United States)

    Martens, William L.; Woszczyk, Wieslaw

    2005-09-01

    For multimodal display systems in which realistic reproduction of impact events is desired, presenting structure-borne vibration along with multichannel audio recordings has been observed to create a greater sense of immersion in a virtual acoustic environment. Furthermore, there is an increased proportion of reports that the impact event took place within the observer's local area (this is termed ``presence with'' the event, in contrast to ``presence in'' the environment in which the event occurred). While holding the audio reproduction constant, varying the intermodal arrival time and level of mechanically displayed, synthetic whole-body vibration revealed a number of other subjective attributes that depend upon multimodal interaction in the perception of a representative impact event. For example, when the structure-borne component of the displayed impact event arrived 10 to 20 ms later than the airborne component, the intermodal delay was not only tolerated, but gave rise to an increase in the proportion of reports that the impact event had greater power. These results have enabled the refinement of a multimodal simulation in which the manipulation of synthetic whole-body vibration can be used to control perceptual attributes of impact events heard within an acoustic environment reproduced via a multichannel loudspeaker array.

  15. Productivity improvement using discrete events simulation

    Science.gov (United States)

    Hazza, M. H. F. Al; Elbishari, E. M. Y.; Ismail, M. Y. Bin; Adesta, E. Y. T.; Rahman, Nur Salihah Binti Abdul

    2018-01-01

    The increasing complexity of manufacturing systems has increased the cost of investment in many industries, and theoretical feasibility studies are not enough to support an investment decision for a particular area. The development of new, advanced software therefore protects manufacturers from investing money in production lines that may not be sufficient and effective for their requirements in terms of machine utilization and productivity. Conducting a simulation with an accurate model reduces or eliminates the risk associated with a new investment. The aim of this research is to demonstrate and highlight the importance of simulation in the decision-making process. Delmia Quest software was used to simulate the production line. A simulation was first run for the existing production line, showing an estimated production rate of 261 units/day. The results were analysed based on utilization percentage and idle time. Two different scenarios were proposed, based on different objectives. The first scenario focuses on machines with low utilization and high idle time; it reduced the number of machines used by three, along with the workers who maintain them, without affecting the production rate. The second scenario increases the production rate by upgrading the curing machine, which raised daily productivity by about 7%, from 261 units to 281 units.
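The utilization and throughput figures such a study rests on can be reproduced in miniature with a single-workstation FIFO model, here as a Lindley-style recursion rather than a full event list (the arrival times and service time are invented, not the paper's production data):

```python
def simulate_line(arrivals, service_time):
    """Single workstation, FIFO discipline: each job starts when it has
    both arrived and found the machine free. Returns (jobs completed,
    machine utilization over the makespan)."""
    free_at = busy = 0.0
    for t in arrivals:
        start = max(t, free_at)          # wait if the machine is busy
        free_at = start + service_time   # machine next becomes free
        busy += service_time
    return len(arrivals), busy / free_at

jobs, util = simulate_line([0, 1, 2, 10], service_time=3)
```

With these numbers the machine is busy 12 of 13 time units (utilization about 0.92); tools such as Delmia Quest compute the same kind of statistics, per machine, over far richer models.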

  16. Development of knowledge-based operator support system for steam generator water leak events in FBR plants

    International Nuclear Information System (INIS)

    Arikawa, Hiroshi; Ida, Toshio; Matsumoto, Hiroyuki; Kishida, Masako

    1991-01-01

    A knowledge-engineering approach to operation support systems is useful for maintaining safe and steady operation of nuclear plants. This paper describes a knowledge-based operation support system that assists operators during steam generator water leak events in FBR plants. We have developed a real-time expert system that adopts a hierarchical knowledge representation corresponding to the 'plant abnormality model'. A signal validation technique that uses knowledge of symptom propagation is applied to diagnosis. In order to verify the knowledge base concerning steam generator water leak events in FBR plants, a simulator is linked to the expert system. The tests revealed that diagnosis based on the 'plant abnormality model' and signal validation using knowledge of symptom propagation work successfully. They also suggest that the expert system could be useful in supporting FBR plant operations. (author)

  17. Security Information System Digital Simulation

    OpenAIRE

    Tao Kuang; Shanhong Zhu

    2015-01-01

    This study built a simulation model for the study of relay protection in food security information systems. MATLAB-based simulation technology can support the analysis and design of food security information systems. As examples, fault simulation, zero-sequence current protection simulation and transformer differential protection simulation of the food security information system are presented in this study. The case studies show that the simulation of food security information system relay protect...

  18. Handling of the Generation of Primary Events in Gauss, the LHCb Simulation Framework

    CERN Multimedia

    Corti, G; Brambach, T; Brook, N H; Gauvin, N; Harrison, K; Harrison, P; He, J; Ilten, P J; Jones, C R; Lieng, M H; Manca, G; Miglioranzi, S; Robbe, P; Vagnoni, V; Whitehead, M; Wishahi, J

    2010-01-01

    The LHCb simulation application, Gauss, consists of two independent phases: the generation of the primary event and the tracking of particles produced in the experimental setup. For the LHCb experimental program it is particularly important to model B meson decays: the EvtGen code developed in CLEO and BaBar has been chosen and customized for non-coherent B production as occurring in pp collisions at the LHC. The initial proton-proton collision is provided by a different generator engine, currently Pythia 6, for massive production of signal and generic pp collision events. Beam-gas events, background events originating from proton halo, cosmics and calibration events for different detectors can be generated in addition to pp collisions. Different generator packages, either available in the physics community or specifically developed in LHCb, are used for these different purposes. Running conditions affecting the generated events, such as the size of the luminous region and the number of collisions occurring in a bunc...

  19. Using the Integration of Discrete Event and Agent-Based Simulation to Enhance Outpatient Service Quality in an Orthopedic Department

    Directory of Open Access Journals (Sweden)

    Cholada Kittipittayakorn

    2016-01-01

    Full Text Available Many hospitals are currently paying more attention to patient satisfaction since it is an important service quality index. Many Asian countries’ healthcare systems have a mixed-type registration, accepting both walk-in patients and scheduled patients. This complex registration system causes a long patient waiting time in outpatient clinics. Different approaches have been proposed to reduce the waiting time. This study uses the integration of discrete event simulation (DES and agent-based simulation (ABS to improve patient waiting time and is the first attempt to apply this approach to solve this key problem faced by orthopedic departments. From the data collected, patient behaviors are modeled and incorporated into a massive agent-based simulation. The proposed approach is an aid for analyzing and modifying orthopedic department processes, allows us to consider far more details, and provides more reliable results. After applying the proposed approach, the total waiting time of the orthopedic department fell from 1246.39 minutes to 847.21 minutes. Thus, using the correct simulation model significantly reduces patient waiting time in an orthopedic department.

  20. Using the Integration of Discrete Event and Agent-Based Simulation to Enhance Outpatient Service Quality in an Orthopedic Department.

    Science.gov (United States)

    Kittipittayakorn, Cholada; Ying, Kuo-Ching

    2016-01-01

    Many hospitals are currently paying more attention to patient satisfaction since it is an important service quality index. Many Asian countries' healthcare systems have a mixed-type registration, accepting both walk-in patients and scheduled patients. This complex registration system causes a long patient waiting time in outpatient clinics. Different approaches have been proposed to reduce the waiting time. This study uses the integration of discrete event simulation (DES) and agent-based simulation (ABS) to improve patient waiting time and is the first attempt to apply this approach to solve this key problem faced by orthopedic departments. From the data collected, patient behaviors are modeled and incorporated into a massive agent-based simulation. The proposed approach is an aid for analyzing and modifying orthopedic department processes, allows us to consider far more details, and provides more reliable results. After applying the proposed approach, the total waiting time of the orthopedic department fell from 1246.39 minutes to 847.21 minutes. Thus, using the correct simulation model significantly reduces patient waiting time in an orthopedic department.
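    As a toy illustration of coupling discrete-event flow with agent behavior, the sketch below (plain Python; all parameters such as arrival rates, service times and the balking rule are invented, not taken from the study) models a single-server clinic where each patient is a small agent: scheduled patients always wait, while walk-ins balk when the expected wait exceeds their patience:

```python
import random

class Patient:
    """Minimal agent: scheduled patients always join the queue;
    walk-ins balk when the expected wait exceeds their patience."""
    def __init__(self, kind, patience=15.0):
        self.kind = kind
        self.patience = patience

    def joins(self, expected_wait):
        return self.kind == "scheduled" or expected_wait <= self.patience

def simulate_clinic(n_patients=200, mean_interarrival=4.0,
                    mean_service=5.0, seed=1):
    """Single-server clinic with mixed-type registration.
    Returns (average wait of served patients, number of balks)."""
    random.seed(seed)
    t, busy_until, waits, balked = 0.0, 0.0, [], 0
    for _ in range(n_patients):
        t += random.expovariate(1.0 / mean_interarrival)
        kind = "walkin" if random.random() < 0.5 else "scheduled"
        p = Patient(kind)
        wait = max(0.0, busy_until - t)   # expected wait if joining now
        if not p.joins(wait):             # agent decision
            balked += 1
            continue
        waits.append(wait)
        busy_until = t + wait + random.expovariate(1.0 / mean_service)
    return (sum(waits) / len(waits) if waits else 0.0), balked
```

    Process changes (e.g. separating walk-in and scheduled streams) can then be compared by their effect on the average wait, in the same spirit as the study's before/after comparison.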

  1. Evaluating resilience of DNP3-controlled SCADA systems against event buffer flooding

    Energy Technology Data Exchange (ETDEWEB)

    Yan, Guanhua [Los Alamos National Laboratory; Nicol, David M [UNIV OF IL; Jin, Dong [UNIV OF IL

    2010-12-16

    The DNP3 protocol is widely used in SCADA systems (particularly in electrical power) as a means of communicating observed sensor state information back to a control center. Typical architectures using DNP3 have a two-level hierarchy, where a specialized data aggregator device receives observed state from devices within a local region, and the control center collects the aggregated state from the data aggregator. The DNP3 communication between control center and data aggregator is asynchronous with the DNP3 communication between data aggregator and relays; this leads to the possibility of completely filling a data aggregator's buffer of pending events when a relay is compromised or spoofed and sends an excessive number of (false) events to the data aggregator. This paper investigates how a real-world SCADA device responds to event buffer flooding. A Discrete-Time Markov Chain (DTMC) model is developed to capture this behavior. The DTMC model is validated against a Moebius simulation model and data collected on a real SCADA testbed.
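    The buffer-occupancy behavior studied here can be sketched as a birth-death DTMC. The example below is illustrative only (the paper's actual DTMC structure and parameters are not reproduced): in each time slot an event arrives with probability p_arrive and one pending event is read out with probability p_drain, and iterating the transition kernel yields the stationary occupancy distribution, which shows when the buffer saturates:

```python
def buffer_dtmc(capacity, p_arrive, p_drain, steps=2000):
    """Stationary occupancy distribution of an event buffer modeled as a
    birth-death discrete-time Markov chain. Per time slot, an event
    arrives with prob. p_arrive and one pending event is drained with
    prob. p_drain (independently); the net change of +1, -1 or 0 is
    clipped at the buffer boundaries."""
    n = capacity + 1
    up = p_arrive * (1.0 - p_drain)      # net arrival
    down = p_drain * (1.0 - p_arrive)    # net drain
    stay = 1.0 - up - down
    dist = [0.0] * n
    dist[0] = 1.0                        # start with an empty buffer
    for _ in range(steps):
        new = [0.0] * n
        for i, pi in enumerate(dist):
            new[min(i + 1, capacity)] += pi * up   # arrival (full: clipped)
            new[max(i - 1, 0)] += pi * down        # drain (empty: no-op)
            new[i] += pi * stay
        dist = new
    return dist

# A flooding relay (arrivals outpacing drains) pushes the chain's mass
# toward the full-buffer state, i.e. pending events start being lost.
flooded = buffer_dtmc(capacity=10, p_arrive=0.6, p_drain=0.3)
```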

  2. Simulation of air admission in a propeller hydroturbine during transient events

    Science.gov (United States)

    Nicolle, J.; Morissette, J.-F.

    2016-11-01

    In this study, multiphysics simulations are carried out in order to model fluid loading and structural stresses on propeller blades during startup and runaway. It is found that air admission plays an important role during these transient events and that two-phase simulations are therefore required. At the speed-no-load regime, a large air pocket with a vertical free surface forms in the centre of the runner, displacing the water flow toward the shroud. This significantly affects the torque developed on the blades and thus the structural loading. The resulting pressures are applied to a quasi-static structural model, and good agreement is obtained with experimental strain-gauge data.

  3. Discrete event simulation of the Defense Waste Processing Facility (DWPF) analytical laboratory

    International Nuclear Information System (INIS)

    Shanahan, K.L.

    1992-02-01

    A discrete event simulation of the Savannah River Site (SRS) Defense Waste Processing Facility (DWPF) analytical laboratory has been constructed in the GPSS language. It was used to estimate laboratory analysis times at process analytical hold points and to study the effect of sample number on those times. Typical results are presented for three different simulations representing increasing levels of complexity, and for different sampling schemes. Example equipment utilization time plots are also included. SRS DWPF laboratory management and chemists found the simulations very useful for resource and schedule planning.

  4. Research on a Hierarchical Dynamic Automatic Voltage Control System Based on the Discrete Event-Driven Method

    Directory of Open Access Journals (Sweden)

    Yong Min

    2013-06-01

    Full Text Available In this paper, concepts and methods of hybrid control systems are adopted to establish a hierarchical dynamic automatic voltage control (HD-AVC) system that realizes the dynamic voltage stability of power grids. An HD-AVC system model consisting of three layers is built based on the hybrid control method and a discrete event-driven mechanism. In the Top Layer, discrete events are designed to drive the corresponding control block so as to avoid solving complex multiple objective functions; the power system's characteristic matrix is formed and the minimum amplitude eigenvalue (MAE) is calculated through linearized differential-algebraic equations. The MAE is used to judge the system's voltage stability and security and to construct discrete events. The Middle Layer is responsible for management and operation and is also driven by discrete events: control values for the control buses are calculated from the characteristics of the power system and the sensitivity method, and these values generate control strategies through the interface block. In the Bottom Layer, various control devices receive and implement the control commands from the Middle Layer. In this way, closed-loop power system voltage control is achieved. Computer simulations verify the validity and accuracy of the HD-AVC system and show that it is more effective than conventional voltage control methods.

  5. PWR station blackout transient simulation in the INER integral system test facility

    International Nuclear Information System (INIS)

    Liu, T.J.; Lee, C.H.; Hong, W.T.; Chang, Y.H.

    2004-01-01

    A station blackout transient (TMLB' scenario) in a pressurized water reactor (PWR) was simulated using the INER Integral System Test Facility (IIST), a 1/400 volumetrically-scaled reduced-height and reduced-pressure (RHRP) simulator of a Westinghouse three-loop PWR. Long-term thermal-hydraulic responses, including secondary boil-off and the subsequent primary saturation, pressurization and core uncovery, were simulated under the assumptions of no offsite or onsite power, no feedwater and no operator actions. The results indicate that two-phase discharge is the major depletion mode, accounting for 81.3% of the total coolant inventory loss. The primary coolant inventory undergoes significant redistribution during a station blackout transient. The deciding parameter for avoiding core overheating is not the total coolant inventory remaining in the primary core cooling system but only the part of the coolant left in the pressure vessel. The sequence of significant events during the transient in the IIST was also compared with that of the ROSA-IV Large Scale Test Facility (LSTF), a 1/48 volumetrically-scaled full-height and full-pressure (FHFP) simulator of a PWR. The comparison indicates that the sequence and timing of events during the TMLB' transient studied in the RHRP IIST facility are generally consistent with those of the FHFP LSTF. (author)

  6. Identification of coronal heating events in 3D simulations

    Science.gov (United States)

    Kanella, Charalambos; Gudiksen, Boris V.

    2017-07-01

    Context. The solar coronal heating problem has been an open question in the science community since 1939. One of the proposed models for the transport and release of the mechanical energy generated in the sub-photospheric layers and photosphere is the magnetic reconnection model incorporating Ohmic heating, which releases a part of the energy stored in the magnetic field. In this model, many unresolved flaring events occur in the solar corona, releasing enough energy to heat it. Aims: The difficulty in verifying and quantifying this model is that we cannot resolve small-scale events due to the limitations of current observational instrumentation. Flaring events show scaling behavior extending from large X-class flares down to the so-far-unobserved nanoflares. Histograms of observable characteristics of flares show power-law behavior for the energy release rate, size, and total energy. Depending on the power-law index of the energy release, nanoflares might be an important candidate for coronal heating; we seek to find that index. Methods: In this paper we employ a numerical three-dimensional (3D) magnetohydrodynamic (MHD) simulation produced by the numerical code Bifrost, which enables us to look into smaller structures, and a new technique to identify the 3D heating events at a specific instant. The quantity we explore is the Joule heating, a term calculated directly by the code, which is explicitly correlated with magnetic reconnection because it depends on the curl of the magnetic field. Results: We are able to identify 4136 events in a volume of 24 × 24 × 9.5 Mm3 (i.e., 768 × 786 × 331 grid cells) from a specific snapshot. We find a power-law slope of the released energy per second equal to αP = 1.5 ± 0.02, and two power-law slopes of the identified volume equal to αV = 1.53 ± 0.03 and αV = 2.53 ± 0.22. The identified energy events do not represent all the released energy, but of the identified events, the total energy of the largest events

  7. Use of the Hadoop structured storage tools for the ATLAS EventIndex event catalogue

    CERN Document Server

    Favareto, Andrea; The ATLAS collaboration; Cardenas Zarate, Simon Ernesto; Cranshaw, Jack; Fernandez Casani, Alvaro; Gallas, Elizabeth; Gonzalez de la Hoz, Santiago; Hrivnac, Julius; Malon, David; Prokoshin, Fedor; Salt, Jose; Sanchez, Javier; Toebbicke, Rainer; Yuan, Ruijun; Garcia Montoro, Carlos

    2015-01-01

    The ATLAS experiment collects billions of events per year of data-taking and processes them to make them available for physics analysis in several different formats. An even larger number of events is in addition simulated according to physics and detector models, then reconstructed and analysed to be compared to real events. The EventIndex is a catalogue of all events at each production stage; it includes for each event a few identification parameters, some basic non-mutable information coming from the online system, and references to the files that contain the event in each format (plus the internal pointers to the event within each file for quick retrieval). Each EventIndex record is logically simple, but the system has to hold many tens of billions of records, all equally important. The Hadoop technology was selected at the start of the EventIndex project development in 2012 and proved to be robust and flexible enough to accommodate this kind of information; both the insertion times and query response tim...

  8. Simulation of rainfall-runoff for major flash flood events in Karachi

    Science.gov (United States)

    Zafar, Sumaira

    2016-07-01

    The metropolitan city of Karachi has strategic importance for Pakistan. With each passing decade the city faces urban sprawl and rapid population growth, changes that directly affect its natural resources, including its drainage pattern. Karachi's major rivers include the Malir River, with a catchment area of 2252 sq km, and the Lyari River, with a catchment area of about 470.4 sq km; these are non-perennial rivers, active only during storms. The conversion of natural surfaces into hard pavement is increasing the rainfall-runoff response: the Curve Number has risen, which now causes flash floods in the urban localities of Karachi. Only one gauge is installed upstream of the river, with no discharge record, and a single upstream gauge is not sufficient for discharge measurement. To simulate the maximum discharge of the Malir River, rainfall data (1985 to 2014) were collected from the Pakistan Meteorological Department, and major rainfall events were used for the rainfall-runoff simulation. The maximum rainfall-runoff response was recorded during 1994, 2007 and 2013. This runoff causes damage and inundation in the floodplain areas of Karachi; these flash-flood events not only damage property but also cause loss of life.
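    The Curve Number mechanism mentioned above follows the standard SCS-CN relation, sketched below with illustrative values (the study's actual CN values and rainfall depths are not given here). Raising CN, as happens when natural surfaces are paved, increases the direct-runoff depth produced by the same storm:

```python
def scs_runoff(rainfall_mm, cn):
    """Direct runoff depth (mm) from the SCS Curve Number method:
        S  = 25400/CN - 254            (potential retention, mm)
        Ia = 0.2 * S                   (initial abstraction)
        Q  = (P - Ia)^2 / (P - Ia + S) for P > Ia, else 0
    """
    s = 25400.0 / cn - 254.0
    ia = 0.2 * s
    if rainfall_mm <= ia:
        return 0.0
    return (rainfall_mm - ia) ** 2 / (rainfall_mm - ia + s)

# Paving raises CN, so the same 100 mm storm produces more runoff:
rural = scs_runoff(100.0, cn=75)    # roughly 41 mm of runoff
urban = scs_runoff(100.0, cn=90)    # roughly 73 mm of runoff
```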

  9. Confirmatory simulation of safety and operational transients in LMFBR systems

    International Nuclear Information System (INIS)

    Guppy, J.G.; Agrawal, A.K.

    1978-01-01

    Operational and safety transients that may originate anywhere in an LMFBR system must be adequately simulated to assist safety evaluation and plant design efforts. This paper describes an advanced thermohydraulic transient code, the Super System Code (SSC), that may be used for confirmatory safety evaluations of plant-wide events, such as assurance of adequate decay-heat removal capability under natural circulation conditions, and presents results obtained with SSC illustrating both the degree of modelling detail present in the code and its computing efficiency. (author)

  10. Development of the simulation system IMPACT for analysis of nuclear power plant severe accidents

    International Nuclear Information System (INIS)

    Naitoh, Masanori; Ujita, Hiroshi; Nagumo, Hiroichi

    1997-01-01

    The Nuclear Power Engineering Corporation (NUPEC) has initiated a long-term program to develop the simulation system IMPACT for the analysis of hypothetical severe accidents in nuclear power plants. IMPACT employs advanced methods of physical modeling and numerical computation, and can simulate a wide spectrum of scenarios ranging from normal operation to hypothetical, beyond-design-basis-accident events. Designed as a large-scale system of interconnected, hierarchical modules, IMPACT's distinguishing features include mechanistic models based on first principles and high-speed simulation on parallel-processing computers. The present plan is a ten-year program starting in 1993, consisting of an initial year of preparatory work followed by three technical phases: Phase-1 for development of a prototype system; Phase-2 for completion of the simulation system, incorporating new achievements from basic studies; and Phase-3 for refinement through extensive verification and validation against test results and available real plant data.

  11. Event-Triggered Distributed Control of Nonlinear Interconnected Systems Using Online Reinforcement Learning With Exploration.

    Science.gov (United States)

    Narayanan, Vignesh; Jagannathan, Sarangapani

    2017-09-07

    In this paper, a distributed control scheme for an interconnected system composed of uncertain input-affine nonlinear subsystems with event-triggered state feedback is presented, using a novel hybrid-learning-scheme-based approximate dynamic programming with online exploration. First, an approximate solution to the Hamilton-Jacobi-Bellman equation is generated with event-sampled neural network (NN) approximation and, subsequently, a near-optimal control policy for each subsystem is derived. Artificial NNs are utilized as function approximators to develop a suite of identifiers and to learn the dynamics of each subsystem. The NN weight-tuning rules for the identifier and the event-triggering condition are derived using Lyapunov stability theory. Taking into account the effects of NN approximation of the system dynamics and bootstrapping, a novel NN weight update is presented to approximate the optimal value function. Finally, a novel strategy to incorporate exploration into the online control framework, using the identifiers, is introduced to reduce the overall cost at the expense of additional computation during the initial online learning phase. System states and the NN weight estimation errors are regulated, and locally uniformly ultimately bounded results are achieved. The analytical results are substantiated using simulation studies.
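    The event-triggered feedback idea, stripped of the NN/ADP machinery of the paper, can be shown on a scalar linear plant. In the sketch below (all gains, thresholds and step sizes are invented for illustration), the control input is recomputed only when the gap between the current state and the last sampled state crosses a relative threshold, so far fewer control updates than integration steps are needed:

```python
def event_triggered_sim(a=1.0, b=1.0, k=2.0, sigma=0.05,
                        dt=1e-3, steps=5000, x0=1.0):
    """Scalar plant x' = a*x + b*u, stabilized by u = -k*x_s, where x_s
    is the state sampled at the last triggering instant. A new sample
    (control update) is taken only when the sampling gap |x - x_s|
    exceeds the relative threshold sigma*|x|. Forward-Euler integration.
    Returns (final state, number of control updates)."""
    x, xs, updates = x0, x0, 1       # the initial sample counts as one update
    for _ in range(steps):
        if abs(x - xs) > sigma * abs(x):
            xs = x                   # event: re-sample the state
            updates += 1
        x += dt * (a * x - b * k * xs)
    return x, updates

# The state still converges toward zero even though the controller is
# updated far less often than once per integration step.
x_final, n_updates = event_triggered_sim()
```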

  12. Feasibility of performing high resolution cloud-resolving simulations of historic extreme events: The San Fruttuoso (Liguria, italy) case of 1915.

    Science.gov (United States)

    Parodi, Antonio; Boni, Giorgio; Ferraris, Luca; Gallus, William; Maugeri, Maurizio; Molini, Luca; Siccardi, Franco

    2017-04-01

    Recent studies show that highly localized and persistent back-building mesoscale convective systems represent one of the most dangerous flash-flood-producing storm types in the north-western Mediterranean area. Substantial warming of the Mediterranean Sea in recent decades raises concerns over possible increases in the frequency or intensity of these events, as increased atmospheric temperatures generally support increases in water vapor content. Analyses of available historical records do not provide a univocal answer, since these are likely affected by a lack of detailed observations of older events. In the present study, 20th Century Reanalysis Project initial and boundary condition data in ensemble mode are used to address the feasibility of performing cloud-resolving simulations, with 1 km horizontal grid spacing, of a historic extreme event that occurred over Liguria (Italy): the San Fruttuoso case of 1915. The proposed approach focuses on the ensemble Weather Research and Forecasting (WRF) model runs most likely to best simulate the event. It is found that these WRF runs generally do show wind and precipitation fields consistent with the occurrence of highly localized and persistent back-building mesoscale convective systems, although precipitation peak amounts are underestimated. Systematic small north-westward position errors in the heaviest rain and strongest convergence areas imply that the Reanalysis members may not adequately represent the amount of cool air over the Po Plain outflowing into the Ligurian Sea through the Apennines gap. Regarding the role of historical data sources, this study shows that in addition to Reanalysis products, unconventional data such as historical meteorological bulletins, newspapers and even photographs can be very valuable sources of knowledge in the reconstruction of past extreme events.

  13. Event Shape Sorting: selecting events with similar evolution

    Directory of Open Access Journals (Sweden)

    Tomášik Boris

    2017-01-01

    Full Text Available We present a novel method for the organisation of events. The method is based on comparing event-by-event histograms of a chosen quantity Q that is measured for each particle in every event. The events are organised in such a way that those with similar shapes of their Q-histograms end up placed close to each other. We apply the method to histograms of the azimuthal angle of the produced hadrons in ultrarelativistic nuclear collisions. By selecting events with a similar azimuthal shape of their hadron distribution, one chooses events that are likely to have undergone a similar evolution from the initial state to freeze-out. Such events can more easily be compared to theoretical simulations, where all conditions can be controlled. We illustrate the method on data simulated by the AMPT model.
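    The full Event Shape Sorting procedure compares entire Q-histograms iteratively; a much-simplified proxy, sketched below, summarizes each event's azimuthal shape by a single statistic (the magnitude of the second-harmonic flow vector, q2) and orders events by it, so events with similar elliptic anisotropy end up adjacent. This is an illustration of the goal, not the paper's algorithm:

```python
import cmath
import math

def q2(phis):
    """Per-event magnitude of the second-harmonic flow vector:
    q2 = |sum_j exp(2i*phi_j)| / sqrt(N), a simple azimuthal-shape proxy."""
    return abs(sum(cmath.exp(2j * phi) for phi in phis)) / math.sqrt(len(phis))

def sort_events_by_shape(events):
    """Order events (lists of azimuthal angles) so that events with a
    similar elliptic anisotropy end up next to each other."""
    return sorted(events, key=q2)

# An isotropic event has q2 near 0; a strongly 'elliptic' event with all
# particles emitted back-to-back has a large q2.
isotropic = [2.0 * math.pi * i / 50 for i in range(50)]
elliptic = [0.0, math.pi] * 25
ordered = sort_events_by_shape([elliptic, isotropic])
```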

  14. CDC Wonder Vaccine Adverse Event Reporting System

    Data.gov (United States)

    U.S. Department of Health & Human Services — The Vaccine Adverse Event Reporting System (VAERS) online database on CDC WONDER provides counts and percentages of adverse event case reports after vaccination,...

  15. Simulation of overpressure events with a Laguna Verde model for the RELAP code to conditions of extended power up rate

    International Nuclear Information System (INIS)

    Rodriguez H, A.; Araiza M, E.; Fuentes M, L.; Ortiz V, J.

    2012-10-01

    This work presents the main results of the simulation of overpressure events using a model of the Laguna Verde nuclear power plant developed for the RELAP/SCDAPSIM code. The starting point is a Laguna Verde model that represents a steady state at conditions similar to operation of the plant with Extended Power Uprate (EPU). The simulated pressure transients are compared with those documented in the Laguna Verde Final Safety Analysis Report (FSAR). Results are shown for the turbine trip transient with and without main turbine bypass, and for the closure of all main steam isolation valves. A preliminary simulation was performed and, based on its results, some adjustments were made for operation with EPU, taking into account the plant's Operation Technical Specifications. The final simulation results were compared with and analyzed against the contents of the FSAR. The response of the plant to the transients, as reflected in the RELAP model, was satisfactory. Finally, comments on possible improvements to the model are included, for example regarding the response time of the plant's protection and mitigation systems. (Author)

  16. Surface Management System Departure Event Data Analysis

    Science.gov (United States)

    Monroe, Gilena A.

    2010-01-01

    This paper presents a data analysis of the Surface Management System (SMS) performance for departure events, including push-back and runway departure events. The paper focuses on detection performance, i.e. the ability to detect departure events, as well as the prediction performance of SMS. The results show a modest overall detection performance for push-back events, approximately 55%, and a significantly high overall detection performance for runway departure events, nearing 100%. This paper also presents the overall SMS prediction performance for runway departure events, as well as the timeliness of the Aircraft Situation Display for Industry data source for SMS predictions.

  17. Intelligent Flood Adaptive Context-aware System: How Wireless Sensors Adapt their Configuration based on Environmental Phenomenon Events

    Directory of Open Access Journals (Sweden)

    Jie SUN

    2016-11-01

    Full Text Available New generations of Wireless Sensor Networks (WSNs) must be able to adapt their behavior in order to collect quality data about the studied phenomenon over long periods of time. We have therefore proposed a new formalization for the design and implementation of context-aware systems that rely on a WSN for data collection. To illustrate this proposal, we also present an environmental use case: the study of flood events in a watershed. In this paper, we detail the simulation tool that we have developed in order to implement our model. We simulate several scenarios of context-aware systems monitoring a watershed; the data used for the simulation are observation data from the French Orgeval watershed.

  18. High Level Architecture Distributed Space System Simulation for Simulation Interoperability Standards Organization Simulation Smackdown

    Science.gov (United States)

    Li, Zuqun

    2011-01-01

    Modeling and simulation play a very important role in mission design: they not only reduce design cost, but also prepare astronauts for their mission tasks. The SISO Smackdown is a simulation event that promotes modeling and simulation in academia. The scenario of this year's Smackdown was a lunar base supply mission, whose objective was to transfer Earth supply cargo to a lunar base supply depot and retrieve He-3 to take back to Earth. Federates for this scenario included the environment federate, an Earth-Moon transfer vehicle, a lunar shuttle, a lunar rover, a supply depot, a mobile ISRU plant, an exploratory hopper, and a communication satellite. These federates were built by teams from around the world, including MIT, JSC, the University of Alabama in Huntsville, the University of Bordeaux in France, and the University of Genoa in Italy. This paper focuses on the lunar shuttle federate, which was programmed by the USRP intern team at NASA JSC. The shuttle was responsible for providing transportation between lunar orbit and the lunar surface. The lunar shuttle federate was built using the NASA standard simulation package called Trick, and it was extended with HLA functions using TrickHLA. These HLA functions include sending and receiving interactions, publishing and subscribing to attributes, and packing and unpacking fixed-record data. The dynamics of the lunar shuttle were modeled with three degrees of freedom, and the state propagation obeyed two-body dynamics. The descending trajectory was designed by first defining a unique descent orbit in 2D space, and then defining a unique orbit in 3D space under the assumption of a non-rotating Moon. Finally, this assumption was removed to define the initial position of the lunar shuttle so that it starts descending one second after it joins the execution. VPN software from SonicWall was used to connect federates with the RTI during testing
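    The two-body state propagation used by the shuttle federate can be sketched with a planar RK4 integrator. This is an illustration only; the actual federate was built in Trick, and MU_MOON below is the standard lunar gravitational parameter, not a value from the paper:

```python
import math

MU_MOON = 4.9028e12   # lunar gravitational parameter, m^3/s^2 (standard value)

def propagate(r, v, dt, steps):
    """Planar two-body propagation with classical RK4.
    r, v: (x, y) position [m] and velocity [m/s] in a Moon-centered
    inertial frame. Returns the final (position, velocity)."""
    def deriv(state):
        rx, ry, vx, vy = state
        d3 = (rx * rx + ry * ry) ** 1.5
        return (vx, vy, -MU_MOON * rx / d3, -MU_MOON * ry / d3)

    state = (*r, *v)
    for _ in range(steps):
        k1 = deriv(state)
        k2 = deriv(tuple(s + 0.5 * dt * k for s, k in zip(state, k1)))
        k3 = deriv(tuple(s + 0.5 * dt * k for s, k in zip(state, k2)))
        k4 = deriv(tuple(s + dt * k for s, k in zip(state, k3)))
        state = tuple(s + dt / 6.0 * (a + 2 * b + 2 * c + d)
                      for s, a, b, c, d in zip(state, k1, k2, k3, k4))
    return state[:2], state[2:]

# A circular orbit 100 km above the ~1737 km lunar radius stays circular:
R = 1.7374e6 + 1.0e5
v_circ = math.sqrt(MU_MOON / R)
```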

  19. Event management for large scale event-driven digital hardware spiking neural networks.

    Science.gov (United States)

    Caron, Louis-Charles; D'Haene, Michiel; Mailhot, Frédéric; Schrauwen, Benjamin; Rouat, Jean

    2013-09-01

    The interest in brain-like computation has led to the design of a plethora of innovative neuromorphic systems. Individually, spiking neural networks (SNNs), event-driven simulation and digital hardware neuromorphic systems get a lot of attention. Despite the popularity of event-driven SNNs in software, very few digital hardware architectures are found. This is because existing hardware solutions for event management scale badly with the number of events. This paper introduces the structured heap queue, a pipelined digital hardware data structure, and demonstrates its suitability for event management. The structured heap queue scales gracefully with the number of events, allowing the efficient implementation of large scale digital hardware event-driven SNNs. The scaling is linear for memory, logarithmic for logic resources and constant for processing time. The use of the structured heap queue is demonstrated on a field-programmable gate array (FPGA) with an image segmentation experiment and a SNN of 65,536 neurons and 513,184 synapses. Events can be processed at the rate of 1 every 7 clock cycles and a 406×158 pixel image is segmented in 200 ms. Copyright © 2013 Elsevier Ltd. All rights reserved.
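    A software analogue of such an event queue is easy to sketch with a binary heap: spike events are pushed with their delivery time as the key, and the earliest event is always popped first. The sketch below (plain Python heapq, not the pipelined hardware structure itself) shows the interface such a queue presents to an event-driven SNN simulator:

```python
import heapq

class EventQueue:
    """Software analogue of a hardware event queue for event-driven
    spiking-network simulation: spike events are kept in a binary heap
    keyed by delivery time, so the earliest event is always at the root."""
    def __init__(self):
        self._heap = []
        self._seq = 0    # tie-breaker: equal-time events keep FIFO order

    def push(self, time, neuron_id):
        heapq.heappush(self._heap, (time, self._seq, neuron_id))
        self._seq += 1

    def pop(self):
        time, _, neuron_id = heapq.heappop(self._heap)
        return time, neuron_id

    def __len__(self):
        return len(self._heap)

# Spikes pushed out of order are delivered in time order:
q = EventQueue()
q.push(3.0, "n42")
q.push(1.0, "n7")
q.push(2.0, "n13")
```

    Like the structured heap queue's logarithmic logic scaling, each operation here is O(log n) in the number of pending events; what the software version lacks is the hardware structure's constant-time pipelined processing.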

  20. DROpS: an object of learning in computer simulation of discrete events

    Directory of Open Access Journals (Sweden)

    Hugo Alves Silva Ribeiro

    2015-09-01

    Full Text Available This work presents “Realistic Dynamics Of Simulated Operations” (DROpS), the name given to a dynamics that uses the “dropper” device as an object of teaching and learning. The objective is to present alternatives for professors teaching content related to discrete-event simulation to graduate students in production engineering, enabling the students to develop skills related to data collection, modeling, statistical analysis, and interpretation of results. The dynamics was developed and applied by placing the students in a situation analogous to a real industry, where various concepts related to computer simulation were discussed, allowing the students to put these concepts into practice in an interactive manner and thus facilitating learning.

  1. Client and event driven data hub system at CDF

    International Nuclear Information System (INIS)

    Kilminster, Ben; McFarland, Kevin; Vaiciulis, Tony; Matsunaga, Hiroyuki; Shimojima, Makoto

    2001-01-01

    The Consumer-Server Logger (CSL) system at the Collider Detector at Fermilab is a client and event driven data hub capable of receiving physics events from multiple connections, and logging them to multiple streams while distributing them to multiple online analysis programs (consumers). Its multiple-partitioned design allows data flowing through different paths of the detector sub-systems to be processed separately. The CSL system, using a set of internal memory buffers and message queues mapped to the location of events within its programs, and running on an SGI 2200 Server, is able to process at least the required 20 MB/s of constant event logging (75 Hz of 250 KB events) while also filtering up to 10 MB/s to consumers requesting specific types of events

  2. The LCLS Timing Event System

    Energy Technology Data Exchange (ETDEWEB)

    Dusatko, John; Allison, S.; Browne, M.; Krejcik, P.; /SLAC

    2012-07-23

    The Linac Coherent Light Source requires precision timing trigger signals for various accelerator diagnostics and controls at SLAC-NAL. A new timing system has been developed that meets these requirements. This system is based on COTS hardware with a mixture of custom-designed units. An added challenge has been the requirement that the LCLS Timing System must co-exist and 'know' about the existing SLC Timing System. This paper describes the architecture, construction and performance of the LCLS timing event system.

  3. Tropical climate and vegetation cover during Heinrich event 1: Simulations with coupled climate vegetation models

    OpenAIRE

    Handiani, Dian Noor

    2012-01-01

    This study focuses on the climate and vegetation responses to abrupt climate change in the Northern Hemisphere during the last glacial period. Two abrupt climate events are explored: the abrupt cooling of the Heinrich event 1 (HE1), followed by the abrupt warming of the Bølling-Allerød interstadial (BA). These two events are simulated by perturbing the freshwater balance of the Atlantic Ocean, with the intention of altering the Atlantic Meridional Overturning Circulation (AMOC) and also of in...

  4. Adverse Event Reporting System (AERS)

    Data.gov (United States)

    U.S. Department of Health & Human Services — The Adverse Event Reporting System (AERS) is a computerized information database designed to support the FDA's post-marketing safety surveillance program for all...

  5. Reliability analysis with the simulator S.ESCAF of a very complex sequential system: the electrical power supply system of a nuclear reactor

    International Nuclear Information System (INIS)

    Blot, M.

    1987-06-01

    The reliability analysis of complex sequential systems, in which the order of arrival of the events must be taken into account, can be very difficult, because the use of the classical modelling technique of Markov diagrams leads to an important limitation on the number of components which can be handled. The desktop apparatus S.ESCAF, which electronically simulates the behaviour of the system being studied very closely, and is very easy to use, even by a non-specialist in electronics, allows one to avoid these inconveniences and to enlarge the analysis possibilities considerably. This paper shows the application of the S.ESCAF method to the electrical power supply system of a nuclear reactor. This system requires the simulation of more than forty components with about sixty events such as failure, repair and refusal to start. A comparison of the times necessary to perform the analysis by these means and by other methods is described, and the advantages of S.ESCAF are presented.
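The limitation mentioned above is easy to see: a Markov diagram for n two-state components has 2**n states, so forty components are far out of reach. For a single repairable component, though, the chain is tiny. The sketch below (illustrative failure and repair rates, not values from the paper) steps the two-state working/failed chain in discrete time until it reaches its steady-state availability mu / (lam + mu):

```python
# Two-state Markov model of one repairable component.
# lam = failure rate, mu = repair rate (both invented, per hour).
lam, mu, dt = 1e-3, 1e-1, 1.0        # rates and a 1-hour time step

p_work, p_fail = 1.0, 0.0            # start in the working state
for _ in range(100_000):
    to_fail = p_work * lam * dt      # probability flow working -> failed
    to_work = p_fail * mu * dt       # probability flow failed -> working
    p_work += to_work - to_fail
    p_fail += to_fail - to_work

# Steady-state availability approaches mu / (lam + mu) ~= 0.9901
print(round(p_work, 4))              # prints: 0.9901
```

With n components the state vector grows as 2**n, which is exactly why event-by-event simulators such as S.ESCAF scale where the analytical diagram does not.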

  6. Software failure events derivation and analysis by frame-based technique

    International Nuclear Information System (INIS)

    Huang, H.-W.; Shih, C.; Yih, Swu; Chen, M.-H.

    2007-01-01

    A frame-based technique, comprising a physical frame, a logical frame, and a cognitive frame, was adopted to derive and analyze digital I and C failure events for the generic ABWR. The physical frame was structured with a modified PCTran-ABWR plant simulation code, which was extended and enhanced in the feedwater system, recirculation system, and steam line system. The logical frame was structured with MATLAB, which was incorporated into PCTran-ABWR to improve the pressure control system, feedwater control system, recirculation control system, and automated power regulation control system. As a result, software failures of these digital control systems can be properly simulated and analyzed. The cognitive frame was simulated by the operator awareness status in the scenarios. Moreover, via an internal characteristics tuning technique, the modified PCTran-ABWR can precisely reflect the characteristics of the power-core flow. Hence, in addition to the transient plots, the analysis results can be demonstrated on the power-core flow map. A number of postulated I and C system software failure events were derived for the dynamic analyses. The basis for event derivation includes the published classification of software anomalies, the digital I and C design data for the ABWR, the chapter 15 accident analysis of the generic SAR, and reported NPP I and C software failure events. The case study of this research includes: (1) software CMF analysis for the major digital control systems; and (2) derivation of postulated ABWR digital I and C software failure events from actual non-ABWR digital I and C software failure events reported to the LER system of the USNRC or the IRS of the IAEA. These events were analyzed with PCTran-ABWR. Conflicts among plant status, computer status, and human cognitive status were successfully identified. The operator might not easily recognize the abnormal condition, because the computer status seems to progress normally.
However, a well

  7. Crash avoidance in response to challenging driving events: The roles of age, serialization, and driving simulator platform.

    Science.gov (United States)

    Bélanger, Alexandre; Gagnon, Sylvain; Stinchcombe, Arne

    2015-09-01

    We examined the crash avoidance behaviors of older and middle-aged drivers in reaction to six simulated challenging road events using two different driving simulator platforms. Thirty-five healthy adults aged 21-36 years old (M=28.9±3.96) and 35 healthy adults aged 65-83 years old (M=72.1±4.34) were tested using a mid-level simulator, and 27 adults aged 21-38 years old (M=28.6±6.63) and 27 healthy adults aged 65-83 years old (M=72.7±5.39) were tested on a low-cost desktop simulator. Participants completed a set of six challenging events varying in terms of the maneuvers required, avoiding space given, directional avoidance cues, and time pressure. Results indicated that older drivers showed higher crash risk when events required multiple synchronized reactions. In situations that required simultaneous use of steering and braking, older adults tended to crash significantly more frequently. As for middle-aged drivers, their crashes were attributable to faster driving speed. The same age-related driving patterns were observed across simulator platforms. Our findings support the hypothesis that older adults tend to react serially while engaging in cognitively challenging road maneuvers. Copyright © 2015 Elsevier Ltd. All rights reserved.

  8. Evaluation of a proposed optimization method for discrete-event simulation models

    Directory of Open Access Journals (Sweden)

    Alexandre Ferreira de Pinho

    2012-12-01

    Full Text Available Optimization methods combined with computer-based simulation have been utilized in a wide range of manufacturing applications. However, with current technology these methods exhibit low performance and are typically able to manipulate only a single decision variable at a time. Thus, the objective of this article is to evaluate a proposed optimization method for discrete-event simulation models, based on genetic algorithms, which is more efficient in terms of computational time when compared to software packages on the market. It should be emphasized that the quality of the response variable will not be altered; that is, the proposed method maintains the effectiveness of the solutions. The study thus draws a comparison between the proposed method and a simulation optimization tool already available on the market and examined in the academic literature. Conclusions are presented, confirming the proposed optimization method's efficiency.
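As a rough illustration of the genetic-algorithm approach to simulation optimization (this is a generic sketch, not the authors' method or code), the fragment below evolves two integer decision variables against a stand-in cost function. In a real study, `simulate` would launch an expensive discrete-event simulation run; here it is an invented quadratic placeholder.

```python
import random

def simulate(buffers, machines):
    # Placeholder for a DES run returning a cost to minimise (invented).
    return (buffers - 7) ** 2 + (machines - 3) ** 2

def genetic_search(pop_size=20, generations=40, seed=1):
    rng = random.Random(seed)
    # Population: each individual is one setting of the decision variables.
    pop = [(rng.randint(0, 20), rng.randint(0, 10)) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda ind: simulate(*ind))
        survivors = pop[: pop_size // 2]          # selection (elitist)
        children = []
        while len(children) < pop_size - len(survivors):
            a, b = rng.sample(survivors, 2)
            child = (a[0], b[1])                  # one-point crossover
            if rng.random() < 0.3:                # small random mutation
                child = (child[0] + rng.randint(-2, 2),
                         child[1] + rng.randint(-1, 1))
            children.append(child)
        pop = survivors + children
    return min(pop, key=lambda ind: simulate(*ind))

best = genetic_search()
print(best, simulate(*best))
```

Because whole populations are evaluated per generation, several decision variables are searched simultaneously, which is the efficiency argument the abstract makes against single-variable search.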

  9. Event reconstruction with MarlinReco at the International Linear ...

    Indian Academy of Sciences (India)

    event reconstruction system, based on the particle flow concept. Version 00-02 contains the following processors: Tracker hit digitisation: For the vertex system there are two different digitisers available. A simple digitiser translates simulated .... of showers in the calorimeter. The Fortran-based simulation and reconstruction.

  10. A global MHD simulation of an event with a quasi-steady northward IMF component

    Directory of Open Access Journals (Sweden)

    V. G. Merkin

    2007-06-01

    Full Text Available We show results of Lyon-Fedder-Mobarry (LFM) global MHD simulations of an event previously examined using Iridium spacecraft observations as well as DMSP and IMAGE FUV data. The event is chosen for the steady northward IMF sustained over a three-hour period during 16 July 2000. The Iridium observations showed very weak or absent Region 2 currents in the ionosphere, which makes the event favorable for global MHD modeling. Here we are interested in examining the model's performance during weak magnetospheric forcing, in particular its ability to reproduce gross signatures of the ionospheric currents and convection pattern, and of energy deposition in the ionosphere due both to the Poynting flux and to particle precipitation. We compare the ionospheric field-aligned current and electric potential patterns with those recovered from Iridium and DMSP observations, respectively. In addition, DMSP magnetometer data are used for comparisons of ionospheric magnetic perturbations. The electromagnetic energy flux is compared with Iridium-inferred values, while IMAGE FUV observations are utilized to verify the simulated particle energy flux.

  11. Estimation of functional failure probability of passive systems based on subset simulation method

    International Nuclear Information System (INIS)

    Wang Dongqing; Wang Baosheng; Zhang Jianmin; Jiang Jing

    2012-01-01

    In order to address the multi-dimensional epistemic uncertainties and the small functional failure probabilities of passive systems, an innovative reliability analysis algorithm called subset simulation, based on Markov chain Monte Carlo, was presented. The method is founded on the idea that a small failure probability can be expressed as a product of larger conditional failure probabilities by introducing a proper choice of intermediate failure events. Markov chain Monte Carlo simulation was implemented to efficiently generate conditional samples for estimating the conditional failure probabilities. Taking the AP1000 passive residual heat removal system as an example, the uncertainties related to the model of the passive system and the numerical values of its input parameters were considered in this paper, and the probability of functional failure was then estimated with the subset simulation method. The numerical results demonstrate that the subset simulation method has high computational efficiency and excellent accuracy compared with traditional probability analysis methods. (authors)
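The core idea, that a small failure probability is a product of larger conditional ones estimated level by level, can be demonstrated on a toy problem with a known answer: estimating P(X > 3) for a standard normal X. The fixed intermediate levels and the one-step Metropolis regrowth below are simplifications for illustration, not the paper's algorithm.

```python
import math
import random

def subset_simulation(levels=(1.0, 2.0, 3.0), n=2000, seed=0):
    rng = random.Random(seed)
    samples = [rng.gauss(0.0, 1.0) for _ in range(n)]
    prob = 1.0
    for i, b in enumerate(levels):
        hits = [x for x in samples if x > b]          # intermediate failures
        prob *= len(hits) / n                         # conditional probability
        if not hits or i == len(levels) - 1:
            break
        # Regrow n conditional samples above level b via Metropolis steps
        # seeded from the hits (target: N(0,1) truncated to x > b).
        samples = []
        while len(samples) < n:
            x = rng.choice(hits)
            cand = x + rng.gauss(0.0, 1.0)
            if cand > b and rng.random() < math.exp((x * x - cand * cand) / 2):
                samples.append(cand)
            else:
                samples.append(x)
    return prob

est = subset_simulation()
exact = 0.5 * math.erfc(3 / math.sqrt(2))   # P(X > 3) ~= 1.35e-3
print(est, exact)
```

Each level's conditional probability is of order 0.1, so a probability of order 1e-3 is resolved with a few thousand samples per level, whereas direct Monte Carlo would need far more samples to see even a handful of failures.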

  12. INTRODUCTION: New trends in simulating colloids and self-assembling systems New trends in simulating colloids and self-assembling systems

    Science.gov (United States)

    Foffi, Giuseppe; Kahl, Gerhard

    2010-03-01

    Interest in colloidal physics has grown at an incredible pace over the past few decades. To a great extent this remarkable development is due to the fact that colloidal systems are highly relevant in everyday applications as well as in basic research. On the one hand, colloids are ubiquitous in our daily lives, and a deeper understanding of their physical properties is therefore highly relevant in applied areas ranging from biomedicine through food sciences to technology. On the other hand, a seemingly unlimited freedom in designing colloidal particles with desired properties, in combination with new, low-cost experimental techniques, makes them—compared to hard matter systems—considerably more attractive for a wide range of basic investigations. All these investigations are carried out in close cooperation between experimentalists, theoreticians and simulators, thereby reuniting, on a highly interdisciplinary level, physicists, chemists, and biologists. In an effort to give credit to some of these new developments in colloidal physics, two proposals for workshops were submitted independently to CECAM in the fall of 2008; both were approved and organized as consecutive events. This decision undoubtedly had many practical and organizational advantages. Furthermore, and more relevant from the scientific point of view, the organizers could welcome in total 69 participants, presenting 42 oral and 21 poster contributions. We are proud to say that nearly all the colleagues whom we contacted at submission time accepted our invitation, and we are happy to say that the number of additional participants was rather high. Because both workshops took place within one week, quite a few participants, originally registered for one of the meetings, extended their participation to the other event as well. In total, 23 contributions have been submitted to this special issue, which cover the main scientific topics addressed in these workshops. We consider this

  13. Event-Triggered Distributed Approximate Optimal State and Output Control of Affine Nonlinear Interconnected Systems.

    Science.gov (United States)

    Narayanan, Vignesh; Jagannathan, Sarangapani

    2017-06-08

    This paper presents an approximate optimal distributed control scheme for a known interconnected system composed of input-affine nonlinear subsystems, using event-triggered state and output feedback via a novel hybrid learning scheme. First, the cost function for the overall system is redefined as the sum of the cost functions of the individual subsystems. A distributed optimal control policy for the interconnected system is developed using the optimal value function of each subsystem. To generate the optimal control policy forward-in-time, neural networks are employed to reconstruct the unknown optimal value function at each subsystem online. In order to retain the advantages of event-triggered feedback for an adaptive optimal controller, a novel hybrid learning scheme is proposed to reduce the convergence time of the learning algorithm. The development is based on the observation that, in event-triggered feedback, the sampling instants are dynamic and result in variable inter-event times. To relax the requirement of full state measurements, an extended nonlinear observer is designed at each subsystem to recover the system's internal states from the measurable feedback. Using a Lyapunov-based analysis, it is demonstrated that the system states and the observer errors remain locally uniformly ultimately bounded and that the control policy converges to a neighborhood of the optimal policy. Simulation results are presented to demonstrate the performance of the developed controller.

  14. Penelope-2006: a code system for Monte Carlo simulation of electron and photon transport

    International Nuclear Information System (INIS)

    2006-01-01

    The computer code system PENELOPE (version 2006) performs Monte Carlo simulation of coupled electron-photon transport in arbitrary materials for a wide energy range, from a few hundred eV to about 1 GeV. Photon transport is simulated by means of the standard, detailed simulation scheme. Electron and positron histories are generated on the basis of a mixed procedure, which combines detailed simulation of hard events with condensed simulation of soft interactions. A geometry package called PENGEOM permits the generation of random electron-photon showers in material systems consisting of homogeneous bodies limited by quadric surfaces, i.e. planes, spheres, cylinders, etc. This report is intended not only to serve as a manual of the PENELOPE code system, but also to provide the user with the necessary information to understand the details of the Monte Carlo algorithm. These proceedings contain the corresponding manual and teaching notes of the PENELOPE-2006 workshop and training course, held on 4-7 July 2006 in Barcelona, Spain. (author)

  15. Large-scale Intelligent Transportation Systems simulation

    Energy Technology Data Exchange (ETDEWEB)

    Ewing, T.; Canfield, T.; Hannebutte, U.; Levine, D.; Tentner, A.

    1995-06-01

    A prototype computer system has been developed which defines a high-level architecture for a large-scale, comprehensive, scalable simulation of an Intelligent Transportation System (ITS) capable of running on massively parallel computers and distributed (networked) computer systems. The prototype includes the modelling of instrumented "smart" vehicles with in-vehicle navigation units capable of optimal route planning, and of Traffic Management Centers (TMC). The TMC has probe-vehicle tracking capabilities (displaying the position and attributes of instrumented vehicles) and can provide two-way interaction with traffic to provide advisories and link times. Both the in-vehicle navigation module and the TMC feature detailed graphical user interfaces to support human-factors studies. The prototype has been developed on a distributed system of networked UNIX computers but is designed to run on ANL's IBM SP-X parallel computer system for large-scale problems. A novel feature of our design is that vehicles are represented by autonomous computer processes, each with a behavior model which performs independent route selection and reacts to external traffic events much like real vehicles. With this approach, one will be able to take advantage of emerging massively parallel processor (MPP) systems.

  16. Review on modeling and simulation of interdependent critical infrastructure systems

    International Nuclear Information System (INIS)

    Ouyang, Min

    2014-01-01

    Modern societies are becoming increasingly dependent on critical infrastructure systems (CISs) to provide essential services that support economic prosperity, governance, and quality of life. These systems are not alone but interdependent at multiple levels to enhance their overall performance. However, recent worldwide events such as the 9/11 terrorist attack, Gulf Coast hurricanes, the Chile and Japanese earthquakes, and even heat waves have highlighted that interdependencies among CISs increase the potential for cascading failures and amplify the impact of both large and small scale initial failures into events of catastrophic proportions. To better understand CISs to support planning, maintenance and emergency decision making, modeling and simulation of interdependencies across CISs has recently become a key field of study. This paper reviews the studies in the field and broadly groups the existing modeling and simulation approaches into six types: empirical approaches, agent based approaches, system dynamics based approaches, economic theory based approaches, network based approaches, and others. Different studies for each type of the approaches are categorized and reviewed in terms of fundamental principles, such as research focus, modeling rationale, and the analysis method, while different types of approaches are further compared according to several criteria, such as the notion of resilience. Finally, this paper offers future research directions and identifies critical challenges in the field. - Highlights: • Modeling approaches on interdependent critical infrastructure systems are reviewed. • I mainly review empirical, agent-based, system-dynamics, economic, network approaches. • Studies by each approach are sorted out in terms of fundamental principles. • Different approaches are further compared with resilience as the main criterion

  17. PWR system simulation and parameter estimation with neural networks

    Energy Technology Data Exchange (ETDEWEB)

    Akkurt, Hatice; Colak, Uener E-mail: uc@nuke.hacettepe.edu.tr

    2002-11-01

    A detailed nonlinear model for a typical PWR system has been considered for the development of simulation software. Each component in the system has been represented by appropriate differential equations. The SCILAB software was used for solving the nonlinear equations to simulate steady-state and transient operational conditions. The overall system has been constructed by connecting the individual components to each other. The validity of the models for the individual components and for the overall system has been verified. The system response to given transients has been analyzed. A neural network has been utilized to estimate system parameters during transients. Different transients have been imposed in the training and prediction stages with neural networks. Reactor power and system reactivity during the transient event have been predicted by the neural network. Results show that the neural network estimations are in good agreement with the calculated response of the reactor system. The maximum errors are within ±0.254% for power and between -0.146% and 0.353% for reactivity prediction cases. Steam generator parameters, pressure and water level, are also successfully predicted by the neural network employed in this study. Noise imposed on the input parameters of the neural network deteriorates the power estimation capability, whereas the reactivity estimation capability is not significantly affected.

  18. PWR system simulation and parameter estimation with neural networks

    International Nuclear Information System (INIS)

    Akkurt, Hatice; Colak, Uener

    2002-01-01

    A detailed nonlinear model for a typical PWR system has been considered for the development of simulation software. Each component in the system has been represented by appropriate differential equations. The SCILAB software was used for solving the nonlinear equations to simulate steady-state and transient operational conditions. The overall system has been constructed by connecting the individual components to each other. The validity of the models for the individual components and for the overall system has been verified. The system response to given transients has been analyzed. A neural network has been utilized to estimate system parameters during transients. Different transients have been imposed in the training and prediction stages with neural networks. Reactor power and system reactivity during the transient event have been predicted by the neural network. Results show that the neural network estimations are in good agreement with the calculated response of the reactor system. The maximum errors are within ±0.254% for power and between -0.146% and 0.353% for reactivity prediction cases. Steam generator parameters, pressure and water level, are also successfully predicted by the neural network employed in this study. Noise imposed on the input parameters of the neural network deteriorates the power estimation capability, whereas the reactivity estimation capability is not significantly affected.

  19. Innovative simulation systems

    CERN Document Server

    Jędrasiak, Karol

    2016-01-01

    This monograph provides comprehensive guidelines on current and future trends in innovative simulation systems. In particular, important components of such systems, such as augmented reality and unmanned vehicles, are presented. The book consists of three parts, each presenting good practices, new methods, concepts of systems, and new algorithms. The challenges and solutions presented are the results of research conducted by the contributing authors. The book describes and evaluates the current state of knowledge in the field of innovative simulation systems. Throughout the chapters, current issues and concepts of systems, technology, equipment, tools, research challenges, and past, present and future applications of simulation systems are presented. The book is addressed to a wide audience: academic staff, representatives of research institutions, employees of companies and government agencies, as well as students and graduates of technical universities at home and abroad. The book can be a valuable sou...

  20. Specs: Simulation Program for Electronic Circuits and Systems

    Science.gov (United States)

    de Geus, Aart Jan

    Simulation tools are central to the development and verification of very large scale integrated circuits. Circuit simulation has been used for over two decades to verify the behavior of designs. Recently the introduction of switch-level simulators, which model MOS transistors in terms of switches, has helped to overcome the long runtimes associated with full circuit simulation. Used strictly for functional verification and fault simulation, switch-level simulation can only give very rough estimates of the timing of a circuit. In this dissertation an approach is presented which adds a timing capability to switch-level simulators at relatively little extra CPU cost. A new logic state concept is introduced which consists of a set of discrete voltage steps. Signals are known only in terms of these states, thus allowing all current computations to be table driven. State changes are scheduled in the same fashion as in gate-level simulators, making the simulator event-driven. The simulator is of mixed-mode nature in that it can model portions of a design at either the gate or transistor level. In order to represent the "unknown" state, a signal consists of both an upper and a lower bound defining a signal envelope. Both bounds are expressed in terms of states. In order to speed up the simulation, MOS networks are subdivided into small pull-up and pull-down transistor configurations that can be preanalysed and prepared for fast evaluation during the simulation. These concepts have been implemented in the program SPECS (Simulation Program for Electronic Circuits and Systems) and examples of simulations are given.
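The event-driven scheduling idea described above (state changes queued with time stamps, and propagation only when a signal actually changes) can be sketched as follows. This is an illustrative toy, not SPECS itself; the gate delay, signal names, and use of boolean rather than multi-step voltage states are invented simplifications.

```python
import heapq

class Simulator:
    def __init__(self):
        self.time = 0
        self.queue = []            # heap of (time, seq, signal, value) events
        self.values = {}           # signal name -> current value
        self.fanout = {}           # signal name -> gates to re-evaluate
        self.seq = 0               # tie-breaker for simultaneous events

    def connect(self, signal, gate):
        self.fanout.setdefault(signal, []).append(gate)

    def schedule(self, delay, signal, value):
        self.seq += 1
        heapq.heappush(self.queue, (self.time + delay, self.seq, signal, value))

    def run(self):
        while self.queue:
            self.time, _, signal, value = heapq.heappop(self.queue)
            if self.values.get(signal) == value:
                continue           # no change -> nothing to propagate
            self.values[signal] = value
            for gate in self.fanout.get(signal, []):
                gate(self)         # re-evaluate only affected gates

# A NAND gate with an invented 2-time-unit delay, built on the primitives.
def nand_ab(sim):
    out = not (sim.values.get("a") and sim.values.get("b"))
    sim.schedule(2, "y", out)

sim = Simulator()
sim.connect("a", nand_ab)
sim.connect("b", nand_ab)
sim.schedule(0, "a", True)
sim.schedule(0, "b", True)
sim.run()
print(sim.time, sim.values["y"])   # prints: 2 False
```

Because unchanged signals are dropped before propagation, work scales with activity rather than circuit size, which is the efficiency argument behind event-driven simulation.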

  1. Development of BWR [boiling water reactor] and PWR [pressurized water reactor] event descriptions for nuclear facility simulator training

    International Nuclear Information System (INIS)

    Carter, R.J.; Bovell, C.R.

    1987-01-01

    A number of tools that can aid nuclear facility training developers in designing realistic simulator scenarios have been developed. This paper describes each of the tools, i.e., event lists, events-by-competencies matrices, and event descriptions, and illustrates how the tools can be used to construct scenarios.

  2. U.S. Marine Corps Communication-Electronics School Training Process: Discrete-Event Simulation and Lean Options

    National Research Council Canada - National Science Library

    Neu, Charles R; Davenport, Jon; Smith, William R

    2007-01-01

    This paper uses discrete-event simulation modeling, inventory-reduction, and process improvement concepts to identify and analyze possibilities for improving the training continuum at the Marine Corps...

  3. 3-D topological signatures and a new discrimination method for single-electron events and 0νββ events in CdZnTe: A Monte Carlo simulation study

    Energy Technology Data Exchange (ETDEWEB)

    Zeng, Ming; Li, Teng-Lin; Cang, Ji-Rong [Key Laboratory of Particle & Radiation Imaging (Tsinghua University), Ministry of Education (China); Department of Engineering Physics, Tsinghua University, Beijing 100084 (China); Zeng, Zhi, E-mail: zengzhi@tsinghua.edu.cn [Key Laboratory of Particle & Radiation Imaging (Tsinghua University), Ministry of Education (China); Department of Engineering Physics, Tsinghua University, Beijing 100084 (China); Fu, Jian-Qiang; Zeng, Wei-He; Cheng, Jian-Ping; Ma, Hao; Liu, Yi-Nong [Key Laboratory of Particle & Radiation Imaging (Tsinghua University), Ministry of Education (China); Department of Engineering Physics, Tsinghua University, Beijing 100084 (China)

    2017-06-21

    In neutrinoless double beta (0νββ) decay experiments, the diversity of topological signatures of different particles provides an important tool to distinguish double beta events from background events and reduce background rates. Aiming to suppress the single-electron backgrounds, which are the most challenging, several groups have established Monte Carlo simulation packages to study the topological characteristics of single-electron events and 0νββ events and to develop methods to differentiate them. In this paper, applying knowledge from graph theory, a new topological signature called the REF track (Refined Energy-Filtered track) is proposed and proven to be an accurate approximation of the real particle trajectory. Based on the analysis of the energy depositions along the REF tracks of single-electron events and 0νββ events, REF energy deposition models for both event types are proposed to indicate the significant differences between them. With these differences, this paper presents a new discrimination method, which, in the Monte Carlo simulation, achieved a single-electron rejection factor of 93.8±0.3 (stat.)% as well as a 0νββ efficiency of 85.6±0.4 (stat.)% with optimized parameters in CdZnTe.

  4. Using relational databases to collect and store discrete-event simulation results

    DEFF Research Database (Denmark)

    Poderys, Justas; Soler, José

    2016-01-01

    , export the results to a data carrier file and then process the results stored in a file using data processing software. In this work, we propose to save the simulation results directly from a simulation tool to a computer database. We implemented a link between the discrete-event simulation tool...... and the database and performed a performance evaluation of 3 different open-source database systems. We show that, with the right choice of a database system, simulation results can be collected and exported up to 2.67 times faster, and use 1.78 times less disk space when compared to using simulation software built...
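A minimal version of this approach can be shown with Python's built-in SQLite driver standing in for the open-source database systems evaluated (the table and column names here are invented, not the authors' schema): observations are inserted during the simulation loop, and post-processing becomes a query instead of file parsing.

```python
import sqlite3

conn = sqlite3.connect(":memory:")           # a file path in a real setup
conn.execute("""CREATE TABLE results (
                    run_id   INTEGER,
                    sim_time REAL,
                    metric   TEXT,
                    value    REAL)""")

def record(run_id, sim_time, metric, value):
    # Called from inside the simulation loop instead of writing to a file.
    conn.execute("INSERT INTO results VALUES (?, ?, ?, ?)",
                 (run_id, sim_time, metric, value))

for t in range(5):                            # stand-in simulation loop
    record(1, float(t), "queue_len", t % 3)
conn.commit()

# Analysis is now a SQL query rather than a file-parsing step.
avg, = conn.execute(
    "SELECT AVG(value) FROM results WHERE metric = 'queue_len'").fetchone()
print(avg)                                    # prints: 0.8
```

Batching inserts inside transactions (one `commit` per run rather than per row) is typically what makes the database route competitive with, or faster than, file output.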

  5. A framework for the system-of-systems analysis of the risk for a safety-critical plant exposed to external events

    International Nuclear Information System (INIS)

    Zio, E.; Ferrario, E.

    2013-01-01

    We consider a critical plant exposed to risk from external events. We propose an original framework of analysis, which extends the boundaries of the study to the interdependent infrastructures which support the plant. For the purpose of clearly illustrating the conceptual framework of system-of-systems analysis, we work out a case study of seismic risk for a nuclear power plant embedded in the connected power and water distribution, and transportation networks which support its operation. The technical details of the systems considered (including the nuclear power plant) are highly simplified, in order to preserve the purpose of illustrating the conceptual, methodological framework of analysis. Yet, as an example of the approaches that can be used to perform the analysis within the proposed framework, we consider the Muir Web as system analysis tool to build the system-of-systems model and Monte Carlo simulation for the quantitative evaluation of the model. The numerical exercise, albeit performed on a simplified case study, serves the purpose of showing the opportunity of accounting for the contribution of the interdependent infrastructure systems to the safety of a critical plant. This is relevant as it can lead to considerations with respect to the decision making related to safety critical-issues. -- Highlights: ► We consider a critical plant exposed to risk from external events. ► We consider also the interdependent infrastructures that support the plant. ► We use Muir Web as system analysis tool to build the system-of-systems model. ► We use Monte Carlo simulation for the quantitative evaluation of the model. ► We find that the interdependent infrastructures should be considered as they can be a support for the critical plant safety

  6. Operational analysis and improvement of a spent nuclear fuel handling and treatment facility using discrete event simulation

    International Nuclear Information System (INIS)

    Garcia, H.E.

    2000-01-01

    Spent nuclear fuel handling and treatment often require facilities with a high level of operational complexity. Simulation models can reveal undesirable characteristics and production problems before they become readily apparent during system operations. The value of this approach is illustrated here through an operational study, using discrete event modeling techniques, to analyze the Fuel Conditioning Facility at Argonne National Laboratory and to identify enhanced nuclear waste treatment configurations. The modeling approach and results of what-if studies are discussed. An example of how to improve productivity is presented.
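To give a flavour of what such a discrete-event study measures (this is not the ANL Fuel Conditioning Facility model; the single station and all rates are invented), the sketch below simulates items arriving at one treatment station and queuing when it is busy, reporting the average waiting time as a bottleneck indicator:

```python
import random

def average_wait(n_items=1000, mean_arrival=1.0, mean_service=0.8, seed=7):
    """Single-station queue via the Lindley recurrence; returns mean wait."""
    rng = random.Random(seed)
    t_arrive = station_free = total_wait = 0.0
    for _ in range(n_items):
        t_arrive += rng.expovariate(1.0 / mean_arrival)   # next arrival
        start = max(t_arrive, station_free)               # wait if busy
        total_wait += start - t_arrive
        station_free = start + rng.expovariate(1.0 / mean_service)
    return total_wait / n_items

print(average_wait())   # utilisation 0.8 -> a noticeable queueing delay
```

A what-if study then amounts to rerunning with different parameters, e.g. comparing `average_wait(mean_service=0.8)` against a faster station `average_wait(mean_service=0.5)` to quantify the productivity gain before changing the real facility.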

  7. Development of advanced automatic control system for nuclear ship. 2. Perfect automatic operation after reactor scram events

    International Nuclear Information System (INIS)

    Yabuuchi, Noriaki; Nakazawa, Toshio; Takahashi, Hiroki; Shimazaki, Junya; Hoshi, Tsutao

    1997-11-01

    An automatic operation system has been developed for the purpose of realizing fully automatic plant operation after reactor scram events. The goal of the automatic operation after a reactor scram event is to bring the reactor to a hot stand-by condition automatically. The basic functions of this system are as follows: to monitor the actions of the safety equipment after a reactor scram, to control the necessary equipment to bring the reactor to a hot stand-by condition automatically, and to energize a decay heat removal system. The performance of this system was evaluated by comparing the results obtained using the Nuclear Ship Engineering Simulation System (NESSY) with those measured in the scram test of the nuclear ship 'Mutsu'. As a result, it was shown that this system had sufficient performance to bring the reactor to a hot stand-by condition quickly and safely. (author)

  8. Development of advanced automatic control system for nuclear ship. 2. Perfect automatic operation after reactor scram events

    Energy Technology Data Exchange (ETDEWEB)

    Yabuuchi, Noriaki; Nakazawa, Toshio; Takahashi, Hiroki; Shimazaki, Junya; Hoshi, Tsutao [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment

    1997-11-01

    An automatic operation system has been developed for the purpose of realizing fully automatic plant operation after reactor scram events. The goal of the automatic operation after a reactor scram event is to bring the reactor to a hot stand-by condition automatically. The basic functions of this system are as follows: to monitor the actions of the safety equipment after a reactor scram, to control the necessary equipment to bring the reactor to a hot stand-by condition automatically, and to energize a decay heat removal system. The performance of this system was evaluated by comparing the results obtained using the Nuclear Ship Engineering Simulation System (NESSY) with those measured in the scram test of the nuclear ship `Mutsu`. As a result, it was shown that this system had sufficient performance to bring the reactor to a hot stand-by condition quickly and safely. (author)

  9. Confirmatory simulation of safety and operational transients in LMFBR systems

    International Nuclear Information System (INIS)

    Guppy, J.G.; Agrawal, A.K.

    1978-01-01

    Operational and safety transients (anticipated, unlikely, or extremely unlikely) that may originate anywhere in a liquid-metal fast breeder reactor (LMFBR) system must be adequately simulated to assist in safety evaluation and plant design efforts. An advanced thermohydraulic transient code, the Super System Code (SSC), is described that may be used for confirmatory safety evaluations of plant-wide events, such as assurance of adequate decay heat removal capability under natural circulation conditions. Results obtained with SSC illustrating the degree of modeling detail present in the code as well as the computing efficiency are presented. A version of the SSC code, SSC-L, applicable to any loop-type LMFBR design, has been developed at Brookhaven. The scope of SSC-L is to enable the simulation of all plant-wide transients covered by Plant Protection System (PPS) action, including sodium pipe rupture and coastdown to natural circulation conditions. The computations are stopped when loss of core integrity (i.e., clad melting temperature exceeded) is indicated.

  10. The time dependent propensity function for acceleration of spatial stochastic simulation of reaction–diffusion systems

    International Nuclear Information System (INIS)

    Fu, Jin; Wu, Sheng; Li, Hong; Petzold, Linda R.

    2014-01-01

    The inhomogeneous stochastic simulation algorithm (ISSA) is a fundamental method for spatial stochastic simulation. However, when diffusion events occur more frequently than reaction events, simulating the diffusion events by ISSA is quite costly. To reduce this cost, we propose to use the time dependent propensity function in each step. In this way we can avoid simulating individual diffusion events, and use the time interval between two adjacent reaction events as the simulation stepsize. We demonstrate that the new algorithm can achieve orders of magnitude efficiency gains over widely-used exact algorithms, scales well with increasing grid resolution, and maintains a high level of accuracy.
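For context, the exact algorithm that the ISSA builds on is the Gillespie direct method. A minimal well-mixed sketch is shown below; the reaction chain (A → B → C), rate constants, and molecule counts are all illustrative, and the record's spatial grid and time-dependent propensity acceleration are not reproduced here.

```python
import random

def ssa(x0, rates, t_end, seed=0):
    """Standard well-mixed Gillespie direct method for A -> B -> C.

    The record's ISSA adds a spatial grid and diffusion events on top of
    these mechanics; only the core next-reaction loop is sketched here.
    """
    random.seed(seed)
    a, b, c = x0
    k1, k2 = rates
    t = 0.0
    while t < t_end:
        p1 = k1 * a                     # propensity of A -> B
        p2 = k2 * b                     # propensity of B -> C
        p0 = p1 + p2
        if p0 == 0.0:                   # no reactions left to fire
            break
        t += random.expovariate(p0)     # exponential time to next event
        if random.random() * p0 < p1:   # pick which reaction fires
            a, b = a - 1, b + 1
        else:
            b, c = b - 1, c + 1
    return a, b, c

final = ssa((100, 0, 0), (1.0, 0.5), t_end=100.0)
print(final)
```

The total molecule count is conserved by construction, which is the basic sanity check on any SSA implementation.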

  11. Discrete event simulation and virtual reality use in industry: new opportunities and future trends

    OpenAIRE

    Turner, Christopher; Hutabarat, Windo; Oyekan, John; Tiwari, Ashutosh

    2016-01-01

    This paper reviews the area of combined discrete event simulation (DES) and virtual reality (VR) use within industry. While establishing a state of the art for progress in this area, this paper makes the case for VR DES as the vehicle of choice for complex data analysis through interactive simulation models, highlighting both its advantages and current limitations. This paper reviews active research topics such as VR and DES real-time integration, communication protocols,...

  12. BEEC: An event generator for simulating the Bc meson production at an e+e- collider

    Science.gov (United States)

    Yang, Zhi; Wu, Xing-Gang; Wang, Xian-You

    2013-12-01

    The Bc meson is a doubly heavy quark-antiquark bound state and carries flavors explicitly, which provides a fruitful laboratory for testing potential models and understanding the weak decay mechanisms for heavy flavors. In view of the prospects in Bc physics at the hadronic colliders such as Tevatron and LHC, Bc physics is attracting more and more attention. It has been shown that a high luminosity e+e- collider running around the Z0-peak is also helpful for studying the properties of Bc meson and has its own advantages. For this purpose, we write down an event generator for simulating Bc meson production through e+e- annihilation according to relevant publications. We name it BEEC, in which the color-singlet S-wave and P-wave (cb¯)-quarkonium states together with the color-octet S-wave (cb¯)-quarkonium states can be generated. BEEC can also be adopted to generate the similar charmonium and bottomonium states via the semi-exclusive channels e++e-→|(QQ¯)[n]>+Q+Q¯ with Q=b and c respectively. To increase the simulation efficiency, we simplify the amplitude to be as compact as possible by using the improved trace technology. BEEC is a Fortran program written in a PYTHIA-compatible format and is written in a modular structure, one may apply it to various situations or experimental environments conveniently by using the GNU C compiler make. A method to improve the efficiency of generating unweighted events within PYTHIA environment is proposed. Moreover, BEEC will generate a standard Les Houches Event data file that contains useful information of the meson and its accompanying partons, which can be conveniently imported into PYTHIA to do further hadronization and decay simulation. Catalogue identifier: AEQC_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AEQC_v1_0.html Program obtainable from: CPC Program Library, Queen’s University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in

  13. Analysis of system and of course of events

    International Nuclear Information System (INIS)

    Hoertner, H.; Kersting, E.J.; Puetter, B.M.

    1986-01-01

    The analysis of the system and of the course of events is used to determine the frequency of core melt-out accidents and to describe the safety-related boundary conditions of such accidents. The lecture is concerned with: the effect on the frequency of core melt-out accidents of system changes in the reference plant and of triggering events that were not assessed, or not assessed in sufficient detail, in phase A of the German Risk Study; the minimum requirements on system functions for controlling triggering events, i.e. for preventing core melt-out accidents; and the reliability data important for reliability investigations and frequency assessments. (orig./DG) [de]

  14. Train-to-Ground communications of a Train Control and Monitoring Systems: A simulation platform modelling approach

    DEFF Research Database (Denmark)

    Bouaziz, Maha; Yan, Ying; Kassab, Mohamed

    2018-01-01

    Under the SAFE4RAIL project, we are developing a simulation platform based on a discrete-events network simulator. This platform models the Train-to-Ground (T2G) link in the framework of a system-level simulation of the Train Control Management System (TCMS). The modelled T2G link is based on existing wireless technologies, e.g. Wi-Fi and LTE. Different T2G scenarios are defined in order to evaluate the performance of the Mobile Communication Gateway (managing train communications) and the Quality of Service (QoS) offered to TCMS applications in the context of various environments (regular train lines...)

  15. Validating numerical simulations of snow avalanches using dendrochronology: the Cerro Ventana event in Northern Patagonia, Argentina

    Directory of Open Access Journals (Sweden)

    A. Casteller

    2008-05-01

    The damage caused by snow avalanches to property and human lives is underestimated in many regions around the world, especially where this natural hazard remains poorly documented. One such region is the Argentinean Andes, where numerous settlements are threatened almost every winter by large snow avalanches. On 1 September 2002, the largest tragedy in the history of Argentinean mountaineering took place at Cerro Ventana, Northern Patagonia: nine persons were killed and seven others injured by a snow avalanche. In this paper, we combine numerical modeling and dendrochronological investigations to reconstruct this event. Using information released by local governmental authorities and compiled in the field, the avalanche event was numerically simulated using the avalanche dynamics programs AVAL-1D and RAMMS. Avalanche characteristics, such as extent and date, were determined using dendrochronological techniques. Model simulation results were compared with documentary and tree-ring evidence for the 2002 event. Our results show a good agreement between the simulated projection of the avalanche and its reconstructed extent using tree-ring records. Differences between the observed and the simulated avalanche, principally related to the snow height deposition in the run-out zone, are mostly attributed to the low resolution of the digital elevation model used to represent the valley topography. The main contributions of this study are (1) to provide the first calibration of numerical avalanche models for the Patagonian Andes and (2) to highlight the potential of Nothofagus pumilio tree-ring records to reconstruct past snow-avalanche events in time and space. Future research should focus on testing this combined approach in other forested regions of the Andes.

  16. Discrete Event System Based Pyroprocessing Modeling and Simulation: Oxide Reduction

    International Nuclear Information System (INIS)

    Lee, H. J.; Ko, W. I.; Choi, S. Y.; Kim, S. K.; Hur, J. M.; Choi, E. Y.; Im, H. S.; Park, K. I.; Kim, I. T.

    2014-01-01

    Dynamic changes according to the batch operation cannot be predicted in an equilibrium material flow. This study began to build a dynamic material balance model based on the previously developed pyroprocessing flowsheet. As a mid- and long-term research effort, an integrated pyroprocessing simulator is being developed at the Korea Atomic Energy Research Institute (KAERI) to cope with a review of the technical feasibility, safeguards assessment, conceptual design of the facility, and economic feasibility evaluation. The most fundamental element of such a simulator development is establishing the dynamic material flow framework. This study focused on the operation modeling of pyroprocessing to implement a dynamic material flow. As a case study, oxide reduction was investigated in terms of a dynamic material flow. DES-based modeling was applied to build a pyroprocessing operation model. A dynamic material flow, as the basic framework for an integrated pyroprocessing, was successfully implemented through ExtendSim's internal database and item blocks. Complex operation logic behavior was verified, for example, for an oxide reduction process in terms of dynamic material flow. Compared to the equilibrium material flow, a model-based dynamic material flow provides such detailed information that a careful analysis of every batch is necessary to confirm the dynamic material balance results. With the default scenario of oxide reduction, the batch mass balance was verified in comparison with a one-year equilibrium mass balance. This study is still in progress with a mid- and long-term goal, the development of a multi-purpose pyroprocessing simulator that is able to cope with safeguards assessment, economic feasibility, technical evaluation, conceptual design, and support of licensing for a future pyroprocessing facility.
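A dynamic, batch-by-batch material balance of the kind described can be sketched as follows. The feed mass, batch count, and conversion fraction are invented for illustration and are not taken from the KAERI flowsheet; the point is the per-batch balance check that an equilibrium flow cannot provide.

```python
def run_batches(feed_per_batch, n_batches, conversion=0.98):
    """Per-batch material balance for a hypothetical oxide reduction step.

    All numerical values are illustrative, not from the record.
    """
    ledger, total_in, total_metal, total_residue = [], 0.0, 0.0, 0.0
    for batch in range(1, n_batches + 1):
        metal = feed_per_batch * conversion        # reduced metal product
        residue = feed_per_batch - metal           # unreduced remainder
        total_in += feed_per_batch
        total_metal += metal
        total_residue += residue
        # dynamic balance check: inputs equal outputs after every batch
        assert abs(total_in - (total_metal + total_residue)) < 1e-9
        ledger.append({"batch": batch, "in": feed_per_batch,
                       "metal": metal, "residue": residue})
    return ledger, total_metal, total_residue

ledger, metal, residue = run_batches(feed_per_batch=50.0, n_batches=20)
print(len(ledger), round(metal, 1), round(residue, 1))
```

Comparing the cumulative totals after the final batch against a one-shot equilibrium figure is the verification step the record describes.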

  17. Control of Discrete-Event Systems Automata and Petri Net Perspectives

    CERN Document Server

    Silva, Manuel; Schuppen, Jan

    2013-01-01

    Control of Discrete-event Systems provides a survey of the most important topics in discrete-event systems theory with particular focus on finite-state automata, Petri nets and max-plus algebra. Coverage ranges from introductory material on the basic notions and definitions of discrete-event systems to more recent results. Special attention is given to results on supervisory control, state estimation and fault diagnosis of both centralized and distributed/decentralized systems developed in the framework of the Distributed Supervisory Control of Large Plants (DISC) project. Later parts of the text are devoted to the study of congested systems through fluidization, an over-approximation allowing a much more efficient study of observation and control problems of timed Petri nets. Finally, the max-plus algebraic approach to the analysis and control of choice-free systems is also considered. Control of Discrete-event Systems provides an introduction to discrete-event systems for readers that are not familiar wi...

  18. Soil organic carbon loss and selective transportation under field simulated rainfall events.

    Science.gov (United States)

    Nie, Xiaodong; Li, Zhongwu; Huang, Jinquan; Huang, Bin; Zhang, Yan; Ma, Wenming; Hu, Yanbiao; Zeng, Guangming

    2014-01-01

    The study of the lateral movement of soil organic carbon (SOC) during soil erosion can improve the understanding of the global carbon budget. Simulated rainfall experiments on small field plots were conducted to investigate SOC lateral movement under different rainfall intensities and tillage practices. Two rainfall intensities (High intensity (HI) and Low intensity (LI)) and two tillage practices (No tillage (NT) and Conventional tillage (CT)) were maintained on three plots (2 m width × 5 m length): HI-NT, LI-NT and LI-CT. The rainfall lasted 60 minutes after runoff was generated; the sediment yield and runoff volume were measured and sampled at 6-min intervals. The SOC concentration of sediment and runoff as well as the sediment particle size distribution were measured. The results showed that most of the eroded organic carbon (OC) was lost in the form of sediment-bound organic carbon in all events. The amount of lost SOC in the LI-NT event was 12.76 times greater than that in the LI-CT event, whereas in the HI-NT event it was 3.25 times greater than that in the LI-NT event. These results suggest that conventional tillage as well as lower rainfall intensity can reduce the amount of SOC lost during short-term soil erosion. Meanwhile, the eroded sediment in all events was enriched in OC, and a higher enrichment ratio of OC (ERoc) in sediment was observed in the LI events than in the HI event, whereas similar ERoc curves were found in the LI-CT and LI-NT events. Furthermore, significant correlations between ERoc and different-size sediment particles were only observed in the HI-NT event. This indicates that the enrichment of OC is dependent on the erosion process, and the specific enrichment mechanisms with respect to different erosion processes should be studied in future.

  19. Performance Analysis of Cloud Computing Architectures Using Discrete Event Simulation

    Science.gov (United States)

    Stocker, John C.; Golomb, Andrew M.

    2011-01-01

    Cloud computing offers the economic benefit of on-demand resource allocation to meet changing enterprise computing needs. However, the flexibility of cloud computing is disadvantaged when compared to traditional hosting in providing predictable application and service performance. Cloud computing relies on resource scheduling in a virtualized network-centric server environment, which makes static performance analysis infeasible. We developed a discrete event simulation model to evaluate the overall effectiveness of organizations in executing their workflow in traditional and cloud computing architectures. The two part model framework characterizes both the demand using a probability distribution for each type of service request as well as enterprise computing resource constraints. Our simulations provide quantitative analysis to design and provision computing architectures that maximize overall mission effectiveness. We share our analysis of key resource constraints in cloud computing architectures and findings on the appropriateness of cloud computing in various applications.
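The two-part framework described, stochastic demand per request type plus enterprise resource constraints, can be illustrated with a toy sketch. The request types, counts, mean costs, and capacity below are invented assumptions, not values from the study.

```python
import random

def simulate_day(capacity, demand, seed=0):
    """Serve randomly sized requests against a fixed daily capacity.

    `demand` maps a request type to (count, mean resource cost); the
    types and numbers here are illustrative, not from the record.
    """
    random.seed(seed)
    served, dropped, used = 0, 0, 0.0
    for rtype, (count, mean_cost) in demand.items():
        for _ in range(count):
            cost = random.expovariate(1.0 / mean_cost)  # draw request size
            if used + cost <= capacity:                 # capacity remains
                used += cost
                served += 1
            else:
                dropped += 1
    return served, dropped

demand = {"web": (200, 0.5), "batch": (20, 5.0)}  # hypothetical workload mix
served, dropped = simulate_day(capacity=150.0, demand=demand)
print(served, dropped)
```

Sweeping `capacity` and comparing the served fraction across workload mixes is the kind of provisioning trade-off analysis the record reports.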

  20. Numerical Simulations of Slow Stick Slip Events with PFC, a DEM Based Code

    Science.gov (United States)

    Ye, S. H.; Young, R. P.

    2017-12-01

    Nonvolcanic tremors around subduction zones have become a fascinating subject in seismology in recent years. Previous studies have shown that the nonvolcanic tremor beneath western Shikoku is composed of low frequency seismic waves overlapping each other. This finding provides a direct link between tremor and slow earthquakes. Slow stick slip events are considered to be laboratory-scale slow earthquakes. Slow stick slip events are traditionally studied with a direct shear or double direct shear experimental setup, in which the sliding velocity can be controlled to model a range of fast and slow stick slips. In this study, a PFC* model based on double direct shear is presented, with a central block clamped by two side blocks. The gauge layers between the central and side blocks are modelled as discrete fracture networks with smooth joint bonds between pairs of discrete elements. In addition, a second model is presented in this study. This model consists of a cylindrical sample subjected to triaxial stress. Similar to the previous model, a weak gauge layer at 45 degrees is added into the sample, on which shear slipping is allowed. Several different simulations are conducted on this sample. While the confining stress is maintained at the same level in different simulations, the axial loading rate (displacement rate) varies. By varying the displacement rate, a range of slipping behaviour, from stick slip to slow stick slip, is observed based on the stress-strain relationship. Currently, the stick slip and slow stick slip events are strictly observed based on the stress-strain relationship. In the future, we hope to monitor the displacement and velocity of the balls surrounding the gauge layer as a function of time, so as to generate a synthetic seismogram. This will allow us to extract seismic waveforms and potentially simulate the tremor-like waves found around subduction zones. *Particle flow code, a discrete element method based numerical simulation code developed by
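The loading-rate dependence described can be reproduced qualitatively with the textbook spring-block slider, a far simpler analogue than the authors' PFC model. All stiffness and friction values below are invented; the sketch only shows that faster loading produces more frequent slip events of roughly constant stress drop.

```python
def spring_slider(v_load, k=1.0, f_static=1.0, f_dynamic=0.6,
                  dt=0.01, steps=20000):
    """1-D spring-block slider with static/dynamic friction, the classic
    laboratory analogue for stick slip.  Parameter values are invented."""
    x, t = 0.0, 0.0          # block position, time
    drops = []               # stress drop at each slip event
    for _ in range(steps):
        t += dt
        force = k * (v_load * t - x)      # spring force from the loader
        if force > f_static:              # block breaks free and slips
            drops.append(force - f_dynamic)
            x += (force - f_dynamic) / k  # slide until force = f_dynamic
    return drops

drops_fast = spring_slider(v_load=0.1)    # faster loading: frequent slips
drops_slow = spring_slider(v_load=0.01)   # slower loading: rare slips
print(len(drops_fast), len(drops_slow))
```

Each stress drop is close to `f_static - f_dynamic`, and the event rate scales with the loading rate, which is the stress-strain signature the record uses to classify stick slip versus slow stick slip.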

  1. Simulation of long-range transport aerosols from the Asian Continent to Taiwan by a southward Asian high-pressure system.

    Science.gov (United States)

    Chuang, Ming-Tung; Fu, Joshua S; Jang, Carey J; Chan, Chang-Chuan; Ni, Pei-Cheng; Lee, Chung-Te

    2008-11-15

    Aerosol is frequently transported by a southward high-pressure system from the Asian Continent to Taiwan, and a 100% increase in mass level on event days compared to non-event days was recorded from 2002 to 2005. During this time period, PM2.5 sulfate was found to increase by as much as 155% on event days as compared to non-event days. In this study, Asian emission estimations, the Taiwan Emission Database System (TEDS), and meteorological simulation results from the fifth-generation Mesoscale Model (MM5) were used as inputs for the Community Multiscale Air Quality (CMAQ) model to simulate a long-range transport of PM2.5 event in a southward high-pressure system from the Asian Continent to Taiwan. The simulated aerosol mass level and the associated aerosol components were found to be within a reasonable accuracy. During the transport process, the percentage of semi-volatile PM2.5 organic carbon in the PM2.5 plume only slightly decreased, from 22-24% in Shanghai to 21% near Taiwan. However, the percentage of PM2.5 nitrate in PM2.5 decreased from 16-25% to 1%. In contrast, the percentage of PM2.5 sulfate in PM2.5 increased from 16-19% to 35%. It is interesting to note that the percentages of PM2.5 ammonium and PM2.5 elemental carbon in PM2.5 remained nearly constant. Simulation results revealed that transported pollutants dominate the air quality in Taipei when the southward high-pressure system moves to Taiwan. This condition demonstrates the dynamic chemical transformation of pollutants during the transport process from continental origin over the sea area to the downwind land.

  2. Digital simulation of power electronic systems

    International Nuclear Information System (INIS)

    Mehring, P.; Jentsch, W.; John, G.; Kraemer, D.

    1981-01-01

    The following paper contains the final report on the NETSIM project. The purpose of this project is to develop a special digital simulation system which could serve as a basis for the routine application of simulation in the planning and development of power electronic systems. The project is realized in two steps. First, a basic network analysis system is established. With this system, the basic models and methods for treating power electronic networks could be tested. The resulting system is then integrated into a general digital simulation system for continuous systems (CSSL system). This integrated simulation system allows for convenient modeling and simulation of power electronic systems. (orig.) [de]

  3. Performance assessment of topologically diverse power systems subjected to hurricane events

    International Nuclear Information System (INIS)

    Winkler, James; Duenas-Osorio, Leonardo; Stein, Robert; Subramanian, Devika

    2010-01-01

    Large tropical cyclones cause severe damage to major cities along the United States Gulf Coast annually. A diverse collection of engineering and statistical models are currently used to estimate the geographical distribution of power outage probabilities stemming from these hurricanes to aid in storm preparedness and recovery efforts. Graph theoretic studies of power networks have separately attempted to link abstract network topology to transmission and distribution system reliability. However, few works have employed both techniques to unravel the intimate connection between network damage arising from storms, topology, and system reliability. This investigation presents a new methodology combining hurricane damage predictions and topological assessment to characterize the impact of hurricanes upon power system reliability. Component fragility models are applied to predict failure probability for individual transmission and distribution power network elements simultaneously. The damage model is calibrated using power network component failure data for Harris County, TX, USA caused by Hurricane Ike in September of 2008, resulting in a mean outage prediction error of 15.59% and low standard deviation. Simulated hurricane events are then applied to measure the hurricane reliability of three topologically distinct transmission networks. The rate of system performance decline is shown to depend on their topological structure. Reliability is found to correlate directly with topological features, such as network meshedness, centrality, and clustering, and the compact irregular ring mesh topology is identified as particularly favorable, which can influence regional lifeline policy for retrofit and hardening activities to withstand hurricane events.
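The reported correlation between topology and reliability can be illustrated with a toy Monte Carlo: compare a sparse ring against the same ring with chords added (higher meshedness) under independent line failures. The graphs and the failure probability below are invented for the sketch; they are not the Harris County data or the study's fragility models.

```python
import random

def connected(n, edges):
    """Union-find test of whether the surviving network is one component."""
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x
    for u, v in edges:
        parent[find(u)] = find(v)
    return len({find(i) for i in range(n)}) == 1

def p_connected(n, edges, p_fail, trials=2000, seed=0):
    """Monte Carlo probability the grid stays connected when each line
    independently fails with probability p_fail (a crude damage proxy)."""
    random.seed(seed)
    ok = 0
    for _ in range(trials):
        surviving = [e for e in edges if random.random() > p_fail]
        ok += connected(n, surviving)
    return ok / trials

ring = [(i, (i + 1) % 6) for i in range(6)]       # low meshedness
meshed = ring + [(0, 3), (1, 4), (2, 5)]          # chords raise meshedness
p_ring = p_connected(6, ring, 0.2)
p_mesh = p_connected(6, meshed, 0.2)
print(p_ring, p_mesh)
```

The meshed variant survives the same per-line failure probability noticeably more often, mirroring the record's finding that meshedness correlates with hurricane reliability.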

  4. An expert system for prevention of abnormal event recurrence

    International Nuclear Information System (INIS)

    Nishiyama, Takuya

    1990-01-01

    A huge amount of information related to abnormal events occurring in nuclear power plants in Japan and abroad is collected and accumulated in the Nuclear Information Center at CRIEPI. This information contains a variety of knowledge which may be useful for the prevention of similar trouble. An expert system named 'Consultation System for Prevention of Abnormal-Event Recurrence' (CSPAR) is being developed with the objective of preventing the recurrence of similar abnormal events by offering an effective means of utilizing such knowledge. This paper presents the key points in designing and constructing the system, the system functional outline, and some demonstration examples. (author)

  5. Assessment of the Weather Research and Forecasting (WRF) model for simulation of extreme rainfall events in the upper Ganga Basin

    Science.gov (United States)

    Chawla, Ila; Osuri, Krishna K.; Mujumdar, Pradeep P.; Niyogi, Dev

    2018-02-01

    Reliable estimates of extreme rainfall events are necessary for an accurate prediction of floods. Most of the global rainfall products are available at a coarse resolution, rendering them less desirable for extreme rainfall analysis. Therefore, regional mesoscale models such as the advanced research version of the Weather Research and Forecasting (WRF) model are often used to provide rainfall estimates at fine grid spacing. Modelling heavy rainfall events is an enduring challenge, as such events depend on multi-scale interactions and on model configurations such as grid spacing, physical parameterization and initialization. With this background, the WRF model is implemented in this study to investigate the impact of different processes on extreme rainfall simulation, by considering a representative event that occurred during 15-18 June 2013 over the Ganga Basin in India, which is located at the foothills of the Himalayas. This event is simulated with ensembles involving four different microphysics (MP), two cumulus (CU) parameterizations, two planetary boundary layers (PBLs) and two land surface physics options, as well as different resolutions (grid spacing) within the WRF model. The simulated rainfall is evaluated against the observations from 18 rain gauges and the Tropical Rainfall Measuring Mission Multi-Satellite Precipitation Analysis (TMPA) 3B42RT version 7 data. From the analysis, it should be noted that the choice of MP scheme influences the spatial pattern of rainfall, while the choice of PBL and CU parameterizations influences the magnitude of rainfall in the model simulations. Further, the WRF run with Goddard MP, Mellor-Yamada-Janjic PBL and Betts-Miller-Janjic CU scheme is found to perform best in simulating this heavy rain event. The selected configuration is evaluated for several heavy to extremely heavy rainfall events that occurred across different months of the monsoon season in the region. The model performance improved through incorporation

  6. System risk evolution analysis and risk critical event identification based on event sequence diagram

    International Nuclear Information System (INIS)

    Luo, Pengcheng; Hu, Yang

    2013-01-01

    During system operation, the environmental, operational and usage conditions are time-varying, which causes fluctuations of the system state variables (SSVs). These fluctuations change the accidents’ probabilities and thus result in system risk evolution (SRE). This inherent relation makes it feasible to realize risk control by monitoring the SSVs in real time; hence, quantitative analysis of SRE is essential. Besides, some events in the process of SRE are critical to system risk, because they act like the “demarcative points” between safety and accident, and this characteristic makes each of them a key point of risk control. Therefore, analysis of SRE and identification of risk critical events (RCEs) are remarkably meaningful for ensuring that the system operates safely. In this context, an event sequence diagram (ESD) based method of SRE analysis and the related Monte Carlo solution are presented; RCE and risk sensitive variable (RSV) are defined, and the corresponding identification methods are also proposed. Finally, the proposed approaches are exemplified with an accident scenario of an aircraft entering an icing region.
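The Monte Carlo treatment of an event sequence can be sketched with a toy icing scenario. The branch structure and all probabilities below are invented (the record's actual ESD is much richer); the sketch only shows how the estimated risk responds to a sensitive variable, mirroring the RSV idea.

```python
import random

def run_scenario(icing_rate, deice_success, trials=20000, seed=0):
    """Monte Carlo walk over a toy event sequence: enter icing conditions ->
    protection succeeds or fails -> accident or safe continuation.

    Branch probabilities are hypothetical, not from the record.
    """
    random.seed(seed)
    accidents = 0
    for _ in range(trials):
        if random.random() < icing_rate:        # icing event occurs
            if random.random() > deice_success: # protection fails
                accidents += 1
    return accidents / trials

low = run_scenario(0.1, 0.95)    # benign value of the sensitive variable
high = run_scenario(0.4, 0.95)   # degraded value: estimated risk rises
print(low, high)
```

Sweeping `icing_rate` and observing how strongly the accident frequency responds is a crude version of identifying a risk sensitive variable.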

  7. A general sensitivity theory for simulations of nonlinear systems

    International Nuclear Information System (INIS)

    Kenton, M.A.

    1981-01-01

    A general sensitivity theory is developed for nonlinear lumped-parameter system simulations. The point of departure is general perturbation theory, which has long been used for linear systems in nuclear engineering and reactor physics. The theory allows the sensitivity of particular figures-of-merit of the system behavior to be calculated with respect to any parameter. An explicit procedure is derived for applying the theory to physical systems undergoing sudden events (e.g., reactor scrams, tank ruptures). A related problem, treating figures-of-merit defined as functions of extremal values of system variables occurring at sudden events, is handled by the same procedure. The general calculational scheme for applying the theory to numerical codes is discussed. It is shown that codes which use pre-packaged implicit integration subroutines can be augmented to include sensitivity theory: a companion set of subroutines to solve the sensitivity problem is listed. This combined system analysis code is applied to a simple model for loss of post-accident heat removal in a liquid metal-cooled fast breeder reactor. The uses of the theory for answering more general sensitivity questions are discussed. One application of the theory is to systematically determine whether specific physical processes in a model contribute significantly to the figures-of-merit. Another application of the theory is for selecting parameter values which enable a model to match experimentally observed behavior.

  8. Assessment of long-term knowledge retention following single-day simulation training for uncommon but critical obstetrical events

    Science.gov (United States)

    Vadnais, Mary A.; Dodge, Laura E.; Awtrey, Christopher S.; Ricciotti, Hope A.; Golen, Toni H.; Hacker, Michele R.

    2013-01-01

    Objective The objectives were to determine (i) whether simulation training results in short-term and long-term improvement in the management of uncommon but critical obstetrical events and (ii) whether there is additional benefit from annual exposure to the workshop. Methods Physicians completed a pretest to measure knowledge of and confidence in the management of eclampsia, shoulder dystocia, postpartum hemorrhage and vacuum-assisted vaginal delivery. They then attended a simulation workshop and immediately completed a posttest. Residents completed the same posttests 4 and 12 months later, and attending physicians completed the posttest at 12 months. Physicians participated in the same simulation workshop 1 year later and then completed a final posttest. Scores were compared using paired t-tests. Results Physicians demonstrated improved knowledge and comfort immediately after simulation. Residents maintained this improvement at 1 year. Attending physicians remained more comfortable managing these scenarios up to 1 year later; however, knowledge retention diminished with time. Repeating the simulation after 1 year brought additional improvement to physicians. Conclusion Simulation training can result in short-term improvement, and contribute to long-term improvement, in objective measures of knowledge and comfort level in managing uncommon but critical obstetrical events. Repeat exposure to simulation training after 1 year can yield additional benefits. PMID:22191668

  9. CDC WONDER: Vaccine Adverse Event Reporting System (VAERS)

    Data.gov (United States)

    U.S. Department of Health & Human Services — The Vaccine Adverse Event Reporting System (VAERS) online database on CDC WONDER provides counts and percentages of adverse event case reports after vaccination, by...

  10. Analysis, Simulation, and Verification of Knowledge-Based, Rule-Based, and Expert Systems

    Science.gov (United States)

    Hinchey, Mike; Rash, James; Erickson, John; Gracanin, Denis; Rouff, Chris

    2010-01-01

    Mathematically sound techniques are used to view a knowledge-based system (KBS) as a set of processes executing in parallel and being enabled in response to specific rules being fired. The set of processes can be manipulated, examined, analyzed, and used in a simulation. The tool that embodies this technology may warn developers of errors in their rules, but may also highlight rules (or sets of rules) in the system that are underspecified (or overspecified) and need to be corrected for the KBS to operate as intended. The rules embodied in a KBS specify the allowed situations, events, and/or results of the system they describe. In that sense, they provide a very abstract specification of a system. The system is implemented through the combination of the system specification together with an appropriate inference engine, independent of the algorithm used in that inference engine. Viewing the rule base as a major component of the specification, and choosing an appropriate specification notation to represent it, reveals how additional power can be derived from an approach to the knowledge-based system that involves analysis, simulation, and verification. This innovative approach requires no special knowledge of the rules, and allows a general approach where standardized analysis, verification, simulation, and model checking techniques can be applied to the KBS.

  11. Event-based computer simulation model of aspect-type experiments strictly satisfying Einstein's locality conditions

    NARCIS (Netherlands)

    De Raedt, Hans; De Raedt, Koen; Michielsen, Kristel; Keimpema, Koenraad; Miyashita, Seiji

    2007-01-01

    Inspired by Einstein-Podolsky-Rosen-Bohm experiments with photons, we construct an event-based simulation model in which every essential element in the ideal experiment has a counterpart. The model satisfies Einstein's criterion of local causality and does not rely on concepts of quantum and

  12. Combining Latin Hypercube Designs and Discrete Event Simulation in a Study of a Surgical Unit

    DEFF Research Database (Denmark)

    Dehlendorff, Christian; Andersen, Klaus Kaae; Kulahci, Murat

    Summary form given only: In this article, experiments on a discrete event simulation model for an orthopedic surgery unit are considered. The model is developed as part of a larger project in co-operation with Copenhagen University Hospital in Gentofte. Experiments on the model are performed by using...... Latin hypercube designs. The parameter set consists of system settings, such as the use of a preparation room for sedation and the number of operating rooms, as well as management decisions, such as staffing, the size of the recovery room and the number of simultaneously active operating rooms. Sensitivity analysis...... and optimization combined with meta-modeling are employed in the search for optimal setups. The primary objective in this article is to minimize the time patients spend in the system. The overall long-term objective for the orthopedic surgery unit is to minimize time lost during the pre- and post-operation...
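The Latin hypercube designs mentioned above stratify each parameter's range so that every stratum is sampled exactly once per parameter, which covers the design space far more evenly than plain random sampling. A minimal sketch (the function name and the [0, 1) scaling are illustrative, not from the article):

```python
import random

def latin_hypercube(n_samples, n_params, rng=random.Random(0)):
    """Return an n_samples x n_params Latin hypercube sample in [0, 1):
    each parameter's range is split into n_samples equal strata, and each
    stratum is used exactly once per parameter."""
    columns = []
    for _ in range(n_params):
        strata = list(range(n_samples))
        rng.shuffle(strata)                     # random stratum order
        # one uniform draw inside each stratum
        columns.append([(s + rng.random()) / n_samples for s in strata])
    # transpose columns into per-sample rows
    return [[columns[j][i] for j in range(n_params)]
            for i in range(n_samples)]
```

Each row would then be mapped onto the actual parameter ranges (staffing levels, room counts, etc.) before running the simulation model.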

  13. Proceedings from Specialists Meeting on human performance in operational events

    International Nuclear Information System (INIS)

    1998-01-01

    This conference on human performance in operational events is composed of 34 papers, grouped in 11 sessions. After an invited contribution on the human factor in the nuclear industry, the sessions are: session 1 (Operational events: Human performance in operational events - how to improve it?, Human performance research strategies for human performance, The development of a model of control room operator cognition), session 2 (Operational response: A study of the recovery from 120 events, Empirical study of the influence of organizational and procedural characteristics on team performance in the emergency situation using plant simulators, Cognitive skills and nuclear power plant operational decision making), session 3 (PSA for Probabilistic Safety Analysis: A sensitivity study of human errors in optimizing surveillance test interval (STI) and allowed outage time (AOT) of standby safety system, Analysis of Parks nuclear power plant personnel activity during safety related event sequences, An EDF project to update the Probabilistic Human Reliability Assessment PHRA methodology), session 4 (modelling with ATHEANA: Atheana, a technique for human error analysis, an overview of its methodological basis, Common elements on operational events across technologies, Results of nuclear power plant application of new technique for human error analysis), session 5 (Regulatory practice: US.NRC Research and analysis activities concerning human reliability assessment and human performance evaluation, Introduction of simulator-based examinations and its effects on the nuclear industry, Regulatory monitoring of human performance in PWR operation in France), session 6 (Simulation: Human performance in Bavarian nuclear power plant as a preventive element, Human performance event database, Crew situation awareness, diagnoses and performance in simulated nuclear power plant process disturbances), session 7 (Operator aids: Development of a plant navigation system, Operation system

  14. Numerical Simulations of an Inversion Fog Event in the Salt Lake Valley during the MATERHORN-Fog Field Campaign

    Science.gov (United States)

    Chachere, Catherine N.; Pu, Zhaoxia

    2018-01-01

    An advanced research version of the Weather Research and Forecasting (WRF) Model is employed to simulate a wintertime inversion fog event in the Salt Lake Valley during the Mountain Terrain Atmospheric Modeling and Observations Program (MATERHORN) field campaign during January 2015. Simulation results are compared to observations obtained from the field program. The sensitivity of numerical simulations to available cloud microphysical (CM), planetary boundary layer (PBL), radiation, and land surface models (LSMs) is evaluated. The influence of differing visibility algorithms and initialization times on simulation results is also examined. Results indicate that the numerical simulations of the fog event are sensitive to the choice of CM, PBL, radiation, and LSM as well as the visibility algorithm and initialization time. Although the majority of experiments accurately captured the synoptic setup environment, errors were found in most experiments within the boundary layer, specifically a 3° warm bias in simulated surface temperatures compared to observations. Accurate representation of surface and boundary layer variables is vital for correctly predicting fog in the numerical model.

  15. Event streaming in the online system

    CERN Document Server

    Klous, S; The ATLAS collaboration

    2010-01-01

    The Large Hadron Collider (LHC), currently in operation at CERN in Geneva, is a circular 27-kilometer-circumference machine, accelerating bunches of protons in opposite directions. The bunches will cross at four different interaction points with a bunch-crossing frequency of 40 MHz. ATLAS, the largest LHC experiment, registers the signals induced by particles traversing the detector components on each bunch crossing. When this happens a total of around 1.5 MB of data are collected. This results in a data rate of around 60 TB/s flowing out of the detector. Note that the available event storage space is limited to about 6 PB per year. With an operational period of about 20 million seconds per year, this requires a data reduction factor of 200,000 in the trigger and data acquisition (TDAQ) system. Events included in the recording rate budget are already subdivided and organized by ATLAS during data acquisition. So, the TDAQ system does not only take care of data reduction, but also organizes the collected events. ...
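The quoted reduction factor follows from back-of-the-envelope arithmetic: 1.5 MB per crossing at 40 MHz gives 60 TB/s off the detector, while 6 PB spread over 20 million operational seconds allows a sustained rate of only 300 MB/s. The check, using the abstract's own round numbers:

```python
# Back-of-the-envelope check of the ATLAS TDAQ data reduction factor.
event_size = 1.5e6           # bytes per bunch crossing (~1.5 MB)
crossing_rate = 40e6         # bunch crossings per second (40 MHz)
raw_rate = event_size * crossing_rate            # bytes/s off the detector

storage_per_year = 6e15      # ~6 PB of event storage per year
seconds_per_year = 20e6      # ~20 million operational seconds per year
budget_rate = storage_per_year / seconds_per_year  # sustainable bytes/s

reduction = raw_rate / budget_rate               # required rejection factor
```

`raw_rate` comes out at 6e13 bytes/s (60 TB/s) against a 3e8 bytes/s budget, hence the factor of 200,000.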

  16. Simulator configuration management system

    International Nuclear Information System (INIS)

    Faulent, J.; Brooks, J.G.

    1990-01-01

    The proposed revisions to ANS 3.5-1985 (Section 5) require Utilities to establish a simulator Configuration Management System (CMS). The proposed CMS must be capable of: Establishing and maintaining a simulator design database. Identifying and documenting differences between the simulator and its reference plant. Tracking the resolution of identified differences. Recording data to support simulator certification, testing and maintenance. This paper discusses a CMS capable of meeting the proposed requirements contained in ANS 3.5. The system will utilize a personal computer and relational database management software to construct a simulator design database. The database will contain records of all reference nuclear plant data used in designing the simulator, as well as records identifying all the software, hardware and documentation making up the simulator. Using the relational powers of the database management software, reports will be generated identifying the impact of reference plant changes on the operation of the simulator. These reports can then be evaluated in terms of training needs to determine if changes are required for the simulator. If a change is authorized, the CMS will track the change through to its resolution and then incorporate the change into the simulator design database.

  17. Charge-dependent correlations from event-by-event anomalous hydrodynamics

    Energy Technology Data Exchange (ETDEWEB)

    Hirono, Yuji [Department of Physics and Astronomy, Stony Brook University, Stony Brook, NY 11794-3800 (United States); Hirano, Tetsufumi [Department of Physics, Sophia University, Tokyo 102-8554 (Japan); Kharzeev, Dmitri E. [Department of Physics and Astronomy, Stony Brook University, Stony Brook, NY 11794-3800 (United States); Department of Physics and RIKEN-BNL Research Center, Brookhaven National Laboratory, Upton, NY 11973-5000 (United States)

    2016-12-15

    We report on our recent attempt at quantitative modeling of the Chiral Magnetic Effect (CME) in heavy-ion collisions. We perform 3+1 dimensional anomalous hydrodynamic simulations on an event-by-event basis, with constitutive equations that contain the anomaly-induced effects. We also develop a model of the initial condition for the axial charge density that captures the statistical nature of random chirality imbalances created by the color flux tubes. Based on the event-by-event hydrodynamic simulations for hundreds of thousands of collisions, we calculate the correlation functions that are measured in experiments, and discuss how the anomalous transport affects these observables.
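Charge-dependent correlations in this context are typically quantified with the two-particle correlator gamma = ⟨cos(phi_a + phi_b - 2·Psi_RP)⟩, averaged over particle pairs relative to the reaction plane. The abstract does not spell out its exact observables, so the following is a generic sketch of that standard correlator (for same-charge correlations one would additionally exclude self-pairs, which is omitted here for brevity):

```python
import math

def gamma_correlator(phis_a, phis_b, psi_rp=0.0):
    """Average cos(phi_a + phi_b - 2*Psi_RP) over all pairs drawn from two
    groups of particle azimuthal angles (e.g. two charge samples)."""
    total, n = 0.0, 0
    for a in phis_a:
        for b in phis_b:
            total += math.cos(a + b - 2.0 * psi_rp)
            n += 1
    return total / n
```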

  18. Catchment & sewer network simulation model to benchmark control strategies within urban wastewater systems

    DEFF Research Database (Denmark)

    Saagi, Ramesh; Flores Alsina, Xavier; Fu, Guangtao

    2016-01-01

    This paper aims at developing a benchmark simulation model to evaluate control strategies for the urban catchment and sewer network. Various modules describing wastewater generation in the catchment, its subsequent transport and storage in the sewer system are presented. Global/local overflow based...... evaluation criteria describing the cumulative and acute effects are presented. Simulation results show that the proposed set of models is capable of generating daily, weekly and seasonal variations as well as describing the effect of rain events on wastewater characteristics. Two sets of case studies...

  19. Computer Simulations, Disclosure and Duty of Care

    Directory of Open Access Journals (Sweden)

    John Barlow

    2006-05-01

    Full Text Available Computer simulations provide cost-effective methods for manipulating and modeling 'reality'. However they are not real. They are imitations of a system or event, real or fabricated, and as such mimic, duplicate or represent that system or event. The degree to which a computer simulation aligns with and reproduces the 'reality' of the system or event it attempts to mimic or duplicate depends upon many factors including the efficiency of the simulation algorithm, the processing power of the computer hardware used to run the simulation model, and the expertise, assumptions and prejudices of those concerned with designing, implementing and interpreting the simulation output. Computer simulations in particular are increasingly replacing physical experimentation in many disciplines, and as a consequence, are used to underpin quite significant decision-making which may impact on 'innocent' third parties. In this context, this paper examines two interrelated issues: Firstly, how much and what kind of information should a simulation builder be required to disclose to potential users of the simulation? Secondly, what are the implications for a decision-maker who acts on the basis of their interpretation of a simulation output without any reference to its veracity, which may in turn compromise the safety of other parties?

  20. A discrete event modelling framework for simulation of long-term outcomes of sequential treatment strategies for ankylosing spondylitis

    NARCIS (Netherlands)

    A. Tran-Duy (An); A. Boonen (Annelies); M.A.F.J. van de Laar (Mart); A. Franke (Andre); J.L. Severens (Hans)

    2011-01-01

    textabstractObjective: To develop a modelling framework which can simulate long-term quality of life, societal costs and cost-effectiveness as affected by sequential drug treatment strategies for ankylosing spondylitis (AS). Methods: Discrete event simulation paradigm was selected for model

  1. A discrete event modelling framework for simulation of long-term outcomes of sequential treatment strategies for ankylosing spondylitis

    NARCIS (Netherlands)

    Tran-Duy, A.; Boonen, A.; Laar, M.A.F.J.; Franke, A.C.; Severens, J.L.

    2011-01-01

    Objective To develop a modelling framework which can simulate long-term quality of life, societal costs and cost-effectiveness as affected by sequential drug treatment strategies for ankylosing spondylitis (AS). Methods Discrete event simulation paradigm was selected for model development. Drug

  2. Event-Based Corpuscular Model for Quantum Optics Experiments

    NARCIS (Netherlands)

    Michielsen, K.; Jin, F.; Raedt, H. De

    A corpuscular simulation model of optical phenomena that does not require the knowledge of the solution of a wave equation of the whole system and reproduces the results of Maxwell's theory by generating detection events one-by-one is presented. The event-based corpuscular model is shown to give a

  3. Performance and system flexibility of the CDF Hardware Event Builder

    Energy Technology Data Exchange (ETDEWEB)

    Shaw, T.M.; Schurecht, K. (Fermi National Accelerator Lab., Batavia, IL (United States)); Sinervo, P. (Toronto Univ., ON (Canada). Dept. of Physics)

    1991-11-01

    The CDF Hardware Event Builder (1) is a flexible system which is built from a combination of three different 68020-based single width Fastbus modules. The system may contain as few as three boards or as many as fifteen, depending on the specific application. Functionally, the boards receive a command to read out the raw event data from a set of Fastbus based data buffers ("scanners"), reformat the data and then write the data to a Level 3 trigger/processing farm which will decide to throw the event away or to write it to tape. The data acquisition system at CDF will utilize two nine-board systems which will allow an event rate of up to 35 Hz into the Level 3 trigger. This paper will present detailed performance factors, system and individual board architecture, and possible system configurations.

  4. Discrete event simulations for glycolysis pathway and energy balance

    NARCIS (Netherlands)

    Zwieten, van D.A.J.; Rooda, J.E.; Armbruster, H.D.; Nagy, J.D.

    2010-01-01

    In this report, the biological network of the glycolysis pathway has been modeled using discrete event models (DEMs). The most important feature of this pathway is that energy is released. To create a stable steady-state system an energy molecule equilibrating enzyme and metabolic reactions have

  5. Analysis of an in-line diesel production system through event driven simulation; Avaliacao do esquema de producao em linha de diesel atraves da simulacao por eventos discretos

    Energy Technology Data Exchange (ETDEWEB)

    Monteiro, Gilsa P.; Naegeli, Guilherme S.T. [PETROBRAS S.A., Rio de Janeiro, RJ (Brazil). Centro de Pesquisas (CENPES); Santos, Nilza M.Q. [PETROBRAS S.A., Mataripe, Salvador, BA (Brazil). Refinaria Landulfo Alves (RLAM); Netto, Joaquim D.A. [DNV Energy Solutions, Porto Alegre, RS (Brazil)

    2008-07-01

    The interactions between refining processes (such as distillation, hydrotreatment, etc.) and typical transfer and storage operations (mixtures, decantation, storage, etc.) make the production systems for petroleum derivatives in refineries highly complex. These production systems are characterized by many aspects, such as blending rules, feed composition, petroleum campaigns, storage tank limitations, and interactions between continuous and batch processes. Besides these operational aspects, the reliability of equipment and systems strongly influences how well production goals are met and petroleum derivatives conform to quality specifications. In pursuit of higher economic efficiency, and in order to give refineries guidance on optimizing the resources of their production systems, it is important to develop a methodology that can be applied from the design phase onward to identify system limitations and improvement opportunities, considering all the aspects raised. With this objective, this article presents the main points of an evaluation conducted during the conceptual design of a diesel in-line blending production system proposed by a Brazilian refinery, detailing the main steps of the methodology developed through this analysis, based on discrete event simulation. (author)

  6. Discrete event simulation modelling of patient service management with Arena

    Science.gov (United States)

    Guseva, Elena; Varfolomeyeva, Tatyana; Efimova, Irina; Movchan, Irina

    2018-05-01

    This paper describes a simulation modeling methodology aimed at solving practical problems in the research and analysis of complex systems. The paper gives a review of simulation platforms and an example of simulation model development with Arena 15.0 (Rockwell Automation). The provided example of a simulation model for patient service management helps to evaluate the workload of the clinic doctors; determine the number of general practitioners, surgeons, traumatologists and other specialized doctors required for patient service; and develop recommendations to ensure timely delivery of medical care and improve the efficiency of the clinic operation.
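A minimal discrete-event sketch conveys the kind of queueing model such studies build in Arena. This is not the paper's Arena model: the exponential arrival/service assumptions, parameter names, and single-queue structure are illustrative only.

```python
import heapq
import random

def simulate_clinic(n_doctors, n_patients, mean_interarrival, mean_service,
                    rng=random.Random(42)):
    """Minimal discrete-event sketch of a clinic: patients arrive at random,
    wait for the first free doctor, and are served; returns the mean wait."""
    # generate arrival times from an exponential interarrival process
    arrivals, t = [], 0.0
    for _ in range(n_patients):
        t += rng.expovariate(1.0 / mean_interarrival)
        arrivals.append(t)
    free_at = [0.0] * n_doctors        # time at which each doctor frees up
    heapq.heapify(free_at)
    waits = []
    for arrive in arrivals:
        doctor_free = heapq.heappop(free_at)   # earliest-available doctor
        start = max(arrive, doctor_free)
        waits.append(start - arrive)
        heapq.heappush(free_at, start + rng.expovariate(1.0 / mean_service))
    return sum(waits) / len(waits)
```

Sweeping `n_doctors` and comparing the resulting mean waits is the simplest version of the staffing question the paper addresses.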

  7. Developing future precipitation events from historic events: An Amsterdam case study.

    Science.gov (United States)

    Manola, Iris; van den Hurk, Bart; de Moel, Hans; Aerts, Jeroen

    2016-04-01

    Due to climate change, the frequency and intensity of extreme precipitation events are expected to increase. It is therefore of high importance to develop climate change scenarios tailored towards the local and regional needs of policy makers, in order to develop efficient adaptation strategies that reduce the risks from extreme weather events. Current approaches to tailoring climate scenarios are often not well adopted in hazard management, since average changes in climate are not a main concern of policy makers, and tailoring climate scenarios to simulate future extremes can be complex. Therefore, a new concept has been introduced recently that uses known historic extreme events as a basis and modifies the observed data for these events so that the outcome shows how the same event would unfold in a warmer climate. This concept is known as 'Future Weather', and appeals to the experience of stakeholders and users. This research presents a novel method of projecting a future extreme precipitation event based on a historic event. The selected precipitation event took place over the broader area of Amsterdam, the Netherlands, in the summer of 2014 and resulted in blocked highways, disruption of air transportation, and flooded buildings and public facilities. An analysis of rain monitoring stations showed that an event of such intensity has a 5-to-15-year return period. The method of projecting a future event follows a non-linear delta transformation that is applied directly to the observed event, assuming a warmer climate, to produce an "up-scaled" future precipitation event. The delta transformation is based on the observed behaviour of precipitation intensity as a function of the dew point temperature during summers. The outcome is then compared to a benchmark method using the HARMONIE numerical weather prediction model, where the boundary conditions of the event from the Ensemble Prediction System of ECMWF (ENS) are perturbed to indicate a warmer climate. The two...
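The delta-transformation idea, scaling every observed intensity by a temperature-dependent factor, can be sketched as follows. The 7 %/K Clausius-Clapeyron-style rate used here is a placeholder assumption; the study derives its actual (non-linear) scaling from the observed intensity/dew-point relation.

```python
def delta_transform(observed_mm_per_hr, warming_K, scaling_per_K=0.07):
    """Apply a simple delta change to an observed rainfall series: every
    intensity is scaled by (1 + rate)**warming_K. The 7 %/K rate is a
    generic Clausius-Clapeyron-style placeholder, not the paper's fitted
    dew-point-dependent scaling."""
    factor = (1.0 + scaling_per_K) ** warming_K
    return [x * factor for x in observed_mm_per_hr]
```

Applied to the 2014 Amsterdam series, such a transform yields an "up-scaled" version of the same storm under an assumed warming, which is then compared against the perturbed-boundary HARMONIE benchmark.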

  8. A Discrete-Event Simulation Model for Evaluating Air Force Reusable Military Launch Vehicle Post-Landing Operations

    National Research Council Canada - National Science Library

    Martindale, Michael

    2006-01-01

    The purpose of this research was to develop a discrete-event computer simulation model of the post-landing vehicle recovery operations to allow the Air Force Research Laboratory, Air Vehicles Directorate...

  9. Wavelet spectra of JACEE events

    International Nuclear Information System (INIS)

    Suzuki, Naomichi; Biyajima, Minoru; Ohsawa, Akinori.

    1995-01-01

    Pseudo-rapidity distributions of two high-multiplicity events, Ca-C and Si-AgBr, observed by the JACEE are analyzed by a wavelet transform. Wavelet spectra of those events are calculated and compared with simulation calculations. The wavelet spectrum of the Ca-C event somewhat resembles one simulated with uniform random numbers. That of the Si-AgBr event, however, is not reproduced by simulation calculations with Poisson random numbers, uniform random numbers, or a p-model. (author)
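A wavelet spectrum of the kind used here assigns an energy to each scale of the transform of the binned distribution. A minimal Haar-wavelet version (the abstract does not name the mother wavelet, so Haar is an assumption made for illustration):

```python
def haar_wavelet_spectrum(signal):
    """Energy of the Haar detail coefficients at each scale, for a signal
    whose length is a power of two; finest scale first. A stand-in for the
    wavelet spectra used to analyse binned pseudo-rapidity distributions."""
    n = len(signal)
    assert n > 0 and (n & (n - 1)) == 0, "length must be 2**k"
    approx, spectrum = list(signal), []
    while len(approx) > 1:
        pairs = [(approx[2 * i], approx[2 * i + 1])
                 for i in range(len(approx) // 2)]
        detail = [(a - b) / 2 ** 0.5 for a, b in pairs]   # differences
        approx = [(a + b) / 2 ** 0.5 for a, b in pairs]   # averages
        spectrum.append(sum(d * d for d in detail))       # energy per scale
    return spectrum
```

A flat (uniform-like) distribution puts little energy at every scale, whereas clustered structure concentrates energy at particular scales, which is the feature such comparisons exploit.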

  10. Benchmarking Further Single Board Computers for Building a Mini Supercomputer for Simulation of Telecommunication Systems

    Directory of Open Access Journals (Sweden)

    Gábor Lencse

    2016-01-01

    Full Text Available Parallel Discrete Event Simulation (PDES) with the conservative synchronization method can be efficiently used for the performance analysis of telecommunication systems because of their good lookahead properties. For PDES, a cost effective execution platform may be built by using single board computers (SBCs), which offer relatively high computation capacity compared to their price or power consumption and especially to the space they take up. A benchmarking method is proposed and its operation is demonstrated by benchmarking ten different SBCs, namely Banana Pi, Beaglebone Black, Cubieboard2, Odroid-C1+, Odroid-U3+, Odroid-XU3 Lite, Orange Pi Plus, Radxa Rock Lite, Raspberry Pi Model B+, and Raspberry Pi 2 Model B+. Their benchmarking results are compared to find out which one should be used for building a mini supercomputer for parallel discrete-event simulation of telecommunication systems. The SBCs are also used to build a heterogeneous cluster and the performance of the cluster is tested, too.
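The conservative synchronization method referred to above lets each logical process execute only events that are provably safe given its neighbors' clocks and the per-link lookahead. A minimal sketch of the safe-time rule in the Chandy-Misra style (the function names and data layout are illustrative):

```python
import heapq

def safe_time(neighbor_clocks, lookaheads):
    """A logical process may safely execute any event with a timestamp
    below min(neighbor clock + that link's lookahead) over all inputs."""
    return min(c + la for c, la in zip(neighbor_clocks, lookaheads))

def execute_safe_events(event_queue, neighbor_clocks, lookaheads):
    """Pop and return all locally queued (timestamp, event) pairs below the
    safe-time bound; later events must wait for neighbor clock updates."""
    bound = safe_time(neighbor_clocks, lookaheads)
    done = []
    while event_queue and event_queue[0][0] < bound:
        done.append(heapq.heappop(event_queue))
    return done
```

Good lookahead (as in telecommunication models, where link delays bound how soon a message can arrive) raises the safe-time bound and keeps all processors busy, which is why PDES maps well onto cheap SBC clusters.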

  11. A Simple Ensemble Simulation Technique for Assessment of Future Variations in Specific High-Impact Weather Events

    Science.gov (United States)

    Taniguchi, Kenji

    2018-04-01

    To investigate future variations in high-impact weather events, numerous samples are required. For detailed assessment in a specific region, a high spatial resolution is also required. A simple ensemble simulation technique is proposed in this paper. In the proposed technique, new ensemble members were generated from one basic state vector and two perturbation vectors, which were obtained by lagged average forecasting simulations. Sensitivity experiments with different numbers of ensemble members, different simulation lengths, and different perturbation magnitudes were performed. Experimental application to a global warming study was also implemented for a typhoon event. Ensemble-mean results and ensemble spreads of total precipitation and atmospheric conditions showed similar characteristics across the sensitivity experiments. The frequencies of the maximum total and hourly precipitation also showed similar distributions. These results indicate the robustness of the proposed technique. On the other hand, considerable ensemble spread was found in each ensemble experiment. In addition, the results of the application to a global warming study showed possible variations in the future. These results indicate that the proposed technique is useful for investigating various meteorological phenomena and the impacts of global warming. The results of the ensemble simulations also enable the stochastic evaluation of differences in high-impact weather events. In addition, the impacts of a spectral nudging technique were also examined. The tracks of the typhoon were quite different between cases with and without spectral nudging; however, the ranges of the tracks among ensemble members were comparable. This indicates that spectral nudging does not necessarily suppress ensemble spread.
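The member-generation step, one basic state vector plus two perturbation vectors, amounts to forming linear combinations of the three. A sketch under that reading (the coefficient pairs below are illustrative; the paper obtains its perturbation vectors from lagged average forecasting simulations):

```python
def generate_members(base, p1, p2, coefficients):
    """Build ensemble initial states as base + a*p1 + b*p2 for each (a, b)
    coefficient pair. `base` is the basic state vector; `p1`, `p2` are the
    two perturbation vectors; the (a, b) choices set the spread."""
    members = []
    for a, b in coefficients:
        members.append([x + a * u + b * v
                        for x, u, v in zip(base, p1, p2)])
    return members
```

Varying the number and magnitude of the `(a, b)` pairs corresponds to the paper's sensitivity experiments on ensemble size and perturbation magnitude.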

  12. GRMHD Simulations of Visibility Amplitude Variability for Event Horizon Telescope Images of Sgr A*

    Science.gov (United States)

    Medeiros, Lia; Chan, Chi-kwan; Özel, Feryal; Psaltis, Dimitrios; Kim, Junhan; Marrone, Daniel P.; Sądowski, Aleksander

    2018-04-01

    The Event Horizon Telescope will generate horizon scale images of the black hole in the center of the Milky Way, Sgr A*. Image reconstruction using interferometric visibilities rests on the assumption of a stationary image. We explore the limitations of this assumption using high-cadence disk- and jet-dominated GRMHD simulations of Sgr A*. We also employ analytic models that capture the basic characteristics of the images to understand the origin of the variability in the simulated visibility amplitudes. We find that, in all simulations, the visibility amplitudes for baselines oriented parallel and perpendicular to the spin axis of the black hole follow general trends that do not depend strongly on accretion-flow properties. This suggests that fitting Event Horizon Telescope observations with simple geometric models may lead to a reasonably accurate determination of the orientation of the black hole on the plane of the sky. However, in the disk-dominated models, the locations and depths of the minima in the visibility amplitudes are highly variable and are not related simply to the size of the black hole shadow. This suggests that using time-independent models to infer additional black hole parameters, such as the shadow size or the spin magnitude, will be severely affected by the variability of the accretion flow.

  13. The neural basis of event simulation: an FMRI study.

    Directory of Open Access Journals (Sweden)

    Yukihito Yomogida

    Full Text Available Event simulation (ES) is the situational inference process in which perceived event features such as objects, agents, and actions are associated in the brain to represent the whole situation. ES provides a common basis for various cognitive processes, such as perceptual prediction, situational understanding/prediction, and social cognition (such as mentalizing/trait inference). Here, functional magnetic resonance imaging was used to elucidate the neural substrates underlying important subdivisions within ES. First, the study investigated whether ES depends on different neural substrates when it is conducted explicitly and implicitly. Second, the existence of neural substrates specific to the future-prediction component of ES was assessed. Subjects were shown contextually related object pictures implying a situation and performed several picture-word-matching tasks. By varying task goals, subjects were made to infer the implied situation implicitly/explicitly or predict the future consequence of that situation. The results indicate that, whereas implicit ES activated the lateral prefrontal cortex and medial/lateral parietal cortex, explicit ES activated the medial prefrontal cortex, posterior cingulate cortex, and medial/lateral temporal cortex. Additionally, the left temporoparietal junction plays an important role in the future-prediction component of ES. These findings enrich our understanding of the neural substrates of the implicit/explicit/predictive aspects of ES-related cognitive processes.

  14. Building a scalable event-level metadata service for ATLAS

    International Nuclear Information System (INIS)

    Cranshaw, J; Malon, D; Goosens, L; Viegas, F T A; McGlone, H

    2008-01-01

    The ATLAS TAG Database is a multi-terabyte event-level metadata selection system, intended to allow discovery, selection of and navigation to events of interest to an analysis. The TAG Database encompasses file- and relational-database-resident event-level metadata, distributed across all ATLAS Tiers. An Oracle-hosted global TAG relational database, containing all ATLAS events, will exist at Tier 0. Implementing a system that is both performant and manageable at this scale is a challenge. A 1 TB relational TAG Database has been deployed at Tier 0 using simulated tag data. The database contains one billion events, each described by two hundred event metadata attributes, and is currently undergoing extensive testing in terms of queries, population and manageability. These 1 TB tests aim to demonstrate and optimise the performance and scalability of an Oracle TAG Database on a global scale. Partitioning and indexing strategies are crucial to well-performing queries and manageability of the database and have implications for database population and distribution, so these are investigated. Physics query patterns are anticipated, but a crucial feature of the system must be to support a broad range of queries across all attributes. Concurrently, event tags from ATLAS Computing System Commissioning distributed simulations are accumulated in an Oracle-hosted database at CERN, providing an event-level selection service valuable for user experience and gathering information about physics query patterns. In this paper we describe the status of the global TAG relational database scalability work and highlight areas of future direction.

  15. Numerical simulation of a winter hailstorm event over Delhi, India on 17 January 2013

    Science.gov (United States)

    Chevuturi, A.; Dimri, A. P.; Gunturu, U. B.

    2014-09-01

    This study analyzes the cause of a rare winter hailstorm over New Delhi/NCR (National Capital Region), India. Without elevated surface temperatures or low-level moisture incursion, winter conditions cannot generate the deep convection required to sustain a hailstorm. Consequently, the NCR sees very few hailstorms in December-January-February, making winter hail formation a question of interest. For this study, the recent winter hailstorm event of 17 January 2013 (16:00-18:00 UTC) over the NCR is investigated. The storm is simulated using the Weather Research and Forecasting (WRF) model with the Goddard Cumulus Ensemble (GCE) microphysics scheme with two different options, hail or graupel. The aim of the study is to understand and describe the cause of the hailstorm event over the NCR through a comparative analysis of the two options of GCE microphysics. On evaluating the model simulations, it is observed that the hail option reproduces the TRMM-observed precipitation intensity better than the graupel option and is able to simulate hail precipitation. Using the model output simulated with the hail option, a detailed investigation of the dynamics of the hailstorm is performed. The analysis based on the numerical simulation suggests that deep instability in the atmospheric column led to the formation of hailstones, as the cloud formation reached up to the glaciated zone, promoting ice nucleation. In winter, such instability conditions rarely form, because low-level available potential energy and moisture incursion must coincide with upper-level baroclinic instability due to the presence of a western disturbance (WD). Such rare positioning is found to lower the tropopause and increase the temperature gradient, leading to winter hailstorm formation.

  16. Simulation of the catastrophic floods caused by extreme rainfall events - Uh River basin case study

    OpenAIRE

    Pekárová, Pavla; Halmová, Dana; Mitková, Veronika

    2005-01-01

    The extreme rainfall events in Central and Eastern Europe in August 2002 raised the question of how other basins would respond to such rainfall situations. Such theorising helps us arrange in advance the activities needed in a basin to reduce the consequences of an assumed disaster. The aim of the study is to determine the reaction of the Uh River basin (Slovakia, Ukraine) to the simulated catastrophic rainfall events of August 2002. Two precipitation scenarios, sc1 and sc2, were created. Th...

  17. Multi-spacecraft observations and transport simulations of solar energetic particles for the May 17th 2012 event

    Science.gov (United States)

    Battarbee, M.; Guo, J.; Dalla, S.; Wimmer-Schweingruber, R.; Swalwell, B.; Lawrence, D. J.

    2018-05-01

    Context. The injection, propagation and arrival of solar energetic particles (SEPs) during eruptive solar events is an important current research topic in heliospheric physics. During the largest solar events, particles may have energies up to a few GeV and sometimes even trigger ground-level enhancements (GLEs) at Earth. These large SEP events are best investigated through multi-spacecraft observations. Aims: We aim to study the first GLE event of solar cycle 24, on 17 May 2012, using data from multiple spacecraft (SOHO, GOES, MSL, STEREO-A, STEREO-B and MESSENGER). These spacecraft are located throughout the inner heliosphere, at heliocentric distances between 0.34 and 1.5 astronomical units (au), covering nearly the whole range of heliospheric longitudes. Methods: We present and investigate sub-GeV proton time profiles for the event in several energy channels, obtained from different instruments aboard the above spacecraft. We investigate issues caused by magnetic connectivity and present results of three-dimensional SEP propagation simulations. We gather virtual time profiles and perform qualitative and quantitative comparisons with observations, assessing longitudinal injection and transport effects as well as peak intensities. Results: We distinguish different time-profile shapes for well-connected and weakly connected observers, and find our onset-time analysis to agree with this distinction. At select observers, we identify an additional low-energy component of energetic storm particles (ESPs). Using well-connected observers for normalisation, our simulations are able to accurately recreate both time-profile shapes and peak intensities at multiple observer locations. Conclusions: This synergistic approach, combining numerical modelling with multi-spacecraft observations, is crucial for understanding the propagation of SEPs within the interplanetary magnetic field. Our analysis provides valuable proof of the ability to simulate SEP propagation.

  18. CESAS: Computerized event sequence abstracting system outlines and applications

    International Nuclear Information System (INIS)

    Watanabe, N.; Kobayashi, K.; Fujiki, K.

    1990-01-01

    For the purpose of efficient utilization of safety-related event information on nuclear power plants, a new computer software package, CESAS, has been under development. CESAS systematically abstracts the event sequence, that is, a series of sequential and causal relationships between occurrences, from event descriptions written in natural English. The system is based on knowledge-engineering techniques used in the field of natural language processing. The analytical process consists of morphemic, syntactic, semantic, and syntagmatic analyses. At present, the first version of CESAS has been developed and applied to several real event descriptions to study its feasibility. This paper describes the outline of CESAS and one of the analytical results in comparison with a manually extracted event sequence.
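
    The staged analysis that CESAS performs (morphemic, syntactic, and syntagmatic; the semantic stage is omitted here) can be illustrated with a toy sketch. The connective lexicon, clause-splitting rule, and output format below are invented for illustration, since the actual CESAS knowledge base is not described in the abstract:

```python
# Hypothetical sketch of a staged event-sequence abstractor in the spirit
# of CESAS. The marker set and stages are invented; the real system's
# grammar and lexicon are not public, and its semantic analysis is omitted.
CONNECTIVES = {"because", "then", "after", "subsequently"}  # assumed lexicon

def morphemic(text):
    """Tokenize and normalize (a stand-in for real morphemic analysis)."""
    return [t.strip(".,") for t in text.lower().split()]

def syntactic(tokens):
    """Split the token stream into clauses at connective markers."""
    clauses, current = [], []
    for tok in tokens:
        if tok in CONNECTIVES:
            if current:
                clauses.append((" ".join(current), tok))
            current = []
        else:
            current.append(tok)
    if current:
        clauses.append((" ".join(current), None))
    return clauses

def syntagmatic(clauses):
    """Link clauses into an ordered event sequence via their connectives."""
    return [{"event": clause, "connective_to_next": marker}
            for clause, marker in clauses]

sequence = syntagmatic(syntactic(morphemic(
    "The pump tripped because cooling flow was lost then the reactor scrammed.")))
for step in sequence:
    print(step)
```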

  19. High-Performance Monitoring Architecture for Large-Scale Distributed Systems Using Event Filtering

    Science.gov (United States)

    Maly, K.

    1998-01-01

    Monitoring is an essential process for observing and improving the reliability and performance of large-scale distributed (LSD) systems. In an LSD environment, a large number of events are generated by the system components during execution or interaction with external objects (e.g. users or processes). Monitoring such events is necessary for observing the run-time behavior of LSD systems and providing the status information required for debugging, tuning and managing such applications. However, correlated events are generated concurrently and can be distributed across various locations in the application environment, which complicates the management decision process and thereby makes monitoring LSD systems an intricate task. We propose a scalable high-performance monitoring architecture for LSD systems to detect and classify interesting local and global events and disseminate the monitoring information to the corresponding endpoint management applications, such as debugging and reactive control tools, to improve application performance and reliability. A large volume of events may be generated due to the extensive demands of the monitoring applications and the high interaction of LSD systems. The monitoring architecture employs a high-performance event filtering mechanism to efficiently process the large volume of event traffic generated by LSD systems and to minimize the intrusiveness of the monitoring process by reducing the event traffic flow in the system and distributing the monitoring computation. Our architecture also supports dynamic and flexible reconfiguration of the monitoring mechanism via its instrumentation and subscription components. As a case study, we show how our monitoring architecture can be utilized to improve the reliability and performance of the Interactive Remote Instruction (IRI) system, a large-scale distributed system for collaborative distance learning. The filtering mechanism represents an intrinsic component integrated
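
    The event-filtering idea above, where subscriptions from management applications decide which events are forwarded, can be sketched minimally. The event schema and predicate-style subscriptions are assumptions for illustration, not the paper's actual interface:

```python
# Minimal sketch of subscription-based event filtering: events that no
# management application subscribed to are dropped at the monitor,
# reducing monitoring traffic. Schema and predicates are illustrative.
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class Event:
    source: str
    kind: str
    severity: int

@dataclass
class Monitor:
    subscriptions: List[Callable[[Event], bool]] = field(default_factory=list)
    delivered: List[Event] = field(default_factory=list)

    def subscribe(self, predicate):
        self.subscriptions.append(predicate)

    def publish(self, event):
        # Forward only events that at least one subscriber wants.
        if any(p(event) for p in self.subscriptions):
            self.delivered.append(event)

mon = Monitor()
mon.subscribe(lambda e: e.severity >= 3)        # e.g. a debugging tool
mon.subscribe(lambda e: e.kind == "deadlock")   # e.g. a reactive control tool

for ev in [Event("node1", "heartbeat", 0),
           Event("node2", "deadlock", 2),
           Event("node3", "crash", 5)]:
    mon.publish(ev)

print(len(mon.delivered))  # the heartbeat event was filtered out
```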

  20. Two Hours of Teamwork Training Improves Teamwork in Simulated Cardiopulmonary Arrest Events.

    Science.gov (United States)

    Mahramus, Tara L; Penoyer, Daleen A; Waterval, Eugene M E; Sole, Mary L; Bowe, Eileen M

    2016-01-01

    Teamwork during cardiopulmonary arrest events is important for resuscitation. Teamwork improvement programs are usually lengthy. This study assessed the effectiveness of a 2-hour teamwork training program. A prospective, pretest/posttest, quasi-experimental design assessed the teamwork training program targeted to resident physicians, nurses, and respiratory therapists. Participants took part in a simulated cardiac arrest. After the simulation, participants and trained observers assessed perceptions of teamwork using the Team Emergency Assessment Measure (TEAM) tool (ratings of 0 [low] to 4 [high]). A debriefing and 45 minutes of teamwork education followed. Participants then took part in a second simulated cardiac arrest scenario. Afterward, participants and observers assessed teamwork. Seventy-three team members participated: resident physicians (25%), registered nurses (32%), and respiratory therapists (41%). The physicians had significantly less experience on code teams; teamwork scores ranged from 2.57 to 2.72. Participants' mean (SD) scores on the TEAM tool for the first and second simulations were 3.2 (0.5) and 3.7 (0.4), respectively. The teamwork educational intervention resulted in improved perceptions of teamwork behaviors. Participants reported that interactions with other disciplines, teamwork behavior education, and debriefing sessions were beneficial for enhancing the program.

  1. HELIOS/DRAGON/NESTLE codes' simulation of the Gentilly-2 loss of class 4 power event

    International Nuclear Information System (INIS)

    Sarsour, H.N.; Turinsky, P.J.; Rahnema, F.; Mosher, S.; Serghiuta, D.; Marleau, G.; Courau, T.

    2002-01-01

    A loss of electrical power occurred at Gentilly-2 in September of 1995 while the station was operating at full power. There was an unexpectedly rapid core power increase, initiated by the drainage of the zone controllers and accelerated by coolant boiling. The core transient was terminated by Shutdown System No. 1 (SDS1) tripping when the out-of-core ion chambers exceeded the 10%/sec high-rate-of-power-increase trip setpoint at 1.29 sec. This resulted in the station automatically shutting down within 2 sec of event initiation. In the first 2 sec, 26 of the 58 SDS1 and SDS2 in-core flux detectors reached their regional overpower trip (ROPT) setpoints. The peak reactor power reached approximately 110% FP. Reference 1 presented detailed results of the simulations performed with coupled thermal-hydraulics and 3D neutron kinetics codes, SOPHT-G2 and the CERBERUS module of RFSP, and the various adjustments of these codes and the plant representation that were needed to reproduce the neutronic response observed in 1995. The purposes of this paper are to contrast a simulation prediction of the peak prompt core thermal power transient with the experimental estimate, and to note the impact of the spatial discretization approach on the prompt core thermal power transient and on the channel power distribution as a function of time. In addition, the adequacy of the time-step sizes employed and the sensitivity to the core's transient thermal-hydraulic conditions are studied. The work presented in this paper has been performed as part of a project sponsored by the Canadian Nuclear Safety Commission (CNSC). The purpose of the project was to gather information and assess the accuracy of best-estimate methods using calculation methods and codes developed independently from the CANDU industry. The simulation of the accident was completed using the NESTLE core simulator, employing cross sections generated by the HELIOS lattice physics code and incremental cross sections generated by the DRAGON lattice physics code.

  2. Grid production with the ATLAS Event Service

    CERN Document Server

    Benjamin, Douglas; The ATLAS collaboration

    2018-01-01

    ATLAS has developed and previously presented a new computing architecture, the Event Service, that allows real-time delivery of fine-grained workloads which process dispatched events (or event ranges) and immediately stream outputs. The principal aim was to profit from opportunistic resources such as commercial cloud, supercomputing, and volunteer computing, and from otherwise unused cycles on clusters and grids. During the development and deployment phase, its utility for exploiting such unused cycles on the grid and conventional clusters also became apparent. Here we describe our experience commissioning the Event Service on the grid in the ATLAS production system. We study its performance compared with standard simulation production. We describe the integration with the ATLAS data management system to ensure scalability and compatibility with object stores. Finally, we outline the remaining steps towards a fully commissioned system.

  3. Integrating Urban Infrastructure and Health System Impact Modeling for Disasters and Mass-Casualty Events

    Science.gov (United States)

    Balbus, J. M.; Kirsch, T.; Mitrani-Reiser, J.

    2017-12-01

    Over recent decades, natural disasters and mass-casualty events in the United States have repeatedly revealed the serious consequences of health care facility vulnerability and the subsequent ability to deliver care for the affected people. Advances in predictive modeling and vulnerability assessment for health care facility failure, integrated infrastructure, and extreme weather events have now enabled a more rigorous scientific approach to evaluating health care system vulnerability and assessing impacts of natural and human disasters as well as the value of specific interventions. Concurrent advances in computing capacity also allow, for the first time, full integration of these multiple individual models, along with the modeling of population behaviors and mass casualty responses during a disaster. A team of federal and academic investigators led by the National Center for Disaster Medicine and Public Health (NCDMPH) is developing a platform for integrating extreme event forecasts, health risk/impact assessment and population simulations, critical infrastructure (electrical, water, transportation, communication) impact and response models, health care facility-specific vulnerability and failure assessments, and health system/patient flow responses. The integration of these models is intended to develop much greater understanding of critical tipping points in the vulnerability of health systems during natural and human disasters and build an evidence base for specific interventions. Development of such a modeling platform will greatly facilitate the assessment of potential concurrent or sequential catastrophic events, such as a terrorism act following a severe heat wave or hurricane. This presentation will highlight the development of this modeling platform as well as applications not just for the US health system, but also for international science-based disaster risk reduction efforts, such as the Sendai Framework and the WHO SMART hospital project.

  4. Early notification of the environmental radiation monitoring system to a radioactive event

    International Nuclear Information System (INIS)

    Haquin, G.; Ne'eman, F; Brenner, S.

    1997-01-01

    The National Environmental Radiation Monitoring System, managed by the Radiation Safety Division of the Ministry of the Environment, has been completed and is composed of a network of 10 stations: 6 terrestrial stations, 3 waterside stations and one mobile station. The system was built by Rotem Co., and the control center is located at the Unit of Environmental Resources of the Ministry of the Environment at Tel Aviv University. Each station consists of a wide-range Geiger-Mueller detector and ambient dose rate meter that provides the level of the environmental dose rate. Low-level radioactive particles are detected by air sampling with devices that collect suspended and settling particles. Each station is connected to the control center through telephone lines and an RF communication system, providing the level of environmental radiation 24 hours a day. The background radiation dose rate level depends on the location of the station and varies from 8 to 16 μR/h. The system proved its efficiency in 'simulation-like events', providing early detection of unregistered gamma radiography work in the proximity of two stations, performed in June 1996 in Ashdod port and in December 1996 at Maspenot Israel in Haifa. During the events the radiation level increased up to 20 times above the background level. Survey teams of the Ashdod port and Maspenot Israel were sent to the sites to check the sources of the increased radiation level. These teams found workers performing radiography work in the area of the stations. (authors)

  5. Symbolic simulation of engineering systems on a supercomputer

    International Nuclear Information System (INIS)

    Ragheb, M.; Gvillo, D.; Makowitz, H.

    1986-01-01

    Model-based production-rule systems for analysis are developed for the symbolic simulation of complex engineering systems on a CRAY X-MP supercomputer. The fault-tree and event-tree analysis methodologies from systems analysis are used for problem representation and are coupled to the rule-based system paradigm from knowledge engineering to provide modelling of engineering devices. Modelling is based on knowledge of the structure and function of the device rather than on human expertise alone. To implement the methodology, we developed a production-rule analysis system, HAL-1986, that uses both backward chaining and forward chaining. The inference engine uses an induction-deduction-oriented antecedent-consequent logic and is programmed in Portable Standard Lisp (PSL). The inference engine is general and can accommodate general modifications and additions to the knowledge base. The methodologies are demonstrated using a model for the identification of faults, and subsequent recovery from abnormal situations, in nuclear reactor safety analysis. The use of the exposed methodologies for the prognostication of future device responses under operational and accident conditions, using coupled symbolic and procedural programming, is discussed.
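
    A forward-chaining inference loop of the kind the abstract describes can be sketched in a few lines. The rules below are invented fault-identification examples, not from HAL-1986 (which was written in Portable Standard Lisp):

```python
# Toy forward-chaining production-rule inference: antecedent sets imply a
# consequent fact; rules fire until no new facts can be derived. The
# fault-identification rules here are invented for illustration.
rules = [
    ({"low_coolant_flow", "pump_on"}, "pump_degraded"),
    ({"pump_degraded"}, "switch_to_backup_pump"),
    ({"high_core_temp", "low_coolant_flow"}, "reduce_power"),
]

def forward_chain(facts):
    facts = set(facts)
    changed = True
    while changed:                      # iterate to a fixed point
        changed = False
        for antecedents, consequent in rules:
            if antecedents <= facts and consequent not in facts:
                facts.add(consequent)
                changed = True
    return facts

derived = forward_chain({"low_coolant_flow", "pump_on", "high_core_temp"})
print(sorted(derived))
```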

  6. Event-triggered decentralized robust model predictive control for constrained large-scale interconnected systems

    Directory of Open Access Journals (Sweden)

    Ling Lu

    2016-12-01

    This paper considers the problem of event-triggered decentralized model predictive control (MPC) for constrained large-scale linear systems subject to additive bounded disturbances. The constraint-tightening method is utilized to formulate the MPC optimization problem. The local predictive control law for each subsystem is determined aperiodically by a triggering rule, which allows a considerable reduction of the computational load. Robust feasibility and closed-loop stability are then proved, and it is shown that every subsystem state will be driven into a robust invariant set. Finally, the effectiveness of the proposed approach is illustrated via numerical simulations.
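
    The event-triggered idea, recomputing the local control law only when a triggering rule fires, can be sketched on a scalar system. The plant, gain, threshold, and disturbance bound are invented, and a plain state-feedback update stands in for solving the constrained MPC problem:

```python
# Sketch of event-triggered control: the control input is recomputed only
# when the state has drifted more than a threshold from its last-sampled
# value, cutting computation. All numbers are illustrative; a real design
# would solve a constrained MPC problem at each trigger.
import random

random.seed(0)
a, b, k, delta = 1.1, 1.0, 0.6, 0.05    # plant, gain, trigger threshold
x, x_sampled, u = 1.0, 1.0, 0.0
triggers = 0
for t in range(200):
    if t == 0 or abs(x - x_sampled) > delta:
        x_sampled = x
        u = -k * x_sampled              # stand-in for the MPC solve
        triggers += 1
    w = random.uniform(-0.01, 0.01)     # additive bounded disturbance
    x = a * x + b * u + w               # input held between triggers

print("triggered", triggers, "of 200 steps")
```

Because the input is held between triggers, the controller runs far less often than a periodic one while the state stays near the origin.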

  7. Trunk muscle recruitment patterns in simulated precrash events.

    Science.gov (United States)

    Ólafsdóttir, Jóna Marín; Fice, Jason B; Mang, Daniel W H; Brolin, Karin; Davidsson, Johan; Blouin, Jean-Sébastien; Siegmund, Gunter P

    2018-02-28

    To quantify trunk muscle activation levels during whole-body accelerations that simulate precrash events in multiple directions, and to identify recruitment patterns for the development of active human body models. Four subjects (1 female, 3 males) were accelerated at 0.55 g (net Δv = 4.0 m/s) in 8 directions while seated on a sled-mounted car seat to simulate a precrash pulse. Electromyographic (EMG) activity in 4 trunk muscles was measured using wire electrodes inserted into the left rectus abdominis, internal oblique, iliocostalis, and multifidus muscles at the L2-L3 level. Muscle activity evoked by the perturbations was normalized by each muscle's isometric maximum voluntary contraction (MVC) activity. Spatial tuning curves were plotted at 150, 300, and 600 ms after acceleration onset. EMG activity remained below 40% MVC at the three time points for most directions. At the 150- and 300-ms time points, the highest EMG amplitudes were observed during perturbations to the left (-90°) and left rearward (-135°). EMG activity diminished by 600 ms for the anterior muscles, but not for the posterior muscles. These preliminary results suggest that trunk muscle activity may be directionally tuned at the acceleration level tested here. Although data from more subjects are needed, these preliminary data support the development of modeled trunk muscle recruitment strategies in active human body models that predict occupant responses in precrash scenarios.

  8. Feedback control systems for non-linear simulation of operational transients in LMFBRs

    International Nuclear Information System (INIS)

    Khatib-Rahbar, M.; Agrawal, A.K.; Srinivasan, E.S.

    Adequate modeling of Plant Control Systems (PCS) for the study of Anticipated Transients Without Scram (ATWS) is of considerable significance in the design, operation and safety evaluation of Liquid-Metal-Cooled Fast Breeder Reactor (LMFBR) systems. To assess the system response to high-frequency, low-consequence events, the plant needs to be dynamically simulated. Analytical and numerical models for the PCS that have been developed and incorporated into the loop version of the Super System Code (SSC-L) are described. The importance of detailed modeling of control systems is discussed. Sample transient results obtained for a 10% ramp change of load over 40 s in the Clinch River Breeder Reactor Plant (CRBRP) are also shown.

  9. Simulator for an Accelerator-Driven Subcritical Fissile Solution System

    International Nuclear Information System (INIS)

    Klein, Steven Karl; Day, Christy M.; Determan, John C.

    2015-01-01

    LANL has developed a process to generate a progressive family of system models for a fissile solution system. This family includes a dynamic system simulation (DSS) comprising coupled nonlinear differential equations that describe the time evolution of the system. Neutron kinetics, radiolytic gas generation and transport, and core thermal hydraulics are included in the DSS. Extensions to explicit operation of cooling loops and radiolytic gas handling are embedded in these systems, as is a stability model. The DSS may then be converted to an implementation in Visual Studio, giving a design team the ability to rapidly estimate the system performance impacts of a variety of design decisions. This provides a method to assist in optimization of the system design. Once the design has been generated in some detail, the C++ version of the system model may be implemented in a LabVIEW user interface to evaluate operator controls and instrumentation, and operator recognition of and response to off-normal events. Taken as a set of system models, the DSS, Visual Studio, and LabVIEW progression provides a comprehensive set of design support tools.
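
    A minimal sketch of the kind of coupled nonlinear ODE system such a dynamic system simulation integrates: one-group point kinetics with a crude temperature feedback, advanced by explicit Euler. The constants are illustrative assumptions, not taken from the LANL model:

```python
# Illustrative coupled-ODE sketch: neutron population n, one delayed
# precursor group c, and a lumped temperature T with reactivity feedback.
# All constants are invented for the example.
beta, lam, Lam = 0.0065, 0.08, 1e-4   # delayed fraction, decay, gen. time
alpha = -1e-5                          # temperature reactivity coefficient

def step(n, c, T, rho0, dt):
    rho = rho0 + alpha * (T - 300.0)   # feedback on reactivity
    dn = ((rho - beta) / Lam) * n + lam * c
    dc = (beta / Lam) * n - lam * c
    dT = 0.05 * n - 0.01 * (T - 300.0) # crude heating/cooling balance
    return n + dt * dn, c + dt * dc, T + dt * dT

n, T = 1.0, 300.0
c = (beta / (lam * Lam)) * n           # precursors start in equilibrium
for _ in range(1000):                  # simulate 1 s with dt = 1 ms
    n, c, T = step(n, c, T, rho0=0.0, dt=1e-3)
print(round(n, 5), round(T, 3))
```

As the temperature rises, the negative feedback drives the power slightly below its initial value, the qualitative behavior a stability model would examine.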

  10. Simulator for an Accelerator-Driven Subcritical Fissile Solution System

    Energy Technology Data Exchange (ETDEWEB)

    Klein, Steven Karl [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Day, Christy M. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Determan, John C. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2015-09-14

    LANL has developed a process to generate a progressive family of system models for a fissile solution system. This family includes a dynamic system simulation (DSS) comprising coupled nonlinear differential equations that describe the time evolution of the system. Neutron kinetics, radiolytic gas generation and transport, and core thermal hydraulics are included in the DSS. Extensions to explicit operation of cooling loops and radiolytic gas handling are embedded in these systems, as is a stability model. The DSS may then be converted to an implementation in Visual Studio, giving a design team the ability to rapidly estimate the system performance impacts of a variety of design decisions. This provides a method to assist in optimization of the system design. Once the design has been generated in some detail, the C++ version of the system model may be implemented in a LabVIEW user interface to evaluate operator controls and instrumentation, and operator recognition of and response to off-normal events. Taken as a set of system models, the DSS, Visual Studio, and LabVIEW progression provides a comprehensive set of design support tools.

  11. Event tree analysis for the system of hybrid reactor

    International Nuclear Information System (INIS)

    Yang Yongwei; Qiu Lijian

    1993-01-01

    The application of probabilistic risk assessment to a fusion-fission hybrid reactor is introduced. A hybrid reactor system has been analysed using event trees. Based on the conceptual design of the Hefei Fusion-fission Experimental Hybrid Breeding Reactor, the probabilities of the event tree sequences induced by 4 typical initiating events were calculated. The results showed that the conceptual design is safe and reasonable. Through this work, the safety character of the hybrid reactor system has been understood more deeply, and some suggestions valuable to the safety design of hybrid reactors are proposed.
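
    Event-tree quantification of the sort described can be sketched as follows: each sequence probability is the initiating-event frequency times the product of branch success/failure probabilities along the path. The safety systems and numbers are illustrative, not from the reactor design in the abstract:

```python
# Toy event-tree quantification for one initiating event and two safety
# systems; every success/failure combination is one sequence. Numbers
# are invented for illustration.
from itertools import product

init_freq = 1e-2                                 # initiating frequency (/yr)
systems = {"shutdown": 1e-3, "cooling": 5e-3}    # branch failure probabilities

sequences = {}
for outcome in product([False, True], repeat=len(systems)):
    p = init_freq
    label = []
    for (name, pf), failed in zip(systems.items(), outcome):
        p *= pf if failed else (1.0 - pf)
        label.append(("F" if failed else "S") + ":" + name)
    sequences["/".join(label)] = p

for seq, p in sequences.items():
    print(f"{seq}: {p:.3e}")
```

Summed over all branches, the sequence probabilities recover the initiating-event frequency, a useful consistency check on any event-tree model.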

  12. Simulation bounds for system availability

    International Nuclear Information System (INIS)

    Tietjen, G.L.; Waller, R.A.

    1976-01-01

    System availability is a dominant factor in the practicality of nuclear power electrical generating plants. A proposed model for obtaining either lower bounds or interval estimates on availability uses observed data on n failure-to-repair cycles of the system to estimate the parameters in the time-to-failure and time-to-repair models. These estimates are then used in simulating failure/repair cycles of the system. The availability estimate is obtained for each of 5000 samples of n failure/repair cycles to form a distribution of estimates. Specific percentile points of those simulated distributions are selected as lower simulation bounds or simulation interval bounds for the system availability. The method is illustrated with operational data from two nuclear plants, for which an exponential time-to-failure and a lognormal time-to-repair are assumed.
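
    The simulation-bound procedure described above can be sketched directly: with fitted exponential time-to-failure and lognormal time-to-repair parameters, simulate 5000 samples of n failure/repair cycles and take a low percentile of the availability estimates as the lower bound. The parameter values below are invented for illustration:

```python
# Monte Carlo lower bound on availability, following the procedure in the
# abstract: exponential failures, lognormal repairs, 5000 resampled sets
# of n cycles. The fitted parameter values are illustrative.
import random

random.seed(1)
n = 30                      # observed failure/repair cycles
mtbf = 500.0                # fitted exponential mean time to failure (h)
mu, sigma = 2.0, 0.5        # fitted lognormal repair-time parameters

def simulate_availability():
    up = sum(random.expovariate(1.0 / mtbf) for _ in range(n))
    down = sum(random.lognormvariate(mu, sigma) for _ in range(n))
    return up / (up + down)

estimates = sorted(simulate_availability() for _ in range(5000))
lower_bound = estimates[int(0.05 * len(estimates))]   # 5th percentile
print(f"95% lower simulation bound on availability: {lower_bound:.4f}")
```

An interval bound follows the same way by also reading off an upper percentile of the simulated distribution.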

  13. Methodology Development of Computationally-Efficient Full Vehicle Simulations for the Entire Blast Event

    Science.gov (United States)

    2015-08-06

    Performing organization: Altair Engineering, 888 W Big Beaver Road #402, Troy, MI 48084. ...soldiers, it is imperative to analyze the impact of each sub-event on soldier injuries. Using traditional finite element analysis techniques [1-6] to... CONSTRAINED_LAGRANGE_IN_SOLID) and the results from another commonly used non-linear explicit solver for impact simulations (RADIOSS [4]) using a coupling

  14. Event Reconstruction Algorithms for the ATLAS Trigger

    Energy Technology Data Exchange (ETDEWEB)

    Fonseca-Martin, T.; /CERN; Abolins, M.; /Michigan State U.; Adragna, P.; /Queen Mary, U. of London; Aleksandrov, E.; /Dubna, JINR; Aleksandrov, I.; /Dubna, JINR; Amorim, A.; /Lisbon, LIFEP; Anderson, K.; /Chicago U., EFI; Anduaga, X.; /La Plata U.; Aracena, I.; /SLAC; Asquith, L.; /University Coll. London; Avolio, G.; /CERN; Backlund, S.; /CERN; Badescu, E.; /Bucharest, IFIN-HH; Baines, J.; /Rutherford; Barria, P.; /Rome U. /INFN, Rome; Bartoldus, R.; /SLAC; Batreanu, S.; /Bucharest, IFIN-HH /CERN; Beck, H.P.; /Bern U.; Bee, C.; /Marseille, CPPM; Bell, P.; /Manchester U.; Bell, W.H.; /Glasgow U. /Pavia U. /INFN, Pavia /Regina U. /CERN /Annecy, LAPP /Paris, IN2P3 /Royal Holloway, U. of London /Napoli Seconda U. /INFN, Naples /Argonne /CERN /UC, Irvine /Barcelona, IFAE /Barcelona, Autonoma U. /CERN /Montreal U. /CERN /Glasgow U. /Michigan State U. /Bucharest, IFIN-HH /Napoli Seconda U. /INFN, Naples /New York U. /Barcelona, IFAE /Barcelona, Autonoma U. /Salento U. /INFN, Lecce /Pisa U. /INFN, Pisa /Bucharest, IFIN-HH /UC, Irvine /CERN /Glasgow U. /INFN, Genoa /Genoa U. /Lisbon, LIFEP /Napoli Seconda U. /INFN, Naples /UC, Irvine /Valencia U. /Rio de Janeiro Federal U. /University Coll. London /New York U.; /more authors..

    2011-11-09

    The ATLAS experiment under construction at CERN is due to begin operation at the end of 2007. The detector will record the results of proton-proton collisions at a center-of-mass energy of 14 TeV. The trigger is a three-tier system designed to identify in real-time potentially interesting events that are then saved for detailed offline analysis. The trigger system will select approximately 200 Hz of potentially interesting events out of the 40 MHz bunch-crossing rate (with 10{sup 9} interactions per second at the nominal luminosity). Algorithms used in the trigger system to identify different event features of interest will be described, as well as their expected performance in terms of selection efficiency, background rejection and computation time per event. The talk will concentrate on recent improvements and on performance studies, using a very detailed simulation of the ATLAS detector and electronics chain that emulates the raw data as it will appear at the input to the trigger system.

  15. Event reconstruction algorithms for the ATLAS trigger

    Energy Technology Data Exchange (ETDEWEB)

    F-Martin, T; Avolio, G; Backlund, S [European Laboratory for Particle Physics (CERN), Geneva (Switzerland); Abolins, M [Michigan State University, Department of Physics and Astronomy, East Lansing, Michigan (United States); Adragna, P [Department of Physics, Queen Mary and Westfield College, University of London, London (United Kingdom); Aleksandrov, E; Aleksandrov, I [Joint Institute for Nuclear Research, Dubna (Russian Federation); Amorim, A [Laboratorio de Instrumentacao e Fisica Experimental, Lisboa (Portugal); Anderson, K [University of Chicago, Enrico Fermi Institute, Chicago, Illinois (United States); Anduaga, X [National University of La Plata, La Plata (United States); Aracena, I; Bartoldus, R [Stanford Linear Accelerator Center (SLAC), Stanford (United States); Asquith, L [Department of Physics and Astronomy, University College London, London (United Kingdom); Badescu, E [National Institute for Physics and Nuclear Engineering, Institute of Atomic Physics, Bucharest (Romania); Baines, J [Rutherford Appleton Laboratory, Chilton, Didcot (United Kingdom); Beck, H P [Laboratory for High Energy Physics, University of Bern, Bern (Switzerland); Bee, C [Centre de Physique des Particules de Marseille, IN2P3-CNRS, Marseille (France); Bell, P [Department of Physics and Astronomy, University of Manchester, Manchester (United Kingdom); Barria, P; Batreanu, S [and others

    2008-07-01

    The ATLAS experiment under construction at CERN is due to begin operation at the end of 2007. The detector will record the results of proton-proton collisions at a center-of-mass energy of 14 TeV. The trigger is a three-tier system designed to identify in real-time potentially interesting events that are then saved for detailed offline analysis. The trigger system will select approximately 200 Hz of potentially interesting events out of the 40 MHz bunch-crossing rate (with 10{sup 9} interactions per second at the nominal luminosity). Algorithms used in the trigger system to identify different event features of interest will be described, as well as their expected performance in terms of selection efficiency, background rejection and computation time per event. The talk will concentrate on recent improvements and on performance studies, using a very detailed simulation of the ATLAS detector and electronics chain that emulates the raw data as it will appear at the input to the trigger system.

  16. Event reconstruction algorithms for the ATLAS trigger

    International Nuclear Information System (INIS)

    F-Martin, T; Avolio, G; Backlund, S; Abolins, M; Adragna, P; Aleksandrov, E; Aleksandrov, I; Amorim, A; Anderson, K; Anduaga, X; Aracena, I; Bartoldus, R; Asquith, L; Badescu, E; Baines, J; Beck, H P; Bee, C; Bell, P; Barria, P; Batreanu, S

    2008-01-01

    The ATLAS experiment under construction at CERN is due to begin operation at the end of 2007. The detector will record the results of proton-proton collisions at a center-of-mass energy of 14 TeV. The trigger is a three-tier system designed to identify in real-time potentially interesting events that are then saved for detailed offline analysis. The trigger system will select approximately 200 Hz of potentially interesting events out of the 40 MHz bunch-crossing rate (with 10{sup 9} interactions per second at the nominal luminosity). Algorithms used in the trigger system to identify different event features of interest will be described, as well as their expected performance in terms of selection efficiency, background rejection and computation time per event. The talk will concentrate on recent improvements and on performance studies, using a very detailed simulation of the ATLAS detector and electronics chain that emulates the raw data as it will appear at the input to the trigger system.

  17. Model predictive control-based scheduler for repetitive discrete event systems with capacity constraints

    Directory of Open Access Journals (Sweden)

    Hiroyuki Goto

    2013-07-01

    Full Text Available A model predictive control-based scheduler for a class of discrete event systems is designed and developed. We focus on repetitive, multiple-input, multiple-output, and directed acyclic graph structured systems on which capacity constraints can be imposed. The target system's behaviour is described by linear equations in max-plus algebra, referred to as a state-space representation. Assuming that the system's performance can be improved by paying additional cost, we adjust the system parameters and determine control inputs for which the reference output signals can be observed. The main contribution of this research is twofold: (1) for systems with capacity constraints, we derive an output prediction equation as a function of adjustable variables in recursive form; (2) regarding the construct used for the system's representation, we improve the structure to support the general operations essential for adjusting the system parameters. The numerical simulation in a later section demonstrates the effectiveness of the developed controller.
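The max-plus state-space form used in this class of schedulers can be sketched in a few lines. In max-plus algebra, "addition" (⊕) is the maximum and "multiplication" (⊗) is ordinary addition, so x(k+1) = A ⊗ x(k) ⊕ B ⊗ u(k) propagates event completion times through the system. The matrices and input firing times below are invented for illustration, not taken from the paper.

```python
def mp_mul(M, v):
    """Max-plus matrix-vector product: (M ⊗ v)_i = max_j (M[i][j] + v[j])."""
    return [max(mij + vj for mij, vj in zip(row, v)) for row in M]

def mp_add(a, b):
    """Max-plus addition (⊕) is the elementwise maximum."""
    return [max(ai, bi) for ai, bi in zip(a, b)]

# Illustrative 2-state system: x(k+1) = A ⊗ x(k) ⊕ B ⊗ u(k),  y(k) = C ⊗ x(k)
A = [[3, 7],
     [2, 4]]   # internal processing/transfer times
B = [[1],
     [0]]      # input coupling
C = [[3, 0]]   # output read-out

x = [0, 0]                            # initial event (firing) times
for k, uk in enumerate([0, 5, 10]):   # input firing times per cycle
    x = mp_add(mp_mul(A, x), mp_mul(B, [uk]))
    y = mp_mul(C, x)
    print(f"k={k}  x={x}  y={y}")
```

Each state component is the earliest time the corresponding event can occur given the slowest of its predecessors, which is exactly the synchronization semantics the abstract describes.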

  18. Development of AC-DC power system simulator

    International Nuclear Information System (INIS)

    Ichikawa, Tatsumi; Ueda, Kiyotaka; Inoue, Toshio

    1984-01-01

    A modeling and realization technique is described for real-time plant dynamics simulation of a nuclear power generating unit in an AC-DC power system simulator. The dynamic behavior of the reactor system and steam system is important for investigating adequate unit control and protection against faults in the AC and DC power system. Each of the two nuclear power generating units in the power system simulator consists of a micro generator, DC motors, flywheels and a process computer. The DC motor and flywheel simulate the dynamic characteristics of the steam turbine, and the process computer simulates plant dynamics by digital simulation. We have realized real-time plant dynamics simulation by utilizing high-speed process I/O and a high-speed digital differential analyzing processor (DDA) in which we built a newly developed simple plant model. (author)

  19. Design a Learning-Oriented Fall Event Reporting System Based on Kirkpatrick Model.

    Science.gov (United States)

    Zhou, Sicheng; Kang, Hong; Gong, Yang

    2017-01-01

    Patient falls are a severe problem in healthcare facilities around the world due to their prevalence and cost. Routine fall prevention training programs are not as effective as expected. Using event reporting systems is the trend for reducing patient safety events such as falls, although the systems have limitations at the current stage. We summarized these limitations through a literature review and developed an improved web-based fall event reporting system. The Kirkpatrick model, widely used in the business area for training program evaluation, was integrated into the design of our system. Unlike traditional event reporting systems that only collect and store reports, our system automatically annotates and analyzes the reported events and provides users with timely knowledge support specific to the reported event. The paper illustrates the design of our system and how its features are intended to reduce patient falls by learning from previous errors.

  20. Development of the simulation monitoring system

    International Nuclear Information System (INIS)

    Kato, Katsumi; Watanabe, Tadashi; Kume, Etsuo

    2001-01-01

    Large-scale simulation techniques are studied at the Center for Promotion of Computational Science and Engineering for computational science research in nuclear fields. Visualization and animation processing techniques are developed for efficient understanding of simulation results. The development of the simulation monitoring system, which is used for real-time visualization of ongoing simulations or for successive visualization of calculated results, is described in this report. The standard visualization tool AVS5 or AVS/EXPRESS is used for the simulation monitoring system, and thus this system can be utilized in various computer environments. (author)

  1. Managing the risk of extreme climate events in Australian major wheat production systems

    Science.gov (United States)

    Luo, Qunying; Trethowan, Richard; Tan, Daniel K. Y.

    2018-06-01

    Extreme climate events (ECEs) such as drought, frost risk and heat stress cause significant economic losses in Australia. The risk posed by ECEs in the wheat production systems of Australia could be better managed through the identification of safe flowering (SFW) and optimal time of sowing (TOS) windows. To address this issue, three locations (Narrabri, Roseworthy and Merredin), three cultivars (Suntop and Gregory for Narrabri, Mace for both Roseworthy and Merredin) and 20 TOS at 1-week intervals between 1 April and 12 August for the period from 1957 to 2007 were evaluated using the Agricultural Production Systems sIMulator (APSIM)-Wheat model. Simulation results show that (1) the average frequency of frost events decreased with TOS from 8 to 0 days (d) across the four cases (the combination of locations and cultivars), (2) the average frequency of heat stress events increased with TOS across all cases from 0 to 10 d, (3) soil moisture stress (SMS) increased with earlier TOS before reaching a plateau and then slightly decreasing for Suntop and Gregory at Narrabri and Mace at Roseworthy while SMS increased with TOS for Mace at Merredin from 0.1 to 0.8, (4) Mace at Merredin had the earliest and widest SFW (216-260) while Mace at Roseworthy had the latest SFW (257-280), (5) frost risk and heat stress determine SFW at the wetter sites (i.e. Narrabri and Roseworthy) while frost risk and SMS determine SFW at the drier site (i.e. Merredin) and (6) the optimal TOS windows to maximise wheat yield are 6-20 May, 13-27 May and 15 April at Narrabri, Roseworthy and Merredin, respectively. These findings provide important and specific information for wheat growers about the management of ECE risk on farm. Furthermore, the coupling of the APSIM crop models with state-of-the-art seasonal and intra-seasonal climate forecast information provides an important tool for improved management of the risk of ECEs in economically important cropping industries in the foreseeable future.

  2. Issues in visual support to real-time space system simulation solved in the Systems Engineering Simulator

    Science.gov (United States)

    Yuen, Vincent K.

    1989-01-01

    The Systems Engineering Simulator has addressed the major issues in providing visual data to its real-time man-in-the-loop simulations. Out-the-window views and CCTV views are provided by three scene systems to give the astronauts their real-world views. To expand the window coverage for the Space Station Freedom workstation, a rotating optics system is used to provide the widest field of view possible. To provide video signals to as many viewpoints as possible, windows and CCTVs, with a limited amount of hardware, a video distribution system has been developed to time-share the video channels among viewpoints at the selection of the simulation users. These solutions have provided the visual simulation facility for real-time man-in-the-loop simulations for the NASA space program.

  3. Simulation-based expert system for nuclear reactor control and diagnostics. Progress report

    International Nuclear Information System (INIS)

    Lee, J.C.; Martin, W.R.

    1986-01-01

    This research concerns the development of artificial intelligence (AI) techniques suitable for application to the diagnostics and control of nuclear power plant systems. The overall objective of the current effort is to build a prototype simulation-based expert system for diagnosing accidents in nuclear reactors. The system is being designed to analyze plant data heuristically using fuzzy logic to form a set of hypotheses about a particular transient. Hypothesis testing, fault magnitude estimation and transient analysis are performed using simulation programs to model plant behavior. An adaptive learning technique has been developed for achieving accurate simulations of plant dynamics using low-order physical models of plant components. The results of the diagnostics and simulation analysis of the plant transient are to be analyzed by an expert system for final diagnoses and control guidance. To date, significant progress has been made toward achieving the primary goals of this project. Based on a critical safety functions approach, an overall design for the nuclear plant expert system has been developed. The methodology for performing diagnostic reasoning on plant signals has been developed and the algorithms implemented and tested. A methodology for utilizing the information contained in the physical models of plant components has also been developed. This work included the derivation of a unique Kalman filtering algorithm for using power plant data to systematically improve on-line simulations through the judicious adjustment of key model parameters. A few simulation models of key plant components have been developed and implemented to demonstrate the method on a realistic accident scenario. The chosen transient is a loss of feed flow exacerbated by a stuck-open relief valve, similar to the initiating event of the Three Mile Island Unit 2 accident in 1979.
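The idea of using plant data to tune on-line model parameters can be illustrated with a generic scalar Kalman update. This is a textbook sketch, not the authors' algorithm; the "plant gain", noise levels and observation model below are invented for the example.

```python
import random

def kalman_update(theta, P, z, H, R):
    """One scalar Kalman update: refine parameter estimate theta
    from measurement z with observation model z ≈ H*theta + noise(R)."""
    K = P * H / (H * H * P + R)     # Kalman gain
    theta = theta + K * (z - H * theta)
    P = (1.0 - K * H) * P           # updated estimate variance
    return theta, P

random.seed(1)
true_gain = 2.5        # hypothetical plant model parameter
theta, P = 1.0, 10.0   # initial guess and its variance
for _ in range(200):
    z = true_gain + random.gauss(0.0, 0.1)   # noisy plant measurement, H = 1
    theta, P = kalman_update(theta, P, z, H=1.0, R=0.01)
print(round(theta, 2))
```

After a few hundred measurements the estimate converges on the true parameter, which is the mechanism by which plant data can "judiciously adjust" a low-order simulation model.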

  4. Simulating the energy deposits of particles in the KASCADE-Grande detector stations as a preliminary step for EAS event reconstruction

    International Nuclear Information System (INIS)

    Toma, G.; Brancus, I.M.; Mitrica, B.; Sima, O.; Rebel, H.; Haungs, A.

    2005-01-01

    The study of primary cosmic rays with energies higher than 10^14 eV is done mostly by indirect observation techniques such as the study of Extensive Air Showers (EAS). In the much larger framework effort of inferring data on the mass and energy of the primaries from EAS observables, the present study aims at developing a versatile method and software tool that will be used to reconstruct lateral particle densities from the energy deposits of particles in the KASCADE-Grande detector stations. The study has been performed on simulated events, by taking into account the interaction of the EAS components with the detector array (energy deposits). The energy deposits have been simulated using the GEANT code and then the energy deposits have been parametrized for different incident energies and angles of EAS particles. Thus the results obtained for simulated events have the same level of consistency as the experimental data. This technique will allow an increased speed of lateral particle density reconstruction when studying real events detected by the KASCADE-Grande array. The particle densities in detectors have been reconstructed from the energy deposits. A correlation between lateral particle density and primary mass and primary energy (at ∼600 m from shower core) has been established. The study puts great emphasis on the quality of reconstruction and also on the speed of the technique. The data obtained from the study on simulated events creates the basis for the next stage of the study, the study of real events detected by the KASCADE-Grande array. (authors)

  5. Implementation of Tree and Butterfly Barriers with Optimistic Time Management Algorithms for Discrete Event Simulation

    Science.gov (United States)

    Rizvi, Syed S.; Shah, Dipali; Riasat, Aasia

    The Time Warp algorithm [3] offers a run-time recovery mechanism that deals with causality errors. These run-time recovery mechanisms consist of rollback, anti-message, and Global Virtual Time (GVT) techniques. For rollback, there is a need to compute GVT, which is used in discrete-event simulation to reclaim memory, commit output, detect termination, and handle errors. However, the computation of GVT requires dealing with the transient message problem and the simultaneous reporting problem. These problems can be handled efficiently by Samadi's algorithm [8], which works well in the presence of causality errors. However, the performance of both the Time Warp and Samadi's algorithms depends on the latency involved in GVT computation. Both algorithms give poor latency for large simulation systems, especially in the presence of causality errors. To improve the latency and reduce processor idle time, we implement tree and butterfly barriers with the optimistic algorithm. Our analysis shows that the use of synchronous barriers such as tree and butterfly barriers with the optimistic algorithm not only minimizes the GVT latency but also minimizes the processor idle time.
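The GVT of an optimistic simulation is the minimum, over all processors, of each processor's local minimum timestamp (local virtual time and unacknowledged message timestamps). A tree barrier computes this global minimum by pairwise combining in O(log n) rounds rather than a linear scan. The sketch below is illustrative, with invented local minima:

```python
def tree_min_reduce(values):
    """Pairwise (tree) reduction: combine local minima level by level,
    as a tree barrier would, in O(log n) rounds instead of a linear scan."""
    level = list(values)
    rounds = 0
    while len(level) > 1:
        nxt = []
        for i in range(0, len(level) - 1, 2):
            nxt.append(min(level[i], level[i + 1]))  # each pair combines once
        if len(level) % 2:
            nxt.append(level[-1])                    # odd element carries over
        level = nxt
        rounds += 1
    return level[0], rounds

# Each processor's local minimum = min(local virtual time, timestamps of
# in-flight messages) — illustrative values for 8 processors.
local_minima = [42.0, 37.5, 55.0, 40.0, 61.2, 39.9, 44.4, 50.1]
gvt, rounds = tree_min_reduce(local_minima)
print(gvt, rounds)
```

Events with timestamps below the resulting GVT can never be rolled back, so their memory can be reclaimed and their output committed.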

  6. Model simulation of the Manasquan water-supply system in Monmouth County, New Jersey

    Science.gov (United States)

    Chang, Ming; Tasker, Gary D.; Nieswand, Steven

    2001-01-01

    Model simulation of the Manasquan Water Supply System in Monmouth County, New Jersey, was completed using historic hydrologic data to evaluate the effects of operational and withdrawal alternatives on the Manasquan reservoir and pumping system. Changes in the system operations can be simulated with the model using precipitation forecasts. The Manasquan Reservoir system model operates by using daily streamflow values, which were reconstructed from historical U.S. Geological Survey streamflow-gaging station records. The model is able to run in two modes: the General Risk Analysis Model (GRAM) and the Position Analysis Model (POSA). The GRAM simulation procedure uses reconstructed historical streamflow records to provide probability estimates of certain events, such as reservoir storage levels declining below a specific level, when given an assumed set of operating rules and withdrawal rates. POSA can be used to forecast the likelihood of specified outcomes, such as streamflows falling below statutory passing flows, associated with a specific working plan for the water-supply system over a period of months. The user can manipulate the model and generate graphs and tables of streamflows and storage, for example. This model can be used as a management tool to facilitate the development of drought warning and drought emergency rule curves and safe yield values for the water-supply system.

  7. A SAS-based solution to evaluate study design efficiency of phase I pediatric oncology trials via discrete event simulation.

    Science.gov (United States)

    Barrett, Jeffrey S; Jayaraman, Bhuvana; Patel, Dimple; Skolnik, Jeffrey M

    2008-06-01

    Previous exploration of oncology study design efficiency has focused on Markov processes alone (probability-based events) without consideration for time dependencies. Barriers to study completion include time delays associated with patient accrual, inevaluability (IE), time to dose limiting toxicities (DLT) and administrative and review time. Discrete event simulation (DES) can incorporate probability-based assignment of DLT and IE frequency, correlated with cohort in the case of DLT, with time-based events defined by stochastic relationships. A SAS-based solution to examine study efficiency metrics and evaluate design modifications that would improve study efficiency is presented. Virtual patients are simulated with attributes defined from prior distributions of relevant patient characteristics. Study population datasets are read into SAS macros which select patients and enroll them into a study based on the specific design criteria if the study is open to enrollment. Waiting times, arrival times and time to study events are also sampled from prior distributions; post-processing of study simulations is provided within the decision macros and compared across designs in a separate post-processing algorithm. This solution is examined via comparison of the standard 3+3 decision rule relative to the "rolling 6" design, a newly proposed enrollment strategy for the phase I pediatric oncology setting.
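As a minimal illustration of the discrete-event mechanics described (the actual solution is implemented in SAS macros), an event-queue simulation of patient accrual can be written in a few lines; the interarrival and evaluation times below are invented, not taken from the study:

```python
import heapq
import random

def simulate_accrual(n_patients, mean_interarrival, eval_time, seed=0):
    """Minimal discrete event simulation: patients arrive at random
    intervals, each takes a fixed evaluation period; returns the time
    the last patient completes evaluation."""
    random.seed(seed)
    events = []   # priority queue of (time, kind, patient_id)
    t = 0.0
    for pid in range(n_patients):
        t += random.expovariate(1.0 / mean_interarrival)  # accrual delay
        heapq.heappush(events, (t, "arrive", pid))
    done = 0.0
    while events:
        time, kind, pid = heapq.heappop(events)           # next event in time order
        if kind == "arrive":
            heapq.heappush(events, (time + eval_time, "complete", pid))
        else:
            done = max(done, time)
    return done

print(round(simulate_accrual(6, mean_interarrival=14.0, eval_time=28.0), 1))
```

A design comparison (e.g. 3+3 versus "rolling 6") would replace the fixed evaluation period with the design's enrollment and DLT-review rules and compare the resulting study-duration distributions across replications.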

  8. Patient flow improvement for an ophthalmic specialist outpatient clinic with aid of discrete event simulation and design of experiment.

    Science.gov (United States)

    Pan, Chong; Zhang, Dali; Kon, Audrey Wan Mei; Wai, Charity Sue Lea; Ang, Woo Boon

    2015-06-01

    Continuous improvement in process efficiency for specialist outpatient clinic (SOC) systems is increasingly being demanded due to the growth of the patient population in Singapore. In this paper, we propose a discrete event simulation (DES) model to represent the patient and information flow in an ophthalmic SOC system in the Singapore National Eye Centre (SNEC). Different improvement strategies to reduce the turnaround time for patients in the SOC were proposed and evaluated with the aid of the DES model and the Design of Experiment (DOE). Two strategies for better patient appointment scheduling and one strategy for dilation-free examination are estimated to have a significant impact on turnaround time for patients. One of the improvement strategies has been implemented in the actual SOC system in the SNEC with promising improvement reported.

  9. Simulating X-ray bursts during a transient accretion event

    Science.gov (United States)

    Johnston, Zac; Heger, Alexander; Galloway, Duncan K.

    2018-06-01

    Modelling of thermonuclear X-ray bursts on accreting neutron stars has to date focused on stable accretion rates. However, bursts are also observed during episodes of transient accretion. During such events, the accretion rate can evolve significantly between bursts, and this regime provides a unique test for burst models. The accretion-powered millisecond pulsar SAX J1808.4-3658 exhibits accretion outbursts every 2-3 yr. During the well-sampled month-long outburst of 2002 October, four helium-rich X-ray bursts were observed. Using this event as a test case, we present the first multizone simulations of X-ray bursts under a time-dependent accretion rate. We investigate the effect of using a time-dependent accretion rate in comparison to constant, averaged rates. Initial results suggest that using a constant, average accretion rate between bursts may underestimate the recurrence time when the accretion rate is decreasing, and overestimate it when the accretion rate is increasing. Our model, with an accreted hydrogen fraction of X = 0.44 and a CNO metallicity of ZCNO = 0.02, reproduces the observed burst arrival times and fluences with root mean square (rms) errors of 2.8 h and 0.11 × 10^{-6} erg cm^{-2}, respectively. Our results support previous modelling that predicted two unobserved bursts and indicate that additional bursts were also missed by observations.

  10. The Australian Computational Earth Systems Simulator

    Science.gov (United States)

    Mora, P.; Muhlhaus, H.; Lister, G.; Dyskin, A.; Place, D.; Appelbe, B.; Nimmervoll, N.; Abramson, D.

    2001-12-01

    Numerical simulation of the physics and dynamics of the entire earth system offers an outstanding opportunity for advancing earth system science and technology but represents a major challenge due to the range of scales and physical processes involved, as well as the magnitude of the software engineering effort required. However, new simulation and computer technologies are bringing this objective within reach. Under a special competitive national funding scheme to establish new Major National Research Facilities (MNRF), the Australian government together with a consortium of Universities and research institutions have funded construction of the Australian Computational Earth Systems Simulator (ACcESS). The Simulator or computational virtual earth will provide the research infrastructure to the Australian earth systems science community required for simulations of dynamical earth processes at scales ranging from microscopic to global. It will consist of thematic supercomputer infrastructure and an earth systems simulation software system. The Simulator models and software will be constructed over a five year period by a multi-disciplinary team of computational scientists, mathematicians, earth scientists, civil engineers and software engineers. The construction team will integrate numerical simulation models (3D discrete elements/lattice solid model, particle-in-cell large deformation finite-element method, stress reconstruction models, multi-scale continuum models etc) with geophysical, geological and tectonic models, through advanced software engineering and visualization technologies. When fully constructed, the Simulator aims to provide the software and hardware infrastructure needed to model solid earth phenomena including global scale dynamics and mineralisation processes, crustal scale processes including plate tectonics, mountain building, interacting fault system dynamics, and micro-scale processes that control the geological, physical and dynamic

  11. FPGA-accelerated simulation of computer systems

    CERN Document Server

    Angepat, Hari; Chung, Eric S; Hoe, James C; Chung, Eric S

    2014-01-01

    To date, the most common form of simulators of computer systems are software-based running on standard computers. One promising approach to improve simulation performance is to apply hardware, specifically reconfigurable hardware in the form of field programmable gate arrays (FPGAs). This manuscript describes various approaches of using FPGAs to accelerate software-implemented simulation of computer systems and selected simulators that incorporate those techniques. More precisely, we describe a simulation architecture taxonomy that incorporates a simulation architecture specifically designed f

  12. Discrete event systems in dioid algebra and conventional algebra

    CERN Document Server

    Declerck, Philippe

    2013-01-01

    This book concerns the use of dioid algebra as (max, +) algebra to treat the synchronization of tasks expressed by the maximum of the ends of the tasks conditioning the beginning of another task - a criterion of linear programming. A classical example is the departure time of a train which should wait for the arrival of other trains in order to allow for the changeover of passengers.The content focuses on the modeling of a class of dynamic systems usually called "discrete event systems" where the timing of the events is crucial. Events are viewed as sudden changes in a process which i

  13. Modelling and Simulating multi-echelon food systems

    NARCIS (Netherlands)

    Vorst, van der J.G.A.J.; Beulens, A.J.M.; Beek, van P.

    2000-01-01

    This paper presents a method for modelling the dynamic behaviour of food supply chains and evaluating alternative designs of the supply chain by applying discrete-event simulation. The modelling method is based on the concepts of business processes, design variables at strategic and operational

  14. Simulation of warehousing and distribution systems

    Directory of Open Access Journals (Sweden)

    Drago Pupavac

    2005-08-01

    Full Text Available The modern world abounds in simulation models. Thousands of organizations use simulation models to solve business problems. Problems in micro logistics systems are a very important segment of the business problems that can be solved by a simulation method. In most cases, logistics simulation models should be developed with the purpose of evaluating the performance of the individual value-adding indirect resources of a logistics system, their possibilities and operational advantages, as well as the flow of logistics entities between plants, warehouses, and customers. Accordingly, this paper concisely elaborates the theoretical characteristics of simulation models and the domains in which the simulation approach is best suited in logistics. Special attention is paid to simulation modeling of the warehousing and distribution subsystems of a logistics system, and an example is given of a spreadsheet application for simulating demand for goods from a warehouse. Apart from the induction and deduction methods of simulation modeling, the description method and a method of information modeling are applied.
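The spreadsheet-style demand simulation mentioned above amounts to a simple Monte Carlo inventory model; the demand distribution and reorder policy below are hypothetical, chosen only to show the mechanics:

```python
import random

def simulate_inventory(days, start_stock, reorder_point, order_qty, seed=7):
    """Spreadsheet-style simulation: daily demand is drawn from an
    empirical distribution; stock is replenished when it falls below
    the reorder point. Returns (units short, orders placed)."""
    random.seed(seed)
    demand_levels = [0, 10, 20, 30]     # hypothetical daily demand (units)
    weights = [0.1, 0.4, 0.35, 0.15]    # and its probabilities
    stock, short, orders = start_stock, 0, 0
    for _ in range(days):
        d = random.choices(demand_levels, weights)[0]
        if d > stock:
            short += d - stock          # unmet demand
            stock = 0
        else:
            stock -= d
        if stock < reorder_point:
            stock += order_qty          # immediate replenishment (no lead time)
            orders += 1
    return short, orders

short, orders = simulate_inventory(30, start_stock=50, reorder_point=20, order_qty=60)
print(short, orders)
```

Re-running with different reorder points and order quantities is the simulation-based evaluation the abstract describes, only with code in place of spreadsheet cells.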

  15. The impact of interoperability of electronic health records on ambulatory physician practices: a discrete-event simulation study

    Directory of Open Access Journals (Sweden)

    Yuan Zhou

    2014-02-01

    Full Text Available Background The effect of health information technology (HIT) on efficiency and workload among clinical and nonclinical staff has been debated, with conflicting evidence about whether electronic health records (EHRs) increase or decrease effort. No work to date, however, examines the effect of interoperability quantitatively using discrete event simulation techniques. Objective To estimate the impact of EHR systems with various levels of interoperability on the day-to-day tasks and operations of ambulatory physician offices. Methods Interviews and observations were used to collect workflow data from 12 adult primary and specialty practices. A discrete event simulation model was constructed to represent patient flows and the clinical and administrative tasks of physicians and staff members. Results High levels of EHR interoperability were associated with reduced time spent by providers on four tasks: preparing lab reports, requesting lab orders, prescribing medications, and writing referrals. The implementation of an EHR was associated with less time spent by administrators but more time spent by physicians, compared with time spent at paper-based practices. In addition, the presence of EHRs and of interoperability did not significantly affect the time usage of registered nurses or the total visit time and waiting time of patients. Conclusion This paper suggests that the impact of using HIT on clinical and nonclinical staff work efficiency varies; overall, however, it appears to improve time efficiency more for administrators than for physicians and nurses.

  16. Address-event-based platform for bioinspired spiking systems

    Science.gov (United States)

    Jiménez-Fernández, A.; Luján, C. D.; Linares-Barranco, A.; Gómez-Rodríguez, F.; Rivas, M.; Jiménez, G.; Civit, A.

    2007-05-01

    Address Event Representation (AER) is an emergent neuromorphic interchip communication protocol that allows real-time virtual massive connectivity between a huge number of neurons located on different chips. By exploiting high-speed digital communication circuits (with nanosecond timings), synaptic neural connections can be time multiplexed, while neural activity signals (with millisecond timings) are sampled at low frequencies. Neurons generate "events" according to their activity levels: more active neurons generate more events per unit time and access the interchip communication channel more frequently, while neurons with low activity consume less communication bandwidth. When building multi-chip, multi-layered AER systems, it is absolutely necessary to have a computer interface that allows (a) reading AER interchip traffic into the computer and visualizing it on the screen, and (b) converting a conventional frame-based video stream in the computer into AER and injecting it at some point of the AER structure. This is necessary for testing and debugging complex AER systems. On the other hand, the use of a commercial personal computer implies depending on software tools and operating systems that can make the system slower and less robust. This paper addresses the problem of communicating several AER-based chips to compose a powerful processing system. The problem was discussed in the Neuromorphic Engineering Workshop of 2006. The platform is based on an embedded computer, a powerful FPGA and serial links, to make the system faster and stand-alone (independent from a PC). A new platform is presented that allows connecting up to eight AER-based chips to a Spartan 3 4000 FPGA. The FPGA is responsible for the network communication based on Address-Event and, at the same time, for mapping and transforming the address space of the traffic to implement pre-processing. An MMU microprocessor (Intel XScale 400 MHz Gumstix Connex computer) is also connected to the FPGA.
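The address-event idea — neuron identity in the address bits, activity level in the event rate — can be sketched with a hypothetical address layout (the real platform's bit assignment is not specified in the abstract):

```python
def encode_event(x, y, polarity):
    """Pack a neuron address into a single AER word:
    bits [15:8] = x, [7:1] = y, [0] = polarity (illustrative layout)."""
    return (x << 8) | (y << 1) | polarity

def decode_event(word):
    """Unpack an AER word back into (x, y, polarity)."""
    return (word >> 8) & 0xFF, (word >> 1) & 0x7F, word & 0x1

# A more active neuron simply appears more often on the shared channel —
# its activity is carried by event frequency, not by a sampled value.
stream = [encode_event(3, 17, 1), encode_event(3, 17, 1), encode_event(9, 5, 0)]
print([decode_event(w) for w in stream])
```

Remapping the address space (the FPGA's pre-processing role described above) is then just a transformation applied to each decoded tuple before re-encoding it onto the output link.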

  17. Core discrete event simulation model for the evaluation of health care technologies in major depressive disorder.

    Science.gov (United States)

    Vataire, Anne-Lise; Aballéa, Samuel; Antonanzas, Fernando; Roijen, Leona Hakkaart-van; Lam, Raymond W; McCrone, Paul; Persson, Ulf; Toumi, Mondher

    2014-03-01

    A review of existing economic models in major depressive disorder (MDD) highlighted the need for models with longer time horizons that also account for heterogeneity in treatment pathways between patients. A core discrete event simulation model was developed to estimate health and cost outcomes associated with alternative treatment strategies. This model simulated short- and long-term clinical events (partial response, remission, relapse, recovery, and recurrence), adverse events, and treatment changes (titration, switch, addition, and discontinuation) over up to 5 years. Several treatment pathways were defined on the basis of fictitious antidepressants with three levels of efficacy, tolerability, and price (low, medium, and high) from first line to third line. The model was populated with input data from the literature for the UK setting. Model outputs include time in different health states, quality-adjusted life-years (QALYs), and costs from National Health Service and societal perspectives. The codes are open source. Predicted costs and QALYs from this model are within the range of results from previous economic evaluations. The largest cost components from the payer perspective were physician visits and hospitalizations. Key parameters driving the predicted costs and QALYs were utility values, effectiveness, and frequency of physician visits. Differences in QALYs and costs between two strategies with different effectiveness increased approximately twofold when the time horizon increased from 1 to 5 years. The discrete event simulation model can provide a more comprehensive evaluation of different therapeutic options in MDD, compared with existing Markov models, and can be used to compare a wide range of health care technologies in various groups of patients with MDD. Copyright © 2014 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.

  18. A General Simulation Framework for Supply Chain Modeling: State of the Art and Case Study

    OpenAIRE

    Antonio Cimino; Francesco Longo; Giovanni Mirabelli

    2010-01-01

    Nowadays there is a large availability of discrete event simulation software that can be easily used in different domains: from industry to supply chain, from healthcare to business management, from training to complex systems design. Simulation engines of commercial discrete event simulation software use specific rules and logic for simulation time and event management. Difficulties and limitations come up when commercial discrete event simulation software is used for modeling complex rea...

  19. Multi-agent systems simulation and applications

    CERN Document Server

    Uhrmacher, Adelinde M

    2009-01-01

    Methodological Guidelines for Modeling and Developing MAS-Based SimulationsThe intersection of agents, modeling, simulation, and application domains has been the subject of active research for over two decades. Although agents and simulation have been used effectively in a variety of application domains, much of the supporting research remains scattered in the literature, too often leaving scientists to develop multi-agent system (MAS) models and simulations from scratch. Multi-Agent Systems: Simulation and Applications provides an overdue review of the wide ranging facets of MAS simulation, i

  20. HVDC System Characteristics and Simulation Models

    Energy Technology Data Exchange (ETDEWEB)

    Moon, S.I.; Han, B.M.; Jang, G.S. [Electric Engineering and Science Research Institute, Seoul (Korea)

    2001-07-01

    This report deals with the AC-DC power system simulation method by PSS/E and EUROSTAG for the development of a strategy for the reliable operation of the Cheju-Haenam interconnected system. The simulation using both programs is performed to analyze HVDC simulation models. In addition, the control characteristics of the Cheju-Haenam HVDC system as well as Cheju AC system characteristics are described in this work. (author). 104 figs., 8 tabs.

  1. The effects of indoor environmental exposures on pediatric asthma: a discrete event simulation model

    Directory of Open Access Journals (Sweden)

    Fabian M Patricia

    2012-09-01

    Full Text Available Abstract Background In the United States, asthma is the most common chronic disease of childhood across all socioeconomic classes and is the most frequent cause of hospitalization among children. Asthma exacerbations have been associated with exposure to residential indoor environmental stressors such as allergens and air pollutants, as well as numerous additional factors. Simulation modeling is a valuable tool that can be used to evaluate interventions for complex multifactorial diseases such as asthma, but in spite of its flexibility and applicability, modeling applications to either environmental exposures or asthma have been limited to date. Methods We designed a discrete event simulation model to study the effect of environmental factors on asthma exacerbations in school-age children living in low-income multi-family housing. Model outcomes include asthma symptoms, medication use, hospitalizations, and emergency room visits. Environmental factors were linked to percent predicted forced expiratory volume in 1 second (FEV1%), which in turn was linked to risk equations for each outcome. Exposures affecting FEV1% included indoor and outdoor sources of NO2 and PM2.5, cockroach allergen, and dampness as a proxy for mold. Results Model design parameters and equations are described in detail. We evaluated the model by simulating 50,000 children over 10 years and showed that pollutant concentrations and health outcome rates are comparable to values reported in the literature. In an application example, we simulated what would happen if the kitchen and bathroom exhaust fans were improved for the entire cohort, and showed reductions in pollutant concentrations and healthcare utilization rates. Conclusions We describe the design and evaluation of a discrete event simulation model of pediatric asthma for children living in low-income multi-family housing. Our model simulates the effect of environmental factors (combustion pollutants and allergens

  2. Module-based Simulation System for efficient development of nuclear simulation programs

    International Nuclear Information System (INIS)

    Yoshikawa, Hidekazu; Wakabayashi, Jiro

    1990-01-01

    Module-based Simulation System (MSS) has been developed to realize a new software environment enabling versatile dynamic simulation of a complex nuclear power plant system flexibly. Described in the paper are (i) the fundamental methods utilized in MSS and its software systemization, (ii) development of a human interface system that helps users generate integrated simulation programs automatically, and (iii) development of an intelligent user support system that helps users in the two phases of automatic semantic diagnosis and consultation for automatic input data setup for the MSS-generated programs. (author)

  3. A Multi-Agent Approach to the Simulation of Robotized Manufacturing Systems

    Science.gov (United States)

    Foit, K.; Gwiazda, A.; Banaś, W.

    2016-08-01

    The recent years of eventful industry development have brought many competing products addressed to the same market segment, and shortening the development cycle has become a necessity for any company that wants to remain competitive. With the switch to the Intelligent Manufacturing model, industry is searching for new scheduling algorithms, as the traditional ones no longer meet current requirements. The agent-based approach has been considered by many researchers as an important direction in the evolution of modern manufacturing systems. Owing to the properties of multi-agent systems, this methodology is very helpful in creating models of production systems, allowing both the processing and the informational parts to be depicted. The complexity of such an approach makes analysis impossible without computer assistance. Computer simulation still uses a mathematical model to recreate a real situation, but nowadays 2D or 3D virtual environments, or even virtual reality, are used for realistic illustration of the considered systems. This paper focuses on robotized manufacturing systems and presents one possible approach to the simulation of such systems. The selection of the multi-agent approach is motivated by the flexibility of this solution, which offers modularity, robustness, and autonomy.

  4. Simulating neural systems with Xyce.

    Energy Technology Data Exchange (ETDEWEB)

    Schiek, Richard Louis; Thornquist, Heidi K.; Mei, Ting; Warrender, Christina E.; Aimone, James Bradley; Teeter, Corinne; Duda, Alex M.

    2012-12-01

    Sandia's parallel circuit simulator, Xyce, can address large-scale neuron simulations in a new way, extending the range within which one can perform high-fidelity, multi-compartment neuron simulations. This report documents the implementation of neuron devices in Xyce and their use in the simulation and analysis of neuron systems.

  5. Simulation of extreme rainfall event of November 2009 over Jeddah, Saudi Arabia: the explicit role of topography and surface heating

    Science.gov (United States)

    Almazroui, Mansour; Raju, P. V. S.; Yusef, A.; Hussein, M. A. A.; Omar, M.

    2018-04-01

    In this paper, the nonhydrostatic Weather Research and Forecasting (WRF) model has been used to simulate the extreme precipitation event of 25 November 2009 over Jeddah, Saudi Arabia. The model is integrated in three nested domains (27, 9, and 3 km) with the initial and boundary forcing derived from the NCEP reanalysis datasets. As a control experiment, the model was integrated for 48 h, initialized at 0000 UTC on 24 November 2009. The simulated rainfall in the control experiment is in good agreement with Tropical Rainfall Measuring Mission rainfall estimates in terms of intensity as well as spatio-temporal distribution. Results indicate that a strong low-level (850 hPa) wind over Jeddah and surrounding regions enhanced the moisture and temperature gradients and created a conditionally unstable atmosphere that favored the development of the mesoscale system. To investigate the influence of topography and of surface heat exchange with the atmosphere on the development of the extreme precipitation event, two sensitivity experiments were carried out: one without topography and another without exchange of surface heating to the atmosphere. The results show that both surface heating and topography played a crucial role in determining the spatial distribution and intensity of the extreme rainfall over Jeddah. The topography favored enhanced uplift, which further strengthened the low-level jet and hence the rainfall over Jeddah and adjacent areas. On the other hand, the absence of surface heating reduced the simulated rainfall by about 30% compared with the observations.

  6. A simulation study of capacity utilization to predict future capacity for manufacturing system sustainability

    Science.gov (United States)

    Rimo, Tan Hauw Sen; Chai Tin, Ong

    2017-12-01

    Capacity utilization (CU) measurement is an important task in a manufacturing system, especially in a make-to-order (MTO) manufacturing system with product customization, for predicting the capacity to meet future demand. A stochastic discrete-event simulation was developed in ARENA software to determine CU and the capacity gap (CG) under a short-run production function. This study focused on machinery breakdowns and the product defect rate as the random variables in the simulation. The study found that the manufacturing system runs at 68.01% CU with a 31.99% CG, and that machinery breakdowns and the product defect rate have a direct relationship with CU. Improving the product defect rate to zero defects raises CU to 73.56% and lowers CG to 26.44%, while eliminating machinery breakdowns raises CU to 93.99% and lowers CG to 6.01%. This study helps the operations level to study CU using "what-if" analysis, so as to meet future demand in a more practical and easier way through simulation. Further study is recommended that includes other random variables affecting CU, to bring the simulation closer to the real-life situation for better decisions.
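As a rough sketch of this kind of "what-if" analysis, the following Monte Carlo fragment estimates CU and CG for a single machine subject to random breakdowns and a defect rate. The MTBF, MTTR, and defect-rate figures are invented, not the paper's ARENA inputs.

```python
import random

def capacity_utilization(hours=10_000.0, mtbf=100.0, mttr=8.0,
                         defect_rate=0.05, seed=1):
    """Monte Carlo estimate of capacity utilization (CU) and capacity gap (CG)
    for one machine with random breakdowns and a product defect rate.
    All figures are illustrative placeholders."""
    rng = random.Random(seed)
    t, uptime = 0.0, 0.0
    while t < hours:
        run = min(rng.expovariate(1.0 / mtbf), hours - t)  # time to breakdown
        uptime += run
        t += run + rng.expovariate(1.0 / mttr)             # add repair downtime
    cu = (uptime / hours) * (1.0 - defect_rate)  # defective output wastes capacity
    return cu, 1.0 - cu

cu, cg = capacity_utilization()
```

Lowering `defect_rate` toward zero reproduces the style of zero-defect improvement scenario the study reports.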

  7. News and Events - Nanodelivery Systems and Devices Branch

    Science.gov (United States)

    The latest news from the Nanodelivery Systems and Devices Branch and the Alliance, as well as upcoming and past events attended by Nanodelivery Systems and Devices Branch staff, and relevant upcoming scientific meetings.

  8. EVENT, Explosive Transients in Flow Networks

    International Nuclear Information System (INIS)

    Andrae, R.W.; Tang, P.K.; Bolstad, J.W.; Gregory, W.S.

    1985-01-01

    1 - Description of problem or function: A major concern of the chemical, nuclear, and mining industries is the occurrence of an explosion in one part of a facility and subsequent transmission of explosive effects through the ventilation system. An explosive event can cause performance degradation of the ventilation system or even structural failures. A more serious consequence is the release of hazardous materials to the environment if vital protective devices such as air filters, are damaged. EVENT was developed to investigate the effects of explosive transients through fluid-flow networks. Using the principles of fluid mechanics and thermodynamics, governing equations for the conservation of mass, energy, and momentum are formulated. These equations are applied to the complete network subdivided into two general components: nodes and branches. The nodes represent boundaries and internal junctions where the conservation of mass and energy applies. The branches can be ducts, valves, blowers, or filters. Since in EVENT the effect of the explosion, not the characteristics of the explosion itself, is of interest, the transient is simulated in the simplest possible way. A rapid addition of mass and energy to the system at certain locations is used. This representation is adequate for all of the network except the region where the explosion actually occurs. EVENT84 is a modification of EVENT which includes a new explosion chamber model subroutine based on the NOL BLAST program developed at the Naval Ordnance Laboratory, Silver Spring, Maryland. This subroutine calculates the confined explosion near-field parameters and supplies the time functions of energy and mass injection. Solid-phase or TNT-equivalent explosions (which simulate 'point source' explosions in nuclear facilities) as well as explosions in gas-air mixtures can be simulated. The four types of explosions EVENT84 simulates are TNT, hydrogen in air, acetylene in air, and tributyl phosphate (TBP or 'red oil

  9. Modelling machine ensembles with discrete event dynamical system theory

    Science.gov (United States)

    Hunter, Dan

    1990-01-01

    Discrete Event Dynamical System (DEDS) theory can be utilized as a control strategy for future complex machine ensembles that will be required for in-space construction. The control strategy involves orchestrating a set of interactive submachines to perform a set of tasks under a given set of constraints such as minimum time, minimum energy, or maximum machine utilization. Machine ensembles can be hierarchically modeled as a global model that combines the operations of the individual submachines. These submachines are represented in the global model as local models. Local models, from the perspective of DEDS theory, are described by the following: a set of system and transition states, an event alphabet that portrays the actions that take a submachine from one state to another, an initial system state, a partial function that maps the current state and event alphabet to the next state, and the time required for the event to occur. Each submachine in the machine ensemble is represented by a unique local model. The global model combines the local models such that the local models can operate in parallel under the additional logistic and physical constraints due to submachine interactions. The global model is constructed from the states, events, event functions, and timing requirements of the local models. Supervisory control can be implemented in the global model by various methods such as task scheduling (open-loop control) or implementing a feedback DEDS controller (closed-loop control).
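The local-model ingredients listed above (states, event alphabet, partial transition function, initial state, and event durations) can be written down directly. The grasp/release submachine below is a hypothetical example, not one from the article.

```python
class LocalModel:
    """A timed DEDS local model: states, an event alphabet, a partial
    transition function, per-event durations, and an initial state."""
    def __init__(self, states, alphabet, delta, durations, initial):
        self.states, self.alphabet = set(states), set(alphabet)
        self.delta = delta          # partial map: (state, event) -> next state
        self.durations = durations  # map: event -> time required for the event
        self.state, self.clock = initial, 0.0

    def fire(self, event):
        key = (self.state, event)
        if event not in self.alphabet or key not in self.delta:
            raise ValueError(f"event {event!r} not enabled in state {self.state!r}")
        self.state = self.delta[key]
        self.clock += self.durations[event]
        return self.state

# A hypothetical two-state grasp/release submachine.
arm = LocalModel(states={"idle", "holding"},
                 alphabet={"grasp", "release"},
                 delta={("idle", "grasp"): "holding",
                        ("holding", "release"): "idle"},
                 durations={"grasp": 2.0, "release": 1.0},
                 initial="idle")
arm.fire("grasp")
arm.fire("release")   # back to "idle"; arm.clock is now 3.0
```

A global model would compose several such local models and restrict their interleavings with interaction constraints; supervisory control then amounts to deciding which enabled events to fire.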

  10. Integrating Continuous-Time and Discrete-Event Concepts in Process Modelling, Simulation and Control

    NARCIS (Netherlands)

    Beek, van D.A.; Gordijn, S.H.F.; Rooda, J.E.; Ertas, A.

    1995-01-01

    Currently, modelling of systems in the process industry requires the use of different specification languages for the specification of the discrete-event and continuous-time subsystems. In this way, models are restricted to individual subsystems of either a continuous-time or discrete-event nature.

  11. A Risk Assessment System with Automatic Extraction of Event Types

    Science.gov (United States)

    Capet, Philippe; Delavallade, Thomas; Nakamura, Takuya; Sandor, Agnes; Tarsitano, Cedric; Voyatzi, Stavroula

    In this article we describe the joint effort of experts in linguistics, information extraction and risk assessment to integrate EventSpotter, an automatic event extraction engine, into ADAC, an automated early warning system. By detecting as early as possible weak signals of emerging risks ADAC provides a dynamic synthetic picture of situations involving risk. The ADAC system calculates risk on the basis of fuzzy logic rules operated on a template graph whose leaves are event types. EventSpotter is based on a general purpose natural language dependency parser, XIP, enhanced with domain-specific lexical resources (Lexicon-Grammar). Its role is to automatically feed the leaves with input data.

  12. Logical Discrete Event Systems in a trace theory based setting

    NARCIS (Netherlands)

    Smedinga, R.

    1993-01-01

    Discrete event systems can be modelled using a triple consisting of some alphabet (representing the events that might occur), and two trace sets (sets of possible strings) denoting the possible behaviour and the completed tasks of the system. Using this definition we are able to formulate and solve
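The triple described here is easy to render concretely. The toy `machine` below and the well-formedness checks (completed tasks are possible behaviours; the behaviour set is prefix-closed) are illustrative assumptions, not definitions from the paper.

```python
def prefixes(trace):
    """All prefixes of a trace whose events are space-separated symbols."""
    events = trace.split()
    return {" ".join(events[:i]) for i in range(len(events) + 1)}

# Hypothetical one-machine system modelled as the triple described above.
machine = {
    "alphabet": {"start", "finish"},
    "behaviour": {"", "start", "start finish"},  # possible behaviour (strings)
    "tasks": {"", "start finish"},               # completed tasks
}

def well_formed(des):
    """Completed tasks must be possible behaviours, and the behaviour set
    must be prefix-closed (every partial run is itself a behaviour)."""
    prefix_closed = all(prefixes(t) <= des["behaviour"] for t in des["behaviour"])
    return des["tasks"] <= des["behaviour"] and prefix_closed
```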

  13. Web-based online system for recording and examining of events in power plants

    International Nuclear Information System (INIS)

    Seyd Farshi, S.; Dehghani, M.

    2004-01-01

    Occurrence of events in power plants can result in serious drawbacks in power generation, which makes online recording and examining of events highly important. In this paper an online web-based system is introduced, which records and examines events in power plants. Throughout the paper, the procedures for the design and implementation of this system, its features, and the results gained are explained. The system provides predefined levels of online access to all event data for all its users in power plants, dispatching centers, regional utilities, and top-level management. By implementation over the electric power industry intranet, an expandable modular system to be used in different sectors of the industry is offered. The web-based online recording and examining system for events offers the following advantages: online recording of events in power plants; examining of events in regional utilities; access to event data; and preparation of managerial reports.

  14. Supply chain simulation tools and techniques: a survey

    NARCIS (Netherlands)

    Kleijnen, J.P.C.

    2005-01-01

    The main contribution of this paper is twofold: it surveys different types of simulation for supply chain management; it discusses several methodological issues. These different types of simulation are spreadsheet simulation, system dynamics, discrete-event simulation and business games. Which

  15. Modeling and simulation of large HVDC systems

    Energy Technology Data Exchange (ETDEWEB)

    Jin, H.; Sood, V.K.

    1993-01-01

    This paper addresses the complexity of and the work involved in preparing simulation data and implementing various converter control schemes, as well as the excessive simulation time involved in the modelling and simulation of large HVDC systems. The Power Electronic Circuit Analysis program (PECAN) is used to address these problems, and a large HVDC system with two dc links is simulated using PECAN. A benchmark HVDC system is studied to compare the simulation results with those from other packages. The simulation time and results are provided in the paper.

  16. A Discrete Event Simulation Model for Evaluating the Performances of an M/G/C/C State Dependent Queuing System

    Science.gov (United States)

    Khalid, Ruzelan; M. Nawawi, Mohd Kamal; Kawsar, Luthful A.; Ghani, Noraida A.; Kamil, Anton A.; Mustafa, Adli

    2013-01-01

    M/G/C/C state-dependent queuing networks consider service rates as a function of the number of residing entities (e.g., pedestrians, vehicles, and products). However, modeling such dynamic rates is not supported in modern discrete event simulation (DES) software. We designed an approach to cater for this limitation and used it to construct an M/G/C/C state-dependent queuing model in Arena software. Using the model, we evaluated and analyzed the impacts of various arrival rates on the throughput, the blocking probability, the expected service time, and the expected number of entities in a complex network topology. Results indicated that for each network there is a range of arrival rates where the simulation results fluctuate drastically across replications, causing the simulation results and analytical results to exhibit discrepancies. Detailed results showing how closely the simulation results tally with the analytical results, in both tabular and graphical forms, together with some scientific justifications, have been documented and discussed. PMID:23560037
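A state-dependent queue of this kind can be sketched outside Arena as well. The snippet below simulates a Markovian special case in which each of the n resident entities is served at a hypothetical congestion-dependent rate mu(n); a true M/G/C/C model would allow general service distributions, and the arrival rate and congestion curve here are invented.

```python
import random

def mgcc_blocking(lam=2.0, c=20, horizon=10_000.0, seed=7):
    """Estimate the blocking probability of a state-dependent queue with c
    spaces: each of the n resident entities is served at rate mu(n), which
    falls with crowding. Markovian sketch, not a full M/G/C/C model."""
    rng = random.Random(seed)
    mu = lambda n: 1.0 / (1.0 + 0.05 * n)      # hypothetical congestion curve
    t, n, arrivals, blocked = 0.0, 0, 0, 0
    while t < horizon:
        total = lam + n * mu(n)                # competing exponential clocks
        t += rng.expovariate(total)
        if rng.random() < lam / total:         # next event is an arrival
            arrivals += 1
            if n == c:
                blocked += 1                   # network is full: entity lost
            else:
                n += 1
        else:                                  # next event: service completion
            n -= 1
    return blocked / arrivals
```

Sweeping `lam` and re-running with different seeds reproduces, in miniature, the replication-to-replication variability the paper analyzes near critical arrival rates.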

  17. Multiscale models and stochastic simulation methods for computing rare but key binding events in cell biology

    Energy Technology Data Exchange (ETDEWEB)

    Guerrier, C. [Applied Mathematics and Computational Biology, IBENS, Ecole Normale Supérieure, 46 rue d' Ulm, 75005 Paris (France); Holcman, D., E-mail: david.holcman@ens.fr [Applied Mathematics and Computational Biology, IBENS, Ecole Normale Supérieure, 46 rue d' Ulm, 75005 Paris (France); Mathematical Institute, Oxford OX2 6GG, Newton Institute (United Kingdom)

    2017-07-01

    The main difficulty in simulating diffusion processes at a molecular level in cell microdomains is due to the multiple scales involved, from nano- to micrometers. Few to many particles have to be simulated and simultaneously tracked while they are exploring a large portion of the space in search of small binding targets, such as buffers or active sites. Bridging the small and large spatial scales is achieved by rare events, representing Brownian particles finding small targets, that are characterized by long-time distributions. These rare events are the bottleneck of numerical simulations. A naive stochastic simulation requires running many Brownian particles together, which is computationally greedy and inefficient. Solving the associated partial differential equations is also difficult due to the time-dependent boundary conditions, narrow passages, and mixed boundary conditions at small windows. We present here two reduced modeling approaches for a fast computation of diffusing fluxes in microdomains. The first approach is based on Markov mass-action law equations coupled to a Markov chain. The second is a Gillespie method based on the narrow escape theory for coarse-graining the geometry of the domain into Poissonian rates. The main application concerns diffusion in cellular biology, where we compute as an example the distribution of arrival times of calcium ions to small hidden targets to trigger vesicular release.
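The second approach can be illustrated with a toy Gillespie draw: once narrow escape theory has reduced the geometry to a Poissonian binding rate per particle, successive binding events are just exponential waiting times with a propensity proportional to the number of unbound particles. The rate constant below is an assumed placeholder, not one derived from any real geometry.

```python
import random

def binding_times(n_particles=200, k_bind=0.001, seed=3):
    """Gillespie-style draw of successive binding times: each unbound particle
    binds the target at an assumed Poissonian rate k_bind, so the waiting time
    to the next binding is exponential with propensity (remaining * k_bind)."""
    rng = random.Random(seed)
    t, times, remaining = 0.0, [], n_particles
    while remaining:
        t += rng.expovariate(remaining * k_bind)  # exponential waiting time
        times.append(t)                           # one more particle has bound
        remaining -= 1
    return times
```

Histogramming `times[0]` over many seeds gives the first-arrival-time distribution, the quantity of interest for triggering events such as vesicular release.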

  18. Multiscale models and stochastic simulation methods for computing rare but key binding events in cell biology

    International Nuclear Information System (INIS)

    Guerrier, C.; Holcman, D.

    2017-01-01

    The main difficulty in simulating diffusion processes at a molecular level in cell microdomains is due to the multiple scales involved, from nano- to micrometers. Few to many particles have to be simulated and simultaneously tracked while they are exploring a large portion of the space in search of small binding targets, such as buffers or active sites. Bridging the small and large spatial scales is achieved by rare events, representing Brownian particles finding small targets, that are characterized by long-time distributions. These rare events are the bottleneck of numerical simulations. A naive stochastic simulation requires running many Brownian particles together, which is computationally greedy and inefficient. Solving the associated partial differential equations is also difficult due to the time-dependent boundary conditions, narrow passages, and mixed boundary conditions at small windows. We present here two reduced modeling approaches for a fast computation of diffusing fluxes in microdomains. The first approach is based on Markov mass-action law equations coupled to a Markov chain. The second is a Gillespie method based on the narrow escape theory for coarse-graining the geometry of the domain into Poissonian rates. The main application concerns diffusion in cellular biology, where we compute as an example the distribution of arrival times of calcium ions to small hidden targets to trigger vesicular release.

  19. Simulating spontaneous aseismic and seismic slip events on evolving faults

    Science.gov (United States)

    Herrendörfer, Robert; van Dinther, Ylona; Pranger, Casper; Gerya, Taras

    2017-04-01

    Plate motion along tectonic boundaries is accommodated by different slip modes: steady creep, seismic slip, and slow slip transients. Due mainly to indirect observations and difficulties in scaling results from laboratory experiments to nature, it remains enigmatic which fault conditions favour certain slip modes. Therefore, we are developing a numerical modelling approach that is capable of simulating different slip modes together with the long-term fault evolution in a large-scale tectonic setting. We extend the 2D, continuum mechanics-based, visco-elasto-plastic thermo-mechanical model that was designed to simulate slip transients in large-scale geodynamic simulations (van Dinther et al., JGR, 2013). We improve the numerical approach to accurately treat the non-linear problem of plasticity (see also EGU 2017 abstract by Pranger et al.). To resolve a wide slip-rate spectrum on evolving faults, we develop an invariant reformulation of the conventional rate-and-state dependent friction (RSF) and adapt the time step (Lapusta et al., JGR, 2000). A crucial part of this development is a conceptual ductile fault zone model that relates slip rates along discrete planes to the effective macroscopic plastic strain rates in the continuum. We first test our implementation in a simple 2D setup with a single fault zone that has a predefined initial thickness. Results show that, for steady creep and very slow slip transients, deformation localizes to a bell-shaped strain rate profile across the fault zone, which suggests that a length scale across the fault zone may exist. This continuum length scale would overcome the common mesh-dependency in plasticity simulations and question the conventional treatment of aseismic slip on infinitely thin fault zones. We test the introduction of a diffusion term (similar to the damage description in Lyakhovsky et al., JMPS, 2011) into the state evolution equation and its effect on (de-)localization during faster slip events. We compare

  20. A Multiprocessor Operating System Simulator

    Science.gov (United States)

    Johnston, Gary M.; Campbell, Roy H.

    1988-01-01

    This paper describes a multiprocessor operating system simulator that was developed by the authors in the Fall semester of 1987. The simulator was built in response to the need to provide students with an environment in which to build and test operating system concepts as part of the coursework of a third-year undergraduate operating systems course. Written in C++, the simulator uses the co-routine style task package that is distributed with the AT&T C++ Translator to provide a hierarchy of classes that represents a broad range of operating system software and hardware components. The class hierarchy closely follows that of the 'Choices' family of operating systems for loosely- and tightly-coupled multiprocessors. During an operating system course, these classes are refined and specialized by students in homework assignments to facilitate experimentation with different aspects of operating system design and policy decisions. The current implementation runs on the IBM RT PC under 4.3bsd UNIX.

  1. Improvements to information management systems simulator

    Science.gov (United States)

    Bilek, R. W.

    1972-01-01

    The performance of personnel in the augmentation and improvement of the interactive IMSIM information management simulation model is summarized. With this augmented model, NASA now has even greater capabilities for the simulation of computer system configurations, data processing loads imposed on these configurations, and executive software to control system operations. Through these simulations, NASA has an extremely cost effective capability for the design and analysis of computer-based data management systems.

  2. The Monte Carlo event generator DPMJET-III

    International Nuclear Information System (INIS)

    Roesler, S.; Engel, R.

    2001-01-01

    A new version of the Monte Carlo event generator DPMJET is presented. It is a code system based on the Dual Parton Model and unifies all features of the DTUNUC-2, DPMJET-II and PHOJET1.12 event generators. DPMJET-III allows the simulation of hadron-hadron, hadron-nucleus, nucleus-nucleus, photon-hadron, photon-photon and photon-nucleus interactions from a few GeV up to the highest cosmic ray energies. (orig.)

  3. Hybrid simulation models of production networks

    CERN Document Server

    Kouikoglou, Vassilis S

    2001-01-01

    This book is concerned with a most important area of industrial production, that of analysis and optimization of production lines and networks using discrete-event models and simulation. The book introduces a novel approach that combines analytic models and discrete-event simulation. Unlike conventional piece-by-piece simulation, this method observes a reduced number of events between which the evolution of the system is tracked analytically. Using this hybrid approach, several models are developed for the analysis of production lines and networks. The hybrid approach combines speed and accuracy for exceptional analysis of most practical situations. A number of optimization problems, involving buffer design, workforce planning, and production control, are solved through the use of hybrid models.
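The hybrid idea of tracking the system analytically between events can be shown in miniature: below, the only discrete events are failures and repairs of a hypothetical upstream machine, and between them a fluid buffer level is advanced in closed form (linear flow) rather than piece by piece. All rates are illustrative, and this is far simpler than the book's production-network models.

```python
import random

def hybrid_buffer_sim(horizon=1_000.0, rate_in=2.0, rate_out=1.5,
                      mtbf=50.0, mttr=5.0, seed=4):
    """Hybrid simulation sketch: discrete failure/repair events of the
    upstream machine, with the buffer level updated analytically as linear
    fluid flow between events. All parameters are invented."""
    rng = random.Random(seed)
    t, level, up = 0.0, 0.0, True
    while t < horizon:
        dwell = min(rng.expovariate(1.0 / (mtbf if up else mttr)), horizon - t)
        net = (rate_in - rate_out) if up else -rate_out  # net fluid flow rate
        level = max(0.0, level + net * dwell)            # closed-form update
        t += dwell
        up = not up                                      # fail or repair event
    return level
```

Because each inter-event interval is resolved in one analytic step, the number of simulation steps scales with the number of failures, not with the number of parts processed.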

  4. Development of the simulation system "IMPACT" for analysis of nuclear power plant severe accidents

    Energy Technology Data Exchange (ETDEWEB)

    Naitoh, Masanori; Ujita, Hiroshi; Nagumo, Hiroichi [Nuclear Power Corp. (Japan)] [and others]

    1997-07-01

    The Nuclear Power Engineering Corporation (NUPEC) has initiated a long-term program to develop the simulation system "IMPACT" for analysis of hypothetical severe accidents in nuclear power plants. IMPACT employs advanced methods of physical modeling and numerical computation, and can simulate a wide spectrum of scenarios ranging from normal operation to hypothetical, beyond-design-basis-accident events. Designed as a large-scale system of interconnected, hierarchical modules, IMPACT's distinguishing features include mechanistic models based on first principles and high-speed simulation on parallel processing computers. The present plan is a ten-year program starting from 1993, consisting of an initial one-year preparatory phase followed by three technical phases: Phase-1 for development of a prototype system; Phase-2 for completion of the simulation system, incorporating new achievements from basic studies; and Phase-3 for refinement through extensive verification and validation against test results and available real plant data.

  5. Building a parallel file system simulator

    International Nuclear Information System (INIS)

    Molina-Estolano, E; Maltzahn, C; Brandt, S A; Bent, J

    2009-01-01

    Parallel file systems are gaining in popularity in high-end computing centers as well as commercial data centers. High-end computing systems are expected to scale exponentially and to pose new challenges to their storage scalability in terms of cost and power. To address these challenges, scientists and file system designers will need a thorough understanding of the design space of parallel file systems. Yet there exist few systematic studies of parallel file system behavior at petabyte and exabyte scale. An important reason is the significant cost of getting access to large-scale hardware to test parallel file systems. To contribute to this understanding, we are building a parallel file system simulator that can simulate parallel file systems at very large scale. Our goal is to simulate petabyte-scale parallel file systems on a small cluster or even a single machine in reasonable time and with reasonable fidelity. With this simulator, file system experts will be able to tune existing file systems for specific workloads, scientists and file system deployment engineers will be able to better communicate workload requirements, file system designers and researchers will be able to try out design alternatives and innovations at scale, and instructors will be able to study very large-scale parallel file system behavior in the classroom. In this paper we describe our approach and provide preliminary results that are encouraging both in terms of fidelity and simulation scalability.

  6. Classification of Single-Trial Auditory Events Using Dry-Wireless EEG During Real and Motion Simulated Flight

    Directory of Open Access Journals (Sweden)

    Daniel eCallan

    2015-02-01

    Full Text Available Application of neuro-augmentation technology based on dry-wireless EEG may be considerably beneficial for aviation and space operations because of the inherent dangers involved. In this study we evaluate the classification performance of perceptual events using a dry-wireless EEG system during motion-platform-based flight simulation and actual flight in an open-cockpit biplane, to determine whether the system can be used in the presence of considerable environmental and physiological artifacts. A passive task involving 200 random auditory presentations of a chirp sound was used for evaluation; the advantage of this auditory task is that it does not interfere with the perceptual-motor processes involved in piloting the plane. Classification was based on identifying the presentation of a chirp sound versus silent periods. Independent component analysis and Kalman filtering were assessed as means of enhancing classification performance by extracting brain activity related to the auditory event from other, non-task-related brain activity and artifacts. Permutation testing revealed that single-trial classification of the presence or absence of an auditory event was significantly above chance for all conditions on a novel test set. The best performance was achieved with both ICA and Kalman filtering relative to no processing: Platform Off (83.4% vs. 78.3%), Platform On (73.1% vs. 71.6%), Biplane Engine Off (81.1% vs. 77.4%), and Biplane Engine On (79.2% vs. 66.1%). This experiment demonstrates that dry-wireless EEG can be used in environments with considerable vibration, wind, acoustic noise, and physiological artifacts, and can achieve the good single-trial classification performance necessary for future successful application of neuro-augmentation technology based on brain-machine interfaces.

  7. Simulation of the Tornado Event of 22 March, 2013 over ...

    Indian Academy of Sciences (India)

    2013-03-22

    Mar 22, 2013 ... nagar and Akhaura upazila of Brahmanbaria district (DMIC 2013). Other damage from this tornado event included the collapse of electric lines and poles, boundary walls, entrance gates, and communication systems, as well as the breaking of numerous trees. The location of Brahmanbaria (23.95.

  8. Multi-objective optimisation with stochastic discrete-event simulation in retail banking: a case study

    Directory of Open Access Journals (Sweden)

    E Scholtz

    2012-12-01

    Full Text Available The cash management of an automated teller machine (ATM) is a multi-objective optimisation problem which aims to maximise the service level provided to customers at minimum cost. This paper focuses on improved cash management in a section of the South African retail banking industry, for which a decision support system (DSS) was developed. This DSS integrates four Operations Research (OR) methods: the vehicle routing problem (VRP), the continuous review policy for inventory management, the knapsack problem, and stochastic discrete-event simulation. The DSS was applied to an ATM network in the Eastern Cape, South Africa, to investigate 90 different scenarios. Results show that a formal vehicle routing method, in conjunction with selected ATM reorder levels and a knapsack-based notes-dispensing algorithm, consistently yields higher service levels at lower cost than two other routing approaches. It is concluded that the use of vehicle routing methods is especially beneficial when the bank has substantial control over transportation cost.
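    The abstract does not give the notes-dispensing algorithm itself, but a knapsack-style formulation can be sketched. The hypothetical Python fragment below treats dispensing as a bounded knapsack: given a requested amount and the cassette stock per denomination, it finds the fewest notes that sum exactly to the amount (function and variable names are illustrative, not from the paper).

    ```python
    def min_notes(amount, stock):
        """Minimum number of notes summing exactly to `amount`,
        given `stock` as {denomination: available count}.
        Bounded-knapsack DP; returns None if the amount cannot
        be dispensed exactly from the available notes."""
        INF = float("inf")
        dp = [0] + [INF] * amount          # dp[a] = fewest notes reaching value a
        for denom, count in stock.items():
            for _ in range(count):         # one 0/1 pass per physical note
                for a in range(amount, denom - 1, -1):
                    if dp[a - denom] + 1 < dp[a]:
                        dp[a] = dp[a - denom] + 1
        return None if dp[amount] == INF else dp[amount]
    ```

    For example, with a stock of five notes each of 100, 50, and 20, a request for 180 is best served with five notes (one 100 and four 20s), while 30 is not dispensable at all.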

  9. Density scaling and quasiuniversality of flow-event statistics for athermal plastic flows

    DEFF Research Database (Denmark)

    Lerner, Edan; Bailey, Nicholas; Dyre, J. C.

    2014-01-01

    Athermal steady-state plastic flows were simulated for the Kob-Andersen binary Lennard-Jones system and its repulsive version, in which the sign of the attractive terms is changed to a plus. Properties evaluated include the distributions of energy drops, stress drops, and strain intervals between the flow events. We show that simulations at a single density, in conjunction with an equilibrium-liquid simulation at the same density, allow one to predict the plastic flow-event statistics at other densities. This is done by applying the recently established “hidden scale invariance” of simple liquids...

  10. Control of discrete-event systems with modular or distributed structure

    Czech Academy of Sciences Publication Activity Database

    Komenda, Jan; van Schuppen, J. H.

    2007-01-01

    Vol. 388, No. 3 (2007), pp. 199-226. ISSN 0304-3975. R&D Projects: GA AV ČR (CZ) KJB100190609. Institutional research plan: CEZ:AV0Z10190503. Keywords: supervisory control * modular discrete-event system * distributed discrete-event system. Subject RIV: BA - General Mathematics. Impact factor: 0.735, year: 2007

  11. Hierarchical Discrete Event Supervisory Control of Aircraft Propulsion Systems

    Science.gov (United States)

    Yasar, Murat; Tolani, Devendra; Ray, Asok; Shah, Neerav; Litt, Jonathan S.

    2004-01-01

    This paper presents a hierarchical application of Discrete Event Supervisory (DES) control theory for intelligent decision and control of a twin-engine aircraft propulsion system. A dual layer hierarchical DES controller is designed to supervise and coordinate the operation of two engines of the propulsion system. The two engines are individually controlled to achieve enhanced performance and reliability, necessary for fulfilling the mission objectives. Each engine is operated under a continuously varying control system that maintains the specified performance and a local discrete-event supervisor for condition monitoring and life extending control. A global upper level DES controller is designed for load balancing and overall health management of the propulsion system.

  12. Operating system for a real-time multiprocessor propulsion system simulator

    Science.gov (United States)

    Cole, G. L.

    1984-01-01

    The Real Time Multiprocessor Operating System (RTMPOS) was evaluated for its role in the development and assessment of experimental hardware and software systems for real-time interactive simulation of air-breathing propulsion systems. RTMPOS provides the user with a versatile, interactive means of loading, running, debugging, and obtaining results from a multiprocessor-based simulator. A front-end processor (FEP) serves as the simulator controller and as the interface between the user and the simulator. These functions are facilitated by RTMPOS, which resides on the FEP and acts in conjunction with the FEP's manufacturer-supplied disk operating system, which provides typical utilities such as an assembler, linkage editor, text editor, and file-handling services. Once a simulation is formulated, RTMPOS provides engineering-level, run-time operations such as loading, modifying, and specifying the computation flow of programs; simulator mode control; data handling; and run-time monitoring. Run-time monitoring is a powerful feature of RTMPOS that allows the user to record all actions taken during a simulation session and to receive advisories from the simulator via the FEP. RTMPOS is programmed mainly in Pascal, along with some assembly-language routines, and is easily modified for hardware from different manufacturers.

  13. Towards real-time regional earthquake simulation I: real-time moment tensor monitoring (RMT) for regional events in Taiwan

    Science.gov (United States)

    Lee, Shiann-Jong; Liang, Wen-Tzong; Cheng, Hui-Wen; Tu, Feng-Shan; Ma, Kuo-Fong; Tsuruoka, Hiroshi; Kawakatsu, Hitoshi; Huang, Bor-Shouh; Liu, Chun-Chi

    2014-01-01

    We have developed a real-time moment tensor monitoring system (RMT) which takes advantage of a grid-based moment tensor inversion technique and real-time broad-band seismic recordings to automatically monitor earthquake activity in the vicinity of Taiwan. The centroid moment tensor (CMT) inversion technique and a grid search scheme are applied to obtain the earthquake source parameters, including the event origin time, hypocentral location, moment magnitude and focal mechanism. All of these source parameters can be determined simultaneously within 117 s of the occurrence of an earthquake. The monitoring area covers the entire Taiwan Island and the offshore region, from 119.3°E to 123.0°E and 21.0°N to 26.0°N, with depths from 6 to 136 km. A 3-D grid system is implemented over the monitoring area with a uniform horizontal interval of 0.1° and a vertical interval of 10 km. The inversion procedure is based on a 1-D Green's function database calculated by the frequency-wavenumber (fk) method. We compare our results with the Central Weather Bureau (CWB) catalogue data for earthquakes that occurred between 2010 and 2012. The average differences in event origin time and hypocentral location are less than 2 s and 10 km, respectively. The focal mechanisms determined by RMT are also comparable with the Broadband Array in Taiwan for Seismology (BATS) CMT solutions. These results indicate that the RMT system is practical and efficient for monitoring local seismic activity. In addition, the time needed to obtain all the point-source parameters is reduced substantially compared to routine earthquake reports. By connecting RMT with a real-time online earthquake simulation (ROS) system, all the source parameters will be forwarded to the ROS to make real-time earthquake simulation feasible. The RMT has operated offline (2010-2011) and online (since January 2012 to present) at the Institute of Earth Sciences (IES), Academia Sinica.
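    The grid-search element of such a scheme can be illustrated in miniature. The sketch below is purely illustrative (a real CMT inversion matches full waveforms synthesized from the precomputed fk Green's-function database; `predict` here stands in for that step): it scans candidate source points and returns the one whose predicted record minimizes the L2 misfit to the observation.

    ```python
    import numpy as np

    def grid_search_source(observed, predict, grid):
        """Return the grid point whose predicted record best fits the
        observed data in the least-squares sense, plus the misfit value.

        observed : np.ndarray of observed samples
        predict  : callable mapping a grid point to a predicted record
        grid     : iterable of candidate source points
        """
        best_point, best_misfit = None, np.inf
        for point in grid:
            misfit = float(np.sum((observed - predict(point)) ** 2))
            if misfit < best_misfit:
                best_point, best_misfit = point, misfit
        return best_point, best_misfit
    ```

    In the real system the grid is three-dimensional (0.1° horizontal and 10 km vertical spacing) and every grid point is tested against streaming broad-band data, so the search is precomputed and parallelized rather than looped naively as here.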

  14. System on chip module configured for event-driven architecture

    Science.gov (United States)

    Robbins, Kevin; Brady, Charles E.; Ashlock, Tad A.

    2017-10-17

    A system on chip (SoC) module is described herein. The SoC module comprises a processor subsystem and a hardware logic subsystem, which are in communication with one another and transmit event messages between one another. The processor subsystem executes software actors, while the hardware logic subsystem includes hardware actors. Both the software actors and the hardware actors conform to an event-driven architecture, such that each receives and generates event messages.

  15. Satellite data driven modeling system for predicting air quality and visibility during wildfire and prescribed burn events

    Science.gov (United States)

    Nair, U. S.; Keiser, K.; Wu, Y.; Maskey, M.; Berendes, D.; Glass, P.; Dhakal, A.; Christopher, S. A.

    2012-12-01

    The Alabama Forestry Commission (AFC) is responsible for wildfire control and prescribed burn management in the state of Alabama. Visibility and air quality degradation resulting from smoke are two pieces of information crucial to this activity. Currently the tools available to the AFC are the dispersion index from the National Weather Service and surface smoke concentrations. The former provides broad guidance for prescribed burning activities but no specific information on smoke transport, areas affected, or quantification of air quality and visibility degradation. While the NOAA operational air quality guidance includes surface smoke concentrations from existing fire events, it does not account for contributions from background aerosols, which are important for the southeastern region including Alabama. Also lacking is any quantification of visibility. The University of Alabama in Huntsville has developed a state-of-the-art integrated modeling system to address these concerns. This system, based on the Community Multiscale Air Quality (CMAQ) modeling system, ingests satellite-derived smoke emissions and assimilates NASA MODIS-derived aerosol optical thickness. In addition, the operational modeling system simulates the impact of potential prescribed burn events based on location information from the AFC prescribed burn permit database. A Lagrangian model is used to simulate smoke plumes for prescribed burn requests. The combined air quality and visibility degradation resulting from these smoke plumes and background aerosols is computed, and the information is made available through a web-based decision support system built from open-source GIS components. This system reports intersections between highways and other critical facilities such as old-age homes, hospitals and schools. The system also includes satellite-detected fire locations and other satellite-derived datasets.

  16. Event-Triggered Fault Detection of Nonlinear Networked Systems.

    Science.gov (United States)

    Li, Hongyi; Chen, Ziran; Wu, Ligang; Lam, Hak-Keung; Du, Haiping

    2017-04-01

    This paper investigates the problem of fault detection for nonlinear discrete-time networked systems under an event-triggered scheme. A polynomial fuzzy fault detection filter is designed to generate a residual signal and detect faults in the system. A novel polynomial event-triggered scheme is proposed to determine when the signal is transmitted. The fault detection filter is designed to guarantee that the residual system is asymptotically stable and satisfies the desired performance. Polynomial approximated membership functions obtained by Taylor series are employed for the filtering analysis. Furthermore, sufficient conditions are expressed in terms of sums of squares (SOS) and can be solved by SOS tools in the MATLAB environment. A numerical example is provided to demonstrate the effectiveness of the proposed results.
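    The core idea of an event-triggered scheme — transmit a sample only when it carries enough new information — can be sketched independently of the fuzzy-filtering machinery. The fragment below is a simplified illustration (the paper's trigger condition is polynomial in the state; here a plain absolute-deviation threshold `sigma` stands in for it).

    ```python
    def event_triggered_samples(states, sigma=0.5):
        """Return the (index, value) pairs that an absolute-error
        event trigger would transmit over the network: a sample is
        sent only when it deviates from the last transmitted value
        by more than `sigma`."""
        sent = []
        last = None
        for k, x in enumerate(states):
            if last is None or abs(x - last) > sigma:
                sent.append((k, x))
                last = x          # update the held value at the receiver
        return sent
    ```

    Run on the sequence `[0.0, 0.2, 0.9, 1.0, 1.8]` with `sigma=0.5`, only samples 0, 2, and 4 are transmitted, so the network carries three packets instead of five while the receiver's held value never drifts by more than the threshold.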

  17. Using discrete event simulation to change from a functional layout to a cellular layout in an auto parts industry

    Directory of Open Access Journals (Sweden)

    Thiago Buselato Maurício

    2015-07-01

    Full Text Available This paper presents a discrete event simulation study carried out at a Brazilian automotive company. Substantial waste was being caused by scrap from one product family, and the company's functional layout was believed to be one of the causes. Changing from the functional layout to a cellular layout was expected to increase employee synergy and knowledge of this product family. Dimensioning the new cellular layout is complex, mainly because of batch sizes and variation in client demand, so discrete event simulation was used, which made it possible to capture those effects and improve the accuracy of the final results. This accuracy is shown by comparing results obtained with simulation against those obtained without it (as the company used to do). In conclusion, the cellular layout increased productivity by 15%, reduced lead time by 7 days, and reduced scrap by 15% for this product family.

  18. SimPackJ/S: a web-oriented toolkit for discrete event simulation

    Science.gov (United States)

    Park, Minho; Fishwick, Paul A.

    2002-07-01

    SimPackJ/S is the JavaScript and Java version of SimPack: a collection of JavaScript and Java libraries and executable programs for computer simulation. The main purpose of creating SimPackJ/S is to let existing SimPack users extend their simulation work and to provide future users with a freeware simulation toolkit for modeling and simulating systems in web environments. One goal of this paper is to introduce SimPackJ/S; the other is to propose translation rules for converting C to JavaScript and Java, which are demonstrated with examples throughout. In addition, we discuss a 3D dynamic system model and give an overview of an approach to 3D dynamic systems using SimPackJ/S. We explain an interface between SimPackJ/S and the 3D language, the Virtual Reality Modeling Language (VRML). This paper documents how to translate C to JavaScript and Java and how to utilize SimPackJ/S within a 3D web environment.
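    Toolkits in the SimPack family are built around an event calendar: a priority queue of time-stamped events popped in chronological order, where each handler may schedule further events. A minimal, hypothetical event-scheduling loop is sketched below in Python (chosen for brevity; the class and method names are illustrative, not SimPack's API).

    ```python
    import heapq
    import itertools

    class Simulator:
        """Minimal discrete-event simulator: an event calendar ordered
        by timestamp, with a monotonically advancing clock."""

        def __init__(self):
            self.now = 0.0
            self._calendar = []
            self._ids = itertools.count()   # tie-breaker for equal timestamps

        def schedule(self, delay, handler):
            """Place `handler` on the calendar `delay` time units from now."""
            heapq.heappush(self._calendar,
                           (self.now + delay, next(self._ids), handler))

        def run(self, until):
            """Pop and execute events in time order up to `until`."""
            while self._calendar and self._calendar[0][0] <= until:
                self.now, _, handler = heapq.heappop(self._calendar)
                handler(self)

    # usage: a source that emits an arrival every 2 time units
    log = []
    def arrival(sim):
        log.append(sim.now)
        sim.schedule(2.0, arrival)

    sim = Simulator()
    sim.schedule(0.0, arrival)
    sim.run(until=7.0)
    # log -> [0.0, 2.0, 4.0, 6.0]
    ```

    The integer tie-breaker keeps simultaneous events in first-scheduled-first-served order without requiring handlers to be comparable, a common design choice in event-calendar implementations.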

  19. Modeling and simulation of M/M/c queuing pharmacy system with adjustable parameters

    Science.gov (United States)

    Rashida, A. R.; Fadzli, Mohammad; Ibrahim, Safwati; Goh, Siti Rohana

    2016-02-01

    This paper applies discrete event simulation (DES), a computer-based modelling approach that imitates a real system, to a pharmacy unit. M/M/c queuing theory is used to model and analyse the characteristics of the queuing system at the pharmacy unit of Hospital Tuanku Fauziah, Kangar, in Perlis, Malaysia. The model inputs are based on statistical data collected over 20 working days in June 2014. Currently, patient waiting time at the pharmacy unit exceeds 15 minutes. The actual operation of the pharmacy unit is a mixed queuing server following an M/M/2 queuing model, where the pharmacists are the servers. The DES approach and the ProModel simulation software are used to simulate the queuing model and to propose improvements to the queuing system. Waiting time at each server is analysed, and Counters 3 and 4 are found to have the highest waiting times, at 16.98 and 16.73 minutes respectively. Three scenarios, M/M/3, M/M/4 and M/M/5, are simulated, and waiting times for the actual and experimental queuing models are compared. The simulation results show that adding a server (pharmacist) reduces patient waiting time appreciably: average patient waiting time falls by almost 50% when one pharmacist is added. However, it is not necessary to staff all counters; even though M/M/4 and M/M/5 produce further reductions in patient waiting time, they are ineffective since Counter 5 is rarely used.
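    The effect of adding servers can also be seen analytically: for an M/M/c queue the mean waiting time in queue, Wq, follows from the Erlang C formula. The sketch below uses illustrative arrival and service rates, not the hospital's measured data.

    ```python
    from math import factorial

    def mmc_wait(lam, mu, c):
        """Mean queue waiting time Wq for an M/M/c system via the
        Erlang C formula; requires lam < c*mu for stability.

        lam : arrival rate, mu : per-server service rate, c : servers
        """
        a = lam / mu                    # offered load in Erlangs
        rho = a / c                     # server utilization
        if rho >= 1:
            raise ValueError("unstable queue: need lam < c*mu")
        # probability an arriving customer must wait (Erlang C)
        tail = a**c / (factorial(c) * (1 - rho))
        p_wait = tail / (sum(a**k / factorial(k) for k in range(c)) + tail)
        return p_wait / (c * mu - lam)
    ```

    As a sanity check, with one server the formula reduces to the M/M/1 result Wq = rho/(mu - lam). Comparing, say, `mmc_wait(1.8, 1, 2)` against `mmc_wait(1.8, 1, 3)` shows the same qualitative effect the paper measures: one extra server cuts the mean wait by an order of magnitude when utilization is high, while further servers yield diminishing returns.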

  20. Fine grained event processing on HPCs with the ATLAS Yoda system

    CERN Document Server

    Calafiura, Paolo; The ATLAS collaboration; Guan, Wen; Maeno, Tadashi; Nilsson, Paul; Oleynik, Danila; Panitkin, Sergey; Tsulaia, Vakhtang; van Gemmeren, Peter; Wenaus, Torre

    2015-01-01

    High performance computing facilities present unique challenges and opportunities for HENP event processing. The massive scale of many HPC systems means that fractionally small utilizations can yield large returns in processing throughput. Parallel applications which can dynamically and efficiently fill any scheduling opportunities the resource presents benefit both the facility (maximal utilization) and the (compute-limited) science. The ATLAS Yoda system provides this capability to HENP-like event processing applications by implementing event-level processing in an MPI-based master-client model that integrates seamlessly with the more broadly scoped ATLAS Event Service. Fine grained, event level work assignments are intelligently dispatched to parallel workers to sustain full utilization on all cores, with outputs streamed off to destination object stores in near real time with similarly fine granularity, such that processing can proceed until termination with full utilization. The system offers the efficie...