WorldWideScience

Sample records for network simulation techniques

  1. Reliability assessment of restructured power systems using reliability network equivalent and pseudo-sequential simulation techniques

    Energy Technology Data Exchange (ETDEWEB)

    Ding, Yi; Wang, Peng; Goel, Lalit [Nanyang Technological University, School of Electrical and Electronics Engineering, Block S1, Nanyang Avenue, Singapore 639798 (Singapore); Billinton, Roy; Karki, Rajesh [Department of Electrical Engineering, University of Saskatchewan, Saskatoon (Canada)

    2007-10-15

This paper presents a technique to evaluate the reliability of a restructured power system with a bilateral market. The proposed technique combines the reliability network equivalent and pseudo-sequential simulation approaches. The reliability network equivalent techniques are implemented in the Monte Carlo simulation procedure to reduce the computational burden of the analysis. Pseudo-sequential simulation is used to increase the computational efficiency of the non-sequential simulation method and to model the chronological aspects of market trading and system operation. Multi-state Markov models for generation and transmission systems are proposed and implemented in the simulation. A new load shedding scheme is proposed for periods of generation inadequacy and network congestion to minimize load curtailment. The IEEE reliability test system (RTS) is used to illustrate the technique. (author)
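The combination of multi-state Markov unit models and Monte Carlo sampling described above can be sketched in miniature. The three-state unit model, its probabilities, and the generation and load figures below are hypothetical, and the sketch uses plain non-sequential sampling rather than the paper's network-equivalent and pseudo-sequential machinery:

```python
import random

# Hypothetical three-state Markov model for one generating unit:
# (state, capacity fraction delivered, steady-state probability).
STATES = [("up", 1.0, 0.90), ("derated", 0.5, 0.07), ("down", 0.0, 0.03)]

def sample_capacity(installed_mw, rng):
    """Draw one unit's available capacity from its state probabilities."""
    r, acc = rng.random(), 0.0
    for _state, fraction, prob in STATES:
        acc += prob
        if r < acc:
            return installed_mw * fraction
    return 0.0

def loss_of_load_probability(units_mw, load_mw, n_samples=50_000, seed=1):
    """Non-sequential Monte Carlo estimate of the probability that total
    available generation falls short of the load."""
    rng = random.Random(seed)
    shortfalls = sum(
        1 for _ in range(n_samples)
        if sum(sample_capacity(u, rng) for u in units_mw) < load_mw
    )
    return shortfalls / n_samples

# Four hypothetical units serving a 250 MW load.
lolp = loss_of_load_probability([100, 100, 80, 60], load_mw=250)
```

A sequential (or pseudo-sequential) variant would instead sample chronological state transitions, which is what allows market trading and operating chronology to be modeled.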

  2. Simulated Annealing Technique for Routing in a Rectangular Mesh Network

    Directory of Open Access Journals (Sweden)

    Noraziah Adzhar

    2014-01-01

Full Text Available In the automatic design of printed circuit boards (PCBs), the phase following cell placement is routing. Routing, however, is a notoriously difficult problem: even the simplest variant, consisting of a set of two-pin nets, is known to be NP-complete. In this research, the routing region is first tessellated into a uniform Nx×Ny array of square cells. The ultimate goal of a routing problem is complete automatic routing with minimal need for manual intervention, so a shortest path must be established for every connection. While the classical Dijkstra's algorithm is guaranteed to find the shortest path for a single net, each routed net becomes an obstacle for later paths. This complicates the routing of later nets, making their routes longer than optimal or sometimes impossible to complete. Today's sequential routers therefore often apply heuristic methods to refine the solution: all nets are rerouted in different orders to improve the quality of the routing. This motivates us to apply simulated annealing, a metaheuristic method, to our routing model to produce better candidate sequences.
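As a rough illustration of annealing over the net-routing order, the sketch below searches for a routing sequence by swapping nets and accepting worse orders with a temperature-dependent probability. The cost function is a toy surrogate (Manhattan length plus an order-dependent bounding-box overlap penalty), not an actual maze router:

```python
import math
import random

def route_cost(order, nets):
    """Toy surrogate for the router: every net pays its Manhattan length,
    and a net routed later pays a growing detour penalty for each earlier
    net whose bounding box it crosses."""
    cost, boxes = 0.0, []
    for pos, idx in enumerate(order):
        (x1, y1), (x2, y2) = nets[idx]
        cost += abs(x1 - x2) + abs(y1 - y2)
        box = (min(x1, x2), min(y1, y2), max(x1, x2), max(y1, y2))
        for b in boxes:
            if box[0] <= b[2] and b[0] <= box[2] and box[1] <= b[3] and b[1] <= box[3]:
                cost += 0.5 * pos          # later nets detour more
        boxes.append(box)
    return cost

def anneal_order(nets, t0=5.0, cooling=0.995, steps=4000, seed=0):
    """Simulated annealing over the net-routing sequence."""
    rng = random.Random(seed)
    order = list(range(len(nets)))
    cur = best = route_cost(order, nets)
    best_order, t = order[:], t0
    for _ in range(steps):
        i, j = rng.sample(range(len(order)), 2)
        order[i], order[j] = order[j], order[i]        # propose a swap
        new = route_cost(order, nets)
        if new < cur or rng.random() < math.exp((cur - new) / t):
            cur = new
            if cur < best:
                best, best_order = cur, order[:]
        else:
            order[i], order[j] = order[j], order[i]    # reject: undo
        t *= cooling
    return best_order, best

# Four hypothetical two-pin nets as ((x1, y1), (x2, y2)) endpoint pairs.
nets = [((0, 0), (5, 5)), ((1, 1), (4, 2)), ((0, 4), (5, 0)), ((2, 0), (2, 5))]
best_order, best_cost = anneal_order(nets)
```

Since the best-seen order is tracked, the result is never worse than the initial sequence; the acceptance of uphill swaps early on is what lets the search escape poor orderings.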

  3. Reconstruction of chalk pore networks from 2D backscatter electron micrographs using a simulated annealing technique

    Energy Technology Data Exchange (ETDEWEB)

    Talukdar, M.S.; Torsaeter, O. [Department of Petroleum Engineering and Applied Geophysics, Norwegian University of Science and Technology, Trondheim (Norway)

    2002-05-01

We report the stochastic reconstruction of chalk pore networks from limited morphological information that may be readily extracted from 2D backscatter electron (BSE) images of the pore space. The reconstruction technique employs a simulated annealing (SA) algorithm, which can be constrained by an arbitrary number of morphological descriptors. Backscatter electron images of a high-porosity North Sea chalk sample are analyzed and the morphological descriptors of the pore space are determined. The morphological descriptors considered are the void-phase two-point probability function and lineal path function, computed with or without the application of periodic boundary conditions (PBC). 2D and 3D samples have been reconstructed with different combinations of the descriptors, and the reconstructed pore networks have been analyzed quantitatively to evaluate the quality of the reconstructions. The results demonstrate that the simulated annealing technique can reconstruct chalk pore networks with reasonable accuracy using the void-phase two-point probability function and/or the void-phase lineal path function. The void-phase two-point probability function produces slightly better reconstructions than the void-phase lineal path function, and imposing the lineal path function as an additional constraint yields only a slight improvement over using the two-point probability function as the only constraint. The application of periodic boundary conditions does not appear to be critically important when reasonably large samples are reconstructed.
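A minimal version of the constrained annealing scheme can be written down directly: start from a random binary image with the target porosity, then accept pixel swaps that bring the void-phase two-point probability function closer to a target curve. The 16×16 grid, the row-only sampling of S2, and the target values below are illustrative assumptions, far smaller than a real BSE-derived reconstruction:

```python
import math
import random

def s2_rows(image, max_r):
    """Void-phase two-point probability S2(r) sampled along rows: the
    probability that two pixels a distance r apart are both void (1)."""
    n = len(image)
    curve = []
    for r in range(max_r + 1):
        hits = total = 0
        for row in image:
            for x in range(n - r):
                total += 1
                hits += row[x] & row[x + r]
        curve.append(hits / total)
    return curve

def reconstruct(target, n=16, max_r=4, steps=3000, t0=0.01, cooling=0.999, seed=0):
    """Anneal a binary image until its S2 curve matches the target.
    Swapping a void pixel with a solid pixel preserves porosity exactly."""
    rng = random.Random(seed)
    n_void = round(target[0] * n * n)                 # porosity = S2(0)
    cells = [1] * n_void + [0] * (n * n - n_void)
    rng.shuffle(cells)
    img = [cells[i * n:(i + 1) * n] for i in range(n)]

    def error(im):
        return sum((a - b) ** 2 for a, b in zip(s2_rows(im, max_r), target))

    err, t = error(img), t0
    for _ in range(steps):
        y1, x1, y2, x2 = (rng.randrange(n) for _ in range(4))
        if img[y1][x1] == img[y2][x2]:
            continue                                  # swap changes nothing
        img[y1][x1], img[y2][x2] = img[y2][x2], img[y1][x1]
        new = error(img)
        if new < err or rng.random() < math.exp((err - new) / t):
            err = new
        else:
            img[y1][x1], img[y2][x2] = img[y2][x2], img[y1][x1]
        t *= cooling
    return img, err

# Hypothetical target: 30% porosity with short-range correlation decaying
# toward the uncorrelated limit (0.3 * 0.3 = 0.09).
target = [0.30, 0.20, 0.13, 0.10, 0.09]
image, residual = reconstruct(target)
```

Adding the lineal path function as a second constraint would simply add another term to `error`, which is how the multi-descriptor variants in the paper are typically set up.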

  4. Simulating GPS radio signal to synchronize network--a new technique for redundant timing.

    Science.gov (United States)

    Shan, Qingxiao; Jun, Yang; Le Floch, Jean-Michel; Fan, Yaohui; Ivanov, Eugene N; Tobar, Michael E

    2014-07-01

Currently, many distributed systems, such as 3G mobile communications and power systems, are time-synchronized with a Global Positioning System (GPS) signal. If the GPS fails, redundant timing is difficult to realize, and time-synchronized devices may fail. In this work, we develop time transfer by simulating GPS signals, which requires no modification to the original GPS-synchronized devices. This is achieved with a simplified GPS simulator used for synchronization purposes only. Navigation data are calculated from a pre-assigned time at a fixed position. Pseudo-range data, which describe the distance change between the space vehicle (SV) and the user, are calculated. Because real-time simulation requires heavy-duty computation, we use self-developed software optimized on a PC to generate the data, and save them onto memory disks while the simulator is operating. The generated radio signal matches that of an SV at its initial position, and the frequency synthesis of the simulator is locked to the pre-assigned time. A filtering-group technique is used to simulate the signal transmission delay corresponding to the SV displacement. Each SV generates a digital baseband signal; a unique identifying code is added to the signal, which is up-converted to produce the output radio signal at the center frequency of 1575.42 MHz (L1 band). A prototype based on a field-programmable gate array (FPGA) has been built, and experiments have been conducted to prove that time transfer can be realized. The prototype was applied to a CDMA network in a three-month-long experiment. Its precision has been verified and can meet the requirements of most telecommunication systems.
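The pseudo-range and delay bookkeeping at the heart of such a simulator reduces to simple geometry. The sketch below, with a hypothetical SV/user geometry, shows the quantities a delay-reproduction stage would have to track; it is not the authors' implementation:

```python
import math

C = 299_792_458.0  # speed of light in vacuum, m/s

def propagation_delay(sv_pos, user_pos):
    """Signal transit time from space vehicle to user; the simulator must
    reproduce this delay and its change as the simulated SV moves."""
    return math.dist(sv_pos, user_pos) / C

def pseudo_range(sv_pos, user_pos, receiver_clock_bias_s=0.0):
    """Geometric range plus the range-equivalent receiver clock bias."""
    return math.dist(sv_pos, user_pos) + C * receiver_clock_bias_s

# Hypothetical geometry: an SV at GPS orbital radius directly above a
# user on the Earth's surface (ECEF-like coordinates, meters).
sv = (0.0, 0.0, 26_560_000.0)
user = (0.0, 0.0, 6_371_000.0)
delay = propagation_delay(sv, user)   # roughly 67 ms for this geometry
```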

  5. Packet Tracer network simulator

    CERN Document Server

    Jesin, A

    2014-01-01

A practical, fast-paced guide that gives you all the information you need to successfully create networks and simulate them using Packet Tracer. Packet Tracer Network Simulator is aimed at students, instructors, and network administrators who wish to use this simulator to learn how to perform networking instead of investing in expensive, specialized hardware. This book assumes that you have a good amount of Cisco networking knowledge, and it will focus more on Packet Tracer rather than on networking.

  6. A simulator to assess energy-saving techniques in content distribution networks

    NARCIS (Netherlands)

    Bostoen, T.; Napper, J.; Mullender, Sape J.; Berbers, Y.

    2013-01-01

    The scalable and bandwidth-efficient delivery of IPTV services to an increasingly diverse set of screens requires the deployment of telco content distribution networks (CDNs). These CDNs are composed of cache servers located in the telco's data centers close to the end user. The additional cache

  7. Network acceleration techniques

    Science.gov (United States)

    Crowley, Patricia (Inventor); Awrach, James Michael (Inventor); Maccabe, Arthur Barney (Inventor)

    2012-01-01

Splintered offloading techniques with receive batch processing are described for network acceleration. Such techniques offload specific functionality to a NIC while maintaining the bulk of the protocol processing in the host operating system ("OS"). The resulting protocol implementation allows the application to bypass the protocol processing of the received data. This can be accomplished by moving data from the NIC directly to the application through direct memory access ("DMA") and batch processing the receive headers in the host OS when the host OS is interrupted to perform other work. Batch processing receive headers allows the data path to be separated from the control path. Unlike operating system bypass, however, the operating system still fully manages the network resource and has relevant feedback about traffic and flows. Embodiments of the present disclosure can therefore address the challenges of networks with extreme bandwidth delay products (BWDP).

  8. Airport Network Flow Simulator

    Science.gov (United States)

    1978-10-01

    The Airport Network Flow Simulator is a FORTRAN IV simulation of the flow of air traffic in the nation's 600 commercial airports. It calculates for any group of selected airports: (a) the landing and take-off (Type A) delays; and (b) the gate departu...

  9. Techniques for Modelling Network Security

    OpenAIRE

    Lech Gulbinovič

    2012-01-01

The article compares modelling techniques for network security, including probability theory, Markov processes, Petri nets and stochastic activity networks. The paper introduces the advantages and disadvantages of the proposed methods and identifies stochastic activity networks as one of the most relevant. The stochastic activity network allows modelling the behaviour of a dynamic system where the theory of probability is inappropri...

  10. Underwater Acoustic Networking Techniques

    CERN Document Server

    Otnes, Roald; Casari, Paolo; Goetz, Michael; Husøy, Thor; Nissen, Ivor; Rimstad, Knut; van Walree, Paul; Zorzi, Michele

    2012-01-01

    This literature study presents an overview of underwater acoustic networking. It provides a background and describes the state of the art of all networking facets that are relevant for underwater applications. This report serves both as an introduction to the subject and as a summary of existing protocols, providing support and inspiration for the development of network architectures.

  11. Airflow Simulation Techniques

    DEFF Research Database (Denmark)

    Nielsen, Peter V.

The paper describes the development in airflow simulations in rooms. The research is, as other areas of flow research, influenced by the decreasing cost of computation which seems to indicate an increased use of airflow simulation in the coming years....

  12. Simulated Associating Polymer Networks

    Science.gov (United States)

    Billen, Joris

Telechelic associating polymer networks consist of polymer chains terminated by endgroups with a chemical composition different from that of the polymer backbone. When dissolved in a solution, the endgroups cluster together to form aggregates. At low temperature, a strongly connected reversible network is formed and the system behaves like a gel. Telechelic networks are of interest since they are representative of biopolymer networks (e.g. F-actin) and are widely used in medical applications (e.g. hydrogels for tissue engineering, wound dressings) and consumer products (e.g. contact lenses, paint thickeners). In this thesis such systems are studied by means of molecular dynamics/Monte Carlo simulation. First, the system at rest is studied by means of graph theory, and the changes in network topology upon cooling to the gel state are characterized. To this end, an extensive study of the eigenvalue spectrum of the gel network is performed, including an in-depth investigation of the eigenvalue spectra of spatial ER, scale-free, and small-world networks. Next, the gel under constant shear is studied, with a focus on shear banding and the changes in topology under shear. Finally, the relation between the gel transition and percolation is discussed.
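The kind of spectral computation referred to above is straightforward to reproduce for the Erdos-Renyi case. The sketch below (with assumed parameters n = 200, p = 0.05) computes the adjacency spectrum, whose leading eigenvalue separates from a semicircular bulk:

```python
import numpy as np

def er_adjacency(n, p, rng):
    """Adjacency matrix of an Erdos-Renyi G(n, p) random graph."""
    upper = np.triu((rng.random((n, n)) < p).astype(float), k=1)
    return upper + upper.T             # symmetric, zero diagonal

rng = np.random.default_rng(0)
n, p = 200, 0.05
A = er_adjacency(n, p, rng)
spectrum = np.linalg.eigvalsh(A)       # real eigenvalues, ascending order

# For G(n, p) the largest eigenvalue concentrates near the mean degree
# (n - 1) * p, while the bulk of the spectrum follows Wigner's semicircle
# of radius ~ 2 * sqrt(n * p * (1 - p)).
leading = spectrum[-1]
bulk_radius = 2.0 * np.sqrt(n * p * (1 - p))
```

The same `eigvalsh` call applies to any symmetric network matrix, so a gel's instantaneous contact network can be analyzed identically once its adjacency matrix is extracted from the simulation.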

  13. GNS3 network simulation guide

    CERN Document Server

    Welsh, Chris

    2013-01-01

    GNS3 Network Simulation Guide is an easy-to-follow yet comprehensive guide which is written in a tutorial format helping you grasp all the things you need for accomplishing your certification or simulation goal. If you are a networking professional who wants to learn how to simulate networks using GNS3, this book is ideal for you. The introductory examples within the book only require minimal networking knowledge, but as the book progresses onto more advanced topics, users will require knowledge of TCP/IP and routing.

  14. Microprocessor Simulation: A Training Technique.

    Science.gov (United States)

    Oscarson, David J.

    1982-01-01

    Describes the design and application of a microprocessor simulation using BASIC for formal training of technicians and managers and as a management tool. Illustrates the utility of the modular approach for the instruction and practice of decision-making techniques. (SK)

  15. Multilevel techniques for Reservoir Simulation

    DEFF Research Database (Denmark)

    Christensen, Max la Cour

    The subject of this thesis is the development, application and study of novel multilevel methods for the acceleration and improvement of reservoir simulation techniques. The motivation for addressing this topic is a need for more accurate predictions of porous media flow and the ability to carry...... based on element-based Algebraic Multigrid (AMGe). In particular, an advanced AMGe technique with guaranteed approximation properties is used to construct a coarse multilevel hierarchy of Raviart-Thomas and L2 spaces for the Galerkin coarsening of a mixed formulation of the reservoir simulation...... equations. By experimentation it is found that the AMGe based upscaling technique provided very accurate results while reducing the computational time proportionally to the reduction in degrees of freedom. Furthermore, it is demonstrated that the AMGe coarse spaces (interpolation operators) can be used...

  16. Efficient simulation of a tandem Jackson network

    NARCIS (Netherlands)

    Kroese, Dirk; Nicola, V.F.

    2002-01-01

    The two-node tandem Jackson network serves as a convenient reference model for the analysis and testing of different methodologies and techniques in rare event simulation. In this paper we consider a new approach to efficiently estimate the probability that the content of the second buffer exceeds

  17. Emerging wireless networks concepts, techniques and applications

    CERN Document Server

    Makaya, Christian

    2011-01-01

    An authoritative collection of research papers and surveys, Emerging Wireless Networks: Concepts, Techniques, and Applications explores recent developments in next-generation wireless networks (NGWNs) and mobile broadband networks technologies, including 4G (LTE, WiMAX), 3G (UMTS, HSPA), WiFi, mobile ad hoc networks, mesh networks, and wireless sensor networks. Focusing on improving the performance of wireless networks and provisioning better quality of service and quality of experience for users, it reports on the standards of different emerging wireless networks, applications, and service fr

  18. Cognitive optical networks: architectures and techniques

    Science.gov (United States)

    Grebeshkov, Alexander Y.

    2017-04-01

This article analyzes architectures and techniques of optical networks from the standpoint of a cognitive methodology based on the continuous cycle "Observe-Orient-Plan-Decide-Act-Learn" and the ability of cognitive systems to adjust themselves through an adaptive process in response to changes in the environment. The cognitive optical network architecture includes a cognitive control layer with a knowledge base for the control of software-configurable devices such as reconfigurable optical add-drop multiplexers, flexible optical transceivers, and software-defined receivers. Some techniques for cognitive optical networks, such as flexible-grid technology, broker-oriented techniques, and machine learning, are examined. Software-defined optical networks and the integration of wireless and optical networks via radio-over-fiber and fiber-wireless techniques are discussed in the context of cognitive technologies.

  19. Blockmodeling techniques for complex networks

    Science.gov (United States)

    Ball, Brian Joseph

    The class of network models known as stochastic blockmodels has recently been gaining popularity. In this dissertation, we present new work that uses blockmodels to answer questions about networks. We create a blockmodel based on the idea of link communities, which naturally gives rise to overlapping vertex communities. We derive a fast and accurate algorithm to fit the model to networks. This model can be related to another blockmodel, which allows the method to efficiently find nonoverlapping communities as well. We then create a heuristic based on the link community model whose use is to find the correct number of communities in a network. The heuristic is based on intuitive corrections to likelihood ratio tests. It does a good job finding the correct number of communities in both real networks and synthetic networks generated from the link communities model. Two commonly studied types of networks are citation networks, where research papers cite other papers, and coauthorship networks, where authors are connected if they've written a paper together. We study a multi-modal network from a large dataset of Physics publications that is the combination of the two, allowing for directed links between papers as citations, and an undirected edge between a scientist and a paper if they helped to write it. This allows for new insights on the relation between social interaction and scientific production. We also have the publication dates of papers, which lets us track our measures over time. Finally, we create a stochastic model for ranking vertices in a semi-directed network. The probability of connection between two vertices depends on the difference of their ranks. When this model is fit to high school friendship networks, the ranks appear to correspond with a measure of social status. 
Students have reciprocated, and some unreciprocated, edges with other students of closely similar rank that correspond to true friendships, and claim an aspirational friendship with a much
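The rank-based connection model can be illustrated with an assumed functional form. The decay and upward-bias terms below are guesses chosen for illustration, not the dissertation's fitted model:

```python
import math
import random

def tie_probability(rank_i, rank_j, scale=2.0):
    """Illustrative probability that student i names j as a friend: it
    decays with rank distance and is higher when j outranks i (the
    'aspirational' direction) than when i outranks j."""
    delta = rank_j - rank_i                        # > 0 when j ranks above i
    base = math.exp(-abs(delta) / scale)           # closeness in rank
    upward_bias = 1.0 / (1.0 + math.exp(-delta))   # favor higher-ranked targets
    return base * upward_bias

def sample_friendships(ranks, seed=0):
    """Draw one directed friendship network from the rank model."""
    rng = random.Random(seed)
    n = len(ranks)
    return {(i, j) for i in range(n) for j in range(n)
            if i != j and rng.random() < tie_probability(ranks[i], ranks[j])}

ranks = [0.5 * k for k in range(20)]               # evenly spaced status ranks
edges = sample_friendships(ranks)
reciprocated = {(i, j) for (i, j) in edges if (j, i) in edges}
```

Under this form, ties concentrate between closely ranked students (and those tend to be reciprocated), while upward claims to much higher-ranked students occur occasionally and are rarely returned, qualitatively matching the abstract's description.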

  20. Techniques for Binary Black Hole Simulations

    Science.gov (United States)

    Baker, John G.

    2006-01-01

    Recent advances in techniques for numerical simulation of black hole systems have enabled dramatic progress in astrophysical applications. Our approach to these simulations, which includes new gauge conditions for moving punctures, AMR, and specific tools for analyzing black hole simulations, has been applied to a variety of black hole configurations, typically resulting in simulations lasting several orbits. I will discuss these techniques, what we've learned in applications, and outline some areas for further development.

  1. Developed hydraulic simulation model for water pipeline networks

    Directory of Open Access Journals (Sweden)

    A. Ayad

    2013-03-01

Full Text Available A numerical method that uses linear graph theory is presented for both steady-state and extended-period simulation in a pipe network including its hydraulic components (pumps, valves, junctions, etc.). The developed model is based on the Extended Linear Graph Theory (ELGT) technique. This technique is modified to include new network components, such as flow control valves and tanks, and is expanded to extended-period simulation (EPS). A newly modified method for calculating updated flows, which improves the convergence rate, is introduced. Both benchmark and actual networks are analyzed to check the reliability of the proposed method. The results reveal the good performance of the proposed method.
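In the same graph-theoretic spirit, a steady-state solve for a small pipe network can be written with a node-edge incidence matrix and linearized pipe resistances. The network, resistances, and demands below are hypothetical, and the linear resistance law stands in for the full ELGT formulation:

```python
import numpy as np

# Hypothetical 4-node network: node 0 is a fixed-head reservoir (h = 50 m),
# nodes 1-3 have demands. Edge k runs tail[k] -> head[k] and, after
# linearization, carries flow Q_k = (h_tail - h_head) / R_k.
tail = np.array([0, 0, 1, 2])
head = np.array([1, 2, 3, 3])
R = np.array([2.0, 1.0, 1.0, 2.0])        # linearized resistances
demand = np.array([0.0, 1.0, 1.0, 2.0])   # withdrawals (flow units)

n_nodes, n_edges = 4, 4
A = np.zeros((n_nodes, n_edges))          # node-edge incidence matrix
A[tail, np.arange(n_edges)] += 1.0
A[head, np.arange(n_edges)] -= 1.0

# Nodal conductance matrix (a weighted graph Laplacian): G @ h is the net
# pipe outflow at each node, which must equal -demand at demand nodes.
G = A @ np.diag(1.0 / R) @ A.T

fixed, free = 0, [1, 2, 3]
h = np.zeros(n_nodes)
h[fixed] = 50.0
rhs = -demand[free] - G[np.ix_(free, [fixed])].ravel() * h[fixed]
h[free] = np.linalg.solve(G[np.ix_(free, free)], rhs)
Q = (A.T @ h) / R                          # edge flows from the solved heads
```

An EPS run would repeat this solve per time step, updating tank levels from the computed flows; nonlinear head-loss laws (e.g. Hazen-Williams) require re-linearizing R around the current flows and iterating.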

  2. Real-time network traffic classification technique for wireless local area networks based on compressed sensing

    Science.gov (United States)

    Balouchestani, Mohammadreza

    2017-05-01

Network traffic, or data traffic, in a Wireless Local Area Network (WLAN) is the amount of network packets moving across the wireless network from one wireless node to another, and it determines the sampling load in the network. WLAN network traffic is the main component of network traffic measurement, network traffic control and simulation. Traffic classification is an essential tool for improving the Quality of Service (QoS) in different wireless networks and complex applications such as local area networks, wireless local area networks, wireless personal area networks, wireless metropolitan area networks, and wide area networks. Network traffic classification is also an essential component of products for QoS control in different wireless network systems and applications. Classifying network traffic in a WLAN makes it possible to see what kinds of traffic are present in each part of the network, to organize the various kinds of traffic on each path into different classes, and to generate a network traffic matrix in order to identify and organize the traffic, which is key to improving QoS. To achieve effective network traffic classification, a Real-time Network Traffic Classification (RNTC) algorithm for WLANs based on Compressed Sensing (CS) is presented in this paper. The fundamental goal of this algorithm is to solve difficult wireless network management problems. The proposed architecture reduces the False Detection Rate (FDR) to 25% and the Packet Delay (PD) to 15%. It also increases the accuracy of wireless transmission by 10%, which provides a good basis for establishing high-quality wireless local area networks.

  3. Simulation of developing human neuronal cell networks.

    Science.gov (United States)

    Lenk, Kerstin; Priwitzer, Barbara; Ylä-Outinen, Laura; Tietz, Lukas H B; Narkilahti, Susanna; Hyttinen, Jari A K

    2016-08-30

Microelectrode array (MEA) is a widely used technique to study, for example, the functional properties of neuronal networks derived from human embryonic stem cells (hESC-NN). With hESC-NN, we can investigate the earliest developmental stages of neuronal network formation in the human brain. In this paper, we propose an in silico model of maturing hESC-NNs based on a phenomenological model called INEX. We focus on simulating the development of bursts in hESC-NNs, which are the main feature of neuronal activation patterns. The model was developed with data from recordings of developing hESC-NNs on MEAs, which showed an increase in neuronal activity during the six measurement time points investigated in both the experimental and simulated data. Our simulations suggest that the maturation process of hESC-NNs, resulting in the formation of bursts, can be explained by the development of synapses. Moreover, both the spike and burst rates decreased at the last measurement time point, suggesting a pruning of synapses as the weak ones are removed. To conclude, our model reflects the assumption that the interaction between excitatory and inhibitory neurons during the maturation of a neuronal network, and the spontaneous emergence of bursts, are due to increased connectivity caused by the formation of new synapses.

  4. Introduction to Network Simulator NS2

    CERN Document Server

    Issariyakul, Teerawat

    2008-01-01

    A beginners' guide for network simulator NS2, an open-source discrete event simulator designed mainly for networking research. It presents two fundamental NS2 concepts: how objects are assembled to create a network and how a packet flows from one object to another

  5. Trace Replay and Network Simulation Tool

    Energy Technology Data Exchange (ETDEWEB)

    2017-09-22

TraceR is a trace replay tool built upon the ROSS-based CODES simulation framework. TraceR can be used for predicting network performance and understanding network behavior by simulating messaging in High Performance Computing applications on interconnection networks.

  6. WDM Systems and Networks Modeling, Simulation, Design and Engineering

    CERN Document Server

    Ellinas, Georgios; Roudas, Ioannis

    2012-01-01

    WDM Systems and Networks: Modeling, Simulation, Design and Engineering provides readers with the basic skills, concepts, and design techniques used to begin design and engineering of optical communication systems and networks at various layers. The latest semi-analytical system simulation techniques are applied to optical WDM systems and networks, and a review of the various current areas of optical communications is presented. Simulation is mixed with experimental verification and engineering to present the industry as well as state-of-the-art research. This contributed volume is divided into three parts, accommodating different readers interested in various types of networks and applications. The first part of the book presents modeling approaches and simulation tools mainly for the physical layer including transmission effects, devices, subsystems, and systems), whereas the second part features more engineering/design issues for various types of optical systems including ULH, access, and in-building system...

  7. Simulation-based optimization parametric optimization techniques and reinforcement learning

    CERN Document Server

    Gosavi, Abhijit

    2003-01-01

    Simulation-Based Optimization: Parametric Optimization Techniques and Reinforcement Learning introduces the evolving area of simulation-based optimization. The book's objective is two-fold: (1) It examines the mathematical governing principles of simulation-based optimization, thereby providing the reader with the ability to model relevant real-life problems using these techniques. (2) It outlines the computational technology underlying these methods. Taken together these two aspects demonstrate that the mathematical and computational methods discussed in this book do work. Broadly speaking, the book has two parts: (1) parametric (static) optimization and (2) control (dynamic) optimization. Some of the book's special features are: *An accessible introduction to reinforcement learning and parametric-optimization techniques. *A step-by-step description of several algorithms of simulation-based optimization. *A clear and simple introduction to the methodology of neural networks. *A gentle introduction to converg...

  8. Performance Monitoring Techniques Supporting Cognitive Optical Networking

    DEFF Research Database (Denmark)

    Caballero Jambrina, Antonio; Borkowski, Robert; Zibar, Darko

    2013-01-01

    to solve this issue by realizing a network that can observe, act, learn and optimize its performance, taking into account end-to-end goals. In this letter we present the approach of cognition applied to heterogeneous optical networks developed in the framework of the EU project CHRON: Cognitive...... Heterogeneous Reconfigurable Optical Network. We focus on the approaches developed in the project for optical performance monitoring, which enable the feedback from the physical layer to the cognitive decision system by providing accurate description of the performance of the established lightpaths.......High degree of heterogeneity of future optical networks, such as services with different quality-of-transmission requirements, modulation formats and switching techniques, will pose a challenge for the control and optimization of different parameters. Incorporation of cognitive techniques can help...

  9. SDL-based network performance simulation

    Science.gov (United States)

    Yang, Yang; Lu, Yang; Lin, Xiaokang

    2005-11-01

Specification and Description Language (SDL) is an object-oriented formal language defined as a standard by ITU-T. Though SDL is mainly used for describing communication protocols, in our experience SDL tools also offer an efficient way to simulate network performance. This paper presents our methodology for SDL-based network performance simulation, covering the simulation platform, the simulation modes and the integrated simulation environment. Note that the Telelogic Tau 4.3 SDL suite is used here as the simulation environment, though our methodology is not limited to that software. Finally, an SDL-based open shortest path first (OSPF) performance simulation in a wireless private network is presented as an example of our methodology, indicating that SDL is indeed an efficient language for network performance simulation.

  10. Spiking network simulation code for petascale computers

    Science.gov (United States)

    Kunkel, Susanne; Schmidt, Maximilian; Eppler, Jochen M.; Plesser, Hans E.; Masumoto, Gen; Igarashi, Jun; Ishii, Shin; Fukai, Tomoki; Morrison, Abigail; Diesmann, Markus; Helias, Moritz

    2014-01-01

    Brain-scale networks exhibit a breathtaking heterogeneity in the dynamical properties and parameters of their constituents. At cellular resolution, the entities of theory are neurons and synapses and over the past decade researchers have learned to manage the heterogeneity of neurons and synapses with efficient data structures. Already early parallel simulation codes stored synapses in a distributed fashion such that a synapse solely consumes memory on the compute node harboring the target neuron. As petaflop computers with some 100,000 nodes become increasingly available for neuroscience, new challenges arise for neuronal network simulation software: Each neuron contacts on the order of 10,000 other neurons and thus has targets only on a fraction of all compute nodes; furthermore, for any given source neuron, at most a single synapse is typically created on any compute node. From the viewpoint of an individual compute node, the heterogeneity in the synaptic target lists thus collapses along two dimensions: the dimension of the types of synapses and the dimension of the number of synapses of a given type. Here we present a data structure taking advantage of this double collapse using metaprogramming techniques. After introducing the relevant scaling scenario for brain-scale simulations, we quantitatively discuss the performance on two supercomputers. We show that the novel architecture scales to the largest petascale supercomputers available today. PMID:25346682

  11. Spiking network simulation code for petascale computers.

    Science.gov (United States)

    Kunkel, Susanne; Schmidt, Maximilian; Eppler, Jochen M; Plesser, Hans E; Masumoto, Gen; Igarashi, Jun; Ishii, Shin; Fukai, Tomoki; Morrison, Abigail; Diesmann, Markus; Helias, Moritz

    2014-01-01

    Brain-scale networks exhibit a breathtaking heterogeneity in the dynamical properties and parameters of their constituents. At cellular resolution, the entities of theory are neurons and synapses and over the past decade researchers have learned to manage the heterogeneity of neurons and synapses with efficient data structures. Already early parallel simulation codes stored synapses in a distributed fashion such that a synapse solely consumes memory on the compute node harboring the target neuron. As petaflop computers with some 100,000 nodes become increasingly available for neuroscience, new challenges arise for neuronal network simulation software: Each neuron contacts on the order of 10,000 other neurons and thus has targets only on a fraction of all compute nodes; furthermore, for any given source neuron, at most a single synapse is typically created on any compute node. From the viewpoint of an individual compute node, the heterogeneity in the synaptic target lists thus collapses along two dimensions: the dimension of the types of synapses and the dimension of the number of synapses of a given type. Here we present a data structure taking advantage of this double collapse using metaprogramming techniques. After introducing the relevant scaling scenario for brain-scale simulations, we quantitatively discuss the performance on two supercomputers. We show that the novel architecture scales to the largest petascale supercomputers available today.

  12. Introduction to Network Simulator NS2

    CERN Document Server

    Issariyakul, Teerawat

    2012-01-01

    "Introduction to Network Simulator NS2" is a primer providing materials for NS2 beginners, whether students, professors, or researchers for understanding the architecture of Network Simulator 2 (NS2) and for incorporating simulation modules into NS2. The authors discuss the simulation architecture and the key components of NS2 including simulation-related objects, network objects, packet-related objects, and helper objects. The NS2 modules included within are nodes, links, SimpleLink objects, packets, agents, and applications. Further, the book covers three helper modules: timers, ra

  13. Retinal Image Simulation of Subjective Refraction Techniques.

    Science.gov (United States)

    Perches, Sara; Collados, M Victoria; Ares, Jorge

    2016-01-01

    Refraction techniques make it possible to determine the most appropriate sphero-cylindrical lens prescription to achieve the best possible visual quality. Among these techniques, subjective refraction (i.e., refraction guided by the patient's responses) is the most commonly used approach. In this context, this paper's main goal is to present simulation software that implements, in a virtual manner, various subjective-refraction techniques, including the Jackson Cross-Cylinder test (JCC), all of which rely on the observation of computer-generated retinal images. This software has also been used to evaluate visual quality when the JCC test is performed on multifocal-contact-lens wearers. The results reveal the software's usefulness for simulating the retinal image quality that a particular visual compensation provides. Moreover, it can help to deepen insight into and improve existing refraction techniques, and it can be used for simulated training.

  14. Biological transportation networks: Modeling and simulation

    KAUST Repository

    Albi, Giacomo

    2015-09-15

    We present a model for biological network formation originally introduced by Cai and Hu [Adaptation and optimization of biological transport networks, Phys. Rev. Lett. 111 (2013) 138701]. The modeling of fluid transportation (e.g., leaf venation and angiogenesis) and ion transportation networks (e.g., neural networks) is explained in detail and basic analytical features like the gradient flow structure of the fluid transportation network model and the impact of the model parameters on the geometry and topology of network formation are analyzed. We also present a numerical finite-element based discretization scheme and discuss sample cases of network formation simulations.

  15. A neural network simulation package in CLIPS

    Science.gov (United States)

    Bhatnagar, Himanshu; Krolak, Patrick D.; Mcgee, Brenda J.; Coleman, John

    1990-01-01

    The intrinsic similarity between the firing of a rule and the firing of a neuron has been captured in this research to provide a neural network development system within an existing production system (CLIPS). A very important by-product of this research has been the emergence of an integrated technique for using rule-based systems in conjunction with neural networks to solve complex problems. The system provides a tool kit for integrated use of the two techniques and is also extensible to accommodate other AI techniques such as semantic networks, connectionist networks, and even Petri nets. This integrated technique can be very useful in solving complex AI problems.

  16. Hierarchical Network Design Using Simulated Annealing

    DEFF Research Database (Denmark)

    Thomadsen, Tommy; Clausen, Jens

    2002-01-01

    networks are described and a mathematical model is proposed for a two level version of the hierarchical network problem. The problem is to determine which edges should connect nodes, and how demand is routed in the network. The problem is solved heuristically using simulated annealing which as a sub...

  17. Vectorized algorithms for spiking neural network simulation.

    Science.gov (United States)

    Brette, Romain; Goodman, Dan F M

    2011-06-01

    High-level languages (Matlab, Python) are popular in neuroscience because they are flexible and accelerate development. However, for simulating spiking neural networks, the cost of interpretation is a bottleneck. We describe a set of algorithms to simulate large spiking neural networks efficiently with high-level languages using vector-based operations. These algorithms constitute the core of Brian, a spiking neural network simulator written in the Python language. Vectorized simulation makes it possible to combine the flexibility of high-level languages with the computational efficiency usually associated with compiled languages.
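
    The vector-based update the authors describe can be sketched with NumPy for a leaky integrate-and-fire population. The parameters below are illustrative, not Brian's defaults:

```python
import numpy as np

def simulate_lif(n=1000, steps=500, dt=0.1, tau=10.0, v_th=1.0,
                 v_reset=0.0, i_ext=0.11, seed=0):
    """Vectorized leaky integrate-and-fire population: each time step is
    a handful of whole-array operations instead of a Python loop over
    neurons. Parameters are illustrative placeholders."""
    rng = np.random.default_rng(seed)
    v = rng.uniform(0.0, v_th, n)          # random initial potentials
    spike_counts = np.zeros(n, dtype=int)
    for _ in range(steps):
        v += dt * (-v / tau + i_ext)       # Euler step of dv/dt = -v/tau + I
        fired = v >= v_th                  # boolean spike mask
        spike_counts += fired
        v[fired] = v_reset                 # vectorized reset
    return spike_counts
```

    The interpreter overhead is paid once per time step rather than once per neuron, which is the core of the vectorization argument.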

  18. A user oriented active network simulator

    Science.gov (United States)

    Rao, K. S.; Swamy, M. N. S.

    1980-07-01

    A digital computer simulator for the frequency response and tolerance analysis of an electrical network comprising RLCM elements, ideal operational amplifiers and controlled sources is presented in this tutorial paper. The simulator is based on the 'tableau approach'. Reordering of the sparse tableau matrix is done using the Markowitz criterion, and the diagonal pivots are chosen for simplicity. The simulator also employs dynamic allocation for maximum utilization of memory and faster turnaround time. Three networks are simulated and their results are presented in this paper. A network in which the operational amplifiers are assumed to have single-pole behaviour is also analyzed.

  19. Enhanced sampling techniques in biomolecular simulations.

    Science.gov (United States)

    Spiwok, Vojtech; Sucur, Zoran; Hosek, Petr

    2015-11-01

    Biomolecular simulations are routinely used in biochemistry and molecular biology research; however, they often fail to match expectations of their impact on the pharmaceutical and biotech industries. This is because a vast amount of computer time is required to simulate short episodes from the life of biomolecules. Several approaches have been developed to overcome this obstacle, including the application of massively parallel and special-purpose computers or non-conventional hardware. Methodological approaches are represented by coarse-grained models and enhanced sampling techniques. These techniques can show how the studied system behaves on long time scales on the basis of relatively short simulations. This review presents an overview of new simulation approaches, the theory behind enhanced sampling methods, and success stories of their applications with a direct impact on biotechnology or drug design. Copyright © 2014 Elsevier Inc. All rights reserved.

  20. Pedestrian flow simulation validation and verification techniques

    OpenAIRE

    Dridi, Mohamed H.

    2015-01-01

    For the verification and validation of microscopic simulation models of pedestrian flow, we have performed experiments for different kinds of facilities and sites where most conflicts and congestion happen, e.g. corridors, narrow passages, and crosswalks. To validate the model, the experimental conditions and simulation results are compared with video recordings carried out under the same real-life conditions, e.g. pedestrian flux and density distributions. The strategy in this techniqu...

  1. Program Aids Simulation Of Neural Networks

    Science.gov (United States)

    Baffes, Paul T.

    1990-01-01

    Computer program NETS - Tool for Development and Evaluation of Neural Networks - provides simulation of neural-network algorithms plus a software environment for development of such algorithms. It enables the user to customize the patterns of connections between layers of a network, and provides features for saving the weight values of a network, allowing more precise control over the learning process. Use of the program consists of translating the problem into a format of input/output pairs, designing a network configuration for the problem, and finally training the network with the input/output pairs until an acceptable error is reached. Written in C.

  2. Traffic simulations on parallel computers using domain decomposition techniques

    Energy Technology Data Exchange (ETDEWEB)

    Hanebutte, U.R.; Tentner, A.M.

    1995-12-31

    Large scale simulations of Intelligent Transportation Systems (ITS) can only be achieved by using the computing resources offered by parallel computing architectures. Domain decomposition techniques are proposed which allow the performance of traffic simulations with the standard simulation package TRAF-NETSIM on a 128-node IBM SPx parallel supercomputer as well as on a cluster of SUN workstations. Whilst this particular parallel implementation is based on NETSIM, a microscopic traffic simulation model, the presented strategy is applicable to a broad class of traffic simulations. An outer iteration loop must be introduced in order to converge to a global solution. A performance study that utilizes a scalable test network consisting of square grids is presented, which addresses the performance penalty introduced by the additional iteration loop.
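
    The outer iteration loop that couples the subdomains can be sketched as a fixed-point iteration over the traffic exchanged at domain boundaries. The class and function names below are assumptions for illustration, not TRAF-NETSIM's API:

```python
class ToySubdomain:
    """Toy stand-in for a per-processor traffic subdomain: its boundary
    outflow is a damped function of its inflow, so the coupled
    iteration is a contraction and converges."""
    def __init__(self, in_key, out_key):
        self.in_key, self.out_key = in_key, out_key

    def step(self, boundary):
        inflow = boundary.get(self.in_key, 0.0)
        return {self.out_key: 1.0 + 0.5 * inflow}


def solve_coupled(subdomains, boundary=None, tol=1e-9, max_outer=100):
    """Outer iteration loop: repeat the subdomain solves until the
    traffic exchanged across domain boundaries reaches a global
    fixed point."""
    boundary = dict(boundary or {})
    for _ in range(max_outer):
        new_boundary = {}
        for dom in subdomains:
            # each subdomain advances using last iteration's inflows
            new_boundary.update(dom.step(boundary))
        delta = max(abs(new_boundary[k] - boundary.get(k, 0.0))
                    for k in new_boundary)
        boundary = new_boundary
        if delta < tol:
            break
    return boundary
```

    The extra sweeps of this loop are the performance penalty the abstract's scaling study quantifies.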

  3. Modified network simulation model with token method of bus access

    Directory of Open Access Journals (Sweden)

    L.V. Stribulevich

    2013-08-01

    Full Text Available Purpose. To study the characteristics of a local network with the token method of bus access, a modified simulation model of the network was developed. Methodology. The network characteristics are determined with the developed simulation model, which is based on a state diagram of the network station layer with a priority-processing mechanism, both in the steady state and during the control procedures: initiation of the logical ring, and entry to and exit from the logical ring by a network station. Findings. A simulation model was developed from which one can obtain the dependencies of the maximum waiting time in the queue for different access classes, and of the reaction time and usable bandwidth, on the data rate, the number of network stations, the request generation rate, the number of frames transmitted per token holding time, and the frame length. Originality. A network simulation technique was proposed that reflects the network's operation both in the steady state and during the control procedures, including the priority ranking and handling mechanism. Practical value. The developed simulation model allows network characteristics to be determined for real-time systems in railway transport.

  4. Pedestrian Flow Simulation Validation and Verification Techniques

    CERN Document Server

    Dridi, Mohamed H

    2014-01-01

    For the verification and validation of microscopic simulation models of pedestrian flow, we have performed experiments for different kinds of facilities and sites where most conflicts and congestion happen, e.g. corridors, narrow passages, and crosswalks. To validate the model, the experimental conditions and simulation results are compared with video recordings carried out under the same real-life conditions, e.g. pedestrian flux and density distributions. The strategy in this technique is to achieve the amount of accuracy required in the simulation model. This method is good at detecting the critical points in pedestrian walking areas. For the calibration of suitable models we use the results obtained from analyzing the video recordings from Hajj 2009, and these results can be used to check the design sections of pedestrian facilities and exits. As practical examples, we present the simulation of pilgrim streams on the Jamarat bridge. The objectives of this study are twofold: first, to show th...

  5. Cochlear implant simulator for surgical technique analysis

    Science.gov (United States)

    Turok, Rebecca L.; Labadie, Robert F.; Wanna, George B.; Dawant, Benoit M.; Noble, Jack H.

    2014-03-01

    Cochlear Implant (CI) surgery is a procedure in which an electrode array is inserted into the cochlea. The electrode array is used to stimulate auditory nerve fibers and restore hearing for people with severe to profound hearing loss. The primary goals when placing the electrode array are to fully insert the array into the cochlea while minimizing trauma to the cochlea. Studying the relationship between surgical outcome and various surgical techniques has been difficult since trauma and electrode placement are generally unknown without histology. Our group has created a CI placement simulator that combines an interactive 3D visualization environment with a haptic-feedback-enabled controller. Surgical techniques and patient anatomy can be varied between simulations so that outcomes can be studied under varied conditions. With this system, we envision that through numerous trials we will be able to statistically analyze how outcomes relate to surgical techniques. As a first test of this system, in this work, we have designed an experiment in which we compare the spatial distribution of forces imparted to the cochlea in the array insertion procedure when using two different but commonly used surgical techniques for cochlear access, called round window and cochleostomy access. Our results suggest that CIs implanted using round window access may cause less trauma to deeper intracochlear structures than cochleostomy techniques. This result is of interest because it challenges traditional thinking in the otological community but might offer an explanation for recent anecdotal evidence that suggests that round window access techniques lead to better outcomes.

  6. Improving a Computer Networks Course Using the Partov Simulation Engine

    Science.gov (United States)

    Momeni, B.; Kharrazi, M.

    2012-01-01

    Computer networks courses are hard to teach as there are many details in the protocols and techniques involved that are difficult to grasp. Employing programming assignments as part of the course helps students to obtain a better understanding and gain further insight into the theoretical lectures. In this paper, the Partov simulation engine and…

  7. Wireless Sensor Networks Formation: Approaches and Techniques

    Directory of Open Access Journals (Sweden)

    Miriam Carlos-Mancilla

    2016-01-01

    Full Text Available Nowadays, wireless sensor networks (WSNs) emerge as an active research area in which challenging topics include energy consumption, routing algorithms, selection of sensor locations according to a given premise, robustness, efficiency, and so forth. Despite the open problems in WSNs, a large number of applications are already available. In all cases, one of the main design objectives for any application is to keep the WSN alive and functional as long as possible. A key factor in this is the way the network is formed. This survey presents the most recent formation techniques and mechanisms for WSNs. In this paper, the reviewed works are classified into distributed and centralized techniques. The analysis focuses on whether a single or multiple sinks are employed, nodes are static or mobile, the formation is event-detection based or not, and a network backbone is formed or not. We focus on recent works and present a discussion of their advantages and drawbacks. Finally, the paper overviews a series of open issues which drive further research in the area.

  8. Fast simulation techniques for switching converters

    Science.gov (United States)

    King, Roger J.

    1987-01-01

    Techniques for simulating a switching converter are examined. The state equations for the equivalent circuits, which represent the switching converter, are presented and explained. The uses of the Newton-Raphson iteration, low ripple approximation, half-cycle symmetry, and discrete time equations to compute the interval durations are described. An example is presented in which these methods are illustrated by applying them to a parallel-loaded resonant inverter with three equivalent circuits for its continuous mode of operation.

  9. Buffer Management Simulation in ATM Networks

    Science.gov (United States)

    Yaprak, E.; Xiao, Y.; Chronopoulos, A.; Chow, E.; Anneberg, L.

    1998-01-01

    This paper presents a simulation of a new dynamic buffer allocation management scheme in ATM networks. To achieve this objective, an algorithm that detects congestion and updates the dynamic buffer allocation scheme was developed for the OPNET simulation package via the creation of a new ATM module.

  10. SiGNet: A signaling network data simulator to enable signaling network inference.

    Directory of Open Access Journals (Sweden)

    Elizabeth A Coker

    Full Text Available Network models are widely used to describe complex signaling systems. Cellular wiring varies in different cellular contexts and numerous inference techniques have been developed to infer the structure of a network from experimental data of the network's behavior. To objectively identify which inference strategy is best suited to a specific network, a gold standard network and dataset are required. However, suitable datasets for benchmarking are difficult to find. Numerous tools exist that can simulate data for transcriptional networks, but these are of limited use for the study of signaling networks. Here, we describe SiGNet (Signal Generator for Networks): a Cytoscape app that simulates experimental data for a signaling network of known structure. SiGNet has been developed and tested against published experimental data, incorporating information on network architecture, and the directionality and strength of interactions to create biological data in silico. SiGNet is the first tool to simulate biological signaling data, enabling an accurate and systematic assessment of inference strategies. SiGNet can also be used to produce preliminary models of key biological pathways following perturbation.

  11. Power Minimization techniques for Networked Data Centers.

    Energy Technology Data Exchange (ETDEWEB)

    Low, Steven; Tang, Kevin

    2011-09-28

    Our objective is to develop a mathematical model to optimize energy consumption at multiple levels in networked data centers, and to develop abstract algorithms that not only optimize individual servers, but also coordinate the energy consumption of clusters of servers within a data center and across geographically distributed data centers, to minimize the overall energy cost and consumption of brown energy of an enterprise. In this project, we have formulated a variety of optimization models, some stochastic, others deterministic, and have obtained a variety of qualitative results on the structural properties, robustness, and scalability of the optimal policies. We have also systematically derived from these models decentralized algorithms to optimize energy efficiency, and analyzed their optimality and stability properties. Finally, we have conducted preliminary numerical simulations to illustrate the behavior of these algorithms. We draw the following conclusions. First, there is a substantial opportunity to minimize both the amount and the cost of electricity consumption in a network of data centers, by exploiting the fact that traffic load, electricity cost, and availability of renewable generation fluctuate over time and across geographical locations. Judiciously matching these stochastic processes can optimize the tradeoff between brown energy consumption, electricity cost, and response time. Second, given the stochastic nature of these three processes, real-time dynamic feedback should form the core of any optimization strategy. The key is to develop decentralized algorithms that can be implemented at different parts of the network as simple, local algorithms that coordinate through asynchronous message passing.

  12. Power Aware Simulation Framework for Wireless Sensor Networks and Nodes

    Directory of Open Access Journals (Sweden)

    Daniel Weber

    2008-07-01

    Full Text Available The constrained resources of sensor nodes limit analytical techniques and cost-time factors limit test beds to study wireless sensor networks (WSNs). Consequently, simulation becomes an essential tool to evaluate such systems. We present the power aware wireless sensors (PAWiS) simulation framework that supports design and simulation of wireless sensor networks and nodes. The framework emphasizes power consumption capturing and hence the identification of inefficiencies in various hardware and software modules of the systems. These modules include all layers of the communication system, the targeted class of application itself, the power supply and energy management, the central processing unit (CPU), and the sensor-actuator interface. The modular design makes it possible to simulate heterogeneous systems. PAWiS is an OMNeT++ based discrete event simulator written in C++. It captures the node internals (modules) as well as the node surroundings (network, environment) and provides specific features critical to WSNs, like capturing power consumption at various levels of granularity, support for mobility and environmental dynamics, as well as the simulation of timing effects. A module library with standardized interfaces and a power analysis tool have been developed to support the design and analysis of simulation models. The performance of the PAWiS simulator is comparable with other simulation environments.

  13. Network simulations of optical illusions

    Science.gov (United States)

    Shinbrot, Troy; Lazo, Miguel Vivar; Siu, Theo

    We examine a dynamical network model of visual processing that reproduces several aspects of a well-known optical illusion, including subtle dependencies on curvature and scale. The model uses a genetic algorithm to construct the percept of an image, and we show that this percept evolves dynamically so as to produce the illusions reported. We find that the perceived illusions are hardwired into the model architecture and we propose that this approach may serve as an archetype to distinguish behaviors that are due to nature (i.e. a fixed network architecture) from those subject to nurture (that can be plastically altered through learning).

  14. Network Simulation of Technical Architecture

    National Research Council Canada - National Science Library

    Cave, William

    1998-01-01

    ..., and development of the Army Battle Command System (ABCS). PSI delivered a hierarchical iconic modeling facility that can be used to structure and restructure both models and scenarios, interactively, while simulations are running...

  15. Code generation: a strategy for neural network simulators.

    Science.gov (United States)

    Goodman, Dan F M

    2010-10-01

    We demonstrate a technique for the design of neural network simulation software, runtime code generation. This technique can be used to give the user complete flexibility in specifying the mathematical model for their simulation in a high-level way, along with the speed of code written in a low-level language such as C++. It can also be used to write code only once but target different hardware platforms, including inexpensive high-performance graphics processing units (GPUs). Code generation can be naturally combined with computer algebra systems to provide further simplification and optimisation of the generated code. The technique is quite general and could be applied to any simulation package. We demonstrate it with the 'Brian' simulator (http://www.briansimulator.org).
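
    The runtime code generation strategy can be sketched in Python. This is a minimal illustration of the idea only; Brian's actual generator also targets C and GPU code:

```python
def make_state_updater(expr):
    """Runtime code generation sketch: compile a user-supplied
    expression for dv/dt into a fast update function. The function
    name and Euler scheme are our own illustrative choices."""
    src = (
        "def _update(v, dt):\n"
        f"    return v + dt * ({expr})\n"
    )
    namespace = {}
    # build and execute the generated source at runtime
    exec(compile(src, "<generated>", "exec"), namespace)
    return namespace["_update"]

# the user specifies the model as a string, as in a high-level language
update = make_state_updater("-v / 10.0 + 0.2")
```

    The user keeps the flexibility of a textual model description, while each call to `update` runs ordinary compiled bytecode rather than re-parsing the expression.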

  16. Meeting the memory challenges of brain-scale network simulation

    Directory of Open Access Journals (Sweden)

    Susanne eKunkel

    2012-01-01

    Full Text Available The development of high-performance simulation software is crucial for studying the brain connectome. Using connectome data to generate neurocomputational models requires software capable of coping with models on a variety of scales: from the microscale, investigating plasticity and dynamics of circuits in local networks, to the macroscale, investigating the interactions between distinct brain regions. Prior to any serious dynamical investigation, the first task of network simulations is to check the consistency of data integrated in the connectome and constrain ranges for yet unknown parameters. Thanks to distributed computing techniques, it is possible today to routinely simulate local cortical networks of around 10^5 neurons with up to 10^9 synapses on clusters and multi-processor shared-memory machines. However, brain-scale networks are one or two orders of magnitude larger than such local networks, in terms of numbers of neurons and synapses as well as in terms of computational load. Such networks have been studied in individual studies, but the underlying simulation technologies have neither been described in sufficient detail to be reproducible nor made publicly available. Here, we discover that as the network model sizes approach the regime of meso- and macroscale simulations, memory consumption on individual compute nodes becomes a critical bottleneck. This is especially relevant on modern supercomputers such as the Bluegene/P architecture where the available working memory per CPU core is rather limited. We develop a simple linear model to analyze the memory consumption of the constituent components of a neuronal simulator as a function of network size and the number of cores used. This approach has multiple benefits. The model enables identification of key contributing components to memory saturation and prediction of the effects of potential improvements to code before any implementation takes place.
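
    The kind of simple linear memory model described can be sketched as follows. The per-object byte costs below are invented placeholders for illustration, not measured values from the paper:

```python
def memory_per_core(n_neurons, n_synapses, n_cores,
                    b_fixed=5.0e8, b_neuron=1.5e3, b_synapse=48.0):
    """Linear model of per-core memory consumption: a fixed base cost
    plus per-neuron and per-synapse costs for the share of objects
    stored locally. All coefficients are illustrative placeholders."""
    return (b_fixed
            + b_neuron * n_neurons / n_cores
            + b_synapse * n_synapses / n_cores)
```

    Fitting such coefficients per component makes it possible to predict which data structure saturates memory first as the network size or core count grows.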

  17. Implementation of quantum key distribution network simulation module in the network simulator NS-3

    Science.gov (United States)

    Mehic, Miralem; Maurhart, Oliver; Rass, Stefan; Voznak, Miroslav

    2017-10-01

    As research in quantum key distribution (QKD) technology grows larger and more complex, highly accurate and scalable simulation technologies become important for assessing the practical feasibility of, and foreseeing difficulties in, the practical implementation of theoretical achievements. Because a QKD link requires both an optical and an Internet connection between the network nodes, deploying a complete testbed containing multiple network hosts and links to validate and verify a certain network algorithm or protocol would be very costly. Network simulators in these circumstances save vast amounts of money and time in accomplishing such a task. The simulation environment offers the creation of complex network topologies, a high degree of control and repeatable experiments, which in turn allows researchers to conduct experiments and confirm their results. In this paper, we describe the design of the QKD network simulation module which was developed in the network simulator version 3 (NS-3). The module supports simulation of a QKD network in an overlay mode or in a single TCP/IP mode; therefore, it can also be used to simulate other network technologies regardless of QKD.

  18. The Airport Network Flow Simulator.

    Science.gov (United States)

    1976-05-01

    The impact of investment at an individual airport is felt throughout the National Airport System by the reduction of delays at other airports in the system. A GPSS model was constructed to simulate the propagation of delays through a nine-airport sy...

  19. Synthesis of recurrent neural networks for dynamical system simulation.

    Science.gov (United States)

    Trischler, Adam P; D'Eleuterio, Gabriele M T

    2016-08-01

    We review several of the most widely used techniques for training recurrent neural networks to approximate dynamical systems, then describe a novel algorithm for this task. The algorithm is based on an earlier theoretical result that guarantees the quality of the network approximation. We show that a feedforward neural network can be trained on the vector-field representation of a given dynamical system using backpropagation, then recast it as a recurrent network that replicates the original system's dynamics. After detailing this algorithm and its relation to earlier approaches, we present numerical examples that demonstrate its capabilities. One of the distinguishing features of our approach is that both the original dynamical systems and the recurrent networks that simulate them operate in continuous time. Copyright © 2016 Elsevier Ltd. All rights reserved.
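
    The recasting step, turning a learned vector-field map into a recurrent update, can be sketched as a forward-Euler recurrence. This illustrates only the recasting idea; the paper's algorithm trains the network with backpropagation and operates in continuous time:

```python
import numpy as np

def recurrent_from_vector_field(f, x0, dt=0.01, steps=1000):
    """Recast a (learned) vector-field map f(x) ~ dx/dt as a discrete
    recurrent update x_{k+1} = x_k + dt * f(x_k). Here f stands in for
    a trained feedforward network; dt and steps are illustrative."""
    x = np.asarray(x0, dtype=float)
    traj = [x.copy()]
    for _ in range(steps):
        x = x + dt * f(x)      # the recurrent state update
        traj.append(x.copy())
    return np.array(traj)
```

    Feeding the network's output back through itself in this way is what turns a static approximation of the vector field into a dynamical system that replicates the original trajectories.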

  20. Dynamic simulation of regulatory networks using SQUAD

    Directory of Open Access Journals (Sweden)

    Xenarios Ioannis

    2007-11-01

    Full Text Available Abstract Background The ambition of most molecular biologists is the understanding of the intricate network of molecular interactions that control biological systems. As scientists uncover the components and the connectivity of these networks, it becomes possible to study their dynamical behavior as a whole and discover what is the specific role of each of their components. Since the behavior of a network is by no means intuitive, it becomes necessary to use computational models to understand its behavior and to be able to make predictions about it. Unfortunately, most current computational models describe small networks due to the scarcity of kinetic data available. To overcome this problem, we previously published a methodology to convert a signaling network into a dynamical system, even in the total absence of kinetic information. In this paper we present a software implementation of such methodology. Results We developed SQUAD, a software for the dynamic simulation of signaling networks using the standardized qualitative dynamical systems approach. SQUAD converts the network into a discrete dynamical system, and it uses a binary decision diagram algorithm to identify all the steady states of the system. Then, the software creates a continuous dynamical system and localizes its steady states which are located near the steady states of the discrete system. The software permits to make simulations on the continuous system, allowing for the modification of several parameters. Importantly, SQUAD includes a framework for perturbing networks in a manner similar to what is performed in experimental laboratory protocols, for example by activating receptors or knocking out molecular components. Using this software we have been able to successfully reproduce the behavior of the regulatory network implicated in T-helper cell differentiation. 
Conclusion The simulation of regulatory networks aims at predicting the behavior of a whole system when subject
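
    The identification of steady states of the discrete system can be illustrated with a brute-force fixed-point search over Boolean states. This is a toy stand-in for SQUAD's binary-decision-diagram algorithm, which avoids the exhaustive enumeration used here:

```python
from itertools import product

def steady_states(update_fns):
    """Exhaustively find the fixed points of a Boolean regulatory
    network: states that every node's update function maps to
    themselves. Feasible only for small networks; SQUAD's BDD-based
    search scales much further."""
    n = len(update_fns)
    return [state for state in product((0, 1), repeat=n)
            if tuple(f(state) for f in update_fns) == state]
```

    For example, a two-gene toggle switch (each gene represses the other) has exactly the two mutually exclusive expression states as fixed points.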

  1. Ekofisk chalk: core measurements, stochastic reconstruction, network modeling and simulation

    Energy Technology Data Exchange (ETDEWEB)

    Talukdar, Saifullah

    2002-07-01

    This dissertation deals with (1) experimental measurements on petrophysical, reservoir engineering and morphological properties of Ekofisk chalk, (2) numerical simulation of core flood experiments to analyze and improve relative permeability data, (3) stochastic reconstruction of chalk samples from limited morphological information, (4) extraction of pore space parameters from the reconstructed samples, development of network model using pore space information, and computation of petrophysical and reservoir engineering properties from network model, and (5) development of 2D and 3D idealized fractured reservoir models and verification of the applicability of several widely used conventional up scaling techniques in fractured reservoir simulation. Experiments have been conducted on eight Ekofisk chalk samples and porosity, absolute permeability, formation factor, and oil-water relative permeability, capillary pressure and resistivity index are measured at laboratory conditions. Mercury porosimetry data and backscatter scanning electron microscope images have also been acquired for the samples. A numerical simulation technique involving history matching of the production profiles is employed to improve the relative permeability curves and to analyze hysteresis of the Ekofisk chalk samples. The technique was found to be a powerful tool to supplement the uncertainties in experimental measurements. Porosity and correlation statistics obtained from backscatter scanning electron microscope images are used to reconstruct microstructures of chalk and particulate media. The reconstruction technique involves a simulated annealing algorithm, which can be constrained by an arbitrary number of morphological parameters. This flexibility of the algorithm is exploited to successfully reconstruct particulate media and chalk samples using more than one correlation functions. A technique based on conditional simulated annealing has been introduced for exact reproduction of vuggy

  2. An RSS based location estimation technique for cognitive relay networks

    KAUST Repository

    Qaraqe, Khalid A.

    2010-11-01

    In this paper, a received signal strength (RSS) based location estimation method is proposed for a cooperative wireless relay network where the relay is a cognitive radio. We propose a method for the considered cognitive relay network to determine the location of the source using the direct and the relayed signal at the destination. We derive the Cramer-Rao lower bound (CRLB) expressions separately for x and y coordinates of the location estimate. We analyze the effects of cognitive behaviour of the relay on the performance of the proposed method. We also discuss and quantify the reliability of the location estimate using the proposed technique if the source is not stationary. The overall performance of the proposed method is presented through simulations. ©2010 IEEE.
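    The RSS-ranging idea can be sketched with a log-distance path-loss model and a brute-force least-squares search over candidate source positions. The anchor positions, reference power and path-loss exponent below are illustrative assumptions, not the paper's parameters (the paper works with a source-relay-destination geometry and derives CRLBs analytically):

```python
import math

def rss_model(dist, p0=-40.0, n_exp=3.0, d0=1.0):
    # log-distance path loss: received power in dBm at distance dist (metres)
    return p0 - 10.0 * n_exp * math.log10(max(dist, 1e-9) / d0)

def estimate_location(anchors, rss_obs, grid=60, span=30.0):
    # brute-force least-squares search over a span x span metre grid
    best, best_err = None, float("inf")
    for gx in range(grid + 1):
        for gy in range(grid + 1):
            x, y = gx * span / grid, gy * span / grid
            err = 0.0
            for (ax, ay), r in zip(anchors, rss_obs):
                d = math.hypot(x - ax, y - ay)
                err += (rss_model(d) - r) ** 2
            if err < best_err:
                best, best_err = (x, y), err
    return best

# three receive points with known positions (a toy stand-in for the
# direct and relayed observations in the paper)
anchors = [(0.0, 0.0), (30.0, 0.0), (15.0, 25.0)]
true_src = (10.0, 10.0)
rss = [rss_model(math.hypot(true_src[0] - ax, true_src[1] - ay))
       for ax, ay in anchors]
print(estimate_location(anchors, rss))  # close to (10.0, 10.0)
```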

  3. Techniques Used in String Matching for Network Security

    OpenAIRE

    Jamuna Bhandari

    2014-01-01

    String matching, also known as pattern matching, is one of the primary concepts in network security. In this area, the effectiveness and efficiency of string matching algorithms are important for applications in network security such as network intrusion detection, virus detection, signature matching and web content filtering systems. This paper presents a brief review of some of the string matching techniques used for network security.
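    As a concrete example of the single-pattern case, the classic Knuth-Morris-Pratt algorithm scans the text in linear time without backtracking; signature-matching engines typically generalise this idea (e.g. via Aho-Corasick) to many patterns at once:

```python
def kmp_failure(pattern):
    # fail[i] = length of the longest proper prefix of pattern[:i+1]
    # that is also a suffix of it
    fail = [0] * len(pattern)
    k = 0
    for i in range(1, len(pattern)):
        while k > 0 and pattern[i] != pattern[k]:
            k = fail[k - 1]
        if pattern[i] == pattern[k]:
            k += 1
        fail[i] = k
    return fail

def kmp_search(text, pattern):
    # return all start offsets of pattern in text in O(len(text)) time
    if not pattern:
        return []
    fail, hits, k = kmp_failure(pattern), [], 0
    for i, ch in enumerate(text):
        while k > 0 and ch != pattern[k]:
            k = fail[k - 1]       # fall back instead of rescanning the text
        if ch == pattern[k]:
            k += 1
        if k == len(pattern):
            hits.append(i - k + 1)
            k = fail[k - 1]       # keep matching for overlapping occurrences
    return hits

print(kmp_search("abcabcabd", "abcabd"))  # [3]
```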

  4. High Fidelity Simulations of Large-Scale Wireless Networks

    Energy Technology Data Exchange (ETDEWEB)

    Onunkwo, Uzoma [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Benz, Zachary [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2015-11-01

    The worldwide proliferation of wirelessly connected devices continues to accelerate. There are tens of billions of wireless links across the planet, with an additional explosion of new wireless usage anticipated as the Internet of Things develops. Wireless technologies not only provide convenience for mobile applications but are also extremely cost-effective to deploy. Thus, this trend towards wireless connectivity will only continue, and Sandia must develop the necessary simulation technology to proactively analyze the associated emerging vulnerabilities. Wireless networks are marked by mobility and proximity-based connectivity. The de facto standard for exploratory studies of wireless networks is discrete event simulation (DES). However, simulating large-scale wireless networks is extremely difficult due to prohibitively long turnaround times. A path forward is to expedite simulations with parallel discrete event simulation (PDES) techniques. The mobility and distance-based connectivity associated with wireless simulations, however, typically doom PDES approaches to poor scaling (e.g., the OPNET and ns-3 simulators). We propose a PDES-based tool aimed at reducing the communication overhead between processors. The proposed solution will use light-weight processes to dynamically distribute computation workload while mitigating the communication overhead associated with synchronization. This work is vital to the analytics and validation capabilities of simulation and emulation at Sandia. We have years of experience in Sandia’s simulation and emulation projects (e.g., MINIMEGA and FIREWHEEL). Sandia’s current highly regarded capabilities in large-scale emulation have focused on wired networks, where two assumptions prevent scalable wireless studies: (a) the connections between objects are mostly static and (b) the nodes have fixed locations.

  5. Parameter estimation in channel network flow simulation

    Directory of Open Access Journals (Sweden)

    Han Longxi

    2008-03-01

    Simulations of water flow in channel networks require estimated values of roughness for all the individual channel segments that make up a network. When the number of individual channel segments is large, the parameter calibration workload is substantial and a high level of uncertainty in estimated roughness cannot be avoided. In this study, all the individual channel segments are graded according to the factors determining the value of roughness. It is assumed that channel segments with the same grade have the same value of roughness. Based on observed hydrological data, an optimal model for roughness estimation is built. The procedure of solving the optimal problem using the optimal model is described. In a test of its efficacy, this estimation method was applied successfully in the simulation of tidal water flow in a large complicated channel network in the lower reach of the Yangtze River in China.
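    The calibration idea, adjusting one shared roughness per channel grade until the simulated flow matches observations, can be sketched with Manning's equation and a one-dimensional grid search. The segment geometries and the "true" roughness below are synthetic, and the real study calibrates against a full unsteady-flow model rather than a closed-form equation:

```python
def manning_discharge(n, area, hyd_radius, slope):
    # Manning's equation for discharge in a channel segment (SI units):
    # Q = (1/n) * A * R^(2/3) * S^(1/2)
    return (1.0 / n) * area * hyd_radius ** (2.0 / 3.0) * slope ** 0.5

def calibrate_roughness(observations, n_lo=0.01, n_hi=0.10, steps=901):
    # grid search for the single roughness value (shared by all segments of
    # one grade) that minimizes the squared discharge error
    best_n, best_err = n_lo, float("inf")
    for k in range(steps):
        n = n_lo + (n_hi - n_lo) * k / (steps - 1)
        err = sum((manning_discharge(n, a, r, s) - q_obs) ** 2
                  for a, r, s, q_obs in observations)
        if err < best_err:
            best_n, best_err = n, err
    return best_n

# synthetic "observed" discharges generated with a true roughness of 0.03
obs = [(a, r, s, manning_discharge(0.03, a, r, s))
       for a, r, s in [(12.0, 1.5, 0.001), (20.0, 2.0, 0.0008), (8.0, 1.2, 0.0012)]]
print(round(calibrate_roughness(obs), 3))  # 0.03
```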

  6. Simulation of Stimuli-Responsive Polymer Networks

    Directory of Open Access Journals (Sweden)

    Thomas Gruhn

    2013-11-01

    The structure and material properties of polymer networks can depend sensitively on changes in the environment. There is a great deal of progress in the development of stimuli-responsive hydrogels for applications like sensors, self-repairing materials or actuators. Biocompatible, smart hydrogels can be used for applications, such as controlled drug delivery and release, or for artificial muscles. Numerical studies have been performed on different length scales and levels of details. Macroscopic theories that describe the network systems with the help of continuous fields are suited to study effects like the stimuli-induced deformation of hydrogels on large scales. In this article, we discuss various macroscopic approaches and describe, in more detail, our phase field model, which allows the calculation of the hydrogel dynamics with the help of a free energy that considers physical and chemical impacts. On a mesoscopic level, polymer systems can be modeled with the help of the self-consistent field theory, which includes the interactions, connectivity, and the entropy of the polymer chains, and does not depend on constitutive equations. We present our recent extension of the method that allows the study of the formation of nano domains in reversibly crosslinked block copolymer networks. Molecular simulations of polymer networks allow the investigation of the behavior of specific systems on a microscopic scale. As an example for microscopic modeling of stimuli sensitive polymer networks, we present our Monte Carlo simulations of a filament network system with crosslinkers.

  7. Realistic computer network simulation for network intrusion detection dataset generation

    Science.gov (United States)

    Payer, Garrett

    2015-05-01

    The KDD-99 Cup dataset is dead. While it can continue to be used as a toy example, the age of this dataset makes it all but useless for intrusion detection research and data mining. Many of the attacks used within the dataset are obsolete and do not reflect the features important for intrusion detection in today's networks. Creating a new dataset encompassing a large cross-section of the attacks found on the Internet today could be useful, but it would eventually fall prey to the same problem as the KDD-99 Cup: its usefulness would diminish after a period of time. To continue research into intrusion detection, the generation of new datasets needs to be as dynamic and as quick as the attacker. Simply examining existing network traffic and using domain experts such as intrusion analysts to label traffic is inefficient, expensive, and not scalable. The only viable methodology is simulation using technologies including virtualization, attack toolsets such as Metasploit and Armitage, and sophisticated emulation of threat and user behavior. Simulating actual user behavior and network intrusion events dynamically not only allows researchers to vary scenarios quickly, but also enables online testing of intrusion detection mechanisms by interacting with data as it is generated. As new threat behaviors are identified, they can be added to the simulation to make quicker determinations as to the effectiveness of existing and ongoing network intrusion technology, methodology and models.

  8. Knapsack - TOPSIS Technique for Vertical Handover in Heterogeneous Wireless Network

    OpenAIRE

    Malathy, E. M.; Vijayalakshmi Muthuswamy

    2015-01-01

    In a heterogeneous wireless network, handover techniques are designed to facilitate anywhere/anytime service continuity for mobile users. Consistent best-possible access to a network with widely varying network characteristics requires seamless mobility management techniques. Hence, the vertical handover process imposes important technical challenges. Handover decisions are triggered for continuous connectivity of mobile terminals. However, bad network selection and overload conditions in the...

  9. Brian: a simulator for spiking neural networks in Python

    Directory of Open Access Journals (Sweden)

    Dan F M Goodman

    2008-11-01

    Brian is a new simulator for spiking neural networks, written in Python (http://brian.di.ens.fr. It is an intuitive and highly flexible tool for rapidly developing new models, especially networks of single-compartment neurons. In addition to using standard types of neuron models, users can define models by writing arbitrary differential equations in ordinary mathematical notation. Python scientific libraries can also be used for defining models and analysing data. Vectorisation techniques allow efficient simulations despite the overheads of an interpreted language. Brian will be especially valuable for working on non-standard neuron models not easily covered by existing software, and as an alternative to using Matlab or C for simulations. With its easy and intuitive syntax, Brian is also very well suited for teaching computational neuroscience.
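    Brian itself vectorises the state updates across neurons; the underlying numerical loop it hides can be sketched in plain Python as Euler integration of leaky integrate-and-fire neurons. The parameters below are generic textbook values, and this is deliberately not an example of Brian's own API:

```python
import random

def simulate_lif(n=50, steps=1000, dt=0.1, tau=10.0, v_rest=0.0,
                 v_thresh=1.0, v_reset=0.0, i_ext=1.2, noise=0.05, seed=0):
    # Euler integration of dv/dt = (v_rest - v + i_ext) / tau for n
    # leaky integrate-and-fire neurons; spikes recorded as (time, neuron)
    rng = random.Random(seed)
    v = [v_rest + rng.random() * v_thresh for _ in range(n)]  # random init
    spikes = []
    for step in range(steps):
        for j in range(n):
            dv = (v_rest - v[j] + i_ext) / tau + noise * (rng.random() - 0.5)
            v[j] += dv * dt
            if v[j] >= v_thresh:          # threshold crossing: emit a spike
                spikes.append((step * dt, j))
                v[j] = v_reset            # then reset the membrane potential
    return spikes

spikes = simulate_lif()
print(len(spikes) > 0)  # True: i_ext above threshold drives regular firing
```

In Brian the inner loop above is replaced by a vectorised update over all neurons at once, which is what makes an interpreted language fast enough for this workload.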

  10. Simulating Autonomous Telecommunication Networks for Space Exploration

    Science.gov (United States)

    Segui, John S.; Jennings, Esther H.

    2008-01-01

    Currently, most interplanetary telecommunication systems require human intervention for command and control. However, considering the range from near-Earth to deep space missions, combined with the increase in the number of nodes and advancements in processing capabilities, the benefits of communication autonomy will be immense. Likewise, greater mission science autonomy brings the need for unscheduled, unpredictable communication and network routing. While the terrestrial Internet protocols are highly developed, their suitability for space exploration has been questioned. JPL has developed the Multi-mission Advanced Communications Hybrid Environment for Test and Evaluation (MACHETE) tool to help characterize network designs and protocols. The results will allow future mission planners to better understand the trade-offs of communication protocols. This paper discusses various issues with interplanetary networking and presents simulation results for interplanetary networking protocols.

  11. Computer simulation, nuclear techniques and surface analysis

    Directory of Open Access Journals (Sweden)

    Reis, A. D.

    2010-02-01

    This article is about computer simulation and surface analysis by nuclear techniques, which are non-destructive. The “energy method of analysis” for nuclear reactions is used. Energy spectra are computer simulated and compared with experimental data, giving target composition and concentration profile information. Details of the prediction stages are given for thick flat target yields. Predictions are made for non-flat targets having asymmetric triangular surface contours. The method is successfully applied to depth profiling of 12C and 18O nuclei in thick targets, by deuteron-induced (d,p) and proton-induced (p,α) reactions, respectively.

    This article deals with computer simulation and surface analysis by nuclear techniques, which are non-destructive. The “energy method of analysis” for nuclear reactions is used. Energy spectra are simulated on a computer and compared with experimental data, yielding information on the composition and concentration profiles of the sample. Details of the spectrum prediction stages are given for thick, flat samples. Predictions are made for non-flat samples having asymmetric triangular surface contours. This method is successfully applied to calculating depth profiles of 12C and 18O nuclei in thick samples through (d,p) and (p,α) reactions induced by deuterons and protons, respectively.

  12. Modeling and Simulation Network Data Standards

    Science.gov (United States)

    2011-09-30

    Element definitions: 12.1 Open Shortest Path First (OSPF), a protocol commonly used to find the shortest path between two nodes (user defined); 12.2 Border Gateway Protocol, …; 12.7 Request for Comments 1256 (RFC-1256), a router discovery protocol; 13.0 OSPF sub-elements, which define OSPF parameters; 13.1 … Acronyms: … resolution network analysis simulation tool; OSPF, open shortest path first; OV, operational view; PEO-I, Program Executive Office - Information.

  13. Expansion techniques for collisionless stellar dynamical simulations

    Energy Technology Data Exchange (ETDEWEB)

    Meiron, Yohai [Kavli Institute for Astronomy and Astrophysics at Peking University, Beijing 100871 (China); Li, Baile; Holley-Bockelmann, Kelly [Department of Physics and Astronomy, Vanderbilt University, Nashville, TN 37235 (United States); Spurzem, Rainer, E-mail: ymeiron@pku.edu.cn [National Astronomical Observatories of China, Chinese Academy of Sciences, Beijing 100012 (China)

    2014-09-10

    We present graphics processing unit (GPU) implementations of two fast force calculation methods based on series expansions of the Poisson equation. One method is the self-consistent field (SCF) method, which is a Fourier-like expansion of the density field in some basis set; the other method is the multipole expansion (MEX) method, which is a Taylor-like expansion of the Green's function. MEX, which has been advocated in the past, has not gained as much popularity as SCF. Both are particle-field methods and optimized for collisionless galactic dynamics, but while SCF is a 'pure' expansion, MEX is an expansion in just the angular part; thus, MEX is capable of capturing radial structure easily, while SCF needs a large number of radial terms. We show that despite the expansion bias, these methods are more accurate than direct techniques for the same number of particles. The performance of our GPU code, which we call ETICS, is profiled and compared to a CPU implementation. On the tested GPU hardware, a full force calculation for one million particles took ∼0.1 s (depending on expansion cutoff), making simulations with as many as 10^8 particles fast for a comparatively small number of nodes.
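    The MEX idea rests on the classical Laplace expansion of the Green's function 1/|r − r′| in spherical harmonics; a standard textbook sketch of the truncated particle-field potential (generic notation, not the paper's ETICS discretisation) is:

```latex
\Phi(\mathbf{r}) \;=\; -G \sum_{l=0}^{l_{\max}} \sum_{m=-l}^{l}
\frac{4\pi}{2l+1}\, Y_{lm}(\theta,\varphi)
\sum_{i=1}^{N} m_i\, \frac{r_<^{\,l}}{r_>^{\,l+1}}\,
Y_{lm}^{*}(\theta_i,\varphi_i),
\qquad r_< = \min(r, r_i),\quad r_> = \max(r, r_i).
```

Truncating at $l_{\max}$ keeps only the angular expansion, which is why radial structure is captured without the large number of radial basis terms SCF requires.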

  14. Resilience Simulation for Water, Power & Road Networks

    Science.gov (United States)

    Clark, S. S.; Seager, T. P.; Chester, M.; Eisenberg, D. A.; Sweet, D.; Linkov, I.

    2014-12-01

    The increasing frequency, scale, and damages associated with recent catastrophic events have called for a shift in focus from evading losses through risk analysis to improving threat preparation, planning, absorption, recovery, and adaptation through resilience. However, neither underlying theory nor analytic tools have kept pace with resilience rhetoric. As a consequence, current approaches to engineering resilience analysis often conflate resilience and robustness or collapse into a deeper commitment to the risk-analytic paradigm proven problematic in the first place. This research seeks a generalizable understanding of resilience that is applicable in multiple disciplinary contexts. We adopt a unique investigative perspective by coupling social and technical analysis with human subjects research to discover the adaptive actions, ideas and decisions that contribute to resilience in three socio-technical infrastructure systems: electric power, water, and roadways. Our research integrates physical models representing network objects with examination of the knowledge systems and social interactions revealed by human subjects making decisions in a simulated crisis environment. To ensure a diversity of contexts, we model electric power, water, roadway and knowledge networks for Phoenix, AZ and Indianapolis, IN. We synthesize this in a new computer-based Resilient Infrastructure Simulation Environment (RISE) to allow individuals, groups (including students) and experts to test different network design configurations and crisis response approaches. By observing simulated failures and best performances, we expect a generalizable understanding of resilience to emerge that yields a measurable understanding of the sensing, anticipating, adapting, and learning processes that are essential to resilient organizations.

  15. Linking Simulation with Formal Verification and Modeling of Wireless Sensor Network in TLA+

    Science.gov (United States)

    Martyna, Jerzy

    In this paper, we present the results of the simulation of a wireless sensor network based on the flooding technique and SPIN protocols. The wireless sensor network was specified and verified by means of the TLA+ specification language [1]. For a model of a wireless sensor network built this way, simulation was carried out with the help of specially constructed software tools. The obtained results allow us to predict the behaviour of the wireless sensor network in various topologies and spatial densities. Visualization of the output data enables precise examination of some phenomena in wireless sensor networks, such as the hidden terminal problem.

  16. Traffic volume estimation using network interpolation techniques.

    Science.gov (United States)

    2013-12-01

    The kriging method is a frequently used interpolation methodology in geography, which enables estimation of unknown values at certain places with consideration of the distances among locations. When it is used in the transportation field, network distanc...
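    As a simpler stand-in for kriging, the network-distance idea can be illustrated with inverse-distance weighting along shortest paths (Dijkstra): weights come from distances measured along the road network rather than straight-line distances. The toy road graph and count-station volumes below are invented for illustration:

```python
import heapq

def network_distances(graph, source):
    # Dijkstra shortest-path distances along the road network
    dist = {source: 0.0}
    heap = [(0.0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue
        for v, w in graph[u]:
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

def idw_estimate(graph, observed, target, power=2.0):
    # inverse-network-distance weighting of observed traffic volumes
    dist = network_distances(graph, target)
    num = den = 0.0
    for node, volume in observed.items():
        d = dist.get(node)
        if d is None:
            continue                 # station unreachable from target
        if d == 0.0:
            return volume            # target coincides with a count station
        w = 1.0 / d ** power
        num += w * volume
        den += w
    return num / den

# toy road network: adjacency list with segment lengths (km)
graph = {
    "A": [("B", 1.0), ("C", 2.0)],
    "B": [("A", 1.0), ("D", 1.0)],
    "C": [("A", 2.0), ("D", 2.0)],
    "D": [("B", 1.0), ("C", 2.0)],
}
observed = {"A": 1000.0, "D": 2000.0}      # counted volumes at two stations
print(idw_estimate(graph, observed, "B"))  # 1500.0: A and D are equidistant
```

Kriging proper would replace the fixed 1/d^power weights with weights derived from a fitted variogram, but the network-distance plumbing is the same.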

  17. C Library for Simulated Evolution of Biological Networks

    OpenAIRE

    Chandran, Deepak; Sauro, Herbert M.

    2010-01-01

    Simulated evolution of biological networks can be used to generate functional networks as well as investigate hypotheses regarding natural evolution. A handful of studies have shown how simulated evolution can be used for studying the functional space spanned by biochemical networks, studying natural evolution, or designing new synthetic networks. If there were a method for easily performing such studies, it would allow the community to further experiment with simulated evolution and explore all...

  18. Motorway Network Simulation Using Bluetooth Data

    Directory of Open Access Journals (Sweden)

    Karakikes Ioannis

    2016-09-01

    This paper describes a systematic calibration process for a Vissim model, based on data derived from BT (Bluetooth) detectors. It also provides instructions on how to calibrate and validate a highway network model based upon a case study, and establishes an example for practitioners interested in designing highway networks with microsimulation tools. Within this case study, a proper calibration of 94.5% of all segments was achieved. First, an overview of the systematic calibration approach that will be followed is presented. A description of the given datasets follows. Finally, the model's systematic calibration and validation based on BT data from segments under free flow conditions is thoroughly explained. The delivered calibrated Vissim model acts as a test bed, which in combination with other analysis tools can be used for potential future exploitation for transportation-related purposes.

  19. Techniques and Simulation Models in Risk Management

    Directory of Open Access Journals (Sweden)

    Mirela GHEORGHE

    2012-12-01

    In the present paper, the scientific approach of the research starts from the theoretical framework of the simulation concept and then continues in the setting of practical reality, providing simulation models for a broad range of inherent risks specific to any organization and simulating those models using the informatics instrument @Risk (Palisade). The reason behind this research lies in the need for simulation models that will allow the person in charge of decision making in the field of risk management to adopt new corporate strategies which will answer their current needs. The results of the research are represented by two simulation models specific to risk management. The first model covers net profit simulation, as well as simulating the impact that could be generated by a series of inherent risk factors such as losing some important colleagues, a drop in selling prices, a drop in sales volume, retrofitting, and so on. The second simulation model is associated with the IT field, through the analysis of 10 informatics threats, in order to evaluate the potential financial loss.
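    A minimal net profit simulation in the spirit of the first model might look like the sketch below. The distributions and cost figures are illustrative assumptions, not the paper's data, and the paper itself uses @Risk rather than hand-rolled code:

```python
import random

def simulate_net_profit(trials=20000, seed=42):
    # Monte Carlo simulation of net profit under uncertain selling price,
    # sales volume and unit cost; all distributions are invented examples
    rng = random.Random(seed)
    profits = []
    for _ in range(trials):
        price = rng.triangular(8.0, 12.0, 10.0)   # selling price per unit
        volume = rng.gauss(10000.0, 1500.0)       # units sold
        unit_cost = rng.uniform(5.0, 7.0)         # variable cost per unit
        fixed_cost = 25000.0
        profits.append((price - unit_cost) * volume - fixed_cost)
    profits.sort()
    mean = sum(profits) / trials
    p5 = profits[int(0.05 * trials)]              # 5th percentile of profit
    loss_prob = sum(p < 0 for p in profits) / trials
    return mean, p5, loss_prob

mean, p5, loss_prob = simulate_net_profit()
print(round(mean), round(p5), round(loss_prob, 3))
```

The 5th percentile and the probability of a loss are the kind of tail statistics a risk manager would read off an @Risk output distribution.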

  20. Learning in innovation networks: Some simulation experiments

    Science.gov (United States)

    Gilbert, Nigel; Ahrweiler, Petra; Pyka, Andreas

    2007-05-01

    According to the organizational learning literature, the greatest competitive advantage a firm has is its ability to learn. In this paper, a framework for modeling learning competence in firms is presented to improve the understanding of managing innovation. Firms with different knowledge stocks attempt to improve their economic performance by engaging in radical or incremental innovation activities and through partnerships and networking with other firms. In trying to vary and/or to stabilize their knowledge stocks by organizational learning, they attempt to adapt to environmental requirements while the market strongly selects on the results. The simulation experiments show the impact of different learning activities, underlining the importance of innovation and learning.

  1. Mobile-ip Aeronautical Network Simulation Study

    Science.gov (United States)

    Ivancic, William D.; Tran, Diepchi T.

    2001-01-01

    NASA is interested in applying mobile Internet protocol (mobile-ip) technologies to its space and aeronautics programs. In particular, mobile-ip will play a major role in the Advanced Aeronautic Transportation Technology (AATT), the Weather Information Communication (WINCOMM), and the Small Aircraft Transportation System (SATS) aeronautics programs. This report presents the results of a simulation study of mobile-ip for an aeronautical network. The study was performed to determine the performance of the transmission control protocol (TCP) in a mobile-ip environment and to gain an understanding of how long delays, handoffs, and noisy channels affect mobile-ip performance.

  2. Knapsack - TOPSIS Technique for Vertical Handover in Heterogeneous Wireless Network

    Science.gov (United States)

    2015-01-01

    In a heterogeneous wireless network, handover techniques are designed to facilitate anywhere/anytime service continuity for mobile users. Consistent best-possible access to a network with widely varying network characteristics requires seamless mobility management techniques. Hence, the vertical handover process imposes important technical challenges. Handover decisions are triggered for continuous connectivity of mobile terminals. However, bad network selection and overload conditions in the chosen network can cause fallout in the form of handover failure. In order to maintain the required Quality of Service during the handover process, decision algorithms should incorporate intelligent techniques. In this paper, a new and efficient vertical handover mechanism is implemented using a dynamic programming method from the operation research discipline. This dynamic programming approach, which is integrated with the Technique to Order Preference by Similarity to Ideal Solution (TOPSIS) method, provides the mobile user with the best handover decisions. Moreover, in this proposed handover algorithm a deterministic approach which divides the network into zones is incorporated into the network server in order to derive an optimal solution. The study revealed that this method is found to achieve better performance and QoS support to users and greatly reduce the handover failures when compared to the traditional TOPSIS method. The decision arrived at the zone gateway using this operational research analytical method (known as the dynamic programming knapsack approach together with Technique to Order Preference by Similarity to Ideal Solution) yields remarkably better results in terms of the network performance measures such as throughput and delay. PMID:26237221
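    The TOPSIS ranking at the heart of the decision step can be sketched in a few lines: normalise and weight the decision matrix, find the ideal and anti-ideal alternatives, and score each candidate by its relative closeness to the ideal. The candidate networks, criteria and weights below are illustrative assumptions, not the paper's measurement data:

```python
def topsis(matrix, weights, benefit):
    # matrix[i][j]: score of network i on criterion j
    # benefit[j]: True if larger is better (e.g. bandwidth), False for cost/delay
    m, n = len(matrix), len(matrix[0])
    # vector-normalise each criterion column, then apply the weights
    norms = [sum(matrix[i][j] ** 2 for i in range(m)) ** 0.5 for j in range(n)]
    v = [[weights[j] * matrix[i][j] / norms[j] for j in range(n)] for i in range(m)]
    ideal = [max(v[i][j] for i in range(m)) if benefit[j]
             else min(v[i][j] for i in range(m)) for j in range(n)]
    worst = [min(v[i][j] for i in range(m)) if benefit[j]
             else max(v[i][j] for i in range(m)) for j in range(n)]
    scores = []
    for i in range(m):
        d_best = sum((v[i][j] - ideal[j]) ** 2 for j in range(n)) ** 0.5
        d_worst = sum((v[i][j] - worst[j]) ** 2 for j in range(n)) ** 0.5
        scores.append(d_worst / (d_best + d_worst))  # closeness to ideal
    return scores

# candidate networks scored on bandwidth (Mbps), delay (ms), cost, load (%)
candidates = [
    [54.0, 40.0, 5.0, 60.0],   # WLAN
    [2.0, 110.0, 2.0, 30.0],   # cellular
    [20.0, 60.0, 4.0, 45.0],   # WiMAX-like
]
weights = [0.4, 0.3, 0.2, 0.1]
benefit = [True, False, False, False]
scores = topsis(candidates, weights, benefit)
print([round(s, 3) for s in scores])  # index 0 (WLAN) ranks first here
```

The paper's contribution layers a knapsack-style dynamic programming step and zone-based pre-selection on top of this ranking; the sketch covers only the TOPSIS core.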

  3. Anomaly Detection Techniques for Ad Hoc Networks

    Science.gov (United States)

    Cai, Chaoli

    2009-01-01

    Anomaly detection is an important and indispensable aspect of any computer security mechanism. Ad hoc and mobile networks consist of a number of peer mobile nodes that are capable of communicating with each other absent a fixed infrastructure. Arbitrary node movements and lack of centralized control make them vulnerable to a wide variety of…

  4. Neural network stochastic simulation applied for quantifying uncertainties

    Directory of Open Access Journals (Sweden)

    N Foudil-Bey

    2016-09-01

    Generally, geostatistical simulation methods are used to generate several realizations of physical properties in the subsurface; these methods are based on variogram analysis and limited to measuring correlation between variables at two locations only. In this paper, we propose a simulation of properties based on supervised neural network training on an existing drilling data set. The major advantage is that this method does not require a preliminary geostatistical study and takes several points into account. As a result, geological information and diverse geophysical data can be combined easily. To do this, we used a neural network with a feed-forward multi-layer perceptron architecture, and then the back-propagation algorithm with the conjugate gradient technique to minimize the error of the network output. The learning process can create links between different variables; this relationship can be used for interpolation of the properties on the one hand, or to generate several possible distributions of the physical properties on the other hand, by changing each time the random value of the input neurons, which is kept constant during the learning period. This method was tested on real data to simulate multiple realizations of the density and the magnetic susceptibility in three dimensions at the mining camp of Val d'Or, Québec (Canada).
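    A minimal sketch of the core ingredient, a feed-forward multi-layer perceptron trained by back-propagation, is shown below on a toy XOR task. Note that it uses plain per-sample gradient descent rather than the conjugate-gradient variant the paper describes, and every parameter is invented:

```python
import math, random

def train_mlp_xor(hidden=4, epochs=4000, lr=0.5, seed=3):
    # one-hidden-layer feed-forward perceptron trained by per-sample
    # gradient-descent back-propagation on the XOR problem
    rng = random.Random(seed)
    sig = lambda z: 1.0 / (1.0 + math.exp(-z))   # logistic activation
    w1 = [[rng.uniform(-1, 1) for _ in range(2)] for _ in range(hidden)]
    b1 = [0.0] * hidden
    w2 = [rng.uniform(-1, 1) for _ in range(hidden)]
    b2 = 0.0
    data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 0)]
    for _ in range(epochs):
        for x, t in data:
            # forward pass
            h = [sig(w1[j][0] * x[0] + w1[j][1] * x[1] + b1[j])
                 for j in range(hidden)]
            y = sig(sum(w2[j] * h[j] for j in range(hidden)) + b2)
            # backward pass: output delta, then hidden deltas
            d_out = (y - t) * y * (1.0 - y)
            for j in range(hidden):
                d_hid = d_out * w2[j] * h[j] * (1.0 - h[j])
                w2[j] -= lr * d_out * h[j]
                w1[j][0] -= lr * d_hid * x[0]
                w1[j][1] -= lr * d_hid * x[1]
                b1[j] -= lr * d_hid
            b2 -= lr * d_out
    def predict(x):
        h = [sig(w1[j][0] * x[0] + w1[j][1] * x[1] + b1[j])
             for j in range(hidden)]
        return sig(sum(w2[j] * h[j] for j in range(hidden)) + b2)
    return predict

predict = train_mlp_xor()
print([round(predict(x), 2) for x in ([0, 0], [0, 1], [1, 0], [1, 1])])
```

In the paper's setting, randomising some input neurons between training runs turns the trained interpolator into a generator of alternative property realizations.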

  5. Localization in wireless sensor networks: Classification and evaluation of techniques

    National Research Council Canada - National Science Library

    Ewa Niewiadomska-Szynkiewicz

    2012-01-01

      Localization in wireless sensor networks: Classification and evaluation of techniques Recent advances in technology have enabled the development of low cost, low power and multi functional wireless sensing devices...

  6. Techniques in micromagnetic simulation and analysis

    Science.gov (United States)

    Kumar, D.; Adeyeye, A. O.

    2017-08-01

    Advances in nanofabrication now allow us to manipulate magnetic material at micro- and nanoscales. As the steps of design, modelling and simulation typically precede that of fabrication, these improvements have also granted a significant boost to the methods of micromagnetic simulations (MSs) and analyses. The increased availability of massive computational resources has been another major contributing factor. Magnetization dynamics at micro- and nanoscale is described by the Landau-Lifshitz-Gilbert (LLG) equation, which is an ordinary differential equation (ODE) in time. Several finite difference method (FDM) and finite element method (FEM) based LLG solvers are now widely used to solve different kinds of micromagnetic problems. In this review, we present a few patterns in the ways MSs are being used in the pursuit of new physics. An important objective of this review is to allow one to make a well-informed decision on the details of simulation and analysis procedures needed to accomplish a given task using computational micromagnetics. We also examine the effect of different simulation parameters to underscore and extend some best practices. Lastly, we examine different methods of micromagnetic analyses which are used to process simulation results in order to extract physically meaningful and valuable information.
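    The LLG equation referred to above can be written in the standard Gilbert form, where γ is the gyromagnetic ratio, α the dimensionless Gilbert damping, M_s the saturation magnetization and H_eff the effective field:

```latex
\frac{\partial \mathbf{M}}{\partial t}
  \;=\; -\gamma\, \mathbf{M} \times \mathbf{H}_{\mathrm{eff}}
  \;+\; \frac{\alpha}{M_s}\, \mathbf{M} \times \frac{\partial \mathbf{M}}{\partial t}.
```

The first term drives precession of the magnetization about the effective field; the second, damping, term relaxes it toward the field, and it is this ODE that the FDM/FEM solvers integrate cell by cell.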

  7. Outlier Detection Techniques For Wireless Sensor Networks: A Survey

    NARCIS (Netherlands)

    Zhang, Y.; Meratnia, Nirvana; Havinga, Paul J.M.

    2008-01-01

    In the field of wireless sensor networks, measurements that significantly deviate from the normal pattern of sensed data are considered as outliers. The potential sources of outliers include noise and errors, events, and malicious attacks on the network. Traditional outlier detection techniques are

  8. Cognitive Heterogeneous Reconfigurable Optical Networks (CHRON): Enabling Technologies and Techniques

    DEFF Research Database (Denmark)

    Tafur Monroy, Idelfonso; Zibar, Darko; Guerrero Gonzalez, Neil

    2011-01-01

    We present the approach of cognition applied to heterogeneous optical networks developed in the framework of the EU project CHRON: Cognitive Heterogeneous Reconfigurable Optical Network. We introduce and discuss in particular the technologies and techniques that will enable a cognitive optical...

  9. Survey of Green Radio Communications Networks: Techniques and Recent Advances

    Directory of Open Access Journals (Sweden)

    Mohammed H. Alsharif

    2013-01-01

    Energy efficiency in cellular networks has received significant attention from both academia and industry because of the importance of reducing the operational expenditures and maintaining the profitability of cellular networks, in addition to making these networks “greener.” Because the base station is the primary energy consumer in the network, efforts have been made to study base station energy consumption and to find ways to improve energy efficiency. In this paper, we present a brief review of the techniques that have been used recently to improve energy efficiency, such as energy-efficient power amplifier techniques, time-domain techniques, cell switching, management of the physical layer through multiple-input multiple-output (MIMO management, heterogeneous network architectures based on Micro-Pico-Femtocells, cell zooming, and relay techniques. In addition, this paper discusses the advantages and disadvantages of each technique to contribute to a better understanding of each of the techniques and thereby offer clear insights to researchers about how to choose the best ways to reduce energy consumption in future green radio networks.

  10. Criminal Network Investigation: Processes, Tools, and Techniques

    DEFF Research Database (Denmark)

    Petersen, Rasmus Rosenqvist

    intelligence products that can be disseminated to their customers. Investigators deal with an increasing amount of information from a variety of sources, especially the Internet, all of which are important to their analysis and decision making process. But information abundance is far from the only or most...... a target-centric process model (acquisition, synthesis, sense-making, dissemination, cooperation) encouraging and supporting an iterative and incremental evolution of the criminal network across all five investigation processes. The first priority of the process model is to address the problems of linear...

  11. The design of a network emulation and simulation laboratory

    CSIR Research Space (South Africa)

    Von Solms, S

    2015-07-01

    Full Text Available The development of the Network Emulation and Simulation Laboratory is motivated by the drive to contribute to the enhancement of the security and resilience of South Africa's critical information infrastructure. The goal of the Network Emulation...

  12. Multipath Routing and Wavelength Assignment Technique in Optical WDM Mesh Networks

    Science.gov (United States)

    Kavitha, T.; Shiyamala, S.; Rajamani, V.

    2017-12-01

A routing and wavelength assignment (RWA) technique for supporting multipath traffic in optical wavelength-division multiplexing (WDM) mesh networks is proposed in this paper. The network operation proceeds in two stages: first, establishing the connection node, and second, identifying the multiple paths and assigning wavelengths. The connection node is selected based on the load and the current traffic-carrying capacity of that node. During the wavelength allocation mechanism, a cost function is the major criterion: based on the cost involved in every path, the wavelength with the minimum cost is allocated to that particular path. This technique efficiently allocates wavelengths to the selected multiple paths, and the traffic is routed to the destination over multiple paths with wavelength allocation. For simulation, the NS2 simulator is used with the optical WDM network simulator patch. The proposed multipath RWA technique is compared with an existing RWA technique: the proposed technique achieves a throughput of 12,625 packets for ten wavelengths, whereas the existing approach achieves only 10,189 packets for the same number of wavelengths. Channel utilization is higher and delay is lower than with the existing technique. Hence, the proposed method is very efficient, since the router effectively routes the traffic within the network.
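As a concrete illustration of the cost-based wavelength selection described above, the sketch below greedily gives each selected path the cheapest wavelength not yet taken. It is a simplified stand-in for the paper's technique (a real RWA tracks wavelength usage per link rather than globally), and all names and costs are illustrative.

```python
def assign_wavelengths(path_costs):
    """path_costs: {path_id: {wavelength: cost}} for the selected multipaths.
    Returns {path_id: wavelength}; a path with no free wavelength is blocked."""
    assignment = {}
    used = set()
    for path, costs in path_costs.items():
        # restrict to wavelengths not yet taken by an earlier path
        free = {w: c for w, c in costs.items() if w not in used}
        if not free:
            continue  # blocking: no wavelength left for this path
        best = min(free, key=free.get)  # minimum-cost wavelength
        assignment[path] = best
        used.add(best)
    return assignment
```

For example, with two paths where wavelength 2 is cheapest for both, the first path receives it and the second falls back to its next-cheapest free wavelength.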

  13. Simulation techniques in hyperthermia treatment planning

    NARCIS (Netherlands)

    M.M. Paulides (Maarten); J.C. Stauffer; E. Neufeld; P.F. MacCarini (Paolo); A. Kyriakou (Adamos); R.A.M. Canters (Richard); S. Diederich (Sven); J. Bakker (Jan); G.C. van Rhoon (Gerard)

    2013-01-01

Clinical trials have shown that hyperthermia (HT), i.e. an increase of tissue temperature to 39-44 °C, significantly enhances radiotherapy and chemotherapy effectiveness [1]. Driven by the developments in computational techniques and computing power, personalised hyperthermia treatment

  14. A Framework to Implement IoT Network Performance Modelling Techniques for Network Solution Selection

    Directory of Open Access Journals (Sweden)

    Declan T. Delaney

    2016-12-01

    Full Text Available No single network solution for Internet of Things (IoT networks can provide the required level of Quality of Service (QoS for all applications in all environments. This leads to an increasing number of solutions created to fit particular scenarios. Given the increasing number and complexity of solutions available, it becomes difficult for an application developer to choose the solution which is best suited for an application. This article introduces a framework which autonomously chooses the best solution for the application given the current deployed environment. The framework utilises a performance model to predict the expected performance of a particular solution in a given environment. The framework can then choose an apt solution for the application from a set of available solutions. This article presents the framework with a set of models built using data collected from simulation. The modelling technique can determine with up to 85% accuracy the solution which performs the best for a particular performance metric given a set of solutions. The article highlights the fractured and disjointed practice currently in place for examining and comparing communication solutions and aims to open a discussion on harmonising testing procedures so that different solutions can be directly compared and offers a framework to achieve this within IoT networks.

  15. A Framework to Implement IoT Network Performance Modelling Techniques for Network Solution Selection †

    Science.gov (United States)

    Delaney, Declan T.; O’Hare, Gregory M. P.

    2016-01-01

    No single network solution for Internet of Things (IoT) networks can provide the required level of Quality of Service (QoS) for all applications in all environments. This leads to an increasing number of solutions created to fit particular scenarios. Given the increasing number and complexity of solutions available, it becomes difficult for an application developer to choose the solution which is best suited for an application. This article introduces a framework which autonomously chooses the best solution for the application given the current deployed environment. The framework utilises a performance model to predict the expected performance of a particular solution in a given environment. The framework can then choose an apt solution for the application from a set of available solutions. This article presents the framework with a set of models built using data collected from simulation. The modelling technique can determine with up to 85% accuracy the solution which performs the best for a particular performance metric given a set of solutions. The article highlights the fractured and disjointed practice currently in place for examining and comparing communication solutions and aims to open a discussion on harmonising testing procedures so that different solutions can be directly compared and offers a framework to achieve this within IoT networks. PMID:27916929

  16. A Framework to Implement IoT Network Performance Modelling Techniques for Network Solution Selection.

    Science.gov (United States)

    Delaney, Declan T; O'Hare, Gregory M P

    2016-12-01

    No single network solution for Internet of Things (IoT) networks can provide the required level of Quality of Service (QoS) for all applications in all environments. This leads to an increasing number of solutions created to fit particular scenarios. Given the increasing number and complexity of solutions available, it becomes difficult for an application developer to choose the solution which is best suited for an application. This article introduces a framework which autonomously chooses the best solution for the application given the current deployed environment. The framework utilises a performance model to predict the expected performance of a particular solution in a given environment. The framework can then choose an apt solution for the application from a set of available solutions. This article presents the framework with a set of models built using data collected from simulation. The modelling technique can determine with up to 85% accuracy the solution which performs the best for a particular performance metric given a set of solutions. The article highlights the fractured and disjointed practice currently in place for examining and comparing communication solutions and aims to open a discussion on harmonising testing procedures so that different solutions can be directly compared and offers a framework to achieve this within IoT networks.
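The selection step of the framework can be sketched as follows: each candidate solution is paired with a performance model that predicts a metric for the current environment, and the framework picks the highest-scoring solution. The model functions, feature names, and solution names below are illustrative assumptions, not taken from the article.

```python
def choose_solution(models, environment):
    """models: {solution_name: model}, where model(environment) returns a
    predicted performance score (higher is better) for that environment.
    Returns the name of the best-scoring solution."""
    predictions = {name: model(environment) for name, model in models.items()}
    return max(predictions, key=predictions.get)

# Illustrative performance models: predicted delivery ratio as a function
# of deployment size (purely made-up coefficients).
models = {
    "solution_a": lambda env: 0.9 - 0.002 * env["node_count"],
    "solution_b": lambda env: 0.8 - 0.001 * env["node_count"],
}
```

Under these toy models, solution_a wins in small deployments while solution_b degrades more slowly and wins in large ones.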

  17. Techniques for labeling of optical signals in bust switched networks

    DEFF Research Database (Denmark)

    Tafur Monroy, Idelfonso; Koonen, A. M. J.; Zhang, Jianfeng

    2003-01-01

We present a review of significant issues related to labeled optical burst switched (LOBS) networks and technologies enabling future optical internet networks. Labeled optical burst switching provides a quick and efficient forwarding mechanism for IP packets/bursts over wavelength division...... multiplexed (WDM) networks due to its single forwarding algorithm, thus yielding low latency, and it enables scaling to terabit rates. Moreover, LOBS is compatible with the general multiprotocol label switching (GMPLS) framework for a unified control plane. We present a review of techniques for labeling...... of optical signals for LOBS networks, including experimental results, and we discuss issues for further research....

  18. The design and implementation of a network simulation platform

    CSIR Research Space (South Africa)

    Von Solms, S

    2013-11-01

Full Text Available of the NS. The various aspects of the NS are discussed subsequently. A. Topology It can be seen from Figure 1 that the developed NS comprises multiple network sections, namely Internal User Networks/Local Area Networks (LANs) connected...]. This will provide a realistic platform which is isolated, more controlled and more predictable than implementation across live networks [4]. In this paper we discuss the development of such a network simulation environment, called a network simulator (NS...

  19. Characterization of Background Traffic in Hybrid Network Simulation

    National Research Council Canada - National Science Library

    Lauwens, Ben; Scheers, Bart; Van de Capelle, Antoine

    2006-01-01

    .... Two approaches are common: discrete event simulation and fluid approximation. A discrete event simulation generates a huge amount of events for a full-blown battlefield communication network resulting in a very long runtime...

  20. Creating real network with expected degree distribution: A statistical simulation

    OpenAIRE

    WenJun Zhang; GuangHua Liu

    2012-01-01

The degree distribution of known networks is one of the focuses in network analysis. However, its inverse problem, i.e., to create a network from a known degree distribution, has not yet been reported. In the present study, a statistical simulation algorithm was developed to create a real network with an expected degree distribution. It is an iteration procedure in which a real network, with the least deviation of the actual degree distribution from the expected degree distribution, is created. Random assignment was...
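The iterative procedure can be sketched as follows, under the assumption that candidate graphs are drawn uniformly at random and scored by total absolute deviation from the expected degree distribution; the scoring rule, trial count, and function names are illustrative, not from the paper.

```python
import random
from collections import Counter

def degree_distribution(edges, n):
    """Fraction of the n nodes having each degree."""
    deg = Counter()
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    counts = Counter(deg[i] for i in range(n))
    return {k: c / n for k, c in counts.items()}

def deviation(actual, expected):
    """Total absolute deviation between two degree distributions."""
    keys = set(actual) | set(expected)
    return sum(abs(actual.get(k, 0.0) - expected.get(k, 0.0)) for k in keys)

def create_network(n, m, expected, trials=500, seed=1):
    """Repeatedly draw random m-edge graphs on n nodes and keep the one
    whose actual degree distribution deviates least from the expected one."""
    rng = random.Random(seed)
    best, best_dev = None, float("inf")
    for _ in range(trials):
        edges = set()
        while len(edges) < m:
            u, v = rng.sample(range(n), 2)
            edges.add((min(u, v), max(u, v)))  # undirected, no self-loops
        d = deviation(degree_distribution(edges, n), expected)
        if d < best_dev:
            best, best_dev = set(edges), d
    return best, best_dev
```

For instance, asking for 3 edges on 6 nodes with expected distribution {1: 1.0} searches for a perfect matching.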

  1. Information diversity in structure and dynamics of simulated neuronal networks.

    Science.gov (United States)

    Mäki-Marttunen, Tuomo; Aćimović, Jugoslava; Nykter, Matti; Kesseli, Juha; Ruohonen, Keijo; Yli-Harja, Olli; Linne, Marja-Leena

    2011-01-01

Neuronal networks exhibit a wide diversity of structures, which contributes to the diversity of the dynamics therein. The presented work applies an information theoretic framework to simultaneously analyze structure and dynamics in neuronal networks. Information diversity within the structure and dynamics of a neuronal network is studied using the normalized compression distance. To describe the structure, a scheme for generating distance-dependent networks with identical in-degree distribution but variable strength of dependence on distance is presented. The resulting network structure classes possess differing path length and clustering coefficient distributions. In parallel, comparable realistic neuronal networks are generated with the NETMORPH simulator and similar analysis is done on them. To describe the dynamics, network spike trains are simulated using different network structures and their bursting behaviors are analyzed. For the simulation of the network activity the Izhikevich model of spiking neurons is used together with the Tsodyks model of dynamical synapses. We show that the structure of the simulated neuronal networks affects the spontaneous bursting activity when measured with bursting frequency and a set of intraburst measures: the more locally connected networks produce more and longer bursts than the more random networks. The information diversity of the structure of a network is greatest in the most locally connected networks, smallest in random networks, and somewhere in between in the networks between order and disorder. As for the dynamics, the most locally connected networks and some of the in-between networks produce the most complex intraburst spike trains. The same result also holds for the sparser of the two considered network densities in the case of full spike trains.
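The normalized compression distance used above can be approximated with any off-the-shelf compressor; a minimal sketch using zlib as the compressor (a standard approximation of the theoretical definition, not the authors' code):

```python
import zlib

def ncd(x: bytes, y: bytes) -> float:
    """Normalized compression distance,
    NCD(x, y) = (C(xy) - min(C(x), C(y))) / max(C(x), C(y)),
    with the ideal compressed length C(s) approximated by the
    zlib-compressed length of s."""
    cx = len(zlib.compress(x))
    cy = len(zlib.compress(y))
    cxy = len(zlib.compress(x + y))
    return (cxy - min(cx, cy)) / max(cx, cy)
```

Intuitively, two similar serialized spike trains compress well together (NCD near 0), while unrelated ones share little structure (NCD near 1).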

  2. Information Diversity in Structure and Dynamics of Simulated Neuronal Networks

    Directory of Open Access Journals (Sweden)

    Tuomo eMäki-Marttunen

    2011-06-01

Full Text Available Neuronal networks exhibit a wide diversity of structures, which contributes to the diversity of the dynamics therein. The presented work applies an information theoretic framework to simultaneously analyze structure and dynamics in neuronal networks. Information diversity within the structure and dynamics of a neuronal network is studied using the normalized compression distance (NCD). To describe the structure, a scheme for generating distance-dependent networks with identical in-degree distribution but variable strength of dependence on distance is presented. The resulting network structure classes possess differing path length and clustering coefficient distributions. In parallel, comparable realistic neuronal networks are generated with the NETMORPH simulator and similar analysis is done on them. To describe the dynamics, network spike trains are simulated using different network structures and their bursting behaviours are analyzed. For the simulation of the network activity the Izhikevich model of spiking neurons is used together with the Tsodyks model of dynamical synapses. We show that the structure of the simulated neuronal networks affects the spontaneous bursting activity when measured with bursting frequency and a set of intraburst measures: the more locally connected networks produce more and longer bursts than the more random networks. The information diversity of the structure of a network is greatest in the most locally connected networks, smallest in random networks, and somewhere in between in the networks between order and disorder. As for the dynamics, the most locally connected networks and some of the in-between networks produce the most complex intraburst spike trains. The same result also holds for the sparser of the two considered network densities in the case of full spike trains.

  3. Traffic Simulations on Parallel Computers Using Domain Decomposition Techniques

    Science.gov (United States)

    1995-01-01

Large scale simulations of Intelligent Transportation Systems (ITS) can only be achieved by using the computing resources offered by parallel computing architectures. Domain decomposition techniques are proposed which allow the performance of traffic...

  4. An analog simulation technique for distributed flow systems

    DEFF Research Database (Denmark)

    Jørgensen, Sten Bay; Kümmel, Mogens

    1973-01-01

Simulation of distributed flow systems in chemical engineering has been applied more and more during the last decade as computer techniques have developed [1]. The applications have served the purpose of identification of process dynamics and parameter estimation as well as improving process...... and process control design. Although the conventional analog computer has been expanded with hybrid techniques and digital simulation languages have appeared, none of these has demonstrated superiority in simulating distributed flow systems in general [1]. Conventional analog techniques are expensive......, especially when flow forcing and nonlinearities are simulated. Digital methods on the other hand are time consuming. The purpose of this application note is to describe the hardware for the analog principle proposed by [2, 3]. Using this hardware flow forcing is readily simulated, which was not feasible...

  5. Acceleration techniques for dependability simulation. M.S. Thesis

    Science.gov (United States)

    Barnette, James David

    1995-01-01

    As computer systems increase in complexity, the need to project system performance from the earliest design and development stages increases. We have to employ simulation for detailed dependability studies of large systems. However, as the complexity of the simulation model increases, the time required to obtain statistically significant results also increases. This paper discusses an approach that is application independent and can be readily applied to any process-based simulation model. Topics include background on classical discrete event simulation and techniques for random variate generation and statistics gathering to support simulation.
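The abstract mentions techniques for random variate generation in process-based simulation; a textbook example used in dependability studies is inverse-transform sampling of exponentially distributed inter-event (e.g. inter-failure) times. This is a standard method, not necessarily the one covered in the thesis, and the rate value is illustrative.

```python
import math
import random

def exponential_variate(rate, rng):
    """Inverse-transform sampling for Exp(rate): if U ~ Uniform(0, 1),
    then -ln(1 - U) / rate follows the exponential distribution,
    because the CDF F(x) = 1 - exp(-rate * x) inverts to
    F^-1(u) = -ln(1 - u) / rate."""
    u = rng.random()
    return -math.log(1.0 - u) / rate

# Example: inter-failure times of a component failing at rate 2.0 per hour.
rng = random.Random(42)
samples = [exponential_variate(2.0, rng) for _ in range(10000)]
mean = sum(samples) / len(samples)  # should be close to 1 / rate = 0.5
```

Statistics gathered over such samples (here the empirical mean) are what the simulation's output analysis operates on.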

  6. Semi-Analytic Techniques for Fast MATLAB Simulations

    OpenAIRE

    Borio, Daniele; Cano, Eduardo

    2012-01-01

    Semi-analytic techniques are a powerful tool for the analysis of complex systems. In the semi-analytic framework, the knowledge of the system under analysis is exploited to reduce the computational load and complexity that full Monte Carlo simulations would require. In this way, the strengths of both analytical and Monte Carlo methods are effectively combined. The main goal of this chapter is to provide a general overview of semi-analytic techniques for the simulation of communications sys...

  7. HADES, A Code for Simulating a Variety of Radiographic Techniques

    Energy Technology Data Exchange (ETDEWEB)

    Aufderheide, M B; Henderson, G; von Wittenau, A; Slone, D M; Barty, A; Martz, Jr., H E

    2004-10-28

    It is often useful to simulate radiographic images in order to optimize imaging trade-offs and to test tomographic techniques. HADES is a code that simulates radiography using ray tracing techniques. Although originally developed to simulate X-Ray transmission radiography, HADES has grown to simulate neutron radiography over a wide range of energy, proton radiography in the 1 MeV to 100 GeV range, and recently phase contrast radiography using X-Rays in the keV energy range. HADES can simulate parallel-ray or cone-beam radiography through a variety of mesh types, as well as through collections of geometric objects. HADES was originally developed for nondestructive evaluation (NDE) applications, but could be a useful tool for simulation of portal imaging, proton therapy imaging, and synchrotron studies of tissue. In this paper we describe HADES' current capabilities and discuss plans for a major revision of the code.
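For a single ray, the transmission radiography that HADES simulates by ray tracing reduces to Beer-Lambert attenuation over the materials the ray crosses. The sketch below is a minimal illustration of that principle, not HADES code; the attenuation coefficients are placeholders.

```python
import math

def ray_transmission(i0, segments):
    """Transmitted intensity along one ray (Beer-Lambert law).
    segments: list of (mu, length) pairs, one per material the ray
    crosses, where mu is the linear attenuation coefficient.
    Returns i0 * exp(-sum(mu_i * l_i))."""
    return i0 * math.exp(-sum(mu * length for mu, length in segments))
```

A ray traversing 2 cm of material with mu = 0.5/cm, for example, is attenuated by a factor e^-1; a full simulator repeats this per detector pixel over the traced path through the mesh or geometric objects.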

  8. Simulation Of Networking Protocols On Software Emulated Network Stack

    Directory of Open Access Journals (Sweden)

    Hrushikesh Nimkar

    2015-08-01

Full Text Available With the increasing number and complexity of network-based applications, the need for easy configuration, development, and integration of network applications has taken high precedence. Trivial activities such as configuration can be carried out efficiently if network services are software based rather than hardware based. The project aims at enabling network engineers to easily include network functionalities into their configuration and define their own network stack without using the kernel network stack. With this in mind, we have implemented two functionalities: UPnP and mDNS. The multicast Domain Name System (mDNS) resolves host names to IP addresses within small ad-hoc networks without the need for a special DNS server and its configuration. The mDNS application provides every host with the functionality to register itself to the router, make a multicast DNS request, and resolve it. To make adding network devices and networked programs to a network as easy as plugging a piece of hardware into a PC, we make use of UPnP. The devices and programs find out about the network setup and other networked devices and programs through discovery and advertisement of services, and configure themselves accordingly. The UPnP application provides every host with the functionality of discovering services of other hosts and serving requests on demand. To implement these applications we have used the snabbswitch framework, which is an open-source virtualized Ethernet networking stack.

  9. Cooperative Technique Based on Sensor Selection in Wireless Sensor Network

    OpenAIRE

    ISLAM, M. R.; KIM, J.

    2009-01-01

An energy efficient cooperative technique is proposed for IEEE 1451 based Wireless Sensor Networks. A selected number of Wireless Transducer Interface Modules (WTIMs) are used to form a Multiple Input Single Output (MISO) structure wirelessly connected with a Network Capable Application Processor (NCAP). Energy efficiency and delay of the proposed architecture are derived for different combinations of cluster size and selected number of WTIMs. Optimized constellation parameters are used for...

  10. High Dimensional Modulation and MIMO Techniques for Access Networks

    DEFF Research Database (Denmark)

    Binti Othman, Maisara

    Exploration of advanced modulation formats and multiplexing techniques for next generation optical access networks are of interest as promising solutions for delivering multiple services to end-users. This thesis addresses this from two different angles: high dimensionality carrierless amplitudep...... wired-wireless access networks....... the capacity per wavelength of the femto-cell network. Bit rate up to 1.59 Gbps with fiber-wireless transmission over 1 m air distance is demonstrated. The results presented in this thesis demonstrate the feasibility of high dimensionality CAP in increasing the number of dimensions and their potentially...... to be utilized for multiple service allocation to different users. MIMO multiplexing techniques with OFDM provides the scalability in increasing spectral efficiency and bit rates for RoF systems. High dimensional CAP and MIMO multiplexing techniques are two promising solutions for supporting wired and hybrid...

  11. Simulation technique for hard-disk models in two dimensions

    DEFF Research Database (Denmark)

    Fraser, Diane P.; Zuckermann, Martin J.; Mouritsen, Ole G.

    1990-01-01

    A method is presented for studying hard-disk systems by Monte Carlo computer-simulation techniques within the NpT ensemble. The method is based on the Voronoi tesselation, which is dynamically maintained during the simulation. By an analysis of the Voronoi statistics, a quantity is identified tha...

  12. Integration of Continuous-Time Dynamics in a Spiking Neural Network Simulator.

    Science.gov (United States)

    Hahne, Jan; Dahmen, David; Schuecker, Jannis; Frommer, Andreas; Bolten, Matthias; Helias, Moritz; Diesmann, Markus

    2017-01-01

    Contemporary modeling approaches to the dynamics of neural networks include two important classes of models: biologically grounded spiking neuron models and functionally inspired rate-based units. We present a unified simulation framework that supports the combination of the two for multi-scale modeling, enables the quantitative validation of mean-field approaches by spiking network simulations, and provides an increase in reliability by usage of the same simulation code and the same network model specifications for both model classes. While most spiking simulations rely on the communication of discrete events, rate models require time-continuous interactions between neurons. Exploiting the conceptual similarity to the inclusion of gap junctions in spiking network simulations, we arrive at a reference implementation of instantaneous and delayed interactions between rate-based models in a spiking network simulator. The separation of rate dynamics from the general connection and communication infrastructure ensures flexibility of the framework. In addition to the standard implementation we present an iterative approach based on waveform-relaxation techniques to reduce communication and increase performance for large-scale simulations of rate-based models with instantaneous interactions. Finally we demonstrate the broad applicability of the framework by considering various examples from the literature, ranging from random networks to neural-field models. The study provides the prerequisite for interactions between rate-based and spiking models in a joint simulation.
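The waveform-relaxation idea for the rate dynamics can be sketched on a toy linear system tau dx/dt = -x + Wx (with tau = 1): each sweep integrates the whole time window using the previous iterate's trajectory for the coupling term, then the window is swept again until the waveforms stop changing. The scheme below (Gauss-Jacobi relaxation with forward Euler) and its parameters are illustrative, not the simulator's implementation.

```python
def waveform_relaxation(W, x0, T=5.0, dt=0.05, sweeps=30):
    """Iteratively solve dx/dt = -x + W x over [0, T].
    Within a sweep, the decay term uses the sweep's own state while the
    coupling term W x uses the previous iterate's trajectory, so units
    exchange full waveforms only once per sweep (reduced communication).
    Returns the trajectory as a list of state vectors, one per time step."""
    n = len(x0)
    steps = int(round(T / dt))
    # initial guess: hold the initial condition over the whole window
    traj = [list(x0) for _ in range(steps + 1)]
    for _ in range(sweeps):
        new = [list(x0)]
        for t in range(steps):
            cur = new[t]
            nxt = []
            for i in range(n):
                coupling = sum(W[i][j] * traj[t][j] for j in range(n))
                nxt.append(cur[i] + dt * (-cur[i] + coupling))
            new.append(nxt)
        traj = new  # the converged sweep equals the directly coupled solution
    return traj
```

With a weakly coupled two-unit network, the relaxed solution decays toward the fixed point at zero, matching the directly integrated system.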

  13. Simulation and Evaluation of Ethernet Passive Optical Network

    Directory of Open Access Journals (Sweden)

    Salah A. Jaro Alabady

    2013-05-01

Full Text Available This paper studies the simulation and evaluation of an Ethernet Passive Optical Network (EPON) system, IEEE 802.3ah, based on the OPTISM 3.6 simulation program. The simulation program is used to build a typical Ethernet passive optical network and to evaluate the network performance when using the (1580, 1625) nm wavelengths instead of the (1310, 1490) nm wavelengths used in the Optical Line Terminal (OLT) and Optical Network Units (ONUs) in the system architecture of the Ethernet passive optical network, at different bit rates and different fiber lengths. The results showed enhancement in network performance: an increase in the number of nodes (subscribers) connected to the network, an increase in the transmission distance, a reduction in the received power, and a reduction in the Bit Error Rate (BER).

  14. Simulation of wind turbine wakes using the actuator line technique

    DEFF Research Database (Denmark)

    Sørensen, Jens Nørkær; Mikkelsen, Robert Flemming; Henningson, Dan S.

    2015-01-01

    The actuator line technique was introduced as a numerical tool to be employed in combination with large eddy simulations to enable the study of wakes and wake interaction in wind farms. The technique is today largely used for studying basic features of wakes as well as for making performance...

  15. Advanced network programming principles and techniques : network application programming with Java

    CERN Document Server

    Ciubotaru, Bogdan

    2013-01-01

    Answering the need for an accessible overview of the field, this text/reference presents a manageable introduction to both the theoretical and practical aspects of computer networks and network programming. Clearly structured and easy to follow, the book describes cutting-edge developments in network architectures, communication protocols, and programming techniques and models, supported by code examples for hands-on practice with creating network-based applications. Features: presents detailed coverage of network architectures; gently introduces the reader to the basic ideas underpinning comp

  16. A technique for choosing an option for SDH network upgrade

    Directory of Open Access Journals (Sweden)

    V. A. Bulanov

    2014-01-01

Full Text Available Rapidly developing data transmission technologies make network equipment modernization inevitable. There are various options to upgrade SDH networks, for example, by increasing the capacity of overloaded network sites, by increasing the entire network capacity through equipment replacement or creation of a parallel network, by changing the network structure with the organization of a multilevel network hierarchy, etc. The options vary in a diversity of parameters, from the solution cost to the labor intensiveness of their realization, and there are no standard approaches to choosing an option for network development. The article offers a technique for choosing the SDH network upgrade based on the method of expert evaluations, using as a tool a software complex that quickly yields the quantitative characteristics of a proposed network option. The technique is as follows:
1. Forming a perspective matrix of services inclination to the SDH networks.
2. Developing several possible options for network modernization.
3. Forming the list of criteria and defining the indicators that characterize them, in two groups: costs of the option implementation and arising losses; positive effect from the option introduction.
4. Assigning criteria weight coefficients.
5. Assessing the indicator values within each criterion for each option by each expert, and normalizing the obtained values relative to the maximum value of the indicator among all options.
6. Calculating the integrated indicators for each option by criteria groups.
7. Creating a Pareto set by plotting, for the two criteria groups, the points corresponding to all options in a coordinate system on the plane, and choosing an option.
In the implementation of point 2, the derivation of indicators by the software complex plays a key role.
This complex should produce a structure of the network equipment, types of multiplexer sections
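The final Pareto step of the technique can be sketched as follows, assuming each option has already been reduced to an integrated cost score (lower is better) and an integrated benefit score (higher is better); option names and scores are illustrative.

```python
def pareto_front(options):
    """options: {name: (cost, benefit)}, lower cost and higher benefit
    being better. Returns the sorted names of non-dominated options,
    i.e. the Pareto set from which the final choice is made."""
    front = []
    for a, (cost_a, ben_a) in options.items():
        dominated = any(
            cost_b <= cost_a and ben_b >= ben_a
            and (cost_b < cost_a or ben_b > ben_a)
            for b, (cost_b, ben_b) in options.items() if b != a
        )
        if not dominated:
            front.append(a)
    return sorted(front)
```

An option that is both costlier and less beneficial than another drops out; the remaining points form the frontier the experts choose from.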

  17. An Autonomous Self-Aware and Adaptive Fault Tolerant Routing Technique for Wireless Sensor Networks.

    Science.gov (United States)

    Abba, Sani; Lee, Jeong-A

    2015-08-18

We propose an autonomous self-aware and adaptive fault-tolerant routing technique (ASAART) for wireless sensor networks. We address the limitations of the self-healing routing (SHR) and self-selective routing (SSR) techniques for routing sensor data. We also examine the integration of autonomic self-aware and adaptive fault detection and resiliency techniques for route formation and route repair to provide resilience to errors and failures. We achieved this by using a combined continuous and slotted prioritized transmission back-off delay to obtain local and global network state information, as well as multiple random functions for attaining faster routing convergence and reliable route repair despite transient and permanent node failures, and for efficient adaptation to instantaneous network topology changes. Simulations comparing ASAART with the SHR and SSR protocols in five different scenarios, in the presence of transient and permanent node failures, show greater resiliency to errors and failures and better routing performance in terms of the number of successfully delivered network packets, end-to-end delay, delivered MAC layer packets, and packet error rate, as well as efficient energy conservation in a highly congested, faulty, and scalable sensor network.

  18. Exploring machine-learning-based control plane intrusion detection techniques in software defined optical networks

    Science.gov (United States)

    Zhang, Huibin; Wang, Yuqiao; Chen, Haoran; Zhao, Yongli; Zhang, Jie

    2017-12-01

In software defined optical networks (SDON), the centralized control plane may encounter numerous intrusion threats which compromise the security level of provisioned services. In this paper, the issue of control plane security is studied and two machine-learning-based control plane intrusion detection techniques are proposed for SDON with properly selected features such as bandwidth, route length, etc. We validate the feasibility and efficiency of the proposed techniques by simulations. Results show that an accuracy of 83% for intrusion detection can be achieved with the proposed machine-learning-based techniques.
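The paper's two classifiers are not detailed in this abstract; as a deliberately simple stand-in, a nearest-centroid classifier over two of the named features (bandwidth, route length) shows the general shape of such a feature-based detector. All training data and feature values below are made up for illustration.

```python
def train_centroids(samples):
    """samples: list of (feature_vector, label) pairs.
    Returns the per-class mean feature vector (the class centroid)."""
    sums, counts = {}, {}
    for x, y in samples:
        s = sums.setdefault(y, [0.0] * len(x))
        for i, v in enumerate(x):
            s[i] += v
        counts[y] = counts.get(y, 0) + 1
    return {y: [v / counts[y] for v in s] for y, s in sums.items()}

def classify(centroids, x):
    """Assign x to the class whose centroid is nearest (squared distance)."""
    def sq_dist(c):
        return sum((a - b) ** 2 for a, b in zip(x, c))
    return min(centroids, key=lambda y: sq_dist(centroids[y]))
```

Here each feature vector is (requested bandwidth in Mb/s, route length in hops); a real detector would normalize features and use the paper's actual learning algorithms.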

  19. Transmission network expansion planning with simulation optimization

    Energy Technology Data Exchange (ETDEWEB)

    Bent, Russell W [Los Alamos National Laboratory; Berscheid, Alan [Los Alamos National Laboratory; Toole, G. Loren [Los Alamos National Laboratory

    2010-01-01

Within the electric power literature the transmission expansion planning problem (TNEP) refers to the problem of how to upgrade an electric power network to meet future demands. As this problem is a complex, non-linear, and non-convex optimization problem, researchers have traditionally focused on approximate models. Often, their approaches are tightly coupled to the approximation choice. Until recently, these approximations have produced results that are straight-forward to adapt to the more complex (real) problem. However, the power grid is evolving towards a state where the adaptations are no longer easy (i.e. large amounts of limited control, renewable generation) and that necessitates new optimization techniques. In this paper, we propose a generalization of the powerful Limited Discrepancy Search (LDS) that encapsulates the complexity in a black box that may be queried for information about the quality of a proposed expansion. This allows the development of a new optimization algorithm that is independent of the underlying power model.

  20. FUMET: A fuzzy network module extraction technique for gene ...

    Indian Academy of Sciences (India)

    FUMET: A fuzzy network module extraction technique for gene expression data. Priyakshi Mahanta Hasin Afzal Ahmed ... Bhattacharyya1 Ashish Ghosh2. Department of Computer Science and Engineering, Tezpur University, Napaam 784 028, India; Machine Intelligent Unit, Indian Statistical Institute, Kolkata 700 108, India ...

  1. Modeling radio link performance in UMTS W-CDMA network simulations

    DEFF Research Database (Denmark)

    Klingenbrunn, Thomas; Mogensen, Preben Elgaard

    2000-01-01

    This article presents a method to model the W-CDMA radio receiver performance, which is usable in network simulation tools for third generation mobile cellular systems. The method represents a technique to combine link level simulations with network level simulations. The method is derived from [1......], which defines a stochastic mapping function from a Signal-to-Interference Ratio into a Bit-Error-Rate for a TDMA system. However, in order to work in a W-CDMA based system, the fact that the Multiple-Access Interference in downlink consists of both Gaussian inter-cell interference and orthogonal intra...

  2. Simulation of Missile Autopilot with Two-Rate Hybrid Neural Network System

    Directory of Open Access Journals (Sweden)

    ASTROV, I.

    2007-04-01

    Full Text Available This paper proposes a two-rate hybrid neural network system, which consists of two artificial neural network subsystems. These neural network subsystems are used as the controllers for the dynamic subsystems, because such neuromorphic controllers are especially suitable for controlling complex systems. An illustrative example - two-rate neural network hybrid control of a decomposed stochastic model of a rigid guided missile over different operating conditions - was carried out using the proposed two-rate state-space decomposition technique. This example demonstrates that the technique results in simplified low-order autonomous control subsystems with various speeds of actuation, and shows its quality. The obtained results show that the control tasks for the autonomous subsystems can be solved with higher quality than for the original system. The simulation and animation results, obtained with the Simulink software package, demonstrate that this technique would work for real-time stochastic systems.

  3. Whitelists Based Multiple Filtering Techniques in SCADA Sensor Networks

    Directory of Open Access Journals (Sweden)

    DongHo Kang

    2014-01-01

    Full Text Available The Internet of Things (IoT) consists of several tiny devices connected together to form a collaborative computing environment. Recently, IoT technologies have begun to merge with supervisory control and data acquisition (SCADA) sensor networks to more efficiently gather and analyze real-time data from sensors in industrial environments. But SCADA sensor networks are becoming more and more vulnerable to cyber-attacks due to increased connectivity. To safely adopt IoT technologies in SCADA environments, it is important to improve the security of SCADA sensor networks. In this paper we propose a multiple filtering technique based on whitelists to detect illegitimate packets. Our proposed system detects the traffic of network and application protocol attacks with a set of whitelists collected from normal traffic.
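
A two-stage whitelist filter of the kind described can be sketched as follows; the whitelist fields (flow tuples and Modbus-style function codes) are assumptions for illustration, not the paper's actual schema:

```python
# Hypothetical whitelist entries learned from normal traffic:
NETWORK_WHITELIST = {("10.0.0.5", "10.0.0.9", "modbus")}   # (src, dst, protocol)
APP_WHITELIST = {("modbus", 3), ("modbus", 6)}             # (protocol, function code)

def filter_packet(src, dst, proto, func_code):
    """Two-stage whitelist filtering: a network-level flow check first,
    then an application-level protocol function check."""
    if (src, dst, proto) not in NETWORK_WHITELIST:
        return "drop: unknown flow"
    if (proto, func_code) not in APP_WHITELIST:
        return "drop: illegal function"
    return "accept"

print(filter_packet("10.0.0.5", "10.0.0.9", "modbus", 3))   # → accept
print(filter_packet("10.0.0.7", "10.0.0.9", "modbus", 3))   # → drop: unknown flow
print(filter_packet("10.0.0.5", "10.0.0.9", "modbus", 99))  # → drop: illegal function
```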

  4. A configurable simulation environment for the efficient simulation of large-scale spiking neural networks on graphics processors.

    Science.gov (United States)

    Nageswaran, Jayram Moorkanikara; Dutt, Nikil; Krichmar, Jeffrey L; Nicolau, Alex; Veidenbaum, Alexander V

    2009-01-01

    Neural network simulators that take into account the spiking behavior of neurons are useful for studying brain mechanisms and for various neural engineering applications. Spiking Neural Network (SNN) simulators have been traditionally simulated on large-scale clusters, super-computers, or on dedicated hardware architectures. Alternatively, Compute Unified Device Architecture (CUDA) Graphics Processing Units (GPUs) can provide a low-cost, programmable, and high-performance computing platform for simulation of SNNs. In this paper we demonstrate an efficient, biologically realistic, large-scale SNN simulator that runs on a single GPU. The SNN model includes Izhikevich spiking neurons, detailed models of synaptic plasticity and variable axonal delay. We allow user-defined configuration of the GPU-SNN model by means of a high-level programming interface written in C++ but similar to the PyNN programming interface specification. PyNN is a common programming interface developed by the neuronal simulation community to allow a single script to run on various simulators. The GPU implementation (on NVIDIA GTX-280 with 1 GB of memory) is up to 26 times faster than a CPU version for the simulation of 100K neurons with 50 Million synaptic connections, firing at an average rate of 7 Hz. For simulation of 10 Million synaptic connections and 100K neurons, the GPU SNN model is only 1.5 times slower than real-time. Further, we present a collection of new techniques related to parallelism extraction, mapping of irregular communication, and network representation for effective simulation of SNNs on GPUs. The fidelity of the simulation results was validated on CPU simulations using firing rate, synaptic weight distribution, and inter-spike interval analysis. Our simulator is publicly available to the modeling community so that researchers will have easy access to large-scale SNN simulations.
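
A single Izhikevich neuron, the cell model this simulator uses, can be integrated with forward Euler in a few lines (regular-spiking parameters; this sketch ignores synapses, plasticity, and the GPU mapping entirely):

```python
def izhikevich(I, steps, dt=1.0, a=0.02, b=0.2, c=-65.0, d=8.0):
    """Integrate one Izhikevich neuron (regular-spiking parameters) driven by a
    constant input current I; returns the number of spikes emitted."""
    v, u, spikes = -65.0, b * -65.0, 0
    for _ in range(steps):
        # Membrane and recovery updates (Izhikevich model), forward Euler:
        v += dt * (0.04 * v * v + 5.0 * v + 140.0 - u + I)
        u += dt * a * (b * v - u)
        if v >= 30.0:  # spike threshold: reset v, bump recovery variable u
            v, u, spikes = c, u + d, spikes + 1
    return spikes

print(izhikevich(I=10.0, steps=1000) > 0)  # → True (strong input makes it fire)
print(izhikevich(I=0.0, steps=1000))       # → 0 (no input, no spikes)
```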

  5. A simulation study of TaMAC protocol using network simulator 2.

    Science.gov (United States)

    Ullah, Sana; Kwak, Kyung Sup

    2012-10-01

    A Wireless Body Area Network (WBAN) is expected to play a significant role in future healthcare systems. It interconnects low-cost and intelligent sensor nodes in, on, or around a human body to serve a variety of medical applications. It can be used to diagnose and treat patients with chronic diseases such as hypertension, diabetes, and cardiovascular disease. The lightweight sensor nodes integrated in a WBAN require low-power operation, which can be achieved using different optimization techniques. We introduce a Traffic-adaptive MAC protocol (TaMAC) for WBAN that supports dual wakeup mechanisms for normal, emergency, and on-demand traffic. In this letter, the TaMAC protocol is simulated using the well-known Network Simulator 2 (NS-2). The problem of multiple emergency nodes is solved using both a wakeup radio and the CSMA/CA protocol. The power consumption, delay, and throughput performance are closely compared with the beacon-enabled IEEE 802.15.4 MAC protocol using extensive simulations.

  6. Graphical user interface for wireless sensor networks simulator

    Science.gov (United States)

    Paczesny, Tomasz; Paczesny, Daniel; Weremczuk, Jerzy

    2008-01-01

    Wireless Sensor Networks (WSN) are currently a very popular area of development. They are suited to many applications, from military uses through environment monitoring, healthcare, home automation and others. Such networks, when operating in a dynamic, ad hoc mode, need effective protocols that must differ from common computer network algorithms. Research on these protocols would be difficult without a simulation tool, because real applications often use many nodes, and tests on such large networks take considerable effort and cost. The paper presents a Graphical User Interface (GUI) for a simulator which is dedicated to WSN studies, especially the evaluation of routing and data link protocols.

  7. A Flexible System for Simulating Aeronautical Telecommunication Network

    Science.gov (United States)

    Maly, Kurt; Overstreet, C. M.; Andey, R.

    1998-01-01

    At Old Dominion University, we have built an Aeronautical Telecommunication Network (ATN) Simulator, funded by NASA. It provides a means to evaluate the impact of modified router scheduling algorithms on network efficiency, to perform capacity studies on various network topologies, and to monitor and study various aspects of the ATN through a graphical user interface (GUI). In this paper we briefly describe the proposed ATN model and our abstraction of it. We then describe our simulator architecture, highlighting some of the design specifications, scheduling algorithms, and user interface. Finally, we provide the results of performance studies on this simulator.

  8. Parallel discrete-event simulation of FCFS stochastic queueing networks

    Science.gov (United States)

    Nicol, David M.

    1988-01-01

    Physical systems are inherently parallel. Intuition suggests that simulations of these systems may be amenable to parallel execution. The parallel execution of a discrete-event simulation requires careful synchronization of processes in order to ensure the execution's correctness; this synchronization can degrade performance. Largely negative results were recently reported in a study which used a well-known synchronization method on queueing network simulations. Discussed here is a synchronization method (appointments), which has proven itself to be effective on simulations of FCFS queueing networks. The key concept behind appointments is the provision of lookahead. Lookahead is a prediction of a processor's future behavior, based on an analysis of the processor's simulation state. We show how lookahead can be computed for FCFS queueing network simulations, give performance data that demonstrate the method's effectiveness under moderate to heavy loads, and discuss performance tradeoffs between the quality of lookahead and the cost of computing it.
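
A simplified version of the lookahead computation for a FCFS server might look like this; the real appointment protocol is richer, and this sketch only captures the earliest-possible-departure bound that a processor can promise its neighbours:

```python
def lookahead(now, busy_until, queued_service_times, min_service_time):
    """Earliest time at which the NEXT unscheduled departure from a FCFS server
    could occur: every currently queued job must finish first, and any job that
    has not yet arrived needs at least the minimum service time."""
    t = max(now, busy_until) + sum(queued_service_times)
    return t + min_service_time

# Server busy until t=12 with two queued jobs (service times 3 and 5); any
# future arrival needs at least 2 time units of service.
print(lookahead(now=10, busy_until=12,
                queued_service_times=[3, 5], min_service_time=2))  # → 22
```

Until simulated time 22 the downstream neighbour can safely process its own events, which is exactly the parallelism the appointment mechanism exposes.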

  9. A Hybrid Neural Network-Genetic Algorithm Technique for Aircraft Engine Performance Diagnostics

    Science.gov (United States)

    Kobayashi, Takahisa; Simon, Donald L.

    2001-01-01

    In this paper, a model-based diagnostic method, which utilizes Neural Networks and Genetic Algorithms, is investigated. Neural networks are applied to estimate the engine internal health, and Genetic Algorithms are applied for sensor bias detection and estimation. This hybrid approach takes advantage of the nonlinear estimation capability provided by neural networks while improving the robustness to measurement uncertainty through the application of Genetic Algorithms. The hybrid diagnostic technique also has the ability to rank multiple potential solutions for a given set of anomalous sensor measurements in order to reduce false alarms and missed detections. The performance of the hybrid diagnostic technique is evaluated through some case studies derived from a turbofan engine simulation. The results show this approach is promising for reliable diagnostics of aircraft engines.

  10. IFSAR Simulation Using the Shooting and Bouncing Ray Technique

    Science.gov (United States)

    Houshmand, Bijan; Bhalla, Rajan; Ling, Hao

    2000-01-01

    Interferometric Synthetic Aperture Radar (IFSAR) is a technique that allows an automated way to carry out terrain mapping. IFSAR is carried out by first generating a SAR image pair from two antennas that are spatially separated. The phase difference between the SAR image pair is proportional to the topography. After registering the SAR images, the difference in phase in each pixel is extracted to generate an interferogram. Since the phase can only be measured within 2pi radians, phase unwrapping is carried out to extract the absolute phase for each pixel, which is proportional to the local height. While the IFSAR algorithm is typically applied to measurement data, an IFSAR simulator is useful for gaining a better understanding of the technique. The IFSAR simulator can be used in choosing system parameters, experimenting with processing procedures, and mission planning. In this paper we present an IFSAR simulation methodology to simulate the interferogram based on the shooting and bouncing ray (SBR) technique. SBR is a standard ray-tracing technique used to simulate scattering from large, complex targets. SBR is carried out by shooting rays at the target or scene. At the exit point of each ray, a ray-tube integration is done to find its contribution to the total field. A fast algorithm has been developed for the SBR for simulating SAR images of complex targets. In the IFSAR simulation, we build upon this fast SAR simulation technique. Given the antenna pair configuration, radar system parameters, and the geometrical description of the scene, we first simulate two SAR images, one from each antenna. After post-processing the two SAR images, we generate an interferogram. Phase unwrapping is then performed on the interferogram to arrive at the desired terrain map. We present results from the SBR-based IFSAR simulator, including terrain map reconstruction of urban environments. The reconstruction will be compared to the ground truth to
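
The phase-unwrapping step mentioned above can be sketched in one dimension: whenever the wrapped phase jumps by more than pi between neighbouring samples, a multiple of 2*pi is added or subtracted. Real IFSAR processing unwraps in two dimensions and must handle noise; this is only the core idea:

```python
import math

def unwrap(phases):
    """1-D phase unwrapping: whenever the jump between consecutive wrapped
    phases exceeds pi, fold in the appropriate multiple of 2*pi."""
    out = [phases[0]]
    offset = 0.0
    for prev, cur in zip(phases, phases[1:]):
        jump = cur - prev
        if jump > math.pi:
            offset -= 2.0 * math.pi
        elif jump < -math.pi:
            offset += 2.0 * math.pi
        out.append(cur + offset)
    return out

# A linearly increasing phase wrapped into (-pi, pi], then recovered:
true_phase = [0.5 * i for i in range(10)]
wrapped = [math.atan2(math.sin(p), math.cos(p)) for p in true_phase]
recovered = unwrap(wrapped)
print(all(abs(r - t) < 1e-9 for r, t in zip(recovered, true_phase)))  # → True
```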

  11. A unified framework for spiking and gap-junction interactions in distributed neuronal network simulations

    Science.gov (United States)

    Hahne, Jan; Helias, Moritz; Kunkel, Susanne; Igarashi, Jun; Bolten, Matthias; Frommer, Andreas; Diesmann, Markus

    2015-01-01

    Contemporary simulators for networks of point and few-compartment model neurons come with a plethora of ready-to-use neuron and synapse models and support complex network topologies. Recent technological advancements have broadened the spectrum of application further to the efficient simulation of brain-scale networks on supercomputers. In distributed network simulations the amount of spike data that accrues per millisecond and process is typically low, such that a common optimization strategy is to communicate spikes at relatively long intervals, where the upper limit is given by the shortest synaptic transmission delay in the network. This approach is well-suited for simulations that employ only chemical synapses but it has so far impeded the incorporation of gap-junction models, which require instantaneous neuronal interactions. Here, we present a numerical algorithm based on a waveform-relaxation technique which allows for network simulations with gap junctions in a way that is compatible with the delayed communication strategy. Using a reference implementation in the NEST simulator, we demonstrate that the algorithm and the required data structures can be smoothly integrated with existing code such that they complement the infrastructure for spiking connections. To show that the unified framework for gap-junction and spiking interactions achieves high performance and delivers high accuracy in the presence of gap junctions, we present benchmarks for workstations, clusters, and supercomputers. Finally, we discuss limitations of the novel technology. PMID:26441628
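
The waveform-relaxation idea can be sketched on a toy system: two leaky units coupled by a gap junction, where each sweep re-integrates every unit against the other's trajectory from the previous sweep. The model and parameters are invented for illustration and are far simpler than the NEST implementation:

```python
def waveform_relaxation(v1_0, v2_0, g, dt, steps, sweeps):
    """Jacobi waveform relaxation for two leaky units coupled by a gap
    junction: each sweep integrates every unit over the WHOLE interval using
    the other unit's trajectory from the previous sweep."""
    # Initial guess: hold each waveform constant at its initial value.
    w1 = [v1_0] * (steps + 1)
    w2 = [v2_0] * (steps + 1)
    for _ in range(sweeps):
        n1, n2 = [v1_0], [v2_0]
        for k in range(steps):
            # Leak plus gap-junction current g * (V_other - V_self):
            n1.append(n1[-1] + dt * (-n1[-1] + g * (w2[k] - n1[-1])))
            n2.append(n2[-1] + dt * (-n2[-1] + g * (w1[k] - n2[-1])))
        w1, w2 = n1, n2
    return w1, w2

w1, w2 = waveform_relaxation(1.0, -1.0, g=0.5, dt=0.01, steps=100, sweeps=8)
# The antisymmetric initial condition stays antisymmetric while both
# trajectories decay toward zero.
print(abs(w1[-1] + w2[-1]) < 1e-9)  # → True
```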

  12. A Novel Interfacing Technique for Distributed Hybrid Simulations Combining EMT and Transient Stability Models

    Energy Technology Data Exchange (ETDEWEB)

    Shu, Dewu; Xie, Xiaorong; Jiang, Qirong; Huang, Qiuhua; Zhang, Chunpeng

    2018-02-01

    With the steady increase of power electronic devices and nonlinear dynamic loads in large-scale AC/DC systems, the traditional hybrid simulation method, which incorporates these components into a single EMT subsystem, causes great difficulty for network partitioning and significant deterioration in simulation efficiency. To resolve these issues, a novel distributed hybrid simulation method is proposed in this paper. The key to realizing this method is a distinct interfacing technique, which includes: i) a new approach based on the two-level Schur complement to update the interfaces by taking full account of the couplings between different EMT subsystems; and ii) a combined interaction protocol to further improve efficiency while guaranteeing simulation accuracy. The advantages of the proposed method in terms of both efficiency and accuracy have been verified by using it for the simulation study of an AC/DC hybrid system including a two-terminal VSC-HVDC and nonlinear dynamic loads.
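
The Schur-complement interface update can be illustrated on the smallest possible case, a 2x2 block system with scalar blocks (the paper's two-level scheme operates on matrix blocks, one per EMT subsystem):

```python
def solve_schur(A, B, C, D, f, g):
    """Solve the block system [[A, B], [C, D]] [x, y]^T = [f, g]^T by
    eliminating x through the Schur complement S = D - C A^{-1} B.
    All blocks are scalars here purely for clarity."""
    S = D - C * B / A          # Schur complement of the A block
    y = (g - C * f / A) / S    # interface unknown solved first
    x = (f - B * y) / A        # back-substitute into the subsystem
    return x, y

x, y = solve_schur(A=4.0, B=1.0, C=2.0, D=3.0, f=9.0, g=11.0)
print(x, y)  # x, y satisfy 4x + y = 9 and 2x + 3y = 11
```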

  13. Toward Designing a Quantum Key Distribution Network Simulation Model

    Directory of Open Access Journals (Sweden)

    Miralem Mehic

    2016-01-01

    Full Text Available As research in quantum key distribution network technologies grows larger and more complex, the need for highly accurate and scalable simulation technologies becomes important to assess the practical feasibility and foresee difficulties in the practical implementation of theoretical achievements. In this paper, we describe the design of a simplified simulation environment for a quantum key distribution network with multiple links and nodes. In this simulation environment, we analyze several routing protocols in terms of the number of sent routing packets, goodput, and Packet Delivery Ratio of the data traffic flow, using the NS-3 simulator.

  14. Interfacing Network Simulations and Empirical Data

    Science.gov (United States)

    2009-05-01

    appropriate. The quadratic assignment procedure ( QAP ) (Krackhardt, 1987) could be used to compare the correlation between networks; however, the...Social roles and the evolution of networks in extreme and isolated environments. Mathematical Sociology, 27: 89-121. Krackhardt, D. (1987). QAP

  15. A Network Contention Model for the Extreme-scale Simulator

    Energy Technology Data Exchange (ETDEWEB)

    Engelmann, Christian [ORNL; Naughton III, Thomas J [ORNL

    2015-01-01

    The Extreme-scale Simulator (xSim) is a performance investigation toolkit for high-performance computing (HPC) hardware/software co-design. It permits running an HPC application with millions of concurrent execution threads, while observing its performance in a simulated extreme-scale system. This paper details a newly developed network modeling feature for xSim that eliminates the shortcomings of the existing network modeling capabilities. The approach takes a different path to implementing network contention and bandwidth capacity modeling, using a less synchronous but sufficiently accurate model design. With the new network modeling feature, xSim is able to simulate on-chip and on-node networks with reasonable accuracy and overheads.

  16. Radial basis function (RBF) neural network control for mechanical systems design, analysis and Matlab simulation

    CERN Document Server

    Liu, Jinkun

    2013-01-01

    Radial Basis Function (RBF) Neural Network Control for Mechanical Systems is motivated by the need for systematic design approaches to stable adaptive control system design using neural network approximation-based techniques. The main objectives of the book are to introduce the concrete design methods and MATLAB simulation of stable adaptive RBF neural control strategies. In this book, a broad range of implementable neural network control design methods for mechanical systems are presented, such as robot manipulators, inverted pendulums, single link flexible joint robots, motors, etc. Advanced neural network controller design methods and their stability analysis are explored. The book provides readers with the fundamentals of neural network control system design.   This book is intended for researchers in the fields of neural adaptive control, mechanical systems, Matlab simulation, engineering design, robotics and automation. Jinkun Liu is a professor at Beijing University of Aeronautics and Astronauti...

  17. A GIS Tool for simulating Nitrogen transport along schematic Network

    Science.gov (United States)

    Tavakoly, A. A.; Maidment, D. R.; Yang, Z.; Whiteaker, T.; David, C. H.; Johnson, S.

    2012-12-01

    An automated method called the Arc Hydro Schematic Processor has been developed for water process computation on schematic networks formed from the NHDPlus and similar GIS river networks. The schematic network represents the hydrologic features on the ground and is a network of links and nodes. SchemaNodes represent hydrologic features, such as catchments or stream junctions. SchemaLinks prescribe the connections between nodes. The schematic processor uses the schematic network to pass information through a watershed and move water or pollutants downstream. In addition, the schematic processor can apply additional programming to the passed and/or received values, manipulating data through the network. This paper describes how the schematic processor can be used to simulate nitrogen transport and transformation on river networks. For this purpose, nitrogen loads are estimated on the NHDPlus river network using the Schematic Processor coupled with a river routing model for the Texas Gulf Coast Hydrologic Region.
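
Passing loads downstream through a node/link network can be sketched as a topologically ordered accumulation; the delivery-ratio loss model and the toy network below are hypothetical, not the Schematic Processor's actual formulation:

```python
def topological_order(downstream):
    """Nodes sorted so every node is emitted before the node it drains to
    (Kahn-style: repeatedly emit nodes with no remaining upstream inputs)."""
    indeg = {n: 0 for n in downstream}
    for n, d in downstream.items():
        if d is not None:
            indeg[d] += 1
    ready = [n for n, k in indeg.items() if k == 0]
    order = []
    while ready:
        n = ready.pop()
        order.append(n)
        d = downstream[n]
        if d is not None:
            indeg[d] -= 1
            if indeg[d] == 0:
                ready.append(d)
    return order

def route_loads(local_load, downstream, delivery_ratio):
    """Accumulate loads down a tree-shaped schematic network. `downstream`
    maps each node to the node it drains to (None at the outlet);
    `delivery_ratio` is the fraction of load surviving each link."""
    accumulated = dict(local_load)
    for node in topological_order(downstream):
        nxt = downstream[node]
        if nxt is not None:
            accumulated[nxt] += delivery_ratio * accumulated[node]
    return accumulated

# Two headwater catchments draining through B to the outlet C.
downstream = {"A1": "B", "A2": "B", "B": "C", "C": None}
loads = {"A1": 10.0, "A2": 5.0, "B": 2.0, "C": 1.0}
result = route_loads(loads, downstream, delivery_ratio=0.8)
print(round(result["C"], 6))  # → 12.2
```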

  18. Determination of Complex-Valued Parametric Model Coefficients Using Artificial Neural Network Technique

    Directory of Open Access Journals (Sweden)

    A. M. Aibinu

    2010-01-01

    Full Text Available A new approach for determining the coefficients of a complex-valued autoregressive (CAR) model and a complex-valued autoregressive moving average (CARMA) model using a complex-valued neural network (CVNN) technique is discussed in this paper. The CAR and complex-valued moving average (CMA) coefficients which constitute a CARMA model are computed simultaneously from the adaptive weights and coefficients of the linear activation functions in a two-layered CVNN. The performance of the proposed technique has been evaluated using simulated complex-valued data (CVD) with three different types of activation functions. The results show that the proposed method can accurately determine the model coefficients provided that the network is properly trained. Furthermore, application of the developed CVNN-based technique to MRI k-space reconstruction results in images with improved resolution.
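
For comparison with the CVNN approach, the coefficient of a noise-free complex-valued AR(1) model has a closed-form least-squares estimate; this baseline sketch is not the paper's network:

```python
def fit_car1(x):
    """Least-squares estimate of the coefficient a in the complex-valued AR(1)
    model x[t] = a * x[t-1] + noise (closed form; not the paper's CVNN)."""
    num = sum(x[t] * x[t - 1].conjugate() for t in range(1, len(x)))
    den = sum(abs(x[t - 1]) ** 2 for t in range(1, len(x)))
    return num / den

# Synthesize noise-free CAR(1) data with a known complex coefficient.
a_true = 0.6 + 0.3j
x = [1.0 + 0.0j]
for _ in range(50):
    x.append(a_true * x[-1])
a_hat = fit_car1(x)
print(abs(a_hat - a_true) < 1e-9)  # → True
```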

  19. Simulated evolution of signal transduction networks.

    Directory of Open Access Journals (Sweden)

    Mohammad Mobashir

    Full Text Available Signal transduction is the process of routing information inside cells when receiving stimuli from their environment that modulate their behavior and function. In such biological processes, the receptors, after receiving the corresponding signals, activate a number of biomolecules which eventually transduce the signal to the nucleus. The main objective of our work is to develop a theoretical approach which will help to better understand the behavior of signal transduction networks due to changes in kinetic parameters and network topology. By using an evolutionary algorithm, we designed a mathematical model which performs basic signaling tasks similar to the signaling process of living cells. We use a simple dynamical model of signaling networks of interacting proteins and their complexes. We study the evolution of signaling networks described by mass-action kinetics. The fitness of the networks is determined by the number of signals detected out of a series of signals with varying strength. The mutations include changes in the reaction rates and network topology. We found that stronger interactions and the addition of new nodes lead to improved evolved responses. The strength of the signal does not play any role in determining the response type. This model will help to understand the dynamic behavior of the proteins involved in signaling pathways. It will also help to understand the robustness of the kinetics of the output response upon changes in the rate of reactions and the topology of the network.
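
A toy mass-action signalling cascade of the kind evolved in the paper can be integrated with forward Euler; the two-step reaction scheme, rate constants, and normalised pools below are invented for illustration:

```python
def simulate_cascade(signal, k_act, k_deact, steps, dt):
    """Forward-Euler integration of a two-step mass-action signalling cascade:
    a signal activates a receptor; the active receptor activates a target."""
    r, r_active = 1.0, 0.0   # receptor pools (normalised totals)
    t, t_active = 1.0, 0.0   # target pools
    for _ in range(steps):
        # Mass-action rates: activation minus deactivation for each pool.
        flux_r = k_act * signal * r - k_deact * r_active
        flux_t = k_act * r_active * t - k_deact * t_active
        r, r_active = r - dt * flux_r, r_active + dt * flux_r
        t, t_active = t - dt * flux_t, t_active + dt * flux_t
    return t_active

# Stronger input signals yield stronger steady-state target activation.
weak = simulate_cascade(0.2, 1.0, 0.5, steps=5000, dt=0.01)
strong = simulate_cascade(2.0, 1.0, 0.5, steps=5000, dt=0.01)
print(0.0 < weak < strong < 1.0)  # → True
```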

  20. Simulation of OFDM technique for wireless communication systems

    Science.gov (United States)

    Bloul, Albe; Mohseni, Saeed; Alhasson, Bader; Ayad, Mustafa; Matin, M. A.

    2010-08-01

    Orthogonal Frequency Division Multiplexing (OFDM) is a modulation technique for transmitting baseband radio signals over fiber (RoF). Combining the OFDM modulation technique and radio-over-fiber technology will improve future wireless communication. The technique can be implemented using a laser and a photodetector as the optical modulator and demodulator. OFDM uses multiple sub-carriers to transmit low data rate streams in parallel, using Quadrature Amplitude Modulation (QAM) or Phase Shift Keying (PSK). In this paper we compare the power spectrum and signal constellation of transmitted and received signals in RoF using Matlab and OptiSystem simulation software.
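
The core of an OFDM modulator/demodulator pair is an inverse/forward DFT over the sub-carrier symbols; the sketch below round-trips QPSK symbols through a hand-written DFT (no channel, cyclic prefix, or optical link is modelled):

```python
import cmath

def idft(symbols):
    """Inverse DFT (the OFDM modulator): each input symbol rides its own
    orthogonal sub-carrier of the time-domain signal."""
    n = len(symbols)
    return [sum(s * cmath.exp(2j * cmath.pi * k * t / n)
                for k, s in enumerate(symbols)) / n
            for t in range(n)]

def dft(signal):
    """Forward DFT (the OFDM demodulator): projects the time-domain signal
    back onto each sub-carrier."""
    n = len(signal)
    return [sum(x * cmath.exp(-2j * cmath.pi * k * t / n)
                for t, x in enumerate(signal))
            for k in range(n)]

# QPSK symbols on 8 sub-carriers: modulate, then demodulate and recover them.
qpsk = [(1 + 1j), (1 - 1j), (-1 + 1j), (-1 - 1j)] * 2
recovered = dft(idft(qpsk))
print(all(abs(a - b) < 1e-9 for a, b in zip(recovered, qpsk)))  # → True
```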

  1. Neural networks analysis on SSME vibration simulation data

    Science.gov (United States)

    Lo, Ching F.; Wu, Kewei

    1993-01-01

    The neural networks method is applied to investigate the feasibility of detecting anomalies in turbopump vibration of the SSME, to supplement the statistical method utilized in the prototype system. The investigation of neural networks analysis is conducted using SSME vibration data from a NASA-developed numerical simulator. The limited application of neural networks to the HPFTP has also shown their effectiveness in diagnosing anomalies in turbopump vibration.

  2. EVALUATING AUSTRALIAN FOOTBALL LEAGUE PLAYER CONTRIBUTIONS USING INTERACTIVE NETWORK SIMULATION

    Directory of Open Access Journals (Sweden)

    Jonathan Sargent

    2013-03-01

    Full Text Available This paper focuses on the contribution of Australian Football League (AFL players to their team's on-field network by simulating player interactions within a chosen team list and estimating the net effect on final score margin. A Visual Basic computer program was written, firstly, to isolate the effective interactions between players from a particular team in all 2011 season matches and, secondly, to generate a symmetric interaction matrix for each match. Negative binomial distributions were fitted to each player pairing in the Geelong Football Club for the 2011 season, enabling an interactive match simulation model given the 22 chosen players. Dynamic player ratings were calculated from the simulated network using eigenvector centrality, a method that recognises and rewards interactions with more prominent players in the team network. The centrality ratings were recorded after every network simulation and then applied in final score margin predictions so that each player's match contribution-and, hence, an optimal team-could be estimated. The paper ultimately demonstrates that the presence of highly rated players, such as Geelong's Jimmy Bartel, provides the most utility within a simulated team network. It is anticipated that these findings will facilitate optimal AFL team selection and player substitutions, which are key areas of interest to coaches. Network simulations are also attractive for use within betting markets, specifically to provide information on the likelihood of a chosen AFL team list "covering the line".
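
Eigenvector centrality on a symmetric interaction matrix, the rating method the abstract describes, can be computed by power iteration; the 4-player interaction counts below are invented for illustration:

```python
def eigenvector_centrality(adj, iterations=200):
    """Power iteration on a symmetric interaction matrix: a player's rating is
    proportional to the interaction-weighted sum of teammates' ratings."""
    n = len(adj)
    x = [1.0] * n
    for _ in range(iterations):
        y = [sum(adj[i][j] * x[j] for j in range(n)) for i in range(n)]
        norm = max(y)                 # rescale so the top rating is 1.0
        x = [v / norm for v in y]
    return x

# Toy 4-player interaction counts (symmetric): player 0 interacts most often.
adj = [[0, 5, 4, 3],
       [5, 0, 2, 1],
       [4, 2, 0, 1],
       [3, 1, 1, 0]]
ratings = eigenvector_centrality(adj)
print(ratings.index(max(ratings)))  # → 0
```

Because the rating rewards interactions with already highly rated teammates, it captures exactly the "prominent player" effect the paper exploits.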

  3. A numerical technique to simulate display pixels based on electrowetting

    NARCIS (Netherlands)

    Roghair, I.; Musterd, M.; van den Ende, Henricus T.M.; Kleijn, C.; Kleijn, C.; Kreutzer, M.T.; Mugele, Friedrich Gunther

    2015-01-01

    We present a numerical simulation technique to calculate the deformation of interfaces between a conductive and non-conductive fluid as well as the motion of liquid–liquid–solid three-phase contact lines under the influence of externally applied electric fields in electrowetting configuration. The

  4. Use of Simulation Techniques in Determining the Fleet ...

    African Journals Online (AJOL)

    Goldfields Ltd., a gold mine in Ghana, which will enable the mine to meet its waste stripping and ore production targets. The use of simulation techniques as a tool in the modeling, formulation and testing of several models in the ore and waste mining operations of the mine are demonstrated. The results obtained from the ...

  5. Interaction techniques for selecting and manipulating subgraphs in network visualizations.

    Science.gov (United States)

    McGuffin, Michael J; Jurisica, Igor

    2009-01-01

    We present a novel and extensible set of interaction techniques for manipulating visualizations of networks by selecting subgraphs and then applying various commands to modify their layout or graphical properties. Our techniques integrate traditional rectangle and lasso selection, and also support selecting a node's neighbourhood by dragging out its radius (in edges) using a novel kind of radial menu. Commands for translation, rotation, scaling, or modifying graphical properties (such as opacity) and layout patterns can be performed by using a hotbox (a transiently popped-up, semi-transparent set of widgets) that has been extended in novel ways to integrate specification of commands with 1D or 2D arguments. Our techniques require only one mouse button and one keyboard key, and are designed for fast, gestural, in-place interaction. We present the design and integration of these interaction techniques, and illustrate their use in interactive graph visualization. Our techniques are implemented in NAViGaTOR, a software package for visualizing and analyzing biological networks. An initial usability study is also reported.

  6. Dynamical graph theory networks techniques for the analysis of sparse connectivity networks in dementia

    Science.gov (United States)

    Tahmassebi, Amirhessam; Pinker-Domenig, Katja; Wengert, Georg; Lobbes, Marc; Stadlbauer, Andreas; Romero, Francisco J.; Morales, Diego P.; Castillo, Encarnacion; Garcia, Antonio; Botella, Guillermo; Meyer-Bäse, Anke

    2017-05-01

    Graph network models in dementia have become an important computational technique in neuroscience to study fundamental organizational principles of brain structure and function in neurodegenerative diseases such as dementia. The graph connectivity is reflected in the connectome, the complete set of structural and functional connections of the graph network, which is mostly based on simple Pearson correlation links. In contrast to simple Pearson correlation networks, partial correlations (PC) identify only direct correlations, while indirect associations are eliminated. In addition to this, the state-of-the-art techniques in brain research are based on static graph theory, which is unable to capture the dynamic behavior of brain connectivity as it alters with disease evolution. We propose a new research avenue in neuroimaging connectomics based on combining dynamic graph network theory and modeling strategies at different time scales. We present the theoretical framework for area aggregation and time-scale modeling in brain networks as they pertain to disease evolution in dementia. This novel paradigm is extremely powerful, since we can derive both static parameters pertaining to node and area parameters, as well as dynamic parameters, such as the system's eigenvalues. By implementing and analyzing dynamically both disease-driven PC-networks and regular concentration networks, we reveal differences in the structure of these networks that play an important role in the temporal evolution of this disease. The described research is key to advancing biomedical research on novel disease prediction trajectories and dementia therapies.
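
The distinction between marginal and partial correlation links can be shown with the textbook first-order formula; the data below are synthetic, with two signals driven by a common confounder:

```python
def pearson(x, y):
    """Plain Pearson correlation coefficient."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def partial_corr(x, y, z):
    """First-order partial correlation of x and y controlling for z: removes
    the indirect association that both share with z."""
    rxy, rxz, ryz = pearson(x, y), pearson(x, z), pearson(y, z)
    return (rxy - rxz * ryz) / (((1 - rxz ** 2) * (1 - ryz ** 2)) ** 0.5)

# x and y are both driven by the confounder z, plus small independent noise,
# so their strong marginal correlation is largely indirect.
z = [0.1, 0.9, 0.4, 0.7, 0.2, 0.8, 0.5, 0.3, 0.6, 1.0]
n1 = [0.05, -0.02, 0.01, 0.03, -0.04, 0.02, -0.01, 0.04, -0.03, 0.01]
n2 = [-0.03, 0.01, 0.04, -0.02, 0.02, -0.05, 0.03, -0.01, 0.02, -0.04]
x = [2 * v + e for v, e in zip(z, n1)]
y = [-3 * v + e for v, e in zip(z, n2)]
print(abs(pearson(x, y)) > 0.9)                            # → True
print(abs(partial_corr(x, y, z)) < abs(pearson(x, y)))     # → True
```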

  7. Mathematical analysis techniques for modeling the space network activities

    Science.gov (United States)

    Foster, Lisa M.

    1992-01-01

    The objective of the present work was to explore and identify mathematical analysis techniques, and in particular, the use of linear programming. This topic was then applied to the Tracking and Data Relay Satellite System (TDRSS) in order to understand the space network better. Finally, a small scale version of the system was modeled, variables were identified, data was gathered, and comparisons were made between actual and theoretical data.

  8. Power Optimization Techniques for Next Generation Wireless Networks

    OpenAIRE

    Ratheesh R; Vetrivelan P

    2016-01-01

    The massive data traffic and the need for high-speed wireless communication are increasing day by day, corresponding to an exponential increase in the consumption of power by the Information and Communication Technology (ICT) sector. Reducing the consumption of power in wireless networks is a challenging topic and has attracted the attention of researchers around the globe. Many techniques like multiple-input multiple-output (MIMO), cognitive radio, cooperative heterogeneous communications and new netwo...

  9. Climate and change: simulating flooding impacts on urban transport network

    Science.gov (United States)

    Pregnolato, Maria; Ford, Alistair; Dawson, Richard

    2015-04-01

    National-scale climate projections indicate that in the future there will be hotter and drier summers, warmer and wetter winters, together with rising sea levels. The frequency of extreme weather events is expected to increase, causing severe damage to the built environment and disruption of infrastructure (Dawson, 2007), whilst population growth and changed demographics are placing new demands on urban infrastructure. It is therefore essential to ensure infrastructure networks are robust to these changes. This research addresses these challenges by focussing on the development of probabilistic tools for managing risk by modelling urban transport networks within the context of extreme weather events. This paper presents a methodology to investigate the impacts of extreme weather events on the urban environment, in particular infrastructure networks, through a combination of climate simulations and spatial representations. By overlaying spatial data on hazard thresholds from a flood model and a flood safety function, mitigated by potential adaptation strategies, different levels of disruption to commuting journeys on road networks are evaluated. The method follows the catastrophe modelling approach and consists of a spatial model combining deterministic loss models and probabilistic risk assessment techniques. It can be applied to present conditions as well as uncertain future scenarios, allowing the examination of the impacts alongside socio-economic and climate changes. The hazard is determined by simulating free-surface water flooding with the software CityCAT (Glenis et al., 2013). The outputs are overlaid onto the spatial locations of a simple network model in GIS, which uses journey-to-work (JTW) observations, supplemented with speed and capacity information.
    To calculate the disruptive effect of flooding on transport networks, a function relating water depth to safe driving speed has been developed by combining data from experimental reports (Morris et
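
    The depth-to-speed idea can be sketched as follows. This is a hypothetical stand-in function, not the fitted curve from the experimental reports cited above: safe speed decays linearly with standing-water depth until an assumed impassable depth, and the extra link travel time follows.

```python
# Hypothetical illustration of a depth-disruption function: safe driving
# speed falls as standing water deepens, reaching zero at an impassable
# depth. The specific numbers are invented for this sketch; the study's
# own fitted function is not reproduced here.
IMPASSABLE_DEPTH_M = 0.30   # assumed depth at which a road link closes
FREE_FLOW_KMH = 50.0        # assumed urban free-flow speed

def safe_speed_kmh(depth_m):
    """Safe car speed as a linear decay with water depth."""
    if depth_m >= IMPASSABLE_DEPTH_M:
        return 0.0
    return FREE_FLOW_KMH * (1.0 - depth_m / IMPASSABLE_DEPTH_M)

def link_delay_min(length_km, depth_m):
    """Extra travel time on a flooded link versus free flow (minutes);
    returns None when the link is impassable."""
    v = safe_speed_kmh(depth_m)
    if v == 0.0:
        return None
    return 60.0 * length_km * (1.0 / v - 1.0 / FREE_FLOW_KMH)
```

    Applied per link of the GIS network model, such a function turns a flood depth map into journey-time disruption, which is the mechanism the methodology above relies on.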

  10. Evaluating Australian football league player contributions using interactive network simulation.

    Science.gov (United States)

    Sargent, Jonathan; Bedford, Anthony

    2013-01-01

    This paper focuses on the contribution of Australian Football League (AFL) players to their team's on-field network by simulating player interactions within a chosen team list and estimating the net effect on final score margin. A Visual Basic computer program was written, firstly, to isolate the effective interactions between players from a particular team in all 2011 season matches and, secondly, to generate a symmetric interaction matrix for each match. Negative binomial distributions were fitted to each player pairing in the Geelong Football Club for the 2011 season, enabling an interactive match simulation model given the 22 chosen players. Dynamic player ratings were calculated from the simulated network using eigenvector centrality, a method that recognises and rewards interactions with more prominent players in the team network. The centrality ratings were recorded after every network simulation and then applied in final score margin predictions so that each player's match contribution, and hence an optimal team, could be estimated. The paper ultimately demonstrates that the presence of highly rated players, such as Geelong's Jimmy Bartel, provides the most utility within a simulated team network. It is anticipated that these findings will facilitate optimal AFL team selection and player substitutions, which are key areas of interest to coaches. Network simulations are also attractive for use within betting markets, specifically to provide information on the likelihood of a chosen AFL team list "covering the line".
    Key points: A simulated interaction matrix for Australian Rules football players is proposed. The simulations were carried out by fitting unique negative binomial distributions to each player pairing in a side. Eigenvector centrality was calculated for each player in a simulated matrix, then for the team. The team centrality measure adequately predicted the team's winning margin. A player's net effect on margin could hence be estimated by replacing him in
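
    The eigenvector-centrality rating step can be sketched with power iteration on a symmetric interaction matrix (the 4-player matrix below is invented for illustration, not real AFL data):

```python
# Sketch of the rating idea: eigenvector centrality of a symmetric
# player-interaction matrix via power iteration. A player scores highly
# both by interacting often and by interacting with well-connected peers.
def eigenvector_centrality(m, iters=200):
    n = len(m)
    v = [1.0 / n] * n
    for _ in range(iters):
        w = [sum(m[i][j] * v[j] for j in range(n)) for i in range(n)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]          # renormalize each iteration
    return v

# interactions[i][j] = effective disposals between players i and j
interactions = [
    [0, 9, 7, 8],   # player 0 is involved with everyone (the hub role)
    [9, 0, 2, 1],
    [7, 2, 0, 1],
    [8, 1, 1, 0],
]
ratings = eigenvector_centrality(interactions)
```

    Player 0, the hub, receives the highest rating, mirroring the paper's finding that heavily involved players such as Jimmy Bartel dominate the simulated network.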

  11. A computer code to simulate X-ray imaging techniques

    Energy Technology Data Exchange (ETDEWEB)

    Duvauchelle, Philippe E-mail: philippe.duvauchelle@insa-lyon.fr; Freud, Nicolas; Kaftandjian, Valerie; Babot, Daniel

    2000-09-01

    A computer code was developed to simulate the operation of radiographic, radioscopic or tomographic devices. The simulation is based on ray-tracing techniques and on the X-ray attenuation law. The use of computer-aided drawing (CAD) models enables simulations to be carried out with complex three-dimensional (3D) objects and the geometry of every component of the imaging chain, from the source to the detector, can be defined. Geometric unsharpness, for example, can be easily taken into account, even in complex configurations. Automatic translations or rotations of the object can be performed to simulate radioscopic or tomographic image acquisition. Simulations can be carried out with monochromatic or polychromatic beam spectra. This feature enables, for example, the beam hardening phenomenon to be dealt with or dual energy imaging techniques to be studied. The simulation principle is completely deterministic and consequently the computed images present no photon noise. Nevertheless, the variance of the signal associated with each pixel of the detector can be determined, which enables contrast-to-noise ratio (CNR) maps to be computed, in order to predict quantitatively the detectability of defects in the inspected object. The CNR is a relevant indicator for optimizing the experimental parameters. This paper provides several examples of simulated images that illustrate some of the rich possibilities offered by our software. Depending on the simulation type, the computation time order of magnitude can vary from 0.1 s (simple radiographic projection) up to several hours (3D tomography) on a PC, with a 400 MHz microprocessor. Our simulation tool proves to be useful in developing new specific applications, in choosing the most suitable components when designing a new testing chain, and in saving time by reducing the number of experimental tests.
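
    The core physics such ray-tracing simulators rely on can be sketched in a few lines: the Beer-Lambert attenuation law along one ray crossing several materials (the attenuation coefficients below are illustrative, not tabulated data).

```python
# Minimal sketch of the X-ray attenuation (Beer-Lambert) law used by
# such simulators: I/I0 = exp(-sum(mu_i * t_i)) over the material
# segments a ray crosses. The mu values are illustrative.
import math

def transmitted_fraction(segments):
    """segments: list of (mu_per_cm, path_length_cm) crossed by the ray.
    Returns the transmitted intensity fraction I/I0."""
    return math.exp(-sum(mu * t for mu, t in segments))

# A ray crossing 2 cm of a light material then 0.5 cm of a denser
# inclusion (hypothetical coefficients):
ray = [(0.5, 2.0), (2.0, 0.5)]
I_over_I0 = transmitted_fraction(ray)
```

    A full simulator evaluates this per detector pixel and per energy bin of a polychromatic spectrum, which is how effects like beam hardening arise.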

  12. Slow update stochastic simulation algorithms for modeling complex biochemical networks.

    Science.gov (United States)

    Ghosh, Debraj; De, Rajat K

    2017-10-30

    The stochastic simulation algorithm (SSA) based modeling is a well recognized approach to predict the stochastic behavior of biological networks. The stochastic simulation of large complex biochemical networks is a challenge, as it takes a large amount of time due to the high propensity update cost. In order to reduce the propensity update cost, we propose two algorithms: slow update exact stochastic simulation algorithm (SUESSA) and slow update exact sorting stochastic simulation algorithm (SUESSSA). We applied cache-based linear search (CBLS) in these two algorithms to improve the search operation for finding reactions to be executed. The data structure used for incorporating CBLS is very simple, and the cost of maintaining it during the propensity update operation is very low. Hence, propensity updates when simulating strongly coupled networks are very fast, which leads to a reduction of the total simulation time. SUESSA and SUESSSA are not restricted to elementary reactions; they support higher order reactions too. We used a linear chain model and a colloidal aggregation model to perform a comparative analysis of the performances of our methods with the existing algorithms. We also compared the performances of our methods with the existing ones for large biochemical networks, including the B cell receptor and FcϵRI signaling networks. Copyright © 2017 Elsevier B.V. All rights reserved.
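
    The baseline these algorithms accelerate is Gillespie's direct-method SSA; a minimal sketch for a reversible isomerization A <-> B (rate constants illustrative, not from the paper):

```python
# Sketch of the direct-method SSA (Gillespie): draw an exponential
# waiting time from the total propensity, then pick which reaction
# fires in proportion to its propensity.
import random

def ssa(a0, b0, k1, k2, t_end, rng):
    t, a, b = 0.0, a0, b0
    while True:
        props = [k1 * a, k2 * b]          # propensities of A->B, B->A
        total = sum(props)
        if total == 0.0:
            return a, b                   # no reaction can fire
        t += rng.expovariate(total)       # time to next reaction
        if t > t_end:
            return a, b
        if rng.random() * total < props[0]:
            a, b = a - 1, b + 1           # fire A -> B
        else:
            a, b = a + 1, b - 1           # fire B -> A

rng = random.Random(42)
a, b = ssa(a0=100, b0=0, k1=1.0, k2=0.5, t_end=50.0, rng=rng)
```

    The "propensity update cost" the abstract targets is the recomputation of `props` after every firing, which dominates runtime once a network has many coupled reactions instead of two.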

  13. Node Augmentation Technique in Bayesian Network Evidence Analysis and Marshaling

    Energy Technology Data Exchange (ETDEWEB)

    Keselman, Dmitry [Los Alamos National Laboratory; Tompkins, George H [Los Alamos National Laboratory; Leishman, Deborah A [Los Alamos National Laboratory

    2010-01-01

    Given a Bayesian network, sensitivity analysis is an important activity. This paper begins by describing a network augmentation technique which can simplify the analysis. Next, we present two techniques which allow the user to determine the probability distribution of a hypothesis node under conditions of uncertain evidence; i.e. the state of an evidence node or nodes is described by a user-specified probability distribution. Finally, we conclude with a discussion of three criteria for ranking evidence nodes based on their influence on a hypothesis node. All of these techniques have been used in conjunction with a commercial software package. A Bayesian network based on a directed acyclic graph (DAG) G is a graphical representation of a system of random variables that satisfies the following Markov property: any node (random variable) is independent of its non-descendants given the state of all its parents (Neapolitan, 2004). For simplicity's sake, we consider only discrete variables with a finite number of states, though most of the conclusions may be generalized.
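
    One standard way to handle a user-specified distribution over an evidence node (not necessarily one of the paper's two techniques) is Jeffrey's rule: mix the hard-evidence posteriors by the given evidence distribution. A minimal two-node sketch with illustrative numbers:

```python
# Sketch of inference with uncertain (soft) evidence on a two-node
# network H -> E, using Jeffrey's rule: the posterior over H is the
# mixture of hard-evidence posteriors weighted by the user-specified
# distribution over the evidence states. Numbers are illustrative.
P_H = {True: 0.3, False: 0.7}              # prior on hypothesis node
P_E_given_H = {True: 0.9, False: 0.2}      # P(E=true | H)

def posterior_H_given_E(e):
    """Exact posterior P(H=true | E=e) by Bayes' rule."""
    like = {h: P_E_given_H[h] if e else 1.0 - P_E_given_H[h]
            for h in (True, False)}
    num = like[True] * P_H[True]
    return num / (num + like[False] * P_H[False])

def posterior_H_soft(q_e_true):
    """P(H=true) when the user asserts P'(E=true) = q_e_true."""
    return (q_e_true * posterior_H_given_E(True)
            + (1.0 - q_e_true) * posterior_H_given_E(False))

p = posterior_H_soft(0.7)
```

    Setting the evidence distribution to 1.0 recovers ordinary hard evidence, so the soft case strictly generalizes the usual update.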

  14. PyNN: A Common Interface for Neuronal Network Simulators

    Science.gov (United States)

    Davison, Andrew P.; Brüderle, Daniel; Eppler, Jochen; Kremkow, Jens; Muller, Eilif; Pecevski, Dejan; Perrinet, Laurent; Yger, Pierre

    2008-01-01

    Computational neuroscience has produced a diversity of software for simulations of networks of spiking neurons, with both negative and positive consequences. On the one hand, each simulator uses its own programming or configuration language, leading to considerable difficulty in porting models from one simulator to another. This impedes communication between investigators and makes it harder to reproduce and build on the work of others. On the other hand, simulation results can be cross-checked between different simulators, giving greater confidence in their correctness, and each simulator has different optimizations, so the most appropriate simulator can be chosen for a given modelling task. A common programming interface to multiple simulators would reduce or eliminate the problems of simulator diversity while retaining the benefits. PyNN is such an interface, making it possible to write a simulation script once, using the Python programming language, and run it without modification on any supported simulator (currently NEURON, NEST, PCSIM, Brian and the Heidelberg VLSI neuromorphic hardware). PyNN increases the productivity of neuronal network modelling by providing high-level abstraction, by promoting code sharing and reuse, and by providing a foundation for simulator-agnostic analysis, visualization and data-management tools. PyNN increases the reliability of modelling studies by making it much easier to check results on multiple simulators. PyNN is open-source software and is available from http://neuralensemble.org/PyNN. PMID:19194529

  15. PyNN: a common interface for neuronal network simulators

    Directory of Open Access Journals (Sweden)

    Andrew P Davison

    2009-01-01

    Full Text Available Computational neuroscience has produced a diversity of software for simulations of networks of spiking neurons, with both negative and positive consequences. On the one hand, each simulator uses its own programming or configuration language, leading to considerable difficulty in porting models from one simulator to another. This impedes communication between investigators and makes it harder to reproduce and build on the work of others. On the other hand, simulation results can be cross-checked between different simulators, giving greater confidence in their correctness, and each simulator has different optimizations, so the most appropriate simulator can be chosen for a given modelling task. A common programming interface to multiple simulators would reduce or eliminate the problems of simulator diversity while retaining the benefits. PyNN is such an interface, making it possible to write a simulation script once, using the Python programming language, and run it without modification on any supported simulator (currently NEURON, NEST, PCSIM, Brian and the Heidelberg VLSI neuromorphic hardware). PyNN increases the productivity of neuronal network modelling by providing high-level abstraction, by promoting code sharing and reuse, and by providing a foundation for simulator-agnostic analysis, visualization, and data-management tools. PyNN increases the reliability of modelling studies by making it much easier to check results on multiple simulators. PyNN is open-source software and is available from http://neuralensemble.org/PyNN.

  16. Review of feed forward neural network classification preprocessing techniques

    Science.gov (United States)

    Asadi, Roya; Kareem, Sameem Abdul

    2014-06-01

    The defining feature of artificial intelligent Feed Forward Neural Network (FFNN) classification models is that they learn the input data through their weights. Data preprocessing and pre-training are the contributing factors in developing efficient techniques for low training time and high classification accuracy. In this study, we investigate and review the powerful preprocessing functions of FFNN models. Currently, weights are initialized at random, which is the main source of problems. Multilayer auto-encoder networks, the latest such technique, are, like other related techniques, unable to solve these problems. Weight Linear Analysis (WLA) is a combination of data preprocessing and pre-training to generate real weights through the use of normalized input values. Using WLA, the FFNN model increases classification accuracy and improves training time in a single epoch, without any training cycles, computation of the gradient of the mean square error function, or updating of the weights. The results of comparison and evaluation show that the WLA is a powerful technique in the FFNN classification area.
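
    A representative example of the preprocessing step discussed here is min-max normalization of input features before training (a standard technique; the paper's WLA procedure itself is not reproduced):

```python
# Sketch of a common FFNN preprocessing step: min-max normalization of
# each input feature to [0, 1], so features on very different scales
# contribute comparably during training.
def min_max_normalize(columns):
    """Normalize each feature column independently to [0, 1]."""
    normalized = []
    for col in columns:
        lo, hi = min(col), max(col)
        span = (hi - lo) or 1.0          # guard against constant columns
        normalized.append([(x - lo) / span for x in col])
    return normalized

features = [[2.0, 4.0, 10.0],            # feature 1
            [100.0, 300.0, 500.0]]       # feature 2 (different scale)
scaled = min_max_normalize(features)
```

    Without such scaling, the larger-magnitude feature dominates the weighted sums and slows convergence, which is one reason preprocessing has a strong effect on training time.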

  17. Importance of simulation tools for the planning of optical network

    Science.gov (United States)

    Martins, Indayara B.; Martins, Yara; Rudge, Felipe; Moschim, Edson

    2015-10-01

    The main proposal of this work is to show the importance of using simulation tools to design optical networks. The simulation method supports the investigation of several system and network parameters, such as bit error rate and blocking probability, as well as physical layer issues, such as attenuation, dispersion, and nonlinearities, as these are all important to evaluate and validate the operability of optical networks. The work was divided into two parts: firstly, physical-layer preplanning was proposed for the distribution of amplifiers and the compensation of attenuation and dispersion effects in span transmission; in this part, we also analyzed the quality of the transmitted signal. In the second part, an analysis of the transport layer was completed, proposing wavelength distribution planning according to the total utilization of each link. The main network parameters used to evaluate the transport and physical layer design were delay (latency), blocking probability, and bit error rate (BER). This work was carried out with commercially available simulation tools.

  18. Evaluation of artificial neural network techniques for flow forecasting in the River Yangtze, China

    Directory of Open Access Journals (Sweden)

    C. W. Dawson

    2002-01-01

    Full Text Available While engineers have been quantifying rainfall-runoff processes since the mid-19th century, it is only in the last decade that artificial neural network models have been applied to the same task. This paper evaluates two neural networks in this context: the popular multilayer perceptron (MLP) and the radial basis function network (RBF). Using six-hourly rainfall-runoff data for the River Yangtze at Yichang (upstream of the Three Gorges Dam) for the period 1991 to 1993, it is shown that both neural network types can simulate river flows beyond the range of the training set. In addition, an evaluation of alternative RBF transfer functions demonstrates that the popular Gaussian function, often used in RBF networks, is not necessarily the 'best' function to use for river flow forecasting. Comparisons are also made between these neural networks and conventional statistical techniques: stepwise multiple linear regression, autoregressive moving average models and a zero-order forecasting approach. Keywords: Artificial neural network, multilayer perceptron, radial basis function, flood forecasting
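
    The RBF idea evaluated above can be sketched in miniature: with one Gaussian basis function per training point, the output weights solve a small linear system and the network interpolates the training data exactly (the data and kernel width below are illustrative, not the Yangtze series).

```python
# Sketch of a radial basis function (RBF) network with a Gaussian
# transfer function: one centre per training point, output weights
# from a small linear solve. Data and width are illustrative.
import math

centres = [0.0, 1.0, 2.0]
targets = [0.0, 1.0, 0.0]
width = 1.0

def gaussian(r):
    return math.exp(-(r / width) ** 2)

def solve(a, b):
    """Gaussian elimination with partial pivoting for a small system."""
    n = len(a)
    m = [row[:] + [b[i]] for i, row in enumerate(a)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(m[r][c]))
        m[c], m[p] = m[p], m[c]
        piv = m[c][c]
        m[c] = [x / piv for x in m[c]]
        for r in range(n):
            if r != c:
                f = m[r][c]
                m[r] = [x - f * y for x, y in zip(m[r], m[c])]
    return [row[n] for row in m]

phi = [[gaussian(abs(x - c)) for c in centres] for x in centres]
weights = solve(phi, targets)

def rbf_predict(x):
    return sum(w * gaussian(abs(x - c)) for w, c in zip(weights, centres))
```

    Swapping `gaussian` for another transfer function (e.g. a multiquadric) changes only one line, which is what makes the paper's comparison of alternative transfer functions straightforward to run.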

  19. Simulating Social Networks of Online Communities: Simulation as a Method for Sociability Design

    Science.gov (United States)

    Ang, Chee Siang; Zaphiris, Panayiotis

    We propose the use of social simulations to study and support the design of online communities. In this paper, we developed an Agent-Based Model (ABM) to simulate and study the formation of social networks in a Massively Multiplayer Online Role Playing Game (MMORPG) guild community. We first analyzed the activities and the social network (who-interacts-with-whom) of an existing guild community to identify its interaction patterns and characteristics. Then, based on the empirical results, we derived and formalized the interaction rules, which were implemented in our simulation. Using the simulation, we reproduced the observed social network of the guild community as a means of validation. The simulation was then used to examine how various parameters of the community (e.g. the level of activity, the number of neighbors of each agent, etc.) could potentially influence the characteristics of the social networks.

  20. A SIMULATION OF THE PENICILLIN G PRODUCTION BIOPROCESS APPLYING NEURAL NETWORKS

    Directory of Open Access Journals (Sweden)

    A.J.G. da Cruz

    1997-12-01

    Full Text Available The production of penicillin G by Penicillium chrysogenum IFO 8644 was simulated employing a feedforward neural network with three layers. The neural network training procedure used an algorithm combining two procedures: random search and backpropagation. The results of this approach were very promising, and it was observed that the neural network was able to accurately describe the nonlinear behavior of the process. Moreover, the results showed that this technique can be successfully applied to process control algorithms due to its short processing time and its flexibility in incorporating new data.

  1. Computational Aspects of Sensor Network Protocols (Distributed Sensor Network Simulator

    Directory of Open Access Journals (Sweden)

    Vasanth Iyer

    2009-08-01

    Full Text Available In this work, we model sensor networks as an unsupervised learning and clustering process. We classify nodes according to their static distribution to form known class densities (CCPD). These densities are chosen from specific cross-layer features which maximize the lifetime of power-aware routing algorithms. To circumvent the computational complexities of a power-aware communication STACK, we introduce path-loss models at the nodes only for high-density deployments. We study the cluster heads and formulate the data handling capacity for an expected deployment, and use localized probability models to fuse the data with its side information before transmission. Each cluster head thus has a unique Pmax, but not all cluster heads have the same measured value. In a lossless mode, if there are no faults in the sensor network, then we can show that the highest probability given by Pmax is ambiguous if its frequency is ≤ n/2; otherwise it can be determined by a local function. We further show that event detection at the cluster heads can be modelled with a pattern 2m, where m, the number of bits, can be a correlated pattern of 2 bits, and for a tight lower bound we use 3-bit Huffman codes which have entropy < 1. These local algorithms are further studied to optimize power and fault detection and to maximize the distributed routing algorithm used at the higher layers. From these bounds, it is observed that in a large network the power dissipation is network-size invariant. The performance of the routing algorithms depends solely on the success of finding healthy nodes in a large distribution. It is also observed that if the network size is kept constant and the nodes are deployed more densely, then the local path-loss model affects the performance of the routing algorithms. We also obtain the maximum intensity of transmitting nodes for a given category of routing algorithms under an outage constraint, i.e., the lifetime of the sensor network.
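
    The source-coding step mentioned above can be illustrated with a standard Huffman construction for a skewed event distribution at a cluster head (the probabilities are illustrative, not taken from the paper):

```python
# Sketch of Huffman coding for a skewed event distribution: frequent
# events get short codewords, reducing the bits a cluster head must
# transmit. Probabilities are illustrative.
import heapq

def huffman_code_lengths(probs):
    """Return an optimal prefix-code length per symbol."""
    heap = [(p, i, (i,)) for i, p in enumerate(probs)]
    heapq.heapify(heap)
    lengths = [0] * len(probs)
    tiebreak = len(probs)            # unique key so tuples never compare
    while len(heap) > 1:
        p1, _, syms1 = heapq.heappop(heap)
        p2, _, syms2 = heapq.heappop(heap)
        for s in syms1 + syms2:      # merged symbols sink one level deeper
            lengths[s] += 1
        heapq.heappush(heap, (p1 + p2, tiebreak, syms1 + syms2))
        tiebreak += 1
    return lengths

probs = [0.5, 0.25, 0.125, 0.125]    # skewed event pattern
lengths = huffman_code_lengths(probs)
avg = sum(p * l for p, l in zip(probs, lengths))
```

    For this dyadic distribution the average codeword length equals the entropy (1.75 bits), the best any lossless code can do.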

  2. Impact of sensor installation techniques on seismic network performance

    Science.gov (United States)

    Bainbridge, Geoffrey; Laporte, Michael; Baturan, Dario; Greig, Wesley

    2015-04-01

    The magnitude of completeness (Mc) of a seismic network is determined by a number of factors, including station density, the self-noise and passband of the sensor used, the ambient noise environment, and the sensor installation method and depth. Installation techniques related to depth are of particular importance due to their impact on overall monitoring network deployment costs. We present a case study which evaluates the performance of Trillium Compact Posthole seismometers installed using different methods and depths, and evaluates their impact on seismic network operation in terms of the average magnitude of completeness over a target area of interest in various monitoring applications. We evaluate three sensor installation methods: direct burial in soil at 0.5 m depth, a 5 m screwpile, and a 15 m cemented casing borehole, at sites chosen to represent high, medium and low ambient noise environments. In all cases, noise performance improves with depth, with noise suppression generally more prominent at higher frequencies but with significant variations from site to site. When extended to overall network performance, the observed noise suppression results in an improved (decreased) target area average Mc. However, the extent of the improvement with depth varies significantly, and can be negligible. The increased cost associated with installation at depth uses funds that could be applied to the deployment of additional stations. Using network modelling tools, we compare the improvement in magnitude of completeness and location accuracy associated with increasing installation depth to those associated with an increased number of stations. The appropriate strategy is chosen on a case-by-case basis, driven by network-specific performance requirements, deployment constraints and site noise conditions.

  3. Simulating public private networks as evolving systems

    NARCIS (Netherlands)

    Deljoo, A.; Janssen, M.F.W.H.A.; Klievink, A.J.

    2013-01-01

    Public-private service networks (PPSN) consist of social and technology components. Development of PPSN is ill-understood as these are dependent on a complex mix of interactions among stakeholders and their technologies and is influenced by contemporary developments. The aim of this paper is to

  4. Adaptive Importance Sampling Simulation of Queueing Networks

    NARCIS (Netherlands)

    de Boer, Pieter-Tjerk; Nicola, V.F.; Rubinstein, Reuven Y.

    2000-01-01

    In this paper, a method is presented for the efficient estimation of rare-event (overflow) probabilities in Jackson queueing networks using importance sampling. The method differs in two ways from methods discussed in most earlier literature: the change of measure is state-dependent, i.e., it is a

  5. Queueing networks : Rare events and fast simulations

    NARCIS (Netherlands)

    Miretskiy, D.I.

    2009-01-01

    This monograph focuses on rare events. Even though they are extremely unlikely, they can still occur and then could have significant consequences. We mainly consider rare events in queueing networks. More precisely, we are interested in the probability of collecting some large number of jobs in the

  6. Performance evaluation of an importance sampling technique in a Jackson network

    Science.gov (United States)

    Mahdipour, Ebrahim; Rahmani, Amir Masoud; Setayeshi, Saeed

    2014-03-01

    Importance sampling is a technique that is commonly used to speed up Monte Carlo simulation of rare events. However, little is known regarding the design of efficient importance sampling algorithms in the context of queueing networks. The standard approach, which simulates the system using an a priori fixed change of measure suggested by large deviation analysis, has been shown to fail in even the simplest network settings. Estimating probabilities associated with rare events has been a topic of great importance in queueing theory, and in applied probability at large. In this article, we analyse the performance of an importance sampling estimator for a rare event probability in a Jackson network. The article applies strict deadlines to a two-node Jackson network with feedback whose arrival and service rates are modulated by an exogenous finite-state Markov process. We have estimated the probability of network blocking for various sets of parameters, and also the probability of customers missing their deadlines for different loads and deadlines. We have finally shown that the probability of total population overflow may be affected by various deadline values, service rates and arrival rates.
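
    The core change-of-measure idea can be shown on the simplest possible rare event, far removed from the Jackson-network setting above: estimating P(X > a) for X ~ Exp(1) by sampling from a tilted density and reweighting by the likelihood ratio (the tilt rate and numbers are illustrative).

```python
# Sketch of importance sampling for a rare event: estimate P(X > a)
# for X ~ Exp(1) by sampling from Exp(tilt_rate), which makes the rare
# region likely, and correcting with the likelihood ratio f(x)/g(x).
import math
import random

def is_estimate(a, tilt_rate, n, rng):
    total = 0.0
    for _ in range(n):
        x = rng.expovariate(tilt_rate)            # sample from g
        if x > a:
            # likelihood ratio f(x)/g(x) for f = Exp(1), g = Exp(tilt_rate)
            total += math.exp(-x) / (tilt_rate * math.exp(-tilt_rate * x))
    return total / n

rng = random.Random(7)
a = 10.0
est = is_estimate(a, tilt_rate=0.2, n=100_000, rng=rng)
exact = math.exp(-a)      # analytic answer, about 4.5e-5
```

    Naive Monte Carlo with the same budget would see the event only a handful of times; the tilted sampler hits it constantly and the weights keep the estimator unbiased, which is the property the network setting makes hard to preserve.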

  7. Study and simulation of OFDM multiplexing techniques for a ...

    African Journals Online (AJOL)

    Simulation of this model revealed that, for the same SNR of 20 dB, the ACO-OFDM technique (with a BER of 0.0083) is less sensitive to noise than the DCO-OFDM technique (whose BER is 0.3413). It was also observed that, for the same SNR, the implementation of the DFT for the generation of ...

  8. A Soft Technique for Measuring Friction Force Using Neural Network

    Directory of Open Access Journals (Sweden)

    Sunan HUANG

    2011-10-01

    Full Text Available There are two approaches to measuring a friction force: a force sensor or a software estimation algorithm. This paper focuses on the software approach. The proposed approach uses a neural network (NN) to approximate the friction force in a mechanical system. Since the friction force considered is a speed-dependent function, a learning algorithm is adopted to update the NN weights so as to follow unknown friction behaviors. The advantage of the proposed friction estimation method is that it is based on the built NN model and does not require force sensor measurements. A simulation test is given to verify the effectiveness of the proposed approach.
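
    The software-estimation idea can be sketched without the paper's neural network: a least-mean-squares (LMS) learner fitting a Coulomb-plus-viscous friction model F = c*sign(v) + b*v from speed/force samples. The true parameters and noise-free data are assumptions of this illustration.

```python
# Sketch of software friction estimation: an LMS learner (a simple
# stand-in for the paper's NN) adapts the coefficients of
# F = c*sign(v) + b*v online from observed speed/force pairs.
import math
import random

TRUE_C, TRUE_B = 0.8, 0.3            # assumed Coulomb and viscous terms

def friction(v):
    return TRUE_C * math.copysign(1.0, v) + TRUE_B * v

rng = random.Random(0)
c_hat, b_hat, lr = 0.0, 0.0, 0.05
for _ in range(5000):
    v = rng.uniform(-1.0, 1.0)
    s = math.copysign(1.0, v)
    err = friction(v) - (c_hat * s + b_hat * v)   # prediction error
    c_hat += lr * err * s                         # LMS weight updates
    b_hat += lr * err * v
```

    The learned coefficients converge to the true ones, so the friction force can be reconstructed from speed alone, which is exactly the sensorless property the abstract highlights.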

  9. Comparison of Available Bandwidth Estimation Techniques in Packet-Switched Mobile Networks

    DEFF Research Database (Denmark)

    López Villa, Dimas; Ubeda Castellanos, Carlos; Teyeb, Oumer Mohammed

    2006-01-01

    The relative contribution of the transport network towards the per-user capacity in mobile telecommunication systems is becoming very important due to the ever increasing air-interface data rates. Thus, resource management procedures such as admission, load and handover control can make use of information regarding the available bandwidth in the transport network, as it could end up being the bottleneck rather than the air interface. This paper provides a comparative study of three well known available bandwidth estimation techniques, i.e. TOPP, SLoPS and pathChirp, taking into account the statistical conditions of the available bandwidth and assessing the variability of their estimations. Simulation-based studies on a mobile transport network show that pathChirp outperforms TOPP and SLoPS, both in terms of accuracy and efficiency.

  10. Energy neutral protocol based on hierarchical routing techniques for energy harvesting wireless sensor network

    Science.gov (United States)

    Muhammad, Umar B.; Ezugwu, Absalom E.; Ofem, Paulinus O.; Rajamäki, Jyri; Aderemi, Adewumi O.

    2017-06-01

    Recently, researchers in the field of wireless sensor networks have resorted to energy harvesting techniques that allow energy to be harvested from the ambient environment to power sensor nodes. Using such energy harvesting techniques together with proper routing protocols, an energy-neutral state can be achieved so that sensor nodes can run perpetually. In this paper, we propose an Energy Neutral LEACH routing protocol, which is an extension of the traditional LEACH protocol. The goal of the proposed protocol is to use a gateway node in each cluster so as to reduce the data transmission ranges of cluster head nodes. Simulation results show that the proposed routing protocol achieves a higher throughput and ensures the energy-neutral status of the entire network.
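
    The base protocol being extended here rotates the cluster-head role randomly. A sketch of LEACH-style election with the usual threshold T(n) = P / (1 - P * (r mod 1/P)) for nodes not recently elected (the node count and P are illustrative):

```python
# Sketch of LEACH-style cluster-head election: in each round, an
# eligible node becomes cluster head with probability T(n), which
# rises over a cycle so every node eventually serves. Parameters are
# illustrative.
import random

P = 0.1                  # desired fraction of cluster heads per round

def threshold(round_no):
    return P / (1.0 - P * (round_no % int(round(1.0 / P))))

def elect_heads(eligible, round_no, rng):
    t = threshold(round_no)
    return [n for n in eligible if rng.random() < t]

rng = random.Random(1)
nodes = list(range(100))
heads = elect_heads(nodes, round_no=0, rng=rng)
```

    By the last round of a cycle the threshold reaches 1, guaranteeing any node that has not yet served is elected; rotating the energy-hungry cluster-head role this way is what spreads the load that the proposed gateway-node extension reduces further.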

  11. Next-Generation Environment-Aware Cellular Networks: Modern Green Techniques and Implementation Challenges

    KAUST Repository

    Ghazzai, Hakim

    2016-09-16

    Over the last decade, mobile communications have been witnessing a noteworthy increase of data traffic demand that is causing an enormous energy consumption in cellular networks. The reduction of their fossil fuel consumption in addition to the huge energy bills paid by mobile operators is considered as the most important challenges for the next-generation cellular networks. Although most of the proposed studies were focusing on individual physical layer power optimizations, there is a growing necessity to meet the green objective of fifth-generation cellular networks while respecting the user's quality of service. This paper investigates four important techniques that could be exploited separately or together in order to enable wireless operators achieve significant economic benefits and environmental savings: 1) the base station sleeping strategy; 2) the optimized energy procurement from the smart grid; 3) the base station energy sharing; and 4) the green networking collaboration between competitive mobile operators. The presented simulation results measure the gain that could be obtained using these techniques compared with that of traditional scenarios. Finally, this paper discusses the issues and challenges related to the implementations of these techniques in real environments. © 2016 IEEE.

  12. Simulating activation propagation in social networks using the graph theory

    Directory of Open Access Journals (Sweden)

    František Dařena

    2010-01-01

    Full Text Available Social-network formation and analysis are nowadays a focus of intensive research. The objective of the paper is to suggest the perspective of representing social networks as graphs, applying graph theory to problems connected with studying network-like structures, and to study the spreading activation algorithm for analyzing these structures. The paper presents the process of modeling multidimensional networks by means of directed graphs with several characteristics. The paper also demonstrates the use of the spreading activation algorithm as a good method for analyzing multidimensional networks, with the main focus on recommender systems. The experiments showed that the choice of the algorithm's parameters is crucial, that some kind of constraint should be included, and that the algorithm is able to provide a stable environment for simulations with networks.
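
    A minimal sketch of spreading activation over a directed graph (the graph, decay factor and firing threshold below are illustrative, not the paper's parameters): activation is pumped into source nodes and propagated along out-edges with decay, and nodes below a threshold stop propagating, which is one of the constraints the abstract says must be included.

```python
# Sketch of spreading activation for a recommender-style graph:
# activation flows from sources along out-edges, attenuated by a decay
# factor and split over the fan-out; sub-threshold nodes do not fire.
def spread(graph, sources, decay=0.5, threshold=0.05, iterations=3):
    activation = {n: 0.0 for n in graph}
    for s in sources:
        activation[s] = 1.0
    frontier = dict.fromkeys(sources, 1.0)
    for _ in range(iterations):
        next_frontier = {}
        for node, a in frontier.items():
            out = graph[node]
            if not out or a < threshold:
                continue                      # nothing to fire into
            share = decay * a / len(out)      # fan-out weighting
            for nb in out:
                activation[nb] += share
                next_frontier[nb] = next_frontier.get(nb, 0.0) + share
        frontier = next_frontier
    return activation

# A tiny hypothetical "user likes item" network:
graph = {"u1": ["i1", "i2"], "i1": ["u2"], "i2": ["u2", "u3"],
         "u2": ["i3"], "u3": [], "i3": []}
act = spread(graph, sources=["u1"])
```

    Nodes reachable through more and shorter paths from the source accumulate more activation, so ranking items by `act` yields recommendations; the decay and threshold parameters control how far influence spreads, matching the paper's observation that parameter choice is crucial.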

  13. Dataflow Integration and Simulation Techniques for DSP System Design Tools

    Science.gov (United States)

    2007-01-01

    Title of dissertation: Dataflow Integration and Simulation Techniques for DSP System Design Tools. Chia-Jui Hsu, Doctor of Philosophy, 2007. Techniques for synthesis using dataflow models of computation are widespread in electronic design automation (EDA) tools for digital signal processing (DSP) systems, targeting implementations for PDSPs or other types of embedded processors, or Verilog/VHDL implementations on FPGAs.

  14. The use of visual interactive simulation techniques for production scheduling

    Directory of Open Access Journals (Sweden)

    W.H. Swan

    2003-12-01

    Full Text Available During the last decade visual interactive simulation has become established as a useful new tool for solving real-life problems. It offers the Operational Research professional the opportunity to impact beneficially on important new decision-making areas of business and industry. As an example, this paper discusses its application to the scheduling of production on batch chemical plants, which to date has remained largely a manual activity. Two different approaches are introduced, and it is concluded that while discrete event simulation is most useful as an aid to learning at a time of change, bar chart simulation is preferred for day-to-day scheduling. The technique has been implemented on a number of plants and has led to significant improvements in their performance. Some areas for further development are identified.

  15. Dispersion analysis techniques within the space vehicle dynamics simulation program

    Science.gov (United States)

    Snow, L. S.; Kuhn, A. E.

    1975-01-01

    The Space Vehicle Dynamics Simulation (SVDS) program was evaluated as a dispersion analysis tool. The Linear Error Analysis (LEA) post processor was examined in detail, and simulation techniques relative to conducting a dispersion analysis using the SVDS were considered. The LEA processor is a tool for correlating trajectory dispersion data developed by simulating 3-sigma uncertainties as single error source cases. The processor combines trajectory and performance deviations by a root-sum-square (RSS) process and develops a covariance matrix for the deviations. Results are used in dispersion analyses for the baseline reference and orbiter flight test missions. As a part of this study, LEA results were verified as follows: (A) hand-calculating the RSS data and the elements of the covariance matrix for comparison with the LEA processor computed data; (B) comparing results with previous error analyses. The LEA comparisons and verification are made at main engine cutoff (MECO).
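
    The hand verification described above (RSS of single-error-source deviations plus assembly of their covariance matrix) can be reproduced with a short script. The deviation values below are made-up placeholders, not SVDS outputs:

```python
import math

# Each single-error-source case i yields a trajectory deviation vector d_i
# (e.g. deviations at MECO). Values and units are illustrative only.
deviations = [
    [120.0, -3.0],   # case 1: 3-sigma deviation in [altitude, velocity]
    [-80.0,  5.0],   # case 2
    [ 60.0,  1.5],   # case 3
]

# Root-sum-square of each deviation component across all error sources.
rss = [math.sqrt(sum(d[k] ** 2 for d in deviations))
       for k in range(len(deviations[0]))]

# Covariance matrix assembled from the single-source deviations:
# C[j][k] = sum_i d_i[j] * d_i[k]
n = len(deviations[0])
cov = [[sum(d[j] * d[k] for d in deviations) for k in range(n)]
       for j in range(n)]
```

    Note that the diagonal of the covariance matrix equals the squared RSS values, which is exactly the cross-check item (A) exploits.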

  16. Integrated Circuit For Simulation Of Neural Network

    Science.gov (United States)

    Thakoor, Anilkumar P.; Moopenn, Alexander W.; Khanna, Satish K.

    1988-01-01

    Ballast resistors deposited on top of circuit structure. Cascadable, programmable binary connection matrix fabricated in VLSI form as basic building block for assembly of like units into content-addressable electronic memory matrices operating somewhat like networks of neurons. Connections formed during storage of data, and data recalled from memory by prompting matrix with approximate or partly erroneous signals. Redundancy in pattern of connections causes matrix to respond with correct stored data.
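
    The recall behaviour described (prompting a connection matrix with approximate or partly erroneous signals) is the classic content-addressable-memory operation of a Hopfield-style binary network. A software sketch of that operation, with illustrative patterns rather than the VLSI circuit's actual stored data, might look like:

```python
import numpy as np

# Store bipolar patterns in a connection matrix with a Hebbian rule.
patterns = np.array([[1, -1, 1, -1, 1, -1, 1, -1],
                     [1, 1, 1, 1, -1, -1, -1, -1]])
W = sum(np.outer(p, p) for p in patterns).astype(float)
np.fill_diagonal(W, 0.0)  # no self-connections

def recall(prompt, steps=5):
    """Iterate the network until the prompt settles onto a stored pattern."""
    s = np.array(prompt, dtype=float)
    for _ in range(steps):
        s = np.sign(W @ s)
        s[s == 0] = 1
    return s.astype(int)

# A prompt with one erroneous bit still retrieves the stored pattern,
# thanks to the redundancy in the pattern of connections.
noisy = [1, -1, 1, -1, 1, -1, 1, 1]   # last bit flipped vs. patterns[0]
restored = recall(noisy)
```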

  17. Performance of artificial neural networks and genetical evolved artificial neural networks unfolding techniques

    Energy Technology Data Exchange (ETDEWEB)

    Ortiz R, J. M. [Escuela Politecnica Superior, Departamento de Electrotecnia y Electronica, Avda. Menendez Pidal s/n, Cordoba (Spain); Martinez B, M. R.; Vega C, H. R. [Universidad Autonoma de Zacatecas, Unidad Academica de Estudios Nucleares, Calle Cipres No. 10, Fracc. La Penuela, 98068 Zacatecas (Mexico); Gallego D, E.; Lorente F, A. [Universidad Politecnica de Madrid, Departamento de Ingenieria Nuclear, ETSI Industriales, C. Jose Gutierrez Abascal 2, 28006 Madrid (Spain); Mendez V, R.; Los Arcos M, J. M.; Guerrero A, J. E., E-mail: morvymm@yahoo.com.m [CIEMAT, Laboratorio de Metrologia de Radiaciones Ionizantes, Avda. Complutense 22, 28040 Madrid (Spain)

    2011-02-15

    With the Bonner sphere spectrometer, the neutron spectrum is obtained through an unfolding procedure. Monte Carlo methods, Regularization, Parametrization, Least-squares, and Maximum Entropy are some of the techniques utilized for unfolding. In the last decade, methods based on Artificial Intelligence technology have been used. Approaches based on Genetic Algorithms and Artificial Neural Networks (ANNs) have been developed in order to overcome the drawbacks of previous techniques. Nevertheless, despite the advantages of ANNs, they still have some drawbacks, mainly in the design process of the network, e.g. the optimal selection of the architectural and learning parameters. In recent years, hybrid technologies combining ANNs and genetic algorithms have been utilized. In this work, several ANN topologies were trained and tested, using both conventional ANNs and genetically evolved artificial neural networks, with the aim of unfolding neutron spectra from the count rates of a Bonner sphere spectrometer. A comparative study of both procedures has been carried out. (Author)

  18. Software for Brain Network Simulations: A Comparative Study

    Science.gov (United States)

    Tikidji-Hamburyan, Ruben A.; Narayana, Vikram; Bozkus, Zeki; El-Ghazawi, Tarek A.

    2017-01-01

    Numerical simulations of brain networks are a critical part of our efforts in understanding brain functions under pathological and normal conditions. For several decades, the community has developed many software packages and simulators to accelerate research in computational neuroscience. In this article, we select the three most popular simulators, as determined by the number of models in the ModelDB database, namely NEURON, GENESIS, and BRIAN, and perform an independent evaluation of these simulators. In addition, we study NEST, one of the lead simulators of the Human Brain Project. First, we study them based on one of the most important characteristics, the range of supported models. Our investigation reveals that brain network simulators may be biased toward supporting a specific set of models. However, all simulators tend to expand the supported range of models by providing a universal environment for the computational study of individual neurons and brain networks. Next, our investigations on the characteristics of computational architecture and efficiency indicate that all simulators compile the most computationally intensive procedures into binary code, with the aim of maximizing their computational performance. However, not all simulators provide the simplest method for module development and/or guarantee efficient binary code. Third, a study of their amenability for high-performance computing reveals that NEST can almost transparently map an existing model on a cluster or multicore computer, while NEURON requires code modification if a model developed for a single computer has to be mapped on a computational cluster. Interestingly, parallelization is the weakest characteristic of BRIAN, which provides no support for cluster computations and limited support for multicore computers. Fourth, we identify the level of user support and frequency of usage for all simulators. Finally, we carry out an evaluation using two case studies: a large network with

  19. Prototyping and Simulation of Robot Group Intelligence using Kohonen Networks.

    Science.gov (United States)

    Wang, Zhijun; Mirdamadi, Reza; Wang, Qing

    2016-01-01

    Intelligent agents such as robots can form ad hoc networks and replace human beings in many dangerous scenarios, such as a complicated disaster-relief site. This project prototypes and builds a computer simulator to simulate robot kinetics, unsupervised learning using Kohonen networks, as well as group intelligence when an ad hoc network is formed. Each robot is modeled using an object with a simple set of attributes and methods that define its internal states and possible actions it may take under certain circumstances. As a result, simple, reliable, and affordable robots can be deployed to form the network. The simulator simulates a group of robots as an unsupervised learning unit and tests the learning results under scenarios with different complexities. The simulation results show that a group of robots could demonstrate highly collaborative behavior on a complex terrain. This study could potentially provide a software simulation platform for testing the individual and group capability of robots before the design process and manufacturing of robots. Therefore, results of the project have the potential to reduce the cost and improve the efficiency of robot design and building.
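
    The unsupervised-learning component can be illustrated with a minimal 1-D Kohonen self-organizing map. The map size, learning schedule, and the interpretation of the inputs as points on a terrain are assumptions for the sketch, not details taken from the simulator:

```python
import math
import random

# Minimal 1-D Kohonen self-organizing map: a chain of units learns to cover
# 2-D points (e.g. locations of interest on a terrain). Values illustrative.
random.seed(42)

def train_som(points, n_units=8, epochs=200, lr0=0.5, radius0=2.0):
    units = [[random.random(), random.random()] for _ in range(n_units)]
    for t in range(epochs):
        lr = lr0 * (1.0 - t / epochs)                 # decaying learning rate
        radius = max(radius0 * (1.0 - t / epochs), 0.5)
        for x, y in points:
            # Best-matching unit: closest unit to the input point.
            bmu = min(range(n_units),
                      key=lambda i: (units[i][0] - x) ** 2 + (units[i][1] - y) ** 2)
            # Pull the BMU and its chain neighbours toward the input.
            for i, u in enumerate(units):
                h = math.exp(-((i - bmu) ** 2) / (2 * radius ** 2))
                u[0] += lr * h * (x - u[0])
                u[1] += lr * h * (y - u[1])
    return units

pts = [(random.random(), random.random()) for _ in range(100)]
units = train_som(pts)
```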

  20. Display techniques for dynamic network data in transportation GIS

    Energy Technology Data Exchange (ETDEWEB)

    Ganter, J.H.; Cashwell, J.W.

    1994-05-01

    Interest in the characteristics of urban street networks is increasing at the same time new monitoring technologies are delivering detailed traffic data. These emerging streams of data may lead to the dilemma that airborne remote sensing has faced: how to select and access the data, and what meaning is hidden in them? Computer-assisted visualization techniques are needed to portray these dynamic data. Of equal importance are controls that let the user filter, symbolize, and replay the data to reveal patterns and trends over varying time spans. We discuss a prototype software system that addresses these requirements.

  1. Advancing botnet modeling techniques for military and security simulations

    Science.gov (United States)

    Banks, Sheila B.; Stytz, Martin R.

    2011-06-01

    Simulation environments serve many purposes, but they are only as good as their content. One of the most challenging and pressing areas that call for improved content is the simulation of bot armies (botnets) and their effects upon networks and computer systems. Botnets are a new type of malware, more powerful and potentially dangerous than any other. A botnet's power derives from several capabilities, including the following: 1) the botnet's capability to be controlled and directed throughout all phases of its activity, 2) a command and control structure that grows increasingly sophisticated, and 3) the ability of a bot's software to be updated at any time by the owner of the bot (a person commonly called a bot master or bot herder). Not only is a bot army powerful and agile in its technical capabilities, it can also be extremely large, comprising tens of thousands, if not millions, of compromised computers, or as small as a few thousand targeted systems. In all botnets, the members can surreptitiously communicate with each other and their command and control centers. In sum, these capabilities allow a bot army to execute attacks that are technically sophisticated, difficult to trace, tactically agile, massive, and coordinated. To improve our understanding of their operation and potential, we believe that it is necessary to develop computer security simulations that accurately portray bot army activities, with the goal of including bot army simulations within military simulation environments. In this paper, we investigate issues that arise when simulating bot armies and propose a combination of the biologically inspired MSEIR infection spread model coupled with the jump-diffusion infection spread model to portray botnet propagation.
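
    The MSEIR infection-spread component mentioned above can be sketched as a deterministic compartment model. The mapping of compartments to host states and all rate constants below are illustrative assumptions, and the jump-diffusion coupling is omitted:

```python
# Euler integration of an MSEIR-style compartment model for botnet spread.
# Rates and the interpretation of compartments (M: protected hosts,
# S: susceptible, E: compromised-but-dormant, I: active bots, R: cleaned)
# are illustrative assumptions, not the paper's calibrated values.

def mseir_step(state, dt, delta=0.01, beta=0.6, sigma=0.2, gamma=0.1):
    M, S, E, I, R = state
    N = M + S + E + I + R
    dM = -delta * M                      # protected hosts losing protection
    dS = delta * M - beta * S * I / N    # new compromises by active bots
    dE = beta * S * I / N - sigma * E    # dormant bots becoming active
    dI = sigma * E - gamma * I           # active bots detected and removed
    dR = gamma * I
    return [M + dt * dM, S + dt * dS, E + dt * dE, I + dt * dI, R + dt * dR]

state = [100.0, 890.0, 0.0, 10.0, 0.0]   # 10 active bots seeded in 1000 hosts
for _ in range(1000):
    state = mseir_step(state, dt=0.1)
```

    Since the flow terms cancel pairwise, the total host population is conserved by construction, a useful sanity check for any implementation.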

  2. HSimulator: Hybrid Stochastic/Deterministic Simulation of Biochemical Reaction Networks

    Directory of Open Access Journals (Sweden)

    Luca Marchetti

    2017-01-01

    Full Text Available HSimulator is a multithread simulator for mass-action biochemical reaction systems placed in a well-mixed environment. HSimulator provides optimized implementation of a set of widespread state-of-the-art stochastic, deterministic, and hybrid simulation strategies including the first publicly available implementation of the Hybrid Rejection-based Stochastic Simulation Algorithm (HRSSA. HRSSA, the fastest hybrid algorithm to date, allows for an efficient simulation of the models while ensuring the exact simulation of a subset of the reaction network modeling slow reactions. Benchmarks show that HSimulator is often considerably faster than the other considered simulators. The software, running on Java v6.0 or higher, offers a simulation GUI for modeling and visually exploring biological processes and a Javadoc-documented Java library to support the development of custom applications. HSimulator is released under the COSBI Shared Source license agreement (COSBI-SSLA.

  3. Simulation of wind turbine wakes using the actuator line technique

    Science.gov (United States)

    Sørensen, Jens N.; Mikkelsen, Robert F.; Henningson, Dan S.; Ivanell, Stefan; Sarmast, Sasan; Andersen, Søren J.

    2015-01-01

    The actuator line technique was introduced as a numerical tool to be employed in combination with large eddy simulations to enable the study of wakes and wake interaction in wind farms. The technique is today widely used for studying basic features of wakes as well as for making performance predictions of wind farms. In this paper, we give a short introduction to the wake problem and the actuator line methodology and present a study in which the technique is employed to determine the near-wake properties of wind turbines. The presented results include a comparison with experimental results of the wake characteristics of the flow around a three-bladed model wind turbine, the development of a simple analytical formula for determining the near-wake length behind a wind turbine and a detailed investigation of wake structures based on proper orthogonal decomposition analysis of numerically generated snapshots of the wake. PMID:25583862

  4. System Identification, Prediction, Simulation and Control with Neural Networks

    DEFF Research Database (Denmark)

    Sørensen, O.

    1997-01-01

    The intention of this paper is to make a systematic examination of the possibilities of applying neural networks in those technical areas which are familiar to a control engineer. In other words, the potential of neural networks in control applications is given higher priority than a detailed... a Gauss-Newton search direction is applied. 3) Amongst numerous model types, often met in control applications, only the Non-linear ARMAX (NARMAX) model, representing an input/output description, is examined. A simulated example confirms that a neural network has the potential to perform excellent System Identification, Prediction, Simulation and Control of a dynamic, non-linear and noisy process. Further, the difficulties of controlling a practical non-linear laboratory process satisfactorily with a traditional controller are overcome by using a trained neural network to perform non-linear System...

  5. Network bursts in cortical neuronal cultures: 'noise - versus pacemaker'- driven neural network simulations

    NARCIS (Netherlands)

    Gritsun, T.; Stegenga, J.; le Feber, Jakob; Rutten, Wim

    2009-01-01

    In this paper we address the issue of spontaneous bursting activity in cortical neuronal cultures and explain what might cause this collective behavior using computer simulations of two different neural network models. While the common approach to activate a passive network is done by introducing

  6. Dynamic Interactions for Network Visualization and Simulation

    Science.gov (United States)

    2009-03-01

    ... applications, and web applets. Comprising a library of design algorithms, navigation and interaction techniques, prefuse aims to significantly simplify the ... Information Visualization Reference Model of the prefuse toolkit [15]. The prefuse toolkit is suitable for the Model-View-Controller (MVC) [15] software ...

  7. Stochastic Simulation of Biomolecular Networks in Dynamic Environments.

    Science.gov (United States)

    Voliotis, Margaritis; Thomas, Philipp; Grima, Ramon; Bowsher, Clive G

    2016-06-01

    Simulation of biomolecular networks is now indispensable for studying biological systems, from small reaction networks to large ensembles of cells. Here we present a novel approach for stochastic simulation of networks embedded in the dynamic environment of the cell and its surroundings. We thus sample trajectories of the stochastic process described by the chemical master equation with time-varying propensities. A comparative analysis shows that existing approaches can either fail dramatically, or else can impose impractical computational burdens due to numerical integration of reaction propensities, especially when cell ensembles are studied. Here we introduce the Extrande method which, given a simulated time course of dynamic network inputs, provides a conditionally exact and several orders of magnitude faster simulation solution. The new approach makes it feasible to demonstrate, using decision-making by a large population of quorum-sensing bacteria, that robustness to fluctuations from upstream signaling places strong constraints on the design of networks determining cell fate. Our approach has the potential to significantly advance both understanding of molecular systems biology and design of synthetic circuits.
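
    The core idea behind Extrande, thinning the event stream against an upper bound on the total propensity so that time-varying inputs require no numerical integration, can be sketched for a toy birth-death process. The input signal, bound, and rate constants below are illustrative choices, not values from the paper:

```python
import math
import random

# Extrande-style thinning sketch for a birth-death process whose birth
# propensity is modulated by a time-varying input s(t) with known bound 1.5.
random.seed(1)

def input_signal(t):
    return 1.0 + 0.5 * math.sin(t)        # dynamic environment, s(t) <= 1.5

def simulate(t_end=50.0, k_birth=2.0, k_death=0.1, x0=0):
    t, x = 0.0, x0
    while True:
        # B bounds the total propensity until the next event: x is constant
        # between jumps and the input signal never exceeds 1.5.
        B = k_birth * 1.5 + k_death * x
        t += random.expovariate(B)
        if t >= t_end:
            return x
        a_birth = k_birth * input_signal(t)
        a_death = k_death * x
        u = random.random() * B
        if u < a_birth:
            x += 1                         # birth fired
        elif u < a_birth + a_death:
            x -= 1                         # death fired
        # else: the "extra" (virtual) reaction fired -- state unchanged

x_final = simulate()
```

    With these illustrative rates the stationary mean copy number is roughly k_birth * mean(s) / k_death = 20, which replicate averages should approach.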

  8. Stochastic Simulation of Biomolecular Networks in Dynamic Environments.

    Directory of Open Access Journals (Sweden)

    Margaritis Voliotis

    2016-06-01

    Full Text Available Simulation of biomolecular networks is now indispensable for studying biological systems, from small reaction networks to large ensembles of cells. Here we present a novel approach for stochastic simulation of networks embedded in the dynamic environment of the cell and its surroundings. We thus sample trajectories of the stochastic process described by the chemical master equation with time-varying propensities. A comparative analysis shows that existing approaches can either fail dramatically, or else can impose impractical computational burdens due to numerical integration of reaction propensities, especially when cell ensembles are studied. Here we introduce the Extrande method which, given a simulated time course of dynamic network inputs, provides a conditionally exact and several orders of magnitude faster simulation solution. The new approach makes it feasible to demonstrate, using decision-making by a large population of quorum-sensing bacteria, that robustness to fluctuations from upstream signaling places strong constraints on the design of networks determining cell fate. Our approach has the potential to significantly advance both understanding of molecular systems biology and design of synthetic circuits.

  9. SELANSI: a toolbox for Simulation of Stochastic Gene Regulatory Networks.

    Science.gov (United States)

    Pájaro, Manuel; Otero-Muras, Irene; Vázquez, Carlos; Alonso, Antonio A

    2017-10-11

    Gene regulation is inherently stochastic. In many applications concerning Systems and Synthetic Biology, such as the reverse engineering and the de novo design of genetic circuits, stochastic effects (though potentially crucial) are often neglected due to the high computational cost of stochastic simulations. With advances in these fields there is an increasing need for tools providing accurate approximations of the stochastic dynamics of gene regulatory networks (GRNs) with reduced computational effort. This work presents SELANSI (SEmi-LAgrangian SImulation of GRNs), a software toolbox for the simulation of stochastic multidimensional gene regulatory networks. SELANSI exploits intrinsic structural properties of gene regulatory networks to accurately approximate the corresponding chemical master equation (CME) with a partial integro-differential equation (PIDE) that is solved by a semi-Lagrangian method with high efficiency. Networks under consideration might involve multiple genes with self and cross regulations, in which genes can be regulated by different transcription factors. Moreover, the validity of the method is not restricted to a particular type of kinetics. The tool offers total flexibility regarding network topology, kinetics and parameterization, as well as simulation options. SELANSI runs under the MATLAB environment, and is available under the GPLv3 license at https://sites.google.com/view/selansi. antonio@iim.csic.es.

  10. Simulated, Emulated, and Physical Investigative Analysis (SEPIA) of networked systems.

    Energy Technology Data Exchange (ETDEWEB)

    Burton, David P.; Van Leeuwen, Brian P.; McDonald, Michael James; Onunkwo, Uzoma A.; Tarman, Thomas David; Urias, Vincent E.

    2009-09-01

    This report describes recent progress made in developing and utilizing hybrid Simulated, Emulated, and Physical Investigative Analysis (SEPIA) environments. Many organizations require advanced tools to analyze their information system's security, reliability, and resilience against cyber attack. Today's security analyses utilize real systems such as computers, network routers and other network equipment, computer emulations (e.g., virtual machines) and simulation models separately to analyze the interplay between threats and safeguards. In contrast, this work developed new methods to combine these three approaches to provide integrated hybrid SEPIA environments. Our SEPIA environments enable an analyst to rapidly configure hybrid environments that pass network traffic and perform, from the outside, like real networks. This provides higher-fidelity representations of key network nodes while still leveraging the scalability and cost advantages of simulation tools. The result is to rapidly produce large yet relatively low-cost multi-fidelity SEPIA networks of computers and routers that let analysts quickly investigate threats and test protection approaches.

  11. Regularization Techniques to Overcome Overparameterization of Complex Biochemical Reaction Networks.

    Science.gov (United States)

    Howsmon, Daniel P; Hahn, Juergen

    2016-09-01

    Models of biochemical reaction networks commonly contain a large number of parameters, while at the same time there is only a limited amount of (noisy) data available for their estimation. As such, the values of many parameters are not well known, as nominal parameter values have to be determined from the open scientific literature, and a significant number of the values may have been derived in different cell types or organisms than that which is modeled. There clearly is a need to estimate at least some of the parameter values from experimental data; however, the small amount of available data and the large number of parameters commonly found in these types of models require the use of regularization techniques to avoid overfitting. A tutorial of regularization techniques, including parameter set selection, precedes a case study of estimating parameters in a signal transduction network. Cross-validation rather than fitting results are presented to further emphasize the need for models that generalize well to new data instead of simply fitting the current data.
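
    A minimal example of the regularization idea, Tikhonov (ridge) penalization of an overparameterized linear-in-parameters fit, is sketched below. The problem sizes, penalty weight, and data are illustrative and far simpler than a signal transduction network:

```python
import numpy as np

# Overparameterized setting: more parameters (40) than noisy observations (15).
rng = np.random.default_rng(0)
n_obs, n_params = 15, 40
X = rng.normal(size=(n_obs, n_params))
true_theta = np.zeros(n_params)
true_theta[:3] = [2.0, -1.0, 0.5]              # only a few parameters matter
y = X @ true_theta + 0.05 * rng.normal(size=n_obs)

# Ridge estimate: minimize ||X theta - y||^2 + lam * ||theta||^2.
lam = 1.0
theta_ridge = np.linalg.solve(X.T @ X + lam * np.eye(n_params), X.T @ y)

# Unregularized least squares interpolates the noise exactly (overfitting);
# lstsq returns the minimum-norm interpolant for this underdetermined system.
theta_ls = np.linalg.lstsq(X, y, rcond=None)[0]
```

    The regularized solution trades a small increase in training error for a smaller parameter norm, which is the mechanism behind the improved generalization the tutorial advocates.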

  12. Social Network Mixing Patterns In Mergers & Acquisitions - A Simulation Experiment

    Directory of Open Access Journals (Sweden)

    Robert Fabac

    2011-01-01

    Full Text Available In the contemporary world of global business and continuously growing competition, organizations tend to use mergers and acquisitions to enforce their position on the market. The future organization's design is a critical success factor in such undertakings. The field of social network analysis can enhance our understanding of these processes, as it lets us reason about the development of networks, regardless of their origin. The analysis of mixing patterns is particularly useful as it provides an insight into how nodes in a network connect with each other. We hypothesize that organizational networks with compatible mixing patterns will be integrated more successfully. After conducting a simulation experiment, we suggest an integration model based on the analysis of network assortativity. The model can be a guideline for organizational integration, such as occurs in mergers and acquisitions.
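
    Degree assortativity, one common mixing-pattern measure, is the Pearson correlation between the degrees found at the two ends of each edge. A self-contained sketch on a toy graph (not the paper's organizational networks) might look like:

```python
import math

def degree_assortativity(edges):
    """Pearson correlation of endpoint degrees over all (undirected) edges."""
    deg = {}
    for u, v in edges:
        deg[u] = deg.get(u, 0) + 1
        deg[v] = deg.get(v, 0) + 1
    xs, ys = [], []
    for u, v in edges:
        xs += [deg[u], deg[v]]        # count each edge in both directions
        ys += [deg[v], deg[u]]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / n
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs) / n)
    sy = math.sqrt(sum((y - my) ** 2 for y in ys) / n)
    return cov / (sx * sy)

# A star graph is maximally disassortative: the high-degree hub
# connects only to degree-1 leaves.
star = [(0, i) for i in range(1, 6)]
r_star = degree_assortativity(star)
```

    Two networks whose coefficients have the same sign (both assortative or both disassortative) would count as having compatible mixing patterns in the sense used above.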

  13. Simulation and data reconstruction for NDT phased array techniques.

    Science.gov (United States)

    Chatillon, S; de Roumilly, L; Porre, J; Poidevin, C; Calmon, P

    2006-12-22

    Phased array techniques are now widely employed for industrial NDT applications in various contexts. Indeed, phased arrays present a great adaptability to the inspection configuration, and the application of suitable delay laws allows one to optimize the detection and characterization performances by taking into account the component geometry, the material characteristics, and the aim of the inspection. In addition, the amount of potential information issued from the inspection is in general greatly enhanced. This is the case when the employed methods involve sequences of shots (sectorial scanning, multiple-depth focusing, etc.) or when the signals received on the different channels are stored. Finally, the application of electronic commutation makes higher acquisition rates possible. Given these advantages, it is clear that optimal use of such techniques requires the application of simulation-based algorithms at the different stages of the inspection process: when designing the probe, by optimizing the number and characteristics of elements; when conceiving the inspection method, by selecting suitable sequences of shots, computing optimized delay laws and evaluating the performances of the control in terms of zone coverage or flaw detection capabilities; and when analysing the results, by applying simulation-helped visualization and data reconstruction algorithms. For many years the CEA (French Atomic Energy Commission) has been greatly involved in the development of such phased array simulation-based tools. In this paper, we will present recent advances in this activity and show different examples of application carried out in complex situations.
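
    The simplest delay-law computation mentioned above, focusing a linear array at a point, can be sketched directly from geometry. The pitch, element count, focal point and sound speed below are illustrative values, not parameters of the CEA tools:

```python
import math

# Delay law for focusing a linear phased array at a point: each element fires
# earlier the farther it sits from the focal point, so all wavefronts arrive
# there simultaneously. All numeric values are illustrative.
def focal_delays(n_elements=16, pitch=0.6e-3, focus=(0.0, 20e-3), c=5900.0):
    x0 = -(n_elements - 1) * pitch / 2.0          # center the array on x = 0
    dists = [math.hypot(x0 + i * pitch - focus[0], focus[1])
             for i in range(n_elements)]
    tof_max = max(dists) / c
    # Delay in seconds per element; farthest element(s) fire at t = 0.
    return [tof_max - d / c for d in dists]

delays = focal_delays()
```

    For an on-axis focus the law is symmetric about the array center, with the central elements (closest to the focus) delayed the most.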

  14. 3D Digital Simulation of Minnan Temple Architecture Caisson's Craft Techniques

    Science.gov (United States)

    Lin, Y. C.; Wu, T. C.; Hsu, M. F.

    2013-07-01

    Caisson is one of the important representations of the Minnan (southern Fujian) temple architecture craft techniques and decorative aesthetics. The special component design and group building method present the architectural thinking and personal characteristics of the great carpenters of Minnan temple architecture. In the late Qing Dynasty, the appearance and style of caissons of famous temples in Taiwan apparently presented the building techniques of the great carpenters. However, as the years went by, the caisson design and craft techniques were not fully inherited, which has been a great loss of cultural assets. Accordingly, with the caisson of Fulong temple, a work by a well-known great carpenter in Tainan, as an example, this study obtained the thinking principles of the original design and the design method at the initial period of construction through interview records and by redrawing the "Tng-Ko" (traditional design, stakeout and construction tool). We obtained the 3D point cloud model of the caisson of Fulong temple using 3D laser scanning technology, and established the 3D digital model of each component of the caisson. Based on the caisson component procedure obtained from interview records, this study conducted the digital simulation of the caisson assembly to completely record and present the caisson design, construction and completion procedure. This model of preserving the craft techniques of the Minnan temple caisson by using digital technology makes a specific contribution to the heritage of the craft techniques while providing an important reference for the digital preservation of human cultural assets.

  15. In silico Biochemical Reaction Network Analysis (IBRENA): a package for simulation and analysis of reaction networks.

    Science.gov (United States)

    Liu, Gang; Neelamegham, Sriram

    2008-04-15

    We present In silico Biochemical Reaction Network Analysis (IBRENA), a software package which facilitates multiple functions including cellular reaction network simulation and sensitivity analysis (both forward and adjoint methods), coupled with principal component analysis, singular-value decomposition and model reduction. The software features a graphical user interface that aids simulation and plotting of in silico results. While the primary focus is to aid formulation, testing and reduction of theoretical biochemical reaction networks, the program can also be used for analysis of high-throughput genomic and proteomic data. The software package, manual and examples are available at http://www.eng.buffalo.edu/~neel/ibrena

  16. Aggregated Representation of Distribution Networks for Large-Scale Transmission Network Simulations

    DEFF Research Database (Denmark)

    Göksu, Ömer; Altin, Müfit; Sørensen, Poul Ejnar

    2014-01-01

    As a common practice of large-scale transmission network analysis, distribution networks have been represented as aggregated loads. However, with the increasing share of distributed generation, especially wind and solar power, in the distribution networks, it became necessary to include the distributed generation within those analyses. In this paper a practical methodology to obtain the aggregated behaviour of the distributed generation is proposed. The methodology, which is based on the use of the IEC standard wind turbine models, is applied on a benchmark distribution network via simulations.

  17. Artificial Neural Network Metamodels of Stochastic Computer Simulations

    Science.gov (United States)

    1994-08-10


  18. X-ray optics simulation using Gaussian superposition technique.

    Science.gov (United States)

    Idir, Mourad; Cywiak, Moisés; Morales, Arquímedes; Modi, Mohammed H

    2011-09-26

    We present an efficient method to perform x-ray optics simulation with high or partially coherent x-ray sources using a Gaussian superposition technique. In a previous paper, we have demonstrated that full characterization of optical systems, diffractive and geometric, is possible by using the Fresnel Gaussian Shape Invariant (FGSI) previously reported in the literature. The complex amplitude distribution in the object plane is represented by a linear superposition of complex Gaussian wavelets and then propagated through the optical system by means of the referred Gaussian invariant. This allows ray tracing through the optical system and at the same time allows calculating with high precision the complex wave-amplitude distribution at any plane of observation. This technique can be applied in a wide spectral range where the Fresnel diffraction integral applies, including visible, x-rays, acoustic waves, etc. We describe the technique and include some computer simulations as illustrative examples for x-ray optical components. We show also that this method can be used to study partial or total coherence illumination problems. © 2011 Optical Society of America

  19. Distributed Synchronization Technique for OFDMA-Based Wireless Mesh Networks Using a Bio-Inspired Algorithm.

    Science.gov (United States)

    Kim, Mi Jeong; Maeng, Sung Joon; Cho, Yong Soo

    2015-07-28

    In this paper, a distributed synchronization technique based on a bio-inspired algorithm is proposed for an orthogonal frequency division multiple access (OFDMA)-based wireless mesh network (WMN) with a time difference of arrival. The proposed time- and frequency-synchronization technique uses only the signals received from the neighbor nodes, by considering the effect of the propagation delay between the nodes. It achieves a fast synchronization with a relatively low computational complexity because it is operated in a distributed manner, not requiring any feedback channel for the compensation of the propagation delays. In addition, a self-organization scheme that can be effectively used to construct 1-hop neighbor nodes is proposed for an OFDMA-based WMN with a large number of nodes. The performance of the proposed technique is evaluated with regard to the convergence property and synchronization success probability using a computer simulation.
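The paper's delay-compensated OFDMA scheme cannot be reproduced from the abstract alone, but the underlying distributed mechanism — each node repeatedly nudging its clock toward what it observes from its one-hop neighbours, with no central reference — can be sketched as a consensus iteration. The ring topology, gain `alpha`, and round count below are illustrative assumptions:

```python
import random

def simulate_sync(offsets, neighbors, alpha=0.5, rounds=50):
    """Consensus-style synchronization: every round, each node nudges
    its clock offset toward the average offset of its one-hop
    neighbours (propagation delays assumed already compensated)."""
    offsets = list(offsets)
    for _ in range(rounds):
        # synchronous update: the comprehension reads the old offsets
        offsets = [
            off + alpha * (sum(offsets[j] for j in neighbors[i])
                           / len(neighbors[i]) - off)
            for i, off in enumerate(offsets)
        ]
    return offsets

random.seed(1)
n = 8
ring = {i: [(i - 1) % n, (i + 1) % n] for i in range(n)}  # assumed topology
start = [random.uniform(-1e-3, 1e-3) for _ in range(n)]   # initial clock offsets (s)
final = simulate_sync(start, ring)
```

After 50 rounds the spread of offsets has shrunk by several orders of magnitude, illustrating the fast, feedback-free convergence the abstract describes.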

  20. Multiscale methodology for bone remodelling simulation using coupled finite element and neural network computation.

    Science.gov (United States)

    Hambli, Ridha; Katerchi, Houda; Benhamou, Claude-Laurent

    2011-02-01

    The aim of this paper is to develop a multiscale hierarchical hybrid model based on finite element analysis and neural network computation to link mesoscopic scale (trabecular network level) and macroscopic (whole bone level) to simulate the process of bone remodelling. As whole bone simulation, including the 3D reconstruction of trabecular level bone, is time consuming, finite element calculation is only performed at the macroscopic level, whilst trained neural networks are employed as numerical substitutes for the finite element code needed for the mesoscale prediction. The bone mechanical properties are updated at the macroscopic scale depending on the morphological and mechanical adaptation at the mesoscopic scale computed by the trained neural network. The digital image-based modelling technique using μ-CT and voxel finite element analysis is used to capture volume elements representative of 2 mm³ at the mesoscale level of the femoral head. The input data for the artificial neural network are a set of bone material parameters, boundary conditions and the applied stress. The output data are the updated bone properties and some trabecular bone factors. The current approach is the first model, to our knowledge, that incorporates both finite element analysis and neural network computation to rapidly simulate multilevel bone adaptation.

  1. Validation techniques of agent based modelling for geospatial simulations

    Science.gov (United States)

    Darvishi, M.; Ahmadi, G.

    2014-10-01

One of the most interesting aspects of modelling and simulation is describing real-world phenomena with specific properties, especially those that occur at large scales and exhibit dynamic, complex behaviour. Studying these phenomena in the laboratory is costly and in most cases impossible. Miniaturizing world phenomena within the framework of a model, in order to simulate the real phenomena, is therefore a reasonable and scientific approach to understanding the world. Agent-based modelling and simulation (ABMS) is a modelling method built from multiple interacting agents. It has been used in different areas, for instance geographic information systems (GIS), biology, economics, social science and computer science. The emergence of ABM toolkits in GIS software libraries (e.g. ESRI's ArcGIS, OpenMap, GeoTools, etc.) for geospatial modelling indicates the growing interest of users in the special capabilities of ABMS. Since ABMS is inherently similar to human cognition, such models can be built easily and applied to a wider range of applications than traditional simulations. A key challenge for ABMS, however, is validation and verification: because of frequently emergent patterns, strong dynamics in the system and the complex nature of ABMS, it is hard to validate and verify ABMS with conventional validation methods. Finding appropriate validation techniques for ABM therefore seems necessary. In this paper, after reviewing the principles and concepts of ABM and its applications, the validation techniques and challenges of ABM validation are discussed.

  2. Numerical simulation of electron beam welding and instrumental technique

    Energy Technology Data Exchange (ETDEWEB)

    Carin, M.; Rogeon, P.; Carron, D.; Le Masson, P.; Couedel, D. [Universite de Bretagne Sud, Centre de Recherche, Lab. d' Etudes Thermiques Energetique et Environnement, 56 - Lorient (France)

    2004-07-01

    In the present work, thermal cycles measured with thermocouples embedded in specimens are employed to validate a numerical thermo-metallurgical model of an Electron Beam welding process. The implemented instrumentation techniques aim at reducing the perturbations induced by the sensors in place. The numerical model is based on the definition of a heat source term linked to the keyhole geometry predicted by a model of pressure balance using the FEMLAB code. The heat source term is used by the thermo-metallurgical simulation carried out with the finite element code SYSWELD. Kinetics parameters are extracted from dilatometric experiments achieved in welding austenitization conditions at constant cooling rates. (authors)

  3. Method of construction of rational corporate network using the simulation model

    Directory of Open Access Journals (Sweden)

    V.N. Pakhomovа

    2013-06-01

Full Text Available Purpose. Search for new options for the transition from Ethernet technology. Methodology. Physical structuring of the Fast Ethernet network based on hubs and logical structuring of the Fast Ethernet network using switches. Organization of VLANs based on port grouping and in accordance with the IEEE 802.1Q standard. Findings. Options for improving the Ethernet network are proposed, based on the Fast Ethernet and VLAN technologies, using simulation models in the NetCracker and Cisco Packet Tracer packages respectively. Originality. A technique for designing a local area network using VLAN technology is proposed. Practical value. Each of the options for improving the "Dniprozaliznychproekt" network has its advantages. The transition from Ethernet to Fast Ethernet is simple and economical, requiring only one switch, whereas the VLAN organization requires at least two. VLAN technology, however, has the following advantages: reduced network load, isolation of broadcast traffic, changes to the logical network structure without changing its physical structure, and improved network security. The transition from Ethernet to VLAN technology allows the physical topology to be separated from the logical one, and the frame format of the IEEE 802.1Q standard simplifies the deployment of virtual networks in enterprises.

  4. A Neural Network Model for Dynamics Simulation | Bholoa ...

    African Journals Online (AJOL)

University of Mauritius Research Journal, Vol 15, No 1 (2009). A Neural Network Model for Dynamics Simulation. Ajeevsing ...

  5. Fracture Network Modeling and GoldSim Simulation Support

    OpenAIRE

    杉田 健一郎; Dershowiz, W.

    2003-01-01

    During Heisei-14, Golder Associates provided support for JNC Tokai through data analysis and simulation of the MIU Underground Rock Laboratory, participation in Task 6 of the Aspo Task Force on Modelling of Groundwater Flow and Transport, and analysis of repository safety assessment technologies including cell networks for evaluation of the disturbed rock zone (DRZ) and total systems performance assessment (TSPA).

  6. ADVANCED TECHNIQUES FOR RESERVOIR SIMULATION AND MODELING OF NONCONVENTIONAL WELLS

    Energy Technology Data Exchange (ETDEWEB)

    Louis J. Durlofsky; Khalid Aziz

    2004-08-20

    Nonconventional wells, which include horizontal, deviated, multilateral and ''smart'' wells, offer great potential for the efficient management of oil and gas reservoirs. These wells are able to contact larger regions of the reservoir than conventional wells and can also be used to target isolated hydrocarbon accumulations. The use of nonconventional wells instrumented with downhole inflow control devices allows for even greater flexibility in production. Because nonconventional wells can be very expensive to drill, complete and instrument, it is important to be able to optimize their deployment, which requires the accurate prediction of their performance. However, predictions of nonconventional well performance are often inaccurate. This is likely due to inadequacies in some of the reservoir engineering and reservoir simulation tools used to model and optimize nonconventional well performance. A number of new issues arise in the modeling and optimization of nonconventional wells. For example, the optimal use of downhole inflow control devices has not been addressed for practical problems. In addition, the impact of geological and engineering uncertainty (e.g., valve reliability) has not been previously considered. In order to model and optimize nonconventional wells in different settings, it is essential that the tools be implemented into a general reservoir simulator. This simulator must be sufficiently general and robust and must in addition be linked to a sophisticated well model. Our research under this five year project addressed all of the key areas indicated above. The overall project was divided into three main categories: (1) advanced reservoir simulation techniques for modeling nonconventional wells; (2) improved techniques for computing well productivity (for use in reservoir engineering calculations) and for coupling the well to the simulator (which includes the accurate calculation of well index and the modeling of multiphase flow

  7. Distributed Sensor Network Software Development Testing through Simulation

    Energy Technology Data Exchange (ETDEWEB)

    Brennan, Sean M. [Univ. of New Mexico, Albuquerque, NM (United States)

    2003-12-01

The distributed sensor network (DSN) presents a novel and highly complex computing platform with difficulties and opportunities that are just beginning to be explored. The potential of sensor networks extends from monitoring for threat reduction, to conducting instant and remote inventories, to ecological surveys. Developing and testing for robust and scalable applications is currently practiced almost exclusively in hardware. The Distributed Sensors Simulator (DSS) is an infrastructure that allows the user to debug and test software for DSNs independent of hardware constraints. The flexibility of DSS allows developers and researchers to investigate topological, phenomenological, networking, robustness and scaling issues, to explore arbitrary algorithms for distributed sensors, and to defeat those algorithms through simulated failure. The user specifies the topology, the environment, the application, and any number of arbitrary failures; DSS provides the virtual environmental embedding.

  8. Network module detection: Affinity search technique with the multi-node topological overlap measure

    Directory of Open Access Journals (Sweden)

    Horvath Steve

    2009-07-01

Full Text Available Abstract Background Many clustering procedures only allow the user to input a pairwise dissimilarity or distance measure between objects. We propose a clustering method that can input a multi-point dissimilarity measure d(i1, i2, ..., iP) where the number of points P can be larger than 2. The work is motivated by gene network analysis where clusters correspond to modules of highly interconnected nodes. Here, we define modules as clusters of network nodes with high multi-node topological overlap. The topological overlap measure is a robust measure of interconnectedness which is based on shared network neighbors. In previous work, we have shown that the multi-node topological overlap measure yields biologically meaningful results when used as input of network neighborhood analysis. Findings We adapt network neighborhood analysis for the use of module detection. We propose the Module Affinity Search Technique (MAST), which is a generalized version of the Cluster Affinity Search Technique (CAST). MAST can accommodate a multi-node dissimilarity measure. Clusters grow around user-defined or automatically chosen seeds (e.g. hub nodes). We propose both local and global cluster growth stopping rules. We use several simulations and a gene co-expression network application to argue that the MAST approach leads to biologically meaningful results. We compare MAST with hierarchical clustering and partitioning around medoids clustering. Conclusion Our flexible module detection method is implemented in the MTOM software which can be downloaded from the following webpage: http://www.genetics.ucla.edu/labs/horvath/MTOM/
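The multi-node measure used by MTOM is not given in this record, but its pairwise special case — the topological overlap measure in the Zhang–Horvath form, which scores how many network neighbours two nodes share — is compact enough to sketch. The toy adjacency matrix below is an illustrative assumption:

```python
import numpy as np

def topological_overlap(adj):
    """Pairwise topological overlap matrix (Zhang-Horvath form).
    `adj` is a symmetric adjacency matrix with zero diagonal and
    entries in [0, 1]."""
    a = np.asarray(adj, dtype=float)
    l = a @ a                       # shared-neighbour weights l_ij
    k = a.sum(axis=0)               # node connectivities k_i
    kmin = np.minimum.outer(k, k)
    tom = (l + a) / (kmin + 1.0 - a)
    np.fill_diagonal(tom, 1.0)
    return tom

# 4-node toy network: nodes 0 and 1 are linked and share both
# remaining nodes as neighbours, so their overlap is maximal
A = np.array([[0, 1, 1, 1],
              [1, 0, 1, 1],
              [1, 1, 0, 0],
              [1, 1, 0, 0]], float)
tom = topological_overlap(A)
```

High-overlap blocks of this matrix are what module detection methods like MAST grow clusters around.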

  9. Cooperative cognitive radio networking system model, enabling techniques, and performance

    CERN Document Server

    Cao, Bin; Mark, Jon W

    2016-01-01

This SpringerBrief examines the active cooperation between users of Cooperative Cognitive Radio Networking (CCRN), exploring the system model, enabling techniques, and performance. The brief provides a systematic study of active cooperation between primary users and secondary users, i.e., CCRN, followed by discussions of the research issues and challenges in designing spectrum-energy efficient CCRN. As an effort to shed light on the design of spectrum-energy efficient CCRN, the authors model the CCRN based on orthogonal modulation and an orthogonally dual-polarized antenna (ODPA). The resource allocation issues are detailed with respect to both models, in terms of problem formulation, solution approach, and numerical results. Finally, the optimal communication strategies for both primary and secondary users to achieve spectrum-energy efficient CCRN are analyzed.

  10. Simulation of Attacks for Security in Wireless Sensor Network.

    Science.gov (United States)

    Diaz, Alvaro; Sanchez, Pablo

    2016-11-18

    The increasing complexity and low-power constraints of current Wireless Sensor Networks (WSN) require efficient methodologies for network simulation and embedded software performance analysis of nodes. In addition, security is also a very important feature that has to be addressed in most WSNs, since they may work with sensitive data and operate in hostile unattended environments. In this paper, a methodology for security analysis of Wireless Sensor Networks is presented. The methodology allows designing attack-aware embedded software/firmware or attack countermeasures to provide security in WSNs. The proposed methodology includes attacker modeling and attack simulation with performance analysis (node's software execution time and power consumption estimation). After an analysis of different WSN attack types, an attacker model is proposed. This model defines three different types of attackers that can emulate most WSN attacks. In addition, this paper presents a virtual platform that is able to model the node hardware, embedded software and basic wireless channel features. This virtual simulation analyzes the embedded software behavior and node power consumption while it takes into account the network deployment and topology. Additionally, this simulator integrates the previously mentioned attacker model. Thus, the impact of attacks on power consumption and software behavior/execution-time can be analyzed. This provides developers with essential information about the effects that one or multiple attacks could have on the network, helping them to develop more secure WSN systems. This WSN attack simulator is an essential element of the attack-aware embedded software development methodology that is also introduced in this work.

  11. Simulated annealing technique to design minimum cost exchanger

    Directory of Open Access Journals (Sweden)

    Khalfe Nadeem M.

    2011-01-01

Full Text Available Owing to the wide utilization of heat exchangers in industrial processes, their cost minimization is an important target for both designers and users. Traditional design approaches are based on iterative procedures which gradually change the design and geometric parameters to satisfy a given heat duty and constraints. Although well proven, this kind of approach is time consuming and may not lead to a cost-effective design, as no cost criteria are explicitly accounted for. The present study explores the use of a nontraditional optimization technique, simulated annealing (SA), for the design optimization of shell-and-tube heat exchangers from an economic point of view. The optimization procedure involves the selection of the major geometric parameters such as tube diameter, tube length, baffle spacing, number of tube passes, tube layout, type of head, and baffle cut, with minimization of the total annual cost as the design target. The presented simulated annealing technique is simple in concept, has few parameters, and is easy to implement. Furthermore, the SA algorithm finds good-quality solutions quickly, giving the designer more degrees of freedom in the final choice with respect to traditional methods. The methodology takes into account the geometric and operational constraints typically recommended by design codes. Three case studies are presented to demonstrate the effectiveness and accuracy of the proposed algorithm. The SA approach is able to reduce the total cost of the heat exchanger compared with the cost obtained by a previously reported genetic algorithm (GA) approach.
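The full exchanger cost model (tube counts, baffle spacing, pressure-drop correlations) is not in this record, so the bare SA loop is illustrated on a one-variable toy cost instead; the step size, initial temperature, and cooling schedule are illustrative assumptions:

```python
import math
import random

def simulated_annealing(cost, x0, step=0.5, t0=1.0, cooling=0.995,
                        iters=5000, seed=42):
    """Generic simulated-annealing minimiser: perturb the current
    design, always accept improvements, and accept worse moves with
    probability exp(-delta / T) while the temperature T cools."""
    rng = random.Random(seed)
    x, fx = x0, cost(x0)
    best, fbest = x, fx
    t = t0
    for _ in range(iters):
        cand = x + rng.uniform(-step, step)
        fc = cost(cand)
        if fc < fx or rng.random() < math.exp(-(fc - fx) / t):
            x, fx = cand, fc
            if fx < fbest:
                best, fbest = x, fx
        t *= cooling
    return best, fbest

# Toy "cost" with a single optimum at x = 2, standing in for the
# total-annual-cost function of the exchanger
best, fbest = simulated_annealing(lambda x: (x - 2.0) ** 2, x0=10.0)
```

In the real design problem the perturbation step would instead pick a neighbouring combination of discrete geometric parameters, but the accept/reject logic is identical.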

  12. Simulation of Two High Pressure Distribution Network Operation in one-Network Connection

    Directory of Open Access Journals (Sweden)

    Perju Sorin

    2014-09-01

Full Text Available The programs developed by water supply system operators for metering the branches and reducing potable water losses from the distribution network pipes lead to a performance reassessment of these networks. As a result, the energy consumption of the pumping stations should meet the accepted limits. An essential role in evaluating the operation parameters of the network is played by hydraulic modelling, by means of which network performance can be simulated under different scenarios. The present article describes the concept of coupling two high-pressure networks. These networks are supplied by two repumping stations, in which the water flows were drastically reduced due to the present situation.

  13. Calibration Technique of the Irradiated Thermocouple using Artificial Neural Network

    Energy Technology Data Exchange (ETDEWEB)

    Hong, Jin Tae; Joung, Chang Young; Ahn, Sung Ho; Yang, Tae Ho; Heo, Sung Ho; Jang, Seo Yoon [KAERI, Daejeon (Korea, Republic of)

    2016-05-15

To correct the signals, the degradation rate of the sensors needs to be analyzed, and the sensors should be re-calibrated periodically. In particular, because thermocouples instrumented in a nuclear fuel rod degrade owing to the high neutron fluence generated by the nuclear fuel, a periodic re-calibration process is necessary. However, even with re-calibration, the measurement error grows until the next re-calibration. In this study, based on periodically calibrated temperature-voltage data, an interpolation technique using an artificial neural network is introduced to minimize the calibration error of a C-type thermocouple under an irradiation test. The test results show that the calculated voltages derived from the interpolation function agree well with the experimental sampling data, and that they accurately interpolate the voltages at arbitrary temperature and neutron fluence. That is, once the reference data are obtained by experiment, the voltage signal at a given neutron fluence and temperature can be accurately calibrated using an artificial neural network.
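The C-type reference tables and the network architecture used in the study are not in this record, so the interpolation idea is sketched with a synthetic voltage surface over normalized temperature and fluence, fitted by a minimal one-hidden-layer network trained with full-batch gradient descent; all shapes, rates, and the target function are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for the periodically calibrated reference data:
# voltage as a smooth function of normalized temperature and fluence
T = rng.uniform(0.0, 1.0, size=(200, 1))
F = rng.uniform(0.0, 1.0, size=(200, 1))
X = np.hstack([T, F])
y = T * (1.0 - 0.2 * F)          # assumed: output drifts down with fluence

# Minimal one-hidden-layer network trained by gradient descent
W1 = rng.normal(0.0, 1.0, (2, 16)); b1 = np.zeros(16)
W2 = rng.normal(0.0, 0.5, (16, 1)); b2 = np.zeros(1)
lr = 0.05
losses = []
for _ in range(5000):
    H = np.tanh(X @ W1 + b1)      # forward pass
    pred = H @ W2 + b2
    err = pred - y
    losses.append(float(np.mean(err ** 2)))
    dpred = 2.0 * err / len(X)    # backpropagation of the MSE loss
    dW2 = H.T @ dpred; db2 = dpred.sum(axis=0)
    dH = (dpred @ W2.T) * (1.0 - H ** 2)
    dW1 = X.T @ dH; db1 = dH.sum(axis=0)
    W2 -= lr * dW2; b2 -= lr * db2
    W1 -= lr * dW1; b1 -= lr * db1
```

Once trained on the calibration points, the network can be evaluated at any intermediate (temperature, fluence) pair, which is exactly the interpolation role described in the abstract.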

  14. Socialising Health Burden Through Different Network Topologies: A Simulation Study.

    Science.gov (United States)

    Peacock, Adrian; Cheung, Anthony; Kim, Peter; Poon, Simon K

    2017-01-01

An aging population and the expectation of premium-quality health services, combined with the increasing economic burden of the healthcare system, require a paradigm shift toward patient-oriented healthcare. The guardian angel theory described by Szolovits [1] explores the notion of enlisting patients as primary providers of information and motivation to patients with similar clinical histories through social connections. In this study, an agent-based model was developed to explore how individuals are affected through their levels of intrinsic positivity. Ring, point-to-point (paired buddy), and random networks were modelled, with individuals able to send messages to each other given their levels of positivity and motivation. Of the three modelled networks, the ring network provides the most equal, collective improvement in positivity and motivation for all users. Further study into other network topologies should be undertaken in the future.
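The study's finding that a ring equalizes the population while fixed buddy pairs do not can be illustrated with a minimal exchange model: each agent drifts toward the mean of its connections. The agent count, gain, and initial values are illustrative assumptions, not the paper's parameters:

```python
import random

def step_exchange(vals, partners, alpha=0.2):
    """One round of exchange: each agent moves toward the mean
    'positivity' of the agents it is connected to."""
    out = []
    for i, v in enumerate(vals):
        peers = partners[i]
        local = sum(vals[j] for j in peers) / len(peers)
        out.append(v + alpha * (local - v))
    return out

random.seed(0)
n = 12
start = [random.uniform(0, 10) for _ in range(n)]

ring = {i: [(i - 1) % n, (i + 1) % n] for i in range(n)}
buddies = {i: [i + 1 if i % 2 == 0 else i - 1] for i in range(n)}  # fixed pairs

ring_vals, buddy_vals = start, start
for _ in range(100):
    ring_vals = step_exchange(ring_vals, ring)
    buddy_vals = step_exchange(buddy_vals, buddies)

spread = lambda v: max(v) - min(v)
```

The ring drives everyone toward a common level, while each buddy pair only converges to its own pair mean, leaving the population spread large — mirroring the abstract's conclusion about collective improvement.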

  15. Molecular Simulations of Actomyosin Network Self-Assembly and Remodeling

    Science.gov (United States)

    Komianos, James; Popov, Konstantin; Papoian, Garegin; Papoian Lab Team

    Actomyosin networks are an integral part of the cytoskeleton of eukaryotic cells and play an essential role in determining cellular shape and movement. Actomyosin network growth and remodeling in vivo is based on a large number of chemical and mechanical processes, which are mutually coupled and spatially and temporally resolved. To investigate the fundamental principles behind the self-organization of these networks, we have developed a detailed mechanochemical, stochastic model of actin filament growth dynamics, at a single-molecule resolution, where the nonlinear mechanical rigidity of filaments and their corresponding deformations under internally and externally generated forces are taken into account. Our work sheds light on the interplay between the chemical and mechanical processes governing the cytoskeletal dynamics, and also highlights the importance of diffusional and active transport phenomena. Our simulations reveal how different actomyosin micro-architectures emerge in response to varying the network composition. Support from NSF Grant CHE-1363081.

  16. Multiscale Quantum Mechanics/Molecular Mechanics Simulations with Neural Networks.

    Science.gov (United States)

    Shen, Lin; Wu, Jingheng; Yang, Weitao

    2016-10-11

    Molecular dynamics simulation with multiscale quantum mechanics/molecular mechanics (QM/MM) methods is a very powerful tool for understanding the mechanism of chemical and biological processes in solution or enzymes. However, its computational cost can be too high for many biochemical systems because of the large number of ab initio QM calculations. Semiempirical QM/MM simulations have much higher efficiency. Its accuracy can be improved with a correction to reach the ab initio QM/MM level. The computational cost on the ab initio calculation for the correction determines the efficiency. In this paper we developed a neural network method for QM/MM calculation as an extension of the neural-network representation reported by Behler and Parrinello. With this approach, the potential energy of any configuration along the reaction path for a given QM/MM system can be predicted at the ab initio QM/MM level based on the semiempirical QM/MM simulations. We further applied this method to three reactions in water to calculate the free energy changes. The free-energy profile obtained from the semiempirical QM/MM simulation is corrected to the ab initio QM/MM level with the potential energies predicted with the constructed neural network. The results are in excellent accordance with the reference data that are obtained from the ab initio QM/MM molecular dynamics simulation or corrected with direct ab initio QM/MM potential energies. Compared with the correction using direct ab initio QM/MM potential energies, our method shows a speed-up of 1 or 2 orders of magnitude. It demonstrates that the neural network method combined with the semiempirical QM/MM calculation can be an efficient and reliable strategy for chemical reaction simulations.
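The correction scheme above is a form of delta learning: fit only the difference between the cheap and expensive levels of theory, then add the fitted difference to cheap calculations everywhere. The sketch below illustrates that idea on a 1-D toy potential, substituting a small polynomial least-squares fit for the Behler–Parrinello network (the energies, basis, and training split are all illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in energies along a 1-D "reaction coordinate" s:
# a cheap semiempirical-level curve and an expensive ab-initio-level curve
s = np.linspace(-1.0, 1.0, 40)
e_semi = s ** 2                         # low-level potential
e_ai = s ** 2 + 0.3 * np.sin(3 * s)     # high-level potential

# Delta learning: fit only the difference, on a subset of points,
# here with a small polynomial basis and ordinary least squares
train = rng.choice(len(s), size=15, replace=False)
basis = np.vander(s, 8)
coef, *_ = np.linalg.lstsq(basis[train], (e_ai - e_semi)[train], rcond=None)

# Corrected potential everywhere from cheap energies + learned delta
e_corrected = e_semi + basis @ coef
rmse = float(np.sqrt(np.mean((e_corrected - e_ai) ** 2)))
```

Because the difference between the two levels is much smoother than either potential, a small model fitted on few expensive points recovers the high-level curve, which is the source of the order-of-magnitude speed-up reported in the abstract.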

  17. FATIGUE LIFE EVALUATION OF SUSPENSION KNUCKLE USING MULTIBODY SIMULATION TECHNIQUE

    Directory of Open Access Journals (Sweden)

    A.G.A. Rahman

    2012-12-01

    Full Text Available Suspension is part of automotive systems, providing both vehicle control and passenger comfort. The knuckle is an important part within the suspension system, constantly encountering the cyclic loads subjecting it to fatigue failure. This paper presents an evaluation of the fatigue characteristics of a knuckle using multibody simulation (MBS techniques. Load time history extracted from the MBS is used for stress analysis. An actual road profile of road bumps was used as the input to MBS. The stress fluctuations for fatigue simulations are considered with the road profile. The strain-life method is utilized to assess the fatigue life. The instantaneous stress distributions and maximum principal stress are used for fatigue life predictions. Mesh sensitivity analysis has been performed. The results show that the steering link in the knuckle is found to be the most susceptible region for fatigue failure. The number of times the knuckle can manage a road bump at 40 km/hr is determined to be approximately 371 times with a 50% certainty of survival. The proposed method of using the loading time history extracted from MBS simulation for fatigue life estimation is found to be very promising for the accurate evaluation of the performance of suspension system components.

  18. An Efficient Neural Network Based Modeling Method for Automotive EMC Simulation

    Science.gov (United States)

    Frank, Florian; Weigel, Robert

    2011-09-01

    This paper presents a newly developed methodology for VHDL-AMS model integration into SPICE-based EMC simulations. To this end the VHDL-AMS model, which is available in a compiled version only, is characterized under typical loading conditions, and afterwards a neural network based technique is applied to convert characteristic voltage and current data into an equivalent circuit in SPICE syntax. After the explanation of the whole method and the presentation of a newly developed switched state space dynamic neural network model, the entire analysis process is demonstrated using a typical application from automotive industry.

  19. Simulation Tools and Techniques for Analyzing the Impacts of Photovoltaic System Integration

    Science.gov (United States)

    Hariri, Ali

utility simulation software. On the other hand, EMT simulation tools provide high accuracy and visibility over a wide bandwidth of frequencies at the expense of larger processing and memory requirements, limited network size, and long simulation time. Therefore, there is a gap in simulation tools and techniques that can efficiently and effectively identify potential PV impacts. New planning simulation tools are needed in order to accommodate the simulation requirements of new integrated technologies in the electric grid. The dissertation at hand starts by identifying some of the potential impacts that are caused by high PV penetration. A phasor-based quasi-static time series (QSTS) analysis tool is developed in order to study the slow dynamics that are caused by the variations in the PV generation that lead to voltage fluctuations. Moreover, some EMT simulations are performed in order to study the impacts of PV systems on the electric network harmonic levels. These studies provide insights into the type and duration of certain impacts, as well as the conditions that may lead to adverse phenomena. In addition, these studies present an idea about the type of simulation tools that are sufficient for each type of study. After identifying some of the potential impacts, certain planning tools and techniques are proposed. The potential PV impacts may cause certain utilities to refrain from integrating PV systems into their networks. However, each electric network has a certain limit beyond which the impacts become substantial and may adversely interfere with the system operation and the equipment along the feeder; this limit is referred to as the hosting limit (or hosting capacity). Therefore, it is important for utilities to identify the PV hosting limit on a specific electric network in order to safely and confidently integrate the maximum possible PV systems. In the following dissertation, two approaches have been proposed for identifying the hosting limit: 1. Analytical
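The hosting-limit idea can be sketched with a deliberately simplified sweep: increase PV output on a hypothetical radial feeder until the feeder-end voltage first violates its upper limit. The feeder impedance, load, power base, and voltage-rise approximation below are all illustrative assumptions, not values from the dissertation:

```python
def hosting_limit(r_pu=0.05, v0=1.0, v_max=1.05, step_kw=10,
                  load_kw=200, base_kw=1000.0):
    """Sweep PV size upward and return the largest PV output (kW) for
    which the feeder-end voltage stays within v_max.  Voltage rise is
    the textbook approximation dV ~= R * P_net / V0 in per-unit
    (feeder reactance neglected for brevity)."""
    pv = 0
    while True:
        p_net = (pv - load_kw) / base_kw   # reverse power flow when PV > load
        dv = r_pu * p_net / v0             # simplified voltage rise
        if v0 + dv > v_max + 1e-9:         # first violation of the limit
            return pv - step_kw
        pv += step_kw

limit = hosting_limit()   # -> 1200 (kW) with these illustrative numbers
```

A QSTS study replaces this one-line voltage approximation with a full time-series power flow, but the outer search for the first violating penetration level is the same.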

  20. SIMULATION OF WIRELESS SENSOR NETWORK WITH HYBRID TOPOLOGY

    Directory of Open Access Journals (Sweden)

    J. Jaslin Deva Gifty

    2016-03-01

Full Text Available The low-rate Wireless Personal Area Network (WPAN) defined by the IEEE 802.15.4 standard was developed to support low-data-rate, low-power applications. Zigbee wireless sensor networks (WSNs) build the network and application layers on top of IEEE 802.15.4. A Zigbee network can be configured in a star, tree or mesh topology, and performance varies from topology to topology: parameters such as network lifetime, energy consumption, throughput, delay in data delivery and sensor field coverage area all depend on the network topology. In this paper, hybrid topologies built from two possible combinations, star-tree and star-mesh, are simulated to verify communication reliability. This approach combines the benefits of both network models. The parameters jitter, delay and throughput are measured for these scenarios. Further, the impact of the MAC parameters beacon order (BO) and superframe order (SO) on low power consumption and high channel utilization is analysed for the star, tree and mesh topologies in beacon-disabled and beacon-enabled modes under varying CBR traffic loads.
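The BO/SO trade-off mentioned above follows directly from the IEEE 802.15.4 superframe arithmetic: the beacon interval scales as 2^BO, the active superframe as 2^SO, so the duty cycle is 2^(SO-BO). A small sketch using the standard's constants for the 2.4 GHz PHY:

```python
A_BASE_SUPERFRAME_DURATION = 960   # symbols (60 symbols/slot * 16 slots)
SYMBOL_TIME_US = 16                # 2.4 GHz O-QPSK PHY symbol duration

def superframe_timing(bo, so):
    """Beacon interval, superframe duration (both in microseconds) and
    duty cycle for IEEE 802.15.4 beacon-enabled mode, 0 <= SO <= BO <= 14."""
    assert 0 <= so <= bo <= 14
    bi_us = A_BASE_SUPERFRAME_DURATION * (2 ** bo) * SYMBOL_TIME_US
    sd_us = A_BASE_SUPERFRAME_DURATION * (2 ** so) * SYMBOL_TIME_US
    return bi_us, sd_us, sd_us / bi_us

bi, sd, duty = superframe_timing(bo=6, so=3)   # duty cycle 2^(3-6) = 12.5 %
```

Lowering SO relative to BO cuts power consumption (nodes sleep outside the active portion) at the cost of channel utilization, which is exactly the trade-off the paper analyses.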

  1. Hybrid neural network bushing model for vehicle dynamics simulation

    Energy Technology Data Exchange (ETDEWEB)

    Sohn, Jeong Hyun [Pukyong National University, Busan (Korea, Republic of); Lee, Seung Kyu [Hyosung Corporation, Changwon (Korea, Republic of); Yoo, Wan Suk [Pusan National University, Busan (Korea, Republic of)

    2008-12-15

Although the linear model has been widely used as the bushing model in vehicle suspension systems, it cannot express the nonlinear characteristics of a bushing with respect to amplitude and frequency. An artificial neural network model was suggested to capture the hysteretic responses of bushings; this model, however, often diverges due to the uncertainties of the neural network under unexpected excitation inputs. In this paper, a hybrid neural network bushing model combining a linear model and a neural network is suggested. The linear model represents the linear stiffness and damping effects, and the artificial neural network algorithm takes into account the hysteretic responses. A rubber test was performed to capture the bushing characteristics, applying sine excitation with different frequencies and amplitudes. Random test results were used to update the weighting factors of the neural network model. It is shown that the proposed model is more robust than a simple neural network model under step excitation input. A full car simulation was carried out to verify the proposed bushing models; the hybrid model results were almost identical to those of the linear model under several maneuvers.

  2. Modeling and simulation of the USAVRE network and radiology operations

    Science.gov (United States)

    Martinez, Ralph; Bradford, Daniel Q.; Hatch, Jay; Sochan, John; Chimiak, William J.

    1998-07-01

    The U.S. Army Medical Command, led by Brooke Army Medical Center, has embarked on a visionary project. The U.S. Army Virtual Radiology Environment (USAVRE) is a CONUS-based network that connects all of the Army's major medical centers and Regional Medical Commands (RMC). The purpose of the USAVRE is to improve the quality, accessibility, and cost of radiology services in the Army through state-of-the-art medical imaging, computer, and networking technologies. The USAVRE contains multimedia viewing workstations and database archive systems based on a distributed computing environment using Common Object Request Broker Architecture (CORBA) middleware protocols. The underlying telecommunications network is an ATM-based backbone that connects the RMC regional networks and the PACS networks at medical centers and RMC clinics. The project is a collaborative effort between Army, university, and industry centers with expertise in teleradiology and Global PACS applications. This paper describes a model and simulation of the USAVRE for performance evaluation purposes. As a first step, we present the results of a Technology Assessment and Requirements Analysis (TARA) -- an analysis of the workload in Army radiology departments, their equipment, and their staffing. Using the TARA data and other workload information, we have developed a detailed analysis of the workload and workflow patterns of our Medical Treatment Facilities and are embarking on the modeling and simulation strategies that will form the foundation for the VRE network. The workload analysis is performed for each radiology modality at an RMC site and covers the number of examinations per modality, the type, number, and size of images per exam, and the frequency of store-and-forward cases, second readings, and interactive consultation cases. These parameters are translated into the model described below, which is hierarchical in nature.

  3. Simulation of wind turbine wakes using the actuator line technique.

    Science.gov (United States)

    Sørensen, Jens N; Mikkelsen, Robert F; Henningson, Dan S; Ivanell, Stefan; Sarmast, Sasan; Andersen, Søren J

    2015-02-28

    The actuator line technique was introduced as a numerical tool to be employed in combination with large-eddy simulations to enable the study of wakes and wake interaction in wind farms. The technique is today widely used for studying basic features of wakes as well as for making performance predictions of wind farms. In this paper, we give a short introduction to the wake problem and the actuator line methodology, and present a study in which the technique is employed to determine the near-wake properties of wind turbines. The presented results include a comparison with experimental measurements of the wake characteristics of the flow around a three-bladed model wind turbine, the development of a simple analytical formula for determining the near-wake length behind a wind turbine, and a detailed investigation of wake structures based on proper orthogonal decomposition analysis of numerically generated snapshots of the wake.

  4. An enhanced simulated annealing routing algorithm for semi-diagonal torus network

    Science.gov (United States)

    Adzhar, Noraziah; Salleh, Shaharuddin

    2017-09-01

    Multiprocessing is a key technology for meeting the high demand for solving complex problems. A multiprocessing system may contain many replicated processor-memory pairs, also called processing nodes, connected to one another through an interconnection network and passing messages using a standard message-passing mechanism. In this paper, we present a routing algorithm based on an enhanced simulated annealing technique to provide connections between nodes in a semi-diagonal torus (SD-Torus) network. This network is both symmetric and regular, which makes it very beneficial in the implementation process. The main objective is to maximize the number of established connections (nets) between nodes in the SD-Torus network, with each net routed along the shortest path possible. We start by designing a shortest-path algorithm based on Dijkstra's method. While this algorithm is guaranteed to find the shortest path for each single net, if one exists, each routed net forms an obstacle for later paths. This increases the complexity of routing later nets and makes their routes longer than optimal, or sometimes impossible to complete. The solution is further refined by re-routing all nets in different orders using the simulated annealing method. In our simulation program, the proposed algorithm succeeded in performing complete routing of up to 40 nets among 81 nodes in a 9×9 SD-Torus network.
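As an illustrative sketch (not the authors' code), the two-stage idea described above (sequential Dijkstra-style routing followed by simulated annealing over the net ordering) can be prototyped on a plain rectangular grid; the grid size, the penalty weight for unrouted nets, and the cooling schedule below are arbitrary choices:

```python
import math
import random
from collections import deque

def shortest_path(w, h, src, dst, blocked):
    """BFS shortest path on a unit-weight grid (a special case of Dijkstra)."""
    prev = {src: None}
    q = deque([src])
    while q:
        cur = q.popleft()
        if cur == dst:
            path = []
            while cur is not None:
                path.append(cur)
                cur = prev[cur]
            return path[::-1]
        x, y = cur
        for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if (0 <= nxt[0] < w and 0 <= nxt[1] < h
                    and nxt not in blocked and nxt not in prev):
                prev[nxt] = cur
                q.append(nxt)
    return None

def route_all(order, nets, w, h):
    """Route nets sequentially; each routed path blocks later ones."""
    blocked, routed, total_len = set(), 0, 0
    for i in order:
        src, dst = nets[i]
        path = shortest_path(w, h, src, dst, blocked - {src, dst})
        if path:
            routed += 1
            total_len += len(path) - 1
            blocked |= set(path)
    return routed, total_len

def anneal(nets, w, h, t0=5.0, cooling=0.95, steps=300, seed=1):
    """Refine the routing order with simulated annealing; returns the best cost found."""
    rng = random.Random(seed)
    order = list(range(len(nets)))

    def cost(o):
        routed, length = route_all(o, nets, w, h)
        return (len(nets) - routed) * 1000 + length  # unrouted nets dominate

    best = cur = cost(order)
    t = t0
    for _ in range(steps):
        i, j = rng.sample(range(len(order)), 2)      # propose: swap two nets
        order[i], order[j] = order[j], order[i]
        new = cost(order)
        if new <= cur or rng.random() < math.exp((cur - new) / t):
            cur = new                                # accept (always if better)
            best = min(best, cur)
        else:
            order[i], order[j] = order[j], order[i]  # reject: undo the swap
        t *= cooling
    return best
```

The Metropolis acceptance rule lets occasional worse orderings through early on, which is what allows the annealer to escape orderings in which an early net walls off a later one.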

  5. Agent-Based Simulation Analysis for Network Formation

    OpenAIRE

    神原, 李佳; 林田, 智弘; 西﨑, 一郎; 片桐, 英樹

    2009-01-01

    In the mathematical models of network formation by Bala and Goyal (2000), it is shown that a star network is the strict Nash equilibrium. However, laboratory experiments with human subjects by Berninghaus et al. (2007), based on the model of Bala and Goyal, indicate that players reach a strict Nash equilibrium and then deviate from it. In this paper, an agent-based simulation model in which artificial adaptive agents have mechanisms of decision making and learning based on neural...

  6. A Comparison of Techniques for Camera Selection and Hand-Off in a Video Network

    Science.gov (United States)

    Li, Yiming; Bhanu, Bir

    Video networks are becoming increasingly important for solving many real-world problems. Multiple video sensors require collaboration when performing various tasks. One of the most basic tasks is the tracking of objects, which requires mechanisms to select a camera for a certain object and hand-off this object from one camera to another so as to accomplish seamless tracking. In this chapter, we provide a comprehensive comparison of current and emerging camera selection and hand-off techniques. We consider geometry-, statistics-, and game theory-based approaches and provide both theoretical and experimental comparison using centralized and distributed computational models. We provide simulation and experimental results using real data for various scenarios of a large number of cameras and objects for in-depth understanding of strengths and weaknesses of these techniques.

  7. Location Estimation in Wireless Sensor Networks Using Spring-Relaxation Technique

    Directory of Open Access Journals (Sweden)

    Qing Zhang

    2010-05-01

    Full Text Available Accurate and low-cost autonomous self-localization is a critical requirement for various applications of large-scale distributed wireless sensor networks (WSN). Because of the massive deployment of sensors, explicit measurements based on specialized localization hardware such as the Global Positioning System (GPS) are not practical. In this paper, we propose a low-cost WSN localization solution. Our design uses received signal strength indicators for ranging, lightweight distributed algorithms based on the spring-relaxation technique for location computation, and a cooperative approach to achieve a given location estimation accuracy with a low number of nodes with known locations. We provide analysis to show the suitability of the spring-relaxation technique for WSN localization with the cooperative approach, and perform simulation experiments to illustrate its localization accuracy.

  8. Location estimation in wireless sensor networks using spring-relaxation technique.

    Science.gov (United States)

    Zhang, Qing; Foh, Chuan Heng; Seet, Boon-Chong; Fong, A C M

    2010-01-01

    Accurate and low-cost autonomous self-localization is a critical requirement for various applications of large-scale distributed wireless sensor networks (WSN). Because of the massive deployment of sensors, explicit measurements based on specialized localization hardware such as the Global Positioning System (GPS) are not practical. In this paper, we propose a low-cost WSN localization solution. Our design uses received signal strength indicators for ranging, lightweight distributed algorithms based on the spring-relaxation technique for location computation, and a cooperative approach to achieve a given location estimation accuracy with a low number of nodes with known locations. We provide analysis to show the suitability of the spring-relaxation technique for WSN localization with the cooperative approach, and perform simulation experiments to illustrate its localization accuracy.
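The spring-relaxation idea can be sketched in a toy form (this is illustrative, not the authors' implementation): each ranging link acts as a spring whose rest length is the measured distance, and every node with an unknown position repeatedly moves a small step along the net spring force acting on it. The node names, step size, iteration count and field size below are arbitrary assumptions.

```python
import math
import random

def spring_relaxation(anchors, measured, unknowns, iters=2000, step=0.1, seed=0):
    """
    anchors:   {node: (x, y)} nodes with known positions
    measured:  {(a, b): distance} ranging estimates (e.g. derived from RSSI)
    unknowns:  list of nodes whose positions must be estimated
    """
    rng = random.Random(seed)
    pos = dict(anchors)
    for n in unknowns:                           # random initial guesses
        pos[n] = (rng.uniform(0, 10), rng.uniform(0, 10))
    links = [(a, b, d) for (a, b), d in measured.items()]
    for _ in range(iters):
        force = {n: [0.0, 0.0] for n in unknowns}
        for a, b, d in links:
            (xa, ya), (xb, yb) = pos[a], pos[b]
            cur = math.hypot(xb - xa, yb - ya) or 1e-9
            # spring pulls the nodes together when stretched (cur > d),
            # pushes them apart when compressed (cur < d)
            f = (cur - d) / cur
            if a in force:
                force[a][0] += f * (xb - xa); force[a][1] += f * (yb - ya)
            if b in force:
                force[b][0] += f * (xa - xb); force[b][1] += f * (ya - yb)
        for n in unknowns:                       # relax along the net force
            x, y = pos[n]
            pos[n] = (x + step * force[n][0], y + step * force[n][1])
    return pos
```

With three non-collinear anchors and consistent ranges, the iteration behaves like gradient descent on the squared range residuals and settles on the consistent position.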

  9. Toward a Practical Technique to Halt Multiple Virus Outbreaks on Computer Networks

    OpenAIRE

    Hole, Kjell Jørgen

    2012-01-01

    The author analyzes a technique to prevent multiple simultaneous virus epidemics on any vulnerable computer network with inhomogeneous topology. The technique immunizes a small fraction of the computers and utilizes diverse software platforms to halt the virus outbreaks. The halting technique is of practical interest since a network's detailed topology need not be known.

  10. Correlated EEG Signals Simulation Based on Artificial Neural Networks.

    Science.gov (United States)

    Tomasevic, Nikola M; Neskovic, Aleksandar M; Neskovic, Natasa J

    2017-08-01

    In recent years, simulation of human electroencephalogram (EEG) data has found an important role in the medical domain and neuropsychology. In this paper, a novel approach to the simulation of two cross-correlated EEG signals is proposed, based on the principles of artificial neural networks (ANN). Contrary to existing EEG data simulators, the ANN-based approach relies solely on experimentally acquired EEG data. More precisely, measured EEG data were used to optimize the simulator, which consisted of two ANN models (each responsible for generating one EEG sequence). To acquire the EEG recordings, a measurement campaign was carried out on a healthy awake adult with no cognitive, physical or mental load. For the evaluation of the proposed approach, a comprehensive quantitative and qualitative statistical analysis was performed considering the probability distribution, correlation properties and spectral characteristics of the generated EEG processes. The obtained results clearly indicate satisfactory agreement with the measured data.

  11. Evaluating drilling and suctioning technique in a mastoidectomy simulator.

    Science.gov (United States)

    Sewell, Christopher; Morris, Dan; Blevins, Nikolas H; Barbagli, Federico; Salisbury, Kenneth

    2007-01-01

    This paper presents several new metrics related to bone removal and suctioning technique in the context of a mastoidectomy simulator. The expertise with which decisions are made as to which regions of bone to remove and which to leave intact is evaluated by building a Naïve Bayes classifier using training data from known experts and novices. Since the bone voxel mesh is very large, and many voxels are always either removed or not removed regardless of expertise, the mutual information with the expertise label was calculated for each voxel and only the most informative voxels were used in the classifier. Leave-one-out cross-validation showed a high correlation of the calculated expert probabilities with scores assigned by instructors. Additional metrics described in this paper assess smoothness of drill strokes, proper drill burr selection, sufficiency of suctioning, two-handed tool coordination, and application of appropriate force and velocity magnitudes as functions of distance from critical structures.
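The feature-selection and classification pipeline described above can be sketched generically (this is not the simulator's code): score each binary removed/not-removed voxel feature by its mutual information with the expert/novice label, keep the top-scoring voxels, and train a Bernoulli Naïve Bayes classifier on them. The toy data layout (rows of 0/1 voxel states) is an assumption.

```python
import math

def mutual_information(feature_col, labels):
    """MI (in bits) between one binary voxel feature and the expert/novice label."""
    n = len(labels)
    mi = 0.0
    for f in (0, 1):
        for y in set(labels):
            pxy = sum(1 for v, l in zip(feature_col, labels) if v == f and l == y) / n
            px = sum(1 for v in feature_col if v == f) / n
            py = sum(1 for l in labels if l == y) / n
            if pxy > 0:
                mi += pxy * math.log2(pxy / (px * py))
    return mi

def top_k_voxels(X, labels, k):
    """Rank voxel indices by mutual information with the label; keep the top k."""
    scores = [(mutual_information([row[j] for row in X], labels), j)
              for j in range(len(X[0]))]
    return [j for _, j in sorted(scores, reverse=True)[:k]]

def train_nb(X, labels, voxels):
    """Per-class Bernoulli likelihoods with Laplace smoothing."""
    model = {}
    for y in set(labels):
        rows = [r for r, l in zip(X, labels) if l == y]
        probs = [(sum(r[j] for r in rows) + 1) / (len(rows) + 2) for j in voxels]
        model[y] = (len(rows) / len(X), probs)
    return model

def predict_nb(model, row, voxels):
    """Pick the class with the highest log-posterior for one trial."""
    best, best_lp = None, -float('inf')
    for y, (prior, probs) in model.items():
        lp = math.log(prior)
        for p, j in zip(probs, voxels):
            lp += math.log(p if row[j] else 1 - p)
        if lp > best_lp:
            best, best_lp = y, lp
    return best
```

Voxels that every trainee removes (or never removes) carry zero mutual information and drop out of the ranking automatically, which is exactly the pruning motivation stated in the abstract.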

  12. Analytical decoupling techniques for fully implicit reservoir simulation

    Science.gov (United States)

    Qiao, Changhe; Wu, Shuhong; Xu, Jinchao; Zhang, Chen-Song

    2017-05-01

    This paper examines linear algebraic solvers for a given general purpose compositional simulator. In particular, the decoupling stage of the constraint pressure residual (CPR) preconditioner for linear systems arising from the fully implicit scheme is evaluated. An asymptotic analysis of the convergence behavior is given when Δt approaches zero. Based on this analysis, we propose an analytical decoupling technique, from which the pressure equation is directly related to an elliptic equation and can be solved efficiently. We show that this method ensures good convergence behavior of the algebraic solvers in a two-stage CPR-type preconditioner. We also propose a semi-analytical decoupling strategy that combines the analytical method and alternate block factorization method. Numerical experiments demonstrate the superior performance of the analytical and semi-analytical decoupling methods compared to existing methods.

  13. Design and Simulation Analysis for Integrated Vehicle Chassis-Network Control System Based on CAN Network

    Directory of Open Access Journals (Sweden)

    Wei Yu

    2016-01-01

    Full Text Available Because the subsystems used in vehicle chassis control serve different functions, a hierarchical control strategy admits many kinds of network topology. Following the hierarchical control principle, this research puts forward an integrated chassis control strategy based on a supervision mechanism; the purpose is to examine how the integrated control architecture affects system control performance once the CAN network intervenes. Based on the principles of hierarchical and fuzzy control, a fuzzy controller is designed to monitor and coordinate the ESP, AFS, and ARS subsystems, and the IVC system is constructed from the upper supervisory controller and the three subcontrol systems on the Simulink platform. The network topology of the IVC system is proposed, and the IVC communication matrix based on CAN network communication is designed. With the common sensors and the subcontrollers as independent CAN network nodes, the effects of network-induced delay and packet loss rate on system control performance are studied by simulation. The results show that the simulation method can be used for designing the communication network of the vehicle.

  14. Limitations of 14 MeV neutron simulation techniques

    Science.gov (United States)

    Kley, W.; Bishop, G. R.; Sinha, A.

    1988-07-01

    A D-T fusion cycle produces five times more neutrons per unit of energy released than a fission cycle, with about twice the damage energy and the capability to produce ten times more hydrogen, helium and transmutation products than fission neutrons. These factors determine, together with other parameters, the lifetime of the construction materials of low plasma-density fusion reactors (tokamak, tandem-mirror, etc.), which require a first wall. For the economic feasibility of fusion power reactors, the first wall and blanket materials must withstand a dose approaching 300 to 400 dpa. Arguments are presented demonstrating that today's simulation techniques using existing fission reactors and charged particle beams are excellent tools for studying the underlying basic physical phenomena of the evolving damage structures, but are not sufficient to provide a valid technological database for the design of economic fusion power reactors. It is shown that an optimized spallation neutron source based on a continuous beam of 600 MeV, 6 mA protons is suitable to simulate first-wall conditions. Comparing it with FMIT, the 35 MeV, 100 mA D+-Li neutron source, we arrive at the figure of merit FM = (dpa·volume)_EURAC / (dpa·volume)_FMIT = 111, reflecting the fact that the proton beam generates about 100 times more neutrons than the deuteron beam in FMIT for the same beam power.

  15. Artificial neural network based approach to EEG signal simulation.

    Science.gov (United States)

    Tomasevic, Nikola M; Neskovic, Aleksandar M; Neskovic, Natasa J

    2012-06-01

    In this paper, a new approach to electroencephalogram (EEG) signal simulation based on artificial neural networks (ANN) is proposed. The aim was to simulate spontaneous human EEG background activity based solely on experimentally acquired EEG data. Therefore, an EEG measurement campaign was conducted on a healthy awake adult in order to obtain an adequate ANN training data set. To demonstrate the performance of the ANN-based approach, comparisons were made against an autoregressive moving average (ARMA) filtering based method. Comprehensive quantitative and qualitative statistical analysis showed clearly that the EEG process obtained by the proposed method was in satisfactory agreement with the one obtained by measurements.

  16. Digitalization and networking of analog simulators and portal images.

    Science.gov (United States)

    Pesznyák, Csilla; Zaránd, Pál; Mayer, Arpád

    2007-03-01

    Many departments have analog simulators and irradiation facilities (especially cobalt units) without electronic portal imaging, and import of the images into the R&V (Record & Verify) system is required. Simulator images are grabbed, while portal films are scanned using a laser scanner, and both are converted into DICOM RT (Digital Imaging and Communications in Medicine Radiotherapy) images. The image intensifier output of a simulator and portal films are converted to DICOM RT images and used in clinical practice. The simulator software was developed in cooperation at the authors' hospital. The digitalization of analog simulators is a valuable upgrade in clinical use, replacing the screen-film technique. Film scanning and digitalization permit electronic archiving of films, and conversion into DICOM RT images is a precondition for importing into the R&V system.

  17. A Comparative Study of Anomaly Detection Techniques for Smart City Wireless Sensor Networks

    Directory of Open Access Journals (Sweden)

    Victor Garcia-Font

    2016-06-01

    Full Text Available In many countries around the world, smart cities are becoming a reality. These cities contribute to improving citizens’ quality of life by providing services that are normally based on data extracted from wireless sensor networks (WSN) and other elements of the Internet of Things. Additionally, public administration uses these smart city data to increase its efficiency, to reduce costs and to provide additional services. However, the information received at smart city data centers is not always accurate, because WSNs are sometimes prone to error and are exposed to physical and computer attacks. In this article, we use real data from the smart city of Barcelona to simulate WSNs and implement typical attacks. Then, we compare frequently used anomaly detection techniques to disclose these attacks. We evaluate the algorithms under different requirements on the available network status information. As a result of this study, we conclude that one-class Support Vector Machines is the most appropriate technique. We achieve a true positive rate at least 56% higher than the rates achieved with the other compared techniques in a scenario with a maximum false positive rate of 5%, and at least 26% higher in a scenario with a false positive rate of 15%.

  18. A Comparative Study of Anomaly Detection Techniques for Smart City Wireless Sensor Networks.

    Science.gov (United States)

    Garcia-Font, Victor; Garrigues, Carles; Rifà-Pous, Helena

    2016-06-13

    In many countries around the world, smart cities are becoming a reality. These cities contribute to improving citizens' quality of life by providing services that are normally based on data extracted from wireless sensor networks (WSN) and other elements of the Internet of Things. Additionally, public administration uses these smart city data to increase its efficiency, to reduce costs and to provide additional services. However, the information received at smart city data centers is not always accurate, because WSNs are sometimes prone to error and are exposed to physical and computer attacks. In this article, we use real data from the smart city of Barcelona to simulate WSNs and implement typical attacks. Then, we compare frequently used anomaly detection techniques to disclose these attacks. We evaluate the algorithms under different requirements on the available network status information. As a result of this study, we conclude that one-class Support Vector Machines is the most appropriate technique. We achieve a true positive rate at least 56% higher than the rates achieved with the other compared techniques in a scenario with a maximum false positive rate of 5%, and at least 26% higher in a scenario with a false positive rate of 15%.
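The study's best performer is a one-class SVM, which requires an ML library; as a minimal library-free stand-in (explicitly not the paper's method), the sketch below trains a per-feature z-score detector on attack-free sensor readings and picks the score threshold from clean validation data so that a target false positive rate is respected, mirroring the 5%/15% FPR evaluation scenarios. All data shapes and parameters are illustrative assumptions.

```python
def fit_detector(train_rows):
    """Learn per-feature mean/std from attack-free training readings."""
    n, d = len(train_rows), len(train_rows[0])
    mean = [sum(r[j] for r in train_rows) / n for j in range(d)]
    std = [(sum((r[j] - mean[j]) ** 2 for r in train_rows) / n) ** 0.5 or 1e-9
           for j in range(d)]
    return mean, std

def score(row, model):
    """Anomaly score: the largest per-feature z-score of the reading."""
    mean, std = model
    return max(abs(x - m) / s for x, m, s in zip(row, mean, std))

def threshold_for_fpr(model, clean_rows, max_fpr=0.05):
    """Choose the score threshold that keeps the false positive rate
    on clean validation data at or below max_fpr."""
    scores = sorted(score(r, model) for r in clean_rows)
    idx = min(int(len(scores) * (1 - max_fpr)), len(scores) - 1)
    return scores[idx]
```

Once a threshold is fixed this way, the true positive rate on simulated attacks becomes the single comparison number, which is how the paper's detectors are ranked at each FPR budget.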

  19. A Monte Carlo simulation technique to determine the optimal portfolio

    Directory of Open Access Journals (Sweden)

    Hassan Ghodrati

    2014-03-01

    Full Text Available During the past few years, there have been several studies on portfolio management. A primary concern on any stock market is detecting the risk associated with various assets. One recognized method for measuring, forecasting, and managing risk is Value at Risk (VaR), which has drawn much attention from financial institutions in recent years. VaR is a method for recognizing and evaluating risk using standard statistical techniques, and it has increasingly been applied in other fields as well. The present study measured the value at risk of 26 companies from the chemical industry on the Tehran Stock Exchange over the period 2009-2011 using Monte Carlo simulation at the 95% confidence level. The variable used was the daily return resulting from daily stock price changes. Moreover, the optimal investment weight of each selected stock was determined using a hybrid of the Markowitz and Winker models. The results showed that the maximum loss would not exceed 1,259,432 Rials for the following day at the 95% confidence level.
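A minimal Monte Carlo VaR sketch (assuming Gaussian daily returns, an assumption the study does not necessarily make): fit the mean and standard deviation to historical daily returns, simulate many one-day outcomes, and read the loss off at the (1 - confidence) quantile.

```python
import random
import statistics

def monte_carlo_var(daily_returns, confidence=0.95, n_sims=20000, seed=7):
    """One-day Value at Risk via Monte Carlo: fit a Gaussian to historical
    daily returns, simulate many one-day returns, and report the loss at
    the (1 - confidence) quantile as a positive number."""
    mu = statistics.mean(daily_returns)
    sigma = statistics.stdev(daily_returns)
    rng = random.Random(seed)
    simulated = sorted(rng.gauss(mu, sigma) for _ in range(n_sims))
    return -simulated[int(n_sims * (1 - confidence))]
```

In Rial terms, the portfolio VaR is this return quantile multiplied by the position value; the portfolio weights themselves come from the separate Markowitz-style optimization mentioned in the abstract.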

  20. Frequency and motivational state: evolutionary simulations suggest an adaptive function for network oscillations

    NARCIS (Netherlands)

    Heerebout, B.T.; Phaf, R.H.; Taatgen, N.A.; van Rijn, H.

    2009-01-01

    Evolutionary simulations of foraging agents, controlled by artificial neural networks, unexpectedly yielded oscillating node activations in the networks. The agents had to navigate a virtual environment to collect food while avoiding predation. Between generations their neural networks were

  1. Intrusion Detection Systems Based on Artificial Intelligence Techniques in Wireless Sensor Networks

    OpenAIRE

    Nabil Ali Alrajeh; Lloret, J.

    2013-01-01

    Intrusion detection systems (IDS) are regarded as the second line of defense against network anomalies and threats, and play an important role in network security. Many techniques are used to design IDSs for specific scenarios and applications. Artificial intelligence techniques are widely used for threat detection. This paper presents a critical study of genetic algorithm, artificial immune, and artificial neural network (ANN) based IDS techniques used in wireless sensor netw...

  2. A simulated annealing approach for redesigning a warehouse network problem

    Science.gov (United States)

    Khairuddin, Rozieana; Marlizawati Zainuddin, Zaitul; Jiun, Gan Jia

    2017-09-01

    Nowadays, several companies consider downsizing their distribution networks in ways that involve consolidation or phase-out of some of their current warehousing facilities, due to increasing competition, mounting cost pressure and the advantage of economies of scale. Consequently, changes in the economic situation after a certain period of time require an adjustment of the network model in order to obtain the optimal cost under current economic conditions. This paper develops a mixed-integer linear programming model for a two-echelon warehouse network redesign problem with a capacitated plant and uncapacitated warehouses. The main contribution of this study is the consideration of capacity constraints for existing warehouses. A simulated annealing algorithm is proposed to tackle the model. The numerical results showed that the proposed model and solution method are practical.

  3. Computer simulation of randomly cross-linked polymer networks

    CERN Document Server

    Williams, T P

    2002-01-01

    In this work, Monte Carlo and stochastic dynamics computer simulations of mesoscale model randomly cross-linked networks were undertaken. Task-parallel implementations of the lattice Monte Carlo bond fluctuation model and the Kremer-Grest stochastic dynamics bead-spring continuum model were designed and used for this purpose. Lattice and continuum precursor melt systems were prepared and then cross-linked to varying degrees. The resultant networks were used to study structural changes during deformation and relaxation dynamics. The effects of a random network topology, featuring a polydisperse distribution of strand lengths and an abundance of pendant chain ends, were qualitatively compared to recently published work. A preliminary investigation into the effects of temperature on the structural and dynamical properties was also undertaken. Structural changes during isotropic swelling and uniaxial deformation revealed a pronounced non-affine deformation dependent on the degree of cross-linking. Fractal heterogeneiti...

  4. NCC Simulation Model: Simulating the operations of the network control center, phase 2

    Science.gov (United States)

    Benjamin, Norman M.; Paul, Arthur S.; Gill, Tepper L.

    1992-12-01

    The simulation of the Network Control Center (NCC) is in its second phase of development, which extends the work performed in phase one. Phase one concentrated on the computer systems and the interconnecting network; the focus of phase two is the implementation of the network message dialogues and the resources controlled by the NCC. These resources are requested, initiated, monitored and analyzed via network messages. In the NCC, network messages take the form of packets routed across the network; these packets are generated, encoded, decoded and processed by the network host processors, which generate and service the message traffic on the network that connects them. As a result, the message traffic is used to characterize the work done by the NCC and the connected network. Phase one of the model represented the NCC as a network of bidirectional single-server queues and message-generating sources: the generators represented the external segment processors, and the server-based queues represented the host processors. The NCC model thus consists of the internal and external processors that generate message traffic on the network linking these hosts. To fully realize the objective of phase two, it is necessary to identify and model the processes in each internal processor. These processes live in the operating systems of the internal host computers and handle tasks such as high-speed message exchange, ISN and NFE interfacing, event monitoring, network monitoring, and message logging; interprocess communication is achieved through operating system facilities. The overall performance of a host is determined by its ability to service messages generated by both internal and external processors.
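The phase-one representation, message generators feeding single-server queues, can be sketched as a simple event-driven simulation of one host (illustrative only; the arrival and service rates and the message count are arbitrary assumptions):

```python
import random

def simulate_host(arrival_rate, service_rate, n_msgs=50000, seed=3):
    """Event-driven simulation of one host processor modeled as a
    single-server FIFO queue with Poisson message arrivals and
    exponential service times; returns the mean queueing wait."""
    rng = random.Random(seed)
    t, server_free, total_wait = 0.0, 0.0, 0.0
    for _ in range(n_msgs):
        t += rng.expovariate(arrival_rate)   # next message is generated
        start = max(t, server_free)          # wait if the server is busy
        total_wait += start - t
        server_free = start + rng.expovariate(service_rate)
    return total_wait / n_msgs
```

As a sanity check, for arrival rate 0.5 and service rate 1.0 the M/M/1 formula predicts a mean wait of lambda / (mu * (mu - lambda)) = 1.0, and a long run of the simulation should land close to that value.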

  5. Analyzing, Modeling, and Simulation for Human Dynamics in Social Network

    Directory of Open Access Journals (Sweden)

    Yunpeng Xiao

    2012-01-01

    Full Text Available This paper studies human behavior in the top social network system in China (the Sina Microblog system). By analyzing real-life data at a large scale, we find that the message-releasing interval (intermessage time) obeys a power-law distribution both at the individual level and at the group level. Statistical analysis also reveals that human behavior in social networks is mainly driven by four basic elements: social pressure, social identity, social participation, and social relations between individuals. Empirical results present the impact of these four elements on human behavior and the relations among them. To further understand the mechanism of such dynamic phenomena, a hybrid human dynamic model which combines the “interest” of the individual and the “interaction” among people is introduced, incorporating the four elements simultaneously. To provide a solid evaluation, we simulate both two-agent and multiagent interactions with real-life social network topology, and we obtain consistent results between the empirical studies and the simulations. The model provides a good understanding of human dynamics in social networks.
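The power-law finding can be reproduced in miniature (a generic sketch, not the authors' pipeline): draw intermessage times from p(x) proportional to x^(-alpha) by inverse-transform sampling, then recover the exponent with the standard continuous maximum-likelihood estimator alpha = 1 + n / sum(ln(x_i / x_min)).

```python
import math
import random

def sample_power_law(alpha, xmin, n, seed=5):
    """Inverse-CDF sampling from p(x) ~ x**(-alpha) for x >= xmin:
    x = xmin * (1 - u)**(-1 / (alpha - 1)) with u uniform on [0, 1)."""
    rng = random.Random(seed)
    return [xmin * (1.0 - rng.random()) ** (-1.0 / (alpha - 1.0)) for _ in range(n)]

def mle_exponent(xs, xmin):
    """Maximum-likelihood exponent estimate: alpha = 1 + n / sum(ln(x / xmin))."""
    return 1.0 + len(xs) / sum(math.log(x / xmin) for x in xs)
```

The estimator's standard error scales as (alpha - 1) / sqrt(n), so a few tens of thousands of intervals, as in a large microblog dataset, pin the exponent down to about two decimal places.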

  6. [Simulation of lung motions using an artificial neural network].

    Science.gov (United States)

    Laurent, R; Henriet, J; Salomon, M; Sauget, M; Nguyen, F; Gschwind, R; Makovicka, L

    2011-04-01

    A way to improve the accuracy of lung radiotherapy for a patient is to obtain a better understanding of the patient's lung motion; with this knowledge it becomes possible to follow the displacements of the clinical target volume (CTV) induced by breathing. This paper presents a feasibility study of an original method to simulate the positions of points in a patient's lung at all breathing phases. The method, based on an artificial neural network, learns the lung motion from real cases and then simulates it for new patients for whom only the beginning and end breathing data are known. The neural network learning set is made up of more than 600 points. These points, distributed across three patients and gathered in a specific lung area, were plotted by an MD. The first results are promising: an average accuracy of 1 mm is obtained for a spatial resolution of 1 × 1 × 2.5 mm³. We have demonstrated that it is possible to simulate lung motion accurately using an artificial neural network. As future work, we plan to improve the accuracy of the method with the addition of new patient data and coverage of the whole lungs.

  7. Flow MRI simulation in complex 3D geometries: Application to the cerebral venous network.

    Science.gov (United States)

    Fortin, Alexandre; Salmon, Stéphanie; Baruthio, Joseph; Delbany, Maya; Durand, Emmanuel

    2018-02-05

    Develop and evaluate a complete tool to include 3D fluid flows in MRI simulation, leveraging existing software. Simulation of MR spin flow motion is of high interest in the study of flow artifacts and angiography; however, at present, only a few simulators include this option and most are restricted to static tissue imaging. An extension of JEMRIS, one of the most advanced high-performance open-source simulation platforms to date, was developed. The implementation of a Lagrangian description of the flow allows simulating any MR experiment, including both static tissues and complex flow data from computational fluid dynamics. Simulations of simple flow models are compared with real experiments on a physical flow phantom, and a realistic simulation of 3D flow MRI of the cerebral venous network is also carried out. Simulations and real experiments are in good agreement. The generality of the framework is illustrated in 2D and 3D with some common flow artifacts (misregistration and inflow enhancement) and with the three main angiographic techniques: phase-contrast velocimetry (PC), time-of-flight, and contrast-enhanced MRA. The framework provides a versatile and reusable tool for the simulation of any MRI experiment involving physiological fluids and arbitrarily complex flow motion.

  8. FUMET: A fuzzy network module extraction technique for gene ...

    Indian Academy of Sciences (India)

    Supplementary figure 1. (A): Visualization of one of the network modules by GeneMania for dataset 4 (B): Visualization of one of the network modules by GeneMania for dataset 1 (C): Visualization of one of the network modules by GeneMania for dataset 3.

  9. Exact subthreshold integration with continuous spike times in discrete-time neural network simulations.

    Science.gov (United States)

    Morrison, Abigail; Straube, Sirko; Plesser, Hans Ekkehard; Diesmann, Markus

    2007-01-01

    Very large networks of spiking neurons can be simulated efficiently in parallel under the constraint that spike times are bound to an equidistant time grid. Within this scheme, the subthreshold dynamics of a wide class of integrate-and-fire-type neuron models can be integrated exactly from one grid point to the next. However, the loss in accuracy caused by restricting spike times to the grid can have undesirable consequences, which has led to interest in interpolating spike times between the grid points to retrieve an adequate representation of network dynamics. We demonstrate that the exact integration scheme can be combined naturally with off-grid spike events found by interpolation. We show that by exploiting the existence of a minimal synaptic propagation delay, the need for a central event queue is removed, so that the precision of event-driven simulation on the level of single neurons is combined with the efficiency of time-driven global scheduling. Further, for neuron models with linear subthreshold dynamics, even local event queuing can be avoided, resulting in much greater efficiency on the single-neuron level. These ideas are exemplified by two implementations of a widely used neuron model. We present a measure for the efficiency of network simulations in terms of their integration error and show that for a wide range of input spike rates, the novel techniques we present are both more accurate and faster than standard techniques.
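
    The exact-propagation idea described here can be sketched in a few lines: the linear subthreshold dynamics are advanced with a closed-form propagator from one grid point to the next, and the threshold crossing is located between grid points by interpolation. A minimal single-neuron illustration (parameter values are arbitrary, not those of the paper):

```python
import math

def exact_step(v, h, tau=10.0, r=1.0, i_ext=0.0):
    """Advance the LIF membrane exactly over one grid step h:
    dv/dt = (-v + r*i_ext)/tau has a closed-form propagator."""
    p = math.exp(-h / tau)
    return v * p + r * i_ext * (1.0 - p)

def spike_offset(v0, v1, h, theta=1.0):
    """Locate the threshold crossing inside the step by linear
    interpolation, giving an off-grid spike time offset in (0, h]."""
    return h * (theta - v0) / (v1 - v0)

# Constant suprathreshold drive on a 1 ms grid.
h, theta, v, t, spikes = 1.0, 1.0, 0.0, 0.0, []
for _ in range(100):
    v_next = exact_step(v, h, i_ext=1.5)
    if v_next >= theta:
        spikes.append(t + spike_offset(v, v_next, h, theta))
        v_next = 0.0                      # reset after the spike
    v, t = v_next, t + h
```

    The spike times land between the grid points, which is exactly the off-grid precision the grid-constrained scheme loses.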

  10. An Expert System And Simulation Approach For Sensor Management & Control In A Distributed Surveillance Network

    Science.gov (United States)

    Leon, Barbara D.; Heller, Paul R.

    1987-05-01

    A surveillance network is a group of multiplatform sensors cooperating to improve network performance. Network control is distributed as a measure to decrease vulnerability to enemy threat. The network may contain diverse sensor types such as radar, ESM (Electronic Support Measures), IRST (Infrared Search and Track) and E-O (Electro-Optical). Each platform may contain a single sensor or a suite of sensors. In a surveillance network it is desirable to control sensors to make the overall system more effective. This problem has come to be known as sensor management and control (SM&C). Two major facets of network performance are surveillance and survivability. In a netted environment, surveillance can be enhanced if information from all sensors is combined and sensor operating conditions are controlled to provide a synergistic effect. In contrast, when survivability is the main concern for the network, the best operating status for all sensors would be passive or off. Of course, improving survivability tends to degrade surveillance. Hence, the objective of SM&C is to optimize surveillance and survivability of the network. The volume and variety of the data and the required quick response time are two characteristics of this problem that make it an ideal application for Artificial Intelligence. A solution to the SM&C problem, in the form of a computer simulation, is presented in this paper. The simulation is a hybrid production written in LISP and FORTRAN. It combines the latest conventional computer programming methods with Artificial Intelligence techniques to produce a flexible state-of-the-art tool to evaluate network performance. The event-driven simulation contains environment models coupled with an expert system. These environment models include sensor (track-while-scan and agile beam) and target models, local tracking, and system tracking. These models are used to generate the environment for the sensor management and control expert system. The expert system

  11. Modelling Altitude Information in Two-Dimensional Traffic Networks for Electric Mobility Simulation

    OpenAIRE

    Diogo Santos; José Pinto; Rossetti, Rosaldo J. F.; Eugénio Oliveira

    2016-01-01

    Elevation data is important for electric vehicle simulation. However, traffic simulators are often two-dimensional and do not offer the capability of modelling urban networks taking elevation into account. Specifically, SUMO - Simulation of Urban Mobility, a popular microscopic traffic simulator, relies on networks previously modelled with elevation data to provide this information during simulations. This work tackles the problem of adding elevation data to urban network models - particul...

  12. Network Flow Simulation of Fluid Transients in Rocket Propulsion Systems

    Science.gov (United States)

    Bandyopadhyay, Alak; Hamill, Brian; Ramachandran, Narayanan; Majumdar, Alok

    2011-01-01

    Fluid transients, also known as water hammer, can have a significant impact on the design and operation of both spacecraft and launch vehicle propulsion systems. These transients often occur at system activation and shutdown. The pressure rise due to sudden opening and closing of valves of propulsion feed lines can cause serious damage during activation and shutdown of propulsion systems. During activation (valve opening) and shutdown (valve closing), pressure surges must be predicted accurately to ensure structural integrity of the propulsion system fluid network. In the current work, a network flow simulation software (Generalized Fluid System Simulation Program) based on the Finite Volume Method has been used to predict the pressure surges in the feed line due to both valve closing and valve opening using two separate geometrical configurations. The valve opening pressure surge results are compared with experimental data available in the literature and the numerical results compare very well within reasonable accuracy. The valve closing simulation results are compared with the results of the Method of Characteristics. Most rocket engines experience a longitudinal acceleration, known as "pogo", during the later stage of engine burn. In the shutdown example problem, an accumulator has been used in the feed system to demonstrate the "pogo" mitigation effects in the feed system of propellant. The simulation results using GFSSP compared very well with the results of the Method of Characteristics.
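
    For a rough sanity check on such surge predictions, the classical Joukowsky relation gives the peak pressure rise for an instantaneous valve closure; this is a textbook estimate, not part of GFSSP itself:

```python
def joukowsky_surge(rho, a, dv):
    """Peak pressure rise (Pa) for instantaneous valve closure:
    delta_p = rho * a * delta_v."""
    return rho * a * dv

def critical_closure_time(pipe_length, a):
    """Closures slower than t_c = 2L/a see a reduced surge."""
    return 2.0 * pipe_length / a

# Water (1000 kg/m^3), wave speed 1200 m/s, 3 m/s of flow arrested:
dp = joukowsky_surge(1000.0, 1200.0, 3.0)        # 3.6 MPa
tc = critical_closure_time(24.0, 1200.0)         # 0.04 s for a 24 m line
```

    Surges of this magnitude are why the valve opening/closing transients above must be resolved rather than treated quasi-statically.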

  13. Efficiently passing messages in distributed spiking neural network simulation.

    Science.gov (United States)

    Thibeault, Corey M; Minkovich, Kirill; O'Brien, Michael J; Harris, Frederick C; Srinivasa, Narayan

    2013-01-01

    Efficiently passing spiking messages in a neural model is an important aspect of high-performance simulation. As the scale of networks has increased so has the size of the computing systems required to simulate them. In addition, the information exchange of these resources has become more of an impediment to performance. In this paper we explore spike message passing using different mechanisms provided by the Message Passing Interface (MPI). A specific implementation, MVAPICH, designed for high-performance clusters with Infiniband hardware is employed. The focus is on providing information about these mechanisms for users of commodity high-performance spiking simulators. In addition, a novel hybrid method for spike exchange was implemented and benchmarked.

  14. An artificial neural network for detection of simulated dental caries

    Energy Technology Data Exchange (ETDEWEB)

    Kositbowornchai, S. [Khon Kaen Univ. (Thailand). Dept. of Oral Diagnosis; Siriteptawee, S.; Plermkamon, S.; Bureerat, S. [Khon Kaen Univ. (Thailand). Dept. of Mechanical Engineering; Chetchotsak, D. [Khon Kaen Univ. (Thailand). Dept. of Industrial Engineering

    2006-08-15

    Objectives: A neural network was developed to diagnose artificial dental caries using images from a charge-coupled device (CCD) camera and intra-oral digital radiography. The diagnostic performance of this neural network was evaluated against a gold standard. Materials and methods: The neural network design was Learning Vector Quantization (LVQ), used to classify a tooth surface as sound or as having dental caries. The depth of the dental caries was indicated on a graphic user interface (GUI) screen developed by Matlab programming. Forty-nine images of both sound and simulated dental caries, derived from a CCD camera and by digital radiography, were used to 'train' an artificial neural network. After the 'training' process, a separate test-set comprising 322 unseen images was evaluated. Tooth sections and microscopic examinations were used to confirm the actual dental caries status. The performance of the neural network was evaluated using diagnostic tests. Results: The sensitivity (95%CI)/specificity (95%CI) of dental caries detection by the CCD camera and digital radiography were 0.77(0.68-0.85)/0.85(0.75-0.92) and 0.81(0.72-0.88)/0.93(0.84-0.97), respectively. The accuracy of caries depth-detection by the CCD camera and digital radiography was 58% and 40%, respectively. Conclusions: The model neural network used in this study could be a prototype for caries detection but should be improved for classifying caries depth. Our study suggests an artificial neural network can be trained to make correct interpretations of dental caries. (orig.)
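
    Learning Vector Quantization itself is simple to state: the nearest prototype is pulled toward a training sample of its own class and pushed away otherwise. A toy LVQ1 sketch on synthetic 2D data (standing in for the image features; not the study's actual model):

```python
import random

def nearest(prototypes, x):
    """Index of the prototype closest to x (squared Euclidean distance)."""
    return min(range(len(prototypes)),
               key=lambda k: sum((p - a) ** 2 for p, a in zip(prototypes[k], x)))

def train_lvq1(data, labels, prototypes, proto_labels, lr=0.1, epochs=30):
    """LVQ1 update: pull the winning prototype toward same-class samples,
    push it away from other-class samples."""
    for _ in range(epochs):
        for x, y in zip(data, labels):
            j = nearest(prototypes, x)
            sign = 1.0 if proto_labels[j] == y else -1.0
            prototypes[j] = [p + sign * lr * (a - p)
                             for p, a in zip(prototypes[j], x)]
    return prototypes

# Two well-separated 2D clusters standing in for "sound" vs "caries".
random.seed(0)
data = ([[random.gauss(0, 0.3), random.gauss(0, 0.3)] for _ in range(20)] +
        [[random.gauss(3, 0.3), random.gauss(3, 0.3)] for _ in range(20)])
labels = [0] * 20 + [1] * 20
proto_labels = [0, 1]
protos = train_lvq1(data, labels, [[0.5, 0.5], [2.5, 2.5]], proto_labels)
correct = sum(proto_labels[nearest(protos, x)] == y
              for x, y in zip(data, labels))
```

    After training, each prototype sits near its cluster mean and classification reduces to a nearest-prototype lookup.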

  15. Biochemical Network Stochastic Simulator (BioNetS): software for stochastic modeling of biochemical networks

    Directory of Open Access Journals (Sweden)

    Elston Timothy C

    2004-03-01

    Background: Intrinsic fluctuations due to the stochastic nature of biochemical reactions can have large effects on the response of biochemical networks. This is particularly true for pathways that involve transcriptional regulation, where generally there are two copies of each gene and the number of messenger RNA (mRNA) molecules can be small. Therefore, there is a need for computational tools for developing and investigating stochastic models of biochemical networks. Results: We have developed the software package Biochemical Network Stochastic Simulator (BioNetS) for efficiently and accurately simulating stochastic models of biochemical networks. BioNetS has a graphical user interface that allows models to be entered in a straightforward manner, and allows the user to specify the type of random variable (discrete or continuous) for each chemical species in the network. The discrete variables are simulated using an efficient implementation of the Gillespie algorithm. For the continuous random variables, BioNetS constructs and numerically solves the appropriate chemical Langevin equations. The software package has been developed to scale efficiently with network size, thereby allowing large systems to be studied. BioNetS runs as a BioSpice agent and can be downloaded from http://www.biospice.org. BioNetS can also be run as a stand-alone package. All the required files are accessible from http://x.amath.unc.edu/BioNetS. Conclusions: We have developed BioNetS to be a reliable tool for studying the stochastic dynamics of large biochemical networks. Important features of BioNetS are its ability to handle hybrid models that consist of both continuous and discrete random variables and its ability to model cell growth and division. We have verified the accuracy and efficiency of the numerical methods by considering several test systems.
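
    The discrete-variable engine mentioned here is the Gillespie direct method, which draws the time to the next reaction from an exponential distribution with rate equal to the total propensity and then selects the reaction proportionally to its propensity. A minimal sketch for a birth-death process (species and rate values are illustrative):

```python
import math
import random

def gillespie(x0, rates, stoich, propensity, t_end, rng):
    """Gillespie direct method: the waiting time is exponential with rate
    a0 = sum of propensities; the firing reaction is chosen with
    probability a_i / a0."""
    t, x, traj = 0.0, x0, [(0.0, x0)]
    while t < t_end:
        a = [propensity(i, x, rates) for i in range(len(stoich))]
        a0 = sum(a)
        if a0 == 0.0:
            break                                   # absorbing state
        t += -math.log(1.0 - rng.random()) / a0     # time to next reaction
        r, acc, i = rng.random() * a0, 0.0, 0
        while acc + a[i] < r:                       # roulette-wheel selection
            acc += a[i]
            i += 1
        x += stoich[i]
        traj.append((t, x))
    return traj

# Birth-death process: 0 -> X at rate k_b; X -> 0 at rate k_d * x.
def prop(i, x, rates):
    return rates[0] if i == 0 else rates[1] * x

traj = gillespie(10, (5.0, 0.1), (+1, -1), prop, 200.0, random.Random(42))
mean_x = sum(x for _, x in traj) / len(traj)  # fluctuates around k_b/k_d = 50
```

    For large copy numbers this event-by-event scheme becomes expensive, which is why BioNetS switches such species to chemical Langevin equations.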

  16. Methodologies for the modeling and simulation of biochemical networks, illustrated for signal transduction pathways: a primer.

    Science.gov (United States)

    ElKalaawy, Nesma; Wassal, Amr

    2015-03-01

    Biochemical networks depict the chemical interactions that take place among elements of living cells. They aim to elucidate how cellular behavior and functional properties of the cell emerge from the relationships between its components, i.e. molecules. Biochemical networks are largely characterized by dynamic behavior, and exhibit high degrees of complexity. Hence, the interest in such networks is growing and they have been the target of several recent modeling efforts. Signal transduction pathways (STPs) constitute a class of biochemical networks that receive, process, and respond to stimuli from the environment, as well as stimuli that are internal to the organism. An STP consists of a chain of intracellular signaling processes that ultimately result in generating different cellular responses. This primer presents the methodologies used for the modeling and simulation of biochemical networks, illustrated for STPs. These methodologies range from qualitative to quantitative, and include structural as well as dynamic analysis techniques. We describe the different methodologies, outline their underlying assumptions, and provide an assessment of their advantages and disadvantages. Moreover, publicly and/or commercially available implementations of these methodologies are listed as appropriate. In particular, this primer aims to provide a clear introduction and comprehensive coverage of biochemical modeling and simulation methodologies for the non-expert, with specific focus on relevant literature of STPs. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.

  17. Neural networks: A simulation technique under uncertainty conditions

    Science.gov (United States)

    Mcallister, M. Luisa Nicosia

    1992-01-01

    This paper proposes a new definition of fuzzy graphs and shows how transmission through a graph with linguistic expressions as labels provides an easy computational tool. These labels are represented by modified Kauffmann Fuzzy numbers.

  18. Supply chain simulation tools and techniques: a survey

    NARCIS (Netherlands)

    Kleijnen, J.P.C.

    2005-01-01

    The main contribution of this paper is twofold: it surveys different types of simulation for supply chain management; it discusses several methodological issues. These different types of simulation are spreadsheet simulation, system dynamics, discrete-event simulation and business games. Which

  19. Neural Networks in R Using the Stuttgart Neural Network Simulator: RSNNS

    Directory of Open Access Journals (Sweden)

    Christopher Bergmeir

    2012-01-01

    Neural networks are important standard machine learning procedures for classification and regression. We describe the R package RSNNS that provides a convenient interface to the popular Stuttgart Neural Network Simulator (SNNS). The main features are (a) encapsulation of the relevant SNNS parts in a C++ class, for sequential and parallel usage of different networks, (b) accessibility of all of the SNNS algorithmic functionality from R using a low-level interface, and (c) a high-level interface for convenient, R-style usage of many standard neural network procedures. The package also includes functions for visualization and analysis of the models and the training procedures, as well as functions for data input/output from/to the original SNNS file formats.

  20. Parallel pic plasma simulation through particle decomposition techniques

    Energy Technology Data Exchange (ETDEWEB)

    Briguglio, S.; Vlad, G. [ENEA, Centro Ricerche Casaccia, Rome (Italy). Dipt. Energia; Di Martino, B. [Wien Univ. (Austria). Inst. for Software Tecnology and Parallel Systems]|[Naples, Univ. `Federico II` (Italy). Dipt. di Informatica e Sistemistica

    1998-02-01

    Particle-in-cell (PIC) codes are among the major candidates to yield a satisfactory description of the detail of kinetic effects, such as the resonant wave-particle interaction, relevant in determining the transport mechanism in magnetically confined plasmas. A significant improvement of the simulation performance of such codes can be expected from parallelization, e.g., by distributing the particle population among several parallel processors. Parallelization of a hybrid magnetohydrodynamic-gyrokinetic code has been accomplished within the High Performance Fortran (HPF) framework, and tested on the IBM SP2 parallel system, using a 'particle decomposition' technique. The adopted technique requires a moderate effort in porting the code to parallel form and results in intrinsic load balancing and modest interprocessor communication. The performance tests obtained confirm the hypothesis of high effectiveness of the strategy, if targeted towards moderately parallel architectures. Optimal use of resources is also discussed with reference to a specific physics problem.
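
    The 'particle decomposition' strategy can be illustrated without MPI: each process owns a fixed subset of particles but keeps a full copy of the grid, deposits its own particles locally, and the partial grids are then summed across processes (an allreduce). A serial emulation of the idea (the deposition scheme and rank count are illustrative, not those of the paper):

```python
def deposit(particles, n_cells):
    """Nearest-grid-point deposition onto a full (replicated) grid copy."""
    grid = [0.0] * n_cells
    for x, q in particles:
        grid[int(x * n_cells) % n_cells] += q
    return grid

def allreduce_sum(grids):
    """Stand-in for MPI_Allreduce(SUM): every rank ends up with the
    total deposited field."""
    return [sum(col) for col in zip(*grids)]

# Emulate 4 ranks, each owning a quarter of 100 unit-charge particles
# spread uniformly over the periodic domain [0, 1).
particles = [((i + 0.5) / 100.0, 1.0) for i in range(100)]
ranks = [particles[r::4] for r in range(4)]        # round-robin ownership
total = allreduce_sum([deposit(p, 10) for p in ranks])
```

    Because ownership never depends on particle position, the per-rank work stays balanced as particles move, which is the intrinsic load balancing the abstract refers to; the price is the grid reduction at every step.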

  1. Improved Space Surveillance Network (SSN) Scheduling using Artificial Intelligence Techniques

    Science.gov (United States)

    Stottler, D.

    There are close to 20,000 cataloged manmade objects in space, the large majority of which are not active, functioning satellites. These are tracked by phased array and mechanical radars and ground and space-based optical telescopes, collectively known as the Space Surveillance Network (SSN). A better SSN schedule of observations could, using exactly the same legacy sensor resources, improve space catalog accuracy through more complementary tracking, provide better responsiveness to real-time changes, better track small debris in low earth orbit (LEO) through efficient use of applicable sensors, efficiently track deep space (DS) frequent-revisit objects, handle increased numbers of objects and new types of sensors, and take advantage of future improved communication and control to globally optimize the SSN schedule. We have developed a scheduling algorithm that takes as input the space catalog and the associated covariance matrices and produces a globally optimized schedule for each sensor site as to what objects to observe and when. This algorithm is able to schedule more observations with the same sensor resources and have those observations be more complementary, in terms of the precision with which each orbit metric is known, to produce a satellite observation schedule that, when executed, minimizes the covariances across the entire space object catalog. If used operationally, the results would be significantly increased accuracy of the space catalog with fewer lost objects with the same set of sensor resources. This approach inherently can also trade off fewer high-priority tasks against more lower-priority tasks, when there is benefit in doing so. Currently the project has completed a prototyping and feasibility study, using open source data on the SSN's sensors, that showed significant reduction in orbit metric covariances. The algorithm techniques and results will be discussed along with future directions for the research.
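
    The covariance-driven tasking idea can be caricatured with a greedy loop: each observation slot goes to the object with the largest current uncertainty, whose covariance is then assumed to shrink by a fixed factor. This is a hypothetical stand-in for the globally optimizing scheduler described above, with made-up object names and numbers:

```python
def schedule(covariances, slots, reduction=0.5):
    """Greedy tasking: each slot observes the object with the largest
    covariance trace, which is then assumed to shrink by a fixed factor
    (a crude stand-in for a Kalman measurement update)."""
    cov = dict(covariances)
    plan = []
    for _ in range(slots):
        target = max(cov, key=cov.get)     # most uncertain object first
        plan.append(target)
        cov[target] *= reduction
    return plan, cov

plan, cov = schedule({"sat_a": 8.0, "sat_b": 4.0, "debris_c": 2.0}, slots=4)
```

    A greedy rule like this is myopic; the global optimization described in the abstract instead considers the whole catalog and all sensor sites jointly.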

  2. Advanced techniques for multicast service provision in core transport networks

    OpenAIRE

    Fernández del Carpio, Gonzalo

    2012-01-01

    Although the network-based multicast service is the optimal way to support a large variety of popular applications such as high-definition television (HDTV), video-on-demand (VoD), virtual private LAN service (VPLS), grid computing, optical storage area networks (O-SAN), video conferencing, e-learning, massive multiplayer online role-playing games (MMORPG), networked virtual reality, etc., there are a number of technological and operational reasons that prevent a wider deployment. This Ph...

  3. Methodologies and techniques for analysis of network flow data

    Energy Technology Data Exchange (ETDEWEB)

    Bobyshev, A.; Grigoriev, M.; /Fermilab

    2004-12-01

    Network flow data gathered at the border routers and core switches is used at Fermilab for statistical analysis of traffic patterns, passive network monitoring, and estimation of network performance characteristics. Flow data is also a critical tool in the investigation of computer security incidents. Development and enhancement of flow based tools is an on-going effort. This paper describes the most recent developments in flow analysis at Fermilab.

  4. Green's-function reaction dynamics: A particle-based approach for simulating biochemical networks in time and space

    NARCIS (Netherlands)

    van Zon, J.S.; ten Wolde, P.R.

    2005-01-01

    We have developed a new numerical technique, called Green's-function reaction dynamics (GFRD), that makes it possible to simulate biochemical networks at the particle level and in both time and space. In this scheme, a maximum time step is chosen such that only single particles or pairs of particles

  5. Computer Simulations of Bottlebrush Melts and Soft Networks

    Science.gov (United States)

    Cao, Zhen; Carrillo, Jan-Michael; Sheiko, Sergei; Dobrynin, Andrey

    We have studied dense bottlebrush systems in a melt and network state using a combination of molecular dynamics simulations and analytical calculations. Our simulations show that the bottlebrush macromolecules in a melt behave as ideal chains with an effective Kuhn length bK. The bottlebrush-induced bending rigidity is due to redistribution of the side chains upon backbone bending. The Kuhn length of the bottlebrushes increases with the side-chain degree of polymerization nsc as bK ~ nsc^0.46. This model of bottlebrush macromolecules is extended to describe mechanical properties of bottlebrush networks in linear and nonlinear deformation regimes. In the linear deformation regime, the network shear modulus scales with the degree of polymerization of the side chains as G0 ~ (nsc + 1)^-1 as long as the ratio of the Kuhn length to the size of the fully extended bottlebrush backbone between crosslinks, Rmax, is smaller than unity, bK/Rmax < 1. NSF DMR-1409710, DMR-1436201.
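
    The two reported scalings are easy to evaluate numerically; the prefactors below are arbitrary and only the exponents come from the abstract:

```python
def kuhn_length(n_sc, b0=1.0):
    """Effective Kuhn length scaling reported above: b_K ~ n_sc^0.46."""
    return b0 * n_sc ** 0.46

def shear_modulus(n_sc, g0=1.0):
    """Linear-regime modulus scaling reported above: G_0 ~ (n_sc + 1)^-1."""
    return g0 / (n_sc + 1)

# Quadrupling the side-chain length stiffens the backbone ...
ratio_b = kuhn_length(64) / kuhn_length(16)      # = 4**0.46, about 1.9
# ... while softening the network:
ratio_g = shear_modulus(63) / shear_modulus(15)  # = 16/64 = 0.25
```

    The opposite trends are the point: longer side chains make each bottlebrush stiffer yet make the crosslinked network softer.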

  6. Quantum versus simulated annealing in wireless interference network optimization.

    Science.gov (United States)

    Wang, Chi; Chen, Huo; Jonckheere, Edmond

    2016-05-16

    Quantum annealing (QA) serves as a specialized optimizer that is able to solve many NP-hard problems and that is believed to have a theoretical advantage over simulated annealing (SA) via quantum tunneling. With the introduction of the D-Wave programmable quantum annealer, a considerable amount of effort has been devoted to detect and quantify quantum speedup. While the debate over speedup remains inconclusive as of now, instead of attempting to show general quantum advantage, here, we focus on a novel real-world application of D-Wave in wireless networking: more specifically, the scheduling of the activation of the air-links for maximum throughput subject to interference avoidance near network nodes. In addition, the D-Wave implementation is made error insensitive by a novel Hamiltonian extra penalty weight adjustment that enlarges the gap and substantially reduces the occurrence of interference violations resulting from inevitable spin bias and coupling errors. The major result of this paper is that quantum annealing benefits more than simulated annealing from this gap expansion process, both in terms of ST99 speedup and network queue occupancy. It is our hope that this could become a real-world application niche where potential benefits of quantum annealing could be objectively assessed.
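
    The classical baseline in such comparisons is simulated annealing on the same penalized objective: active links earn reward, and an extra penalty weight on each interfering pair widens the gap between feasible and infeasible schedules. A small sketch on a hypothetical five-link ring (not the paper's instances):

```python
import math
import random

def energy(state, edges, penalty):
    """Penalized objective: reward active links, charge `penalty` for every
    interfering pair that is simultaneously active. A larger penalty widens
    the gap between feasible and infeasible schedules."""
    return -sum(state) + penalty * sum(state[i] * state[j] for i, j in edges)

def simulated_annealing(n, edges, penalty=2.0, steps=5000, seed=1):
    rng = random.Random(seed)
    state, t = [0] * n, 2.0
    for _ in range(steps):
        cand = state[:]
        cand[rng.randrange(n)] ^= 1                 # toggle one air-link
        de = energy(cand, edges, penalty) - energy(state, edges, penalty)
        if de <= 0 or rng.random() < math.exp(-de / t):
            state = cand
        t = max(0.01, t * 0.999)                    # geometric cooling
    return state

# Five links on a ring; adjacent links interfere and must not be co-active.
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]
best = simulated_annealing(5, edges)
violations = sum(best[i] * best[j] for i, j in edges)
```

    On hardware, the same penalty term is embedded in the Ising Hamiltonian; the paper's observation is that enlarging it helps the quantum annealer more than it helps this classical procedure.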

  7. Prediction of Monthly Summer Monsoon Rainfall Using Global Climate Models Through Artificial Neural Network Technique

    Science.gov (United States)

    Nair, Archana; Singh, Gurjeet; Mohanty, U. C.

    2018-01-01

    The monthly prediction of summer monsoon rainfall is very challenging because of its complex and chaotic nature. In this study, a non-linear technique known as Artificial Neural Network (ANN) has been employed on the outputs of Global Climate Models (GCMs) to bring out the vagaries inherent in monthly rainfall prediction. The GCMs that are considered in the study are from the International Research Institute (IRI) (2-tier CCM3v6) and the National Centre for Environmental Prediction (Coupled-CFSv2). The ANN technique is applied on different ensemble members of the individual GCMs to obtain monthly scale prediction over India as a whole and over its spatial grid points. In the present study, a double-cross-validation and simple randomization technique was used to avoid the over-fitting during training process of the ANN model. The performance of the ANN-predicted rainfall from GCMs is judged by analysing the absolute error, box plots, percentile and difference in linear error in probability space. Results suggest that there is significant improvement in prediction skill of these GCMs after applying the ANN technique. The performance analysis reveals that the ANN model is able to capture the year to year variations in monsoon months with fairly good accuracy in extreme years as well. ANN model is also able to simulate the correct signs of rainfall anomalies over different spatial points of the Indian domain.
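
    The double-cross-validation safeguard mentioned here keeps hyperparameter selection strictly inside the training portion of each outer fold. A skeleton of the procedure, with a trivial shrunken-mean predictor standing in for the ANN and synthetic data in place of rainfall:

```python
import random

def fit(train, lam):
    """Toy model standing in for the ANN: a shrunken-mean predictor."""
    ys = [y for _, y in train]
    return (sum(ys) / len(ys)) / (1.0 + lam)

def mse(pred, data):
    return sum((y - pred) ** 2 for _, y in data) / len(data)

def double_cross_validate(data, lambdas, k_outer=5, k_inner=4):
    """Outer folds estimate skill; the hyperparameter is chosen on inner
    folds of the training portion only, so the test fold never leaks
    into model selection."""
    scores = []
    for i in range(k_outer):
        test = data[i::k_outer]
        train = [d for j, d in enumerate(data) if j % k_outer != i]

        def inner_score(lam):
            s = 0.0
            for m in range(k_inner):
                val = train[m::k_inner]
                sub = [d for j, d in enumerate(train) if j % k_inner != m]
                s += mse(fit(sub, lam), val)
            return s

        best_lam = min(lambdas, key=inner_score)
        scores.append(mse(fit(train, best_lam), test))
    return sum(scores) / len(scores)

random.seed(3)
data = [(None, random.gauss(5.0, 1.0)) for _ in range(60)]
score = double_cross_validate(data, [0.0, 0.1, 1.0])
```

    With small seasonal samples such as monthly monsoon records, this nesting is what keeps the reported skill from being an artifact of tuning on the test years.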

  9. Leader neurons in leaky integrate and fire neural network simulations.

    Science.gov (United States)

    Zbinden, Cyrille

    2011-10-01

    In this paper, we highlight the topological properties of leader neurons whose existence is an experimental fact. Several experimental studies show the existence of leader neurons in population bursts of activity in 2D living neural networks (Eytan and Marom, J Neurosci 26(33):8465-8476, 2006; Eckmann et al., New J Phys 10(015011), 2008). A leader neuron is defined as a neuron which fires at the beginning of a burst (respectively network spike) more often than we expect by chance considering its mean firing rate. This means that leader neurons have some burst triggering power beyond a chance-level statistical effect. In this study, we characterize these leader neuron properties. This naturally leads us to simulate neural 2D networks. To build our simulations, we choose the leaky integrate and fire (lIF) neuron model (Gerstner and Kistler 2002; Cessac, J Math Biol 56(3):311-345, 2008), which allows fast simulations (Izhikevich, IEEE Trans Neural Netw 15(5):1063-1070, 2004; Gerstner and Naud, Science 326:379-380, 2009). The dynamics of our lIF model has stable leader neurons in the burst population that we simulate. These leader neurons are excitatory neurons and have a low membrane potential firing threshold. Apart from these first two properties, the conditions required for a neuron to be a leader neuron are difficult to identify and seem to depend on several parameters involved in the simulations themselves. However, a detailed linear analysis shows a trend of the properties required for a neuron to be a leader neuron. Our main finding is: a leader neuron sends signals to many excitatory neurons as well as to a few inhibitory neurons, and a leader neuron receives signals from only a few other excitatory neurons. Our linear analysis exhibits five essential properties of leader neurons, each with different relative importance. This means that considering a given neural network with a fixed mean number of connections per neuron, our analysis gives us a way of

  10. Analysis of sensor network observations during some simulated landslide experiments

    Science.gov (United States)

    Scaioni, M.; Lu, P.; Feng, T.; Chen, W.; Wu, H.; Qiao, G.; Liu, C.; Tong, X.; Li, R.

    2012-12-01

    A multi-sensor network was tested during some experiments on a landslide simulation platform established at Tongji University (Shanghai, P.R. China). Here landslides were triggered by means of artificial rainfall (see Figure 1). The sensor network currently incorporates contact sensors and two imaging systems. This represents a novel solution, because the spatial sensor network incorporates both contact sensors and remote sensors (video-cameras). In future, these sensors will be installed on two real ground slopes in Sichuan province (South-West China), where the Wenchuan earthquake occurred in 2008. This earthquake caused the immediate activation of several landslides, while other areas became unstable and still menace people and properties. The platform incorporates the reconstructed scale slope, sensor network, communication system, database and visualization system. Some landslide simulation experiments allowed ascertaining which sensors would be most suitable for deployment in the Wenchuan area. The poster will focus on the analysis of results coming from down-scale simulations. Here the different steps of the landslide evolution can be followed on the basis of sensor observations. These include underground sensors to detect the water table level and the pressure in the ground, a set of accelerometers, and two inclinometers. In the first part of the analysis the full data series are investigated to look for correlations and common patterns, as well as to link them to the physical processes. In the second, 4 subsets of sensors located in neighboring positions are analyzed. The analysis of low- and high-speed image sequences allowed tracking a dense displacement field on the slope surface. These outcomes were compared with those obtained from accelerometers for cross-validation. Images were also used for the photogrammetric reconstruction of the slope topography during the experiment.
Consequently, volume computation and mass movements could be evaluated on

  11. Measuring the influence of networks on transaction costs using a non-parametric regression technique

    DEFF Research Database (Denmark)

    Henningsen, Géraldine; Henningsen, Arne; Henning, Christian H.C.A.

    We empirically analyse the effect of networks on productivity using a cross-validated local linear non-parametric regression technique and a data set of 384 farms in Poland. Our empirical study generally supports our hypothesis that networks affect productivity. Large and dense trading networks...

  12. Cross-Layer Techniques for Adaptive Video Streaming over Wireless Networks

    Directory of Open Access Journals (Sweden)

    Yufeng Shan

    2005-02-01

    Full Text Available Real-time streaming media over wireless networks is a challenging proposition due to the characteristics of video data and wireless channels. In this paper, we propose a set of cross-layer techniques for adaptive real-time video streaming over wireless networks. The adaptation is done with respect to both channel and data. The proposed novel packetization scheme constructs the application layer packet in such a way that it is decomposed exactly into an integer number of equal-sized radio link protocol (RLP) packets. FEC codes are applied within an application packet at the RLP packet level rather than across different application packets and thus reduce delay at the receiver. A priority-based ARQ, together with a scheduling algorithm, is applied at the application layer to retransmit only the corrupted RLP packets within an application layer packet. Our approach combines the flexibility and programmability of application layer adaptations, with low delay and bandwidth efficiency of link layer techniques. Socket-level simulations are presented to verify the effectiveness of our approach.
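
The packetization scheme above can be illustrated with a toy sketch: the application-layer payload is padded so it decomposes into an integer number of equal-sized RLP packets, and one XOR parity packet (a deliberately simplified stand-in for a real FEC code) is added within the same application packet, so a single lost RLP packet can be rebuilt at the receiver rather than retransmitted. Packet sizes and the parity scheme here are illustrative assumptions, not the paper's exact parameters.

```python
def build_app_packet(payload: bytes, rlp_size: int):
    """Decompose an application-layer packet into equal-sized RLP packets,
    appending one XOR parity packet (toy stand-in for FEC) within the packet."""
    pad = (-len(payload)) % rlp_size            # pad to an integer number of RLP packets
    payload = payload + b"\x00" * pad
    chunks = [payload[i:i + rlp_size] for i in range(0, len(payload), rlp_size)]
    parity = bytearray(rlp_size)
    for chunk in chunks:                        # parity computed *within* the app packet
        for i, byte in enumerate(chunk):
            parity[i] ^= byte
    return chunks + [bytes(parity)]

def recover(packets, lost_index: int) -> bytes:
    """Rebuild a single lost RLP packet by XOR-ing all surviving packets
    (data plus parity), avoiding retransmission of the whole app packet."""
    rebuilt = bytearray(len(packets[0]))
    for j, pkt in enumerate(packets):
        if j == lost_index:
            continue
        for i, byte in enumerate(pkt):
            rebuilt[i] ^= byte
    return bytes(rebuilt)
```

With an 11-byte payload and 4-byte RLP packets, `build_app_packet` yields three data packets plus one parity packet, and any single one of the four can be reconstructed from the other three.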

  13. Cross-Layer Techniques for Adaptive Video Streaming over Wireless Networks

    Science.gov (United States)

    Shan, Yufeng

    2005-12-01

    Real-time streaming media over wireless networks is a challenging proposition due to the characteristics of video data and wireless channels. In this paper, we propose a set of cross-layer techniques for adaptive real-time video streaming over wireless networks. The adaptation is done with respect to both channel and data. The proposed novel packetization scheme constructs the application layer packet in such a way that it is decomposed exactly into an integer number of equal-sized radio link protocol (RLP) packets. FEC codes are applied within an application packet at the RLP packet level rather than across different application packets and thus reduce delay at the receiver. A priority-based ARQ, together with a scheduling algorithm, is applied at the application layer to retransmit only the corrupted RLP packets within an application layer packet. Our approach combines the flexibility and programmability of application layer adaptations, with low delay and bandwidth efficiency of link layer techniques. Socket-level simulations are presented to verify the effectiveness of our approach.

  14. A Survey of Neural Network Techniques for Feature Extraction from Text

    OpenAIRE

    John, Vineet

    2017-01-01

    This paper aims to catalyze the discussions about text feature extraction techniques using neural network architectures. The research questions discussed in the paper focus on the state-of-the-art neural network techniques that have proven to be useful tools for language processing, language generation, text classification and other computational linguistics tasks.

  15. Artificial neural network simulator for SOFC performance prediction

    Science.gov (United States)

    Arriagada, Jaime; Olausson, Pernilla; Selimovic, Azra

    This paper describes the development of a novel modelling tool for evaluation of solid oxide fuel cell (SOFC) performance. An artificial neural network (ANN) is trained with a reduced amount of data generated by a validated cell model, and it is then capable of learning the generic functional relationship between inputs and outputs of the system. Once the network is trained, the ANN-driven simulator can predict different operational parameters of the SOFC (i.e. gas flows, operational voltages, current density, etc.) avoiding the detailed description of the fuel cell processes. The highly parallel connectivity within the ANN further reduces the computational time. In a real case, the necessary data for training the ANN simulator would be extracted from experiments. This simulator could be suitable for different applications in the fuel cell field, such as, the construction of performance maps and operating point optimisation and analysis. All this is performed with minimum time demand and good accuracy. This intelligent model together with the operational conditions may provide useful insight into SOFC operating characteristics and improved means of selecting operating conditions, reducing costs and the need for extensive experiments.
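
The surrogate idea, training a small network on data generated by a validated physical model and then using the network as a fast simulator, can be sketched with a toy one-hidden-layer tanh network fitted by plain SGD. The "cell model" below is a made-up voltage-vs-current line, not an SOFC model; the network size, learning rate, and target function are all illustrative assumptions.

```python
import math
import random

def train_surrogate(samples, hidden=8, epochs=3000, lr=0.05, seed=0):
    """Fit y = f(x) with a one-hidden-layer tanh network via plain SGD."""
    rng = random.Random(seed)
    w1 = [rng.uniform(-1, 1) for _ in range(hidden)]   # input -> hidden weights
    b1 = [0.0] * hidden
    w2 = [rng.uniform(-1, 1) for _ in range(hidden)]   # hidden -> output weights
    b2 = 0.0

    def forward(x):
        h = [math.tanh(w1[j] * x + b1[j]) for j in range(hidden)]
        return sum(w2[j] * h[j] for j in range(hidden)) + b2, h

    for _ in range(epochs):
        for x, y in samples:
            out, h = forward(x)
            err = out - y                              # d(loss)/d(out) for 0.5*err^2
            for j in range(hidden):
                grad_h = err * w2[j] * (1.0 - h[j] ** 2)
                w2[j] -= lr * err * h[j]
                b1[j] -= lr * grad_h
                w1[j] -= lr * grad_h * x
            b2 -= lr * err

    return lambda x: forward(x)[0]

# Stand-in for data from a validated cell model: voltage falls with current density.
samples = [(i / 10.0, 1.0 - 0.5 * (i / 10.0)) for i in range(11)]
predict = train_surrogate(samples)
```

Once trained, `predict` replaces the expensive model evaluation, which is exactly the role the ANN simulator plays for the SOFC cell model.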

  16. COEL: A Cloud-based Reaction Network Simulator

    Directory of Open Access Journals (Sweden)

    Peter eBanda

    2016-04-01

    Full Text Available Chemical Reaction Networks (CRNs) are a formalism to describe the macroscopic behavior of chemical systems. We introduce COEL, a web- and cloud-based CRN simulation framework that does not require a local installation, runs simulations on a large computational grid, provides reliable database storage, and offers a visually pleasing and intuitive user interface. We present an overview of the underlying software, the technologies, and the main architectural approaches employed. Some of COEL's key features include ODE-based simulations of CRNs and multicompartment reaction networks with rich interaction options, a built-in plotting engine, automatic DNA-strand displacement transformation and visualization, SBML/Octave/Matlab export, and a built-in genetic-algorithm-based optimization toolbox for rate constants. COEL is an open-source project hosted on GitHub (http://dx.doi.org/10.5281/zenodo.46544), which allows interested research groups to deploy it on their own server. Regular users can simply use the web instance at no cost at http://coel-sim.org. The framework is ideally suited for collaborative use in both research and education.
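
The core of an ODE-based CRN simulation, deterministic mass-action kinetics integrated over time, can be sketched in a few lines. The forward-Euler stepper and the example reaction A + B → C with k = 1 are illustrative simplifications; COEL itself offers much more (compartments, plotting, optimization) and more robust solvers.

```python
def simulate_crn(reactions, conc, dt=1e-3, steps=10_000):
    """Integrate deterministic mass-action kinetics with forward Euler.

    reactions: list of (reactant_names, product_names, rate_constant).
    conc: dict mapping species name -> initial concentration.
    Returns the concentrations after steps * dt time units.
    """
    c = dict(conc)
    for _ in range(steps):
        dc = {s: 0.0 for s in c}
        for reactants, products, k in reactions:
            rate = k
            for s in reactants:      # mass action: k times product of reactant concentrations
                rate *= c[s]
            for s in reactants:
                dc[s] -= rate
            for s in products:
                dc[s] += rate
        for s in c:
            c[s] += dt * dc[s]
    return c

# A + B -> C with k = 1.0, simulated for 10 time units
final = simulate_crn([(["A", "B"], ["C"], 1.0)], {"A": 1.0, "B": 1.0, "C": 0.0})
```

With equal initial concentrations the analytic solution is A(t) = 1/(1+t), so A(10) ≈ 0.091, and mass conservation (A + C = 1) holds exactly at every Euler step.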

  17. Inference, simulation, modeling, and analysis of complex networks, with special emphasis on complex networks in systems biology

    Science.gov (United States)

    Christensen, Claire Petra

    Across diverse fields ranging from physics to biology, sociology, and economics, the technological advances of the past decade have engendered an unprecedented explosion of data on highly complex systems with thousands, if not millions of interacting components. These systems exist at many scales of size and complexity, and it is becoming ever-more apparent that they are, in fact, universal, arising in every field of study. Moreover, they share fundamental properties---chief among these, that the individual interactions of their constituent parts may be well-understood, but the characteristic behaviour produced by the confluence of these interactions---by these complex networks---is unpredictable; in a nutshell, the whole is more than the sum of its parts. There is, perhaps, no better illustration of this concept than the discoveries being made regarding complex networks in the biological sciences. In particular, though the sequencing of the human genome in 2003 was a remarkable feat, scientists understand that the "cellular-level blueprints" for the human being are cellular-level parts lists, but they say nothing (explicitly) about cellular-level processes. The challenge of modern molecular biology is to understand these processes in terms of the networks of parts---in terms of the interactions among proteins, enzymes, genes, and metabolites---as it is these processes that ultimately differentiate animate from inanimate, giving rise to life! It is the goal of systems biology---an umbrella field encapsulating everything from molecular biology to epidemiology in social systems---to understand processes in terms of fundamental networks of core biological parts, be they proteins or people. By virtue of the fact that there are literally countless complex systems, not to mention tools and techniques used to infer, simulate, analyze, and model these systems, it is impossible to give a truly comprehensive account of the history and study of complex systems. The author

  18. A Network Scheduling Model for Distributed Control Simulation

    Science.gov (United States)

    Culley, Dennis; Thomas, George; Aretskin-Hariton, Eliot

    2016-01-01

    Distributed engine control is a hardware technology that radically alters the architecture for aircraft engine control systems. Of its own accord, it does not change the function of control, rather it seeks to address the implementation issues for weight-constrained vehicles that can limit overall system performance and increase life-cycle cost. However, an inherent feature of this technology, digital communication networks, alters the flow of information between critical elements of the closed-loop control. Whereas control information has been available continuously in conventional centralized control architectures through virtue of analog signaling, moving forward, it will be transmitted digitally in serial fashion over the network(s) in distributed control architectures. An underlying effect is that all of the control information arrives asynchronously and may not be available every loop interval of the controller, therefore it must be scheduled. This paper proposes a methodology for modeling the nominal data flow over these networks and examines the resulting impact for an aero turbine engine system simulation.
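
The scheduling problem described above, where control data arrive asynchronously over the network so the control loop must use whatever has arrived by each tick, can be illustrated with a zero-order-hold sampler. The data format and hold policy are illustrative assumptions, not the paper's actual scheduling model.

```python
def sample_network_data(arrivals, loop_dt, horizon):
    """Zero-order hold: at each control-loop tick, use the most recent value
    that has arrived over the network (None before the first arrival).

    arrivals: list of (arrival_time, value) pairs, sorted by time.
    Returns a list of (tick_time, held_value) pairs.
    """
    samples = []
    idx, held = 0, None
    t = 0.0
    while t < horizon:
        # consume every network arrival that happened at or before this tick
        while idx < len(arrivals) and arrivals[idx][0] <= t:
            held = arrivals[idx][1]
            idx += 1
        samples.append((round(t, 6), held))
        t += loop_dt
    return samples
```

A message arriving at t = 12 ms first becomes visible to a 10 ms control loop at the 20 ms tick: precisely the asynchrony that the paper's scheduling model has to account for.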

  19. Wireless multimedia sensor networks on reconfigurable hardware information reduction techniques

    CERN Document Server

    Ang, Li-minn; Chew, Li Wern; Yeong, Lee Seng; Chia, Wai Chong

    2013-01-01

    Traditional wireless sensor networks (WSNs) capture scalar data such as temperature, vibration, pressure, or humidity. Motivated by the success of WSNs, and by the emergence of new technology in the form of low-cost image sensors, researchers have proposed combining image and audio sensors with WSNs to form wireless multimedia sensor networks (WMSNs).

  20. Outlier detection techniques for wireless sensor networks: A survey

    NARCIS (Netherlands)

    Zhang, Y.; Meratnia, Nirvana; Havinga, Paul J.M.

    2010-01-01

    In the field of wireless sensor networks, those measurements that significantly deviate from the normal pattern of sensed data are considered as outliers. The potential sources of outliers include noise and errors, events, and malicious attacks on the network. Traditional outlier detection
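
A minimal instance of the idea, flagging measurements that significantly deviate from the normal pattern of sensed data, is a sliding-window z-score test. The window size and the 3-sigma threshold below are illustrative choices; the survey covers far more sophisticated techniques.

```python
import statistics

def detect_outliers(readings, window=10, k=3.0):
    """Return indices of readings deviating more than k standard deviations
    from the mean of the preceding `window` readings."""
    flagged = []
    for i in range(window, len(readings)):
        history = readings[i - window:i]
        mu = statistics.mean(history)
        sigma = statistics.pstdev(history)
        if sigma > 0 and abs(readings[i] - mu) > k * sigma:
            flagged.append(i)
    return flagged

# Periodic temperature readings with one injected spike (an event or fault)
data = [20.0 + 0.1 * (i % 5) for i in range(30)]
data[25] = 45.0
```

Running `detect_outliers(data)` flags only index 25; the regular periodic variation stays well inside the 3-sigma band.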

  1. Design and simulation of a nanoelectronic DG MOSFET current source using artificial neural networks

    Energy Technology Data Exchange (ETDEWEB)

    Djeffal, F. [LEA, Department of Electronics, University of Batna 05000 (Algeria)], E-mail: faycaldzdz@hotmail.com; Dibi, Z. [LEA, Department of Electronics, University of Batna 05000 (Algeria)], E-mail: zohirdibi@univ-batna.dz; Hafiane, M.L.; Arar, D. [LEA, Department of Electronics, University of Batna 05000 (Algeria)

    2007-09-15

    The double gate (DG) MOSFET has received great attention in recent years owing to the inherent suppression of short channel effects (SCEs), excellent subthreshold slope (S), improved drive current (I_ds) and transconductance (gm), volume inversion for symmetric devices and excellent scalability. Therefore, simulation tools which can be applied to design nanoscale transistors in the future require new theory and modeling techniques that capture the physics of quantum transport accurately and efficiently. In this sense, this work presents the applicability of artificial neural networks (ANNs) for the design and simulation of a nanoelectronic DG MOSFET current source. The latter is based on the 2D numerical Non-Equilibrium Green's Function (NEGF) simulation of the current-voltage characteristics of an undoped symmetric DG MOSFET. Our results are discussed in order to obtain some new and useful information about ULSI technology.

  2. Simulation of heart rate variability model in a network

    Science.gov (United States)

    Cascaval, Radu C.; D'Apice, Ciro; D'Arienzo, Maria Pia

    2017-07-01

    We consider a 1-D model for the simulation of the blood flow in the cardiovascular system. As inflow condition we consider a model for the aortic valve. The opening and closing of the valve is dynamically determined by the pressure difference between the left ventricular and aortic pressures. At the outflow we impose a peripheral resistance model. To approximate the solution we use a numerical scheme based on the discontinuous Galerkin method. We also consider a variation in heart rate and terminal reflection coefficient due to monitoring of the pressure in the network.

  3. DC Collection Network Simulation for Offshore Wind Farms

    DEFF Research Database (Denmark)

    Vogel, Stephan; Rasmussen, Tonny Wederberg; El-Khatib, Walid Ziad

    2015-01-01

    The possibility of connecting offshore wind turbines with a collection network based on Direct Current (DC), instead of Alternating Current (AC), has gained attention in the scientific and industrial communities. DC components have many promising properties that could be beneficial, such as smaller dimensions, less weight, fewer conductors, no reactive power considerations, and lower overall losses due to the absence of proximity and skin effects. This work describes a study on the simulation of a Medium Voltage DC (MVDC) grid in an offshore wind farm. Suitable converter concepts...

  4. Attaining Realistic Simulations of Mobile Ad-hoc Networks

    Science.gov (United States)

    2010-06-01

    Lastly every MANET faces higher security risks either through malicious or poorly configured nodes. The fact that MANET traffic is dependent on...are being developed and advertised as secure and reliable but the simulation models are unable to provide an accurate depiction of how the new...use of the Institute of Telematics techniques that alter propagation models within NS-2 and generate the resulting model in LaTeX [20]. These models

  5. Petascale Kinetic Simulations in Space Sciences: New Simulations and Data Discovery Techniques and Physics Results

    Science.gov (United States)

    Karimabadi, Homa

    2012-03-01

    Recent advances in simulation technology and hardware are enabling breakthrough science where many longstanding problems can now be addressed for the first time. In this talk, we focus on kinetic simulations of the Earth's magnetosphere and the magnetic reconnection process, which is the key mechanism that breaks the protective shield of the Earth's dipole field, allowing the solar wind to enter the Earth's magnetosphere. This leads to so-called space weather, where storms on the Sun can affect space-borne and ground-based technological systems on Earth. The talk consists of three parts: (a) an overview of a new multi-scale simulation technique where each computational grid is updated based on its own unique timestep; (b) presentation of a new approach to data analysis that we refer to as Physics Mining, which entails combining data mining and computer vision algorithms with scientific visualization to extract physics from the resulting massive data sets; (c) presentation of several recent discoveries in studies of space plasmas, including the role of vortex formation and resulting turbulence in magnetized plasmas.

  6. A Comparison of Techniques for Reducing Unicast Traffic in HSR Networks

    Directory of Open Access Journals (Sweden)

    Nguyen Xuan Tien

    2015-10-01

    Full Text Available This paper investigates several existing techniques for reducing high-availability seamless redundancy (HSR) unicast traffic in HSR networks for substation automation systems (SAS). HSR is a redundancy protocol for Ethernet networks that provides duplicate frames over separate physical paths with zero recovery time. This feature makes HSR well suited to real-time and mission-critical applications such as SAS. HSR is one of the redundancy protocols selected for SAS. However, the standard HSR protocol generates a great deal of unnecessary redundant unicast traffic in connected-ring networks. This drawback degrades network performance and may cause congestion and delay. Several techniques have been proposed to reduce the redundant unicast traffic and thereby improve network performance in HSR networks. These HSR traffic reduction techniques are broadly classified into two categories based on how they reduce traffic: traffic filtering-based techniques and predefined path-based techniques. In this paper, we provide an overview and comparison of the HSR traffic reduction techniques found in the literature. The concepts, operational principles, network performance, advantages, and disadvantages of these techniques are investigated and summarized. We also compare the traffic performance of these HSR traffic reduction techniques.
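
A basic building block shared by the filtering-based techniques is duplicate discard: each frame carries a (source, sequence number) tag, and a node drops any copy it has already accepted, so the redundant frame arriving over the second ring path never propagates further. The data structures below are a simplified sketch, not any specific protocol from the survey (a real node also ages entries out of its duplicate table).

```python
class HsrNode:
    """Duplicate-discard filtering sketch for an HSR node."""

    def __init__(self):
        self.seen = set()   # (source, sequence_number) pairs already accepted

    def receive(self, source, seq):
        """Return True if this is the first copy (forward it up the stack),
        False if it is the redundant duplicate from the other ring path."""
        key = (source, seq)
        if key in self.seen:
            return False
        self.seen.add(key)
        return True
```

The first copy of frame ("A", 1) is forwarded; the duplicate of the same frame from the other port is dropped, while a new sequence number passes through.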

  7. Memory Compression Techniques for Network Address Management in MPI

    Energy Technology Data Exchange (ETDEWEB)

    Guo, Yanfei; Archer, Charles J.; Blocksome, Michael; Parker, Scott; Bland, Wesley; Raffenetti, Ken; Balaji, Pavan

    2017-05-29

    MPI allows applications to treat processes as a logical collection of integer ranks for each MPI communicator, while internally translating these logical ranks into actual network addresses. In current MPI implementations the management and lookup of such network addresses use memory sizes that are proportional to the number of processes in each communicator. In this paper, we propose a new mechanism, called AV-Rankmap, for managing such translation. AV-Rankmap takes advantage of logical patterns in rank-address mapping that most applications naturally tend to have, and it exploits the fact that some parts of network address structures are naturally more performance critical than others. It uses this information to compress the memory used for network address management. We demonstrate that AV-Rankmap can achieve performance similar to or better than that of other MPI implementations while using significantly less memory.
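
The compression idea, that most communicators map ranks to addresses by a regular logical pattern so the full table need not be stored, can be sketched by detecting an affine (base + rank·stride) mapping and keeping two integers instead of O(n) entries. This representation is an assumption for illustration only; AV-Rankmap itself handles far more general patterns and address structures.

```python
def compress_rankmap(addresses):
    """Return ("affine", base, stride, n) when rank r maps to base + r*stride;
    otherwise fall back to the explicit ("table", [...]) representation."""
    n = len(addresses)
    if n >= 2:
        base, stride = addresses[0], addresses[1] - addresses[0]
        if all(addresses[r] == base + r * stride for r in range(n)):
            return ("affine", base, stride, n)
    return ("table", list(addresses))

def lookup(compressed, rank):
    """Translate a logical rank into its network address."""
    if compressed[0] == "affine":
        _, base, stride, n = compressed
        return base + rank * stride
    return compressed[1][rank]
```

A regular mapping like [100, 104, 108, 112] compresses to two integers, while an irregular one keeps the explicit table, mirroring the paper's point that memory cost should track the pattern, not the communicator size.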

  8. Sybil Defense Techniques in Online Social Networks: A Survey

    National Research Council Canada - National Science Library

    Al-Qurishi, Muhammad; Al-Rakhami, Mabrook; Alamri, Atif; Alrubaian, Majed; Rahman, Sk Md Mizanur; Hossain, M. Shamim

    2017-01-01

    The problem of malicious activities in online social networks, such as Sybil attacks and malevolent use of fake identities, can severely affect the social activities in which users engage while online...

  9. Adverse Outcome Pathway Network Analyses: Techniques and benchmarking the AOPwiki

    Science.gov (United States)

    Abstract: As the community of toxicological researchers, risk assessors, and risk managers adopt the adverse outcome pathway (AOP) paradigm for organizing toxicological knowledge, the number and diversity of adverse outcome pathways and AOP networks are continuing to grow. This ...

  10. Coarse-graining stochastic biochemical networks: adiabaticity and fast simulations

    Energy Technology Data Exchange (ETDEWEB)

    Nemenman, Ilya [Los Alamos National Laboratory; Sinitsyn, Nikolai [Los Alamos National Laboratory; Hengartner, Nick [Los Alamos National Laboratory

    2008-01-01

    We propose a universal approach for analysis and fast simulations of stiff stochastic biochemical kinetics networks, which rests on elimination of fast chemical species without a loss of information about mesoscopic, non-Poissonian fluctuations of the slow ones. Our approach, which is similar to the Born-Oppenheimer approximation in quantum mechanics, follows from the stochastic path integral representation of the cumulant generating function of reaction events. In applications with a small number of chemical reactions, it produces analytical expressions for cumulants of chemical fluxes between the slow variables. This allows for a low-dimensional, interpretable representation and can be used for coarse-grained numerical simulation schemes with a small computational complexity and yet high accuracy. As an example, we derive the coarse-grained description for a chain of biochemical reactions, and show that the coarse-grained and the microscopic simulations are in agreement, but the coarse-grained simulations are three orders of magnitude faster.

  11. On Parallelizing Single Dynamic Simulation Using HPC Techniques and APIs of Commercial Software

    Energy Technology Data Exchange (ETDEWEB)

    Diao, Ruisheng; Jin, Shuangshuang; Howell, Frederic; Huang, Zhenyu; Wang, Lei; Wu, Di; Chen, Yousu

    2017-05-01

    Time-domain simulations are heavily used in today’s planning and operation practices to assess power system transient stability and post-transient voltage/frequency profiles following severe contingencies to comply with industry standards. Because of the increased modeling complexity, it is several times slower than real time for state-of-the-art commercial packages to complete a dynamic simulation for a large-scale model. With the growing stochastic behavior introduced by emerging technologies, the power industry has seen a growing need to perform security assessment in real time. This paper presents a parallel implementation framework to speed up a single dynamic simulation by leveraging the existing stability model library in commercial tools through their application programming interfaces (APIs). Several high performance computing (HPC) techniques are explored, such as parallelizing the calculation of generator current injection, identifying fast linear solvers for the network solution, and parallelizing data outputs when interacting with the APIs of the commercial package TSAT. The proposed method has been tested on a WECC planning base case with detailed synchronous generator models and exhibits outstanding scalable performance with sufficient accuracy.

  12. A Wireless Sensor Network with Soft Computing Localization Techniques for Track Cycling Applications

    OpenAIRE

    Sadik Kamel Gharghan; Rosdiadee Nordin; Mahamod Ismail

    2016-01-01

    In this paper, we propose two soft computing localization techniques for wireless sensor networks (WSNs). The two techniques, Adaptive Neuro-Fuzzy Inference System (ANFIS) and Artificial Neural Network (ANN), focus on a range-based localization method which relies on the measurement of the received signal strength indicator (RSSI) from the three ZigBee anchor nodes distributed throughout the track cycling field. The soft computing techniques aim to estimate the distance between bicycles moving on the...

  13. Genetic Algorithms in Wireless Networking: Techniques, Applications, and Issues

    OpenAIRE

    Mehboob, Usama; Qadir, Junaid; Ali, Salman; Vasilakos, Athanasios

    2014-01-01

    In recent times, wireless access technology is becoming increasingly commonplace due to the ease of operation and installation of untethered wireless media. The design of wireless networking is challenging due to the highly dynamic environmental condition that makes parameter optimization a complex task. Due to the dynamic, and often unknown, operating conditions, modern wireless networking standards increasingly rely on machine learning and artificial intelligence algorithms. Genetic algorit...

  14. Wireless Power Transfer Protocols in Sensor Networks: Experiments and Simulations

    Directory of Open Access Journals (Sweden)

    Sotiris Nikoletseas

    2017-04-01

    Full Text Available Rapid technological advances in the domain of Wireless Power Transfer pave the way for novel methods for power management in systems of wireless devices, and recent research works have already started considering algorithmic solutions for tackling emerging problems. In this paper, we investigate the problem of efficient and balanced Wireless Power Transfer in Wireless Sensor Networks. We employ wireless chargers that replenish the energy of network nodes. We propose two protocols that configure the activity of the chargers. One protocol performs wireless charging focused on the charging efficiency, while the other aims at proper balance of the chargers’ residual energy. We conduct detailed experiments using real devices and we validate the experimental results via larger scale simulations. We observe that, in both the experimental evaluation and the evaluation through detailed simulations, both protocols achieve their main goals. The Charging Oriented protocol achieves good charging efficiency throughout the experiment, while the Energy Balancing protocol achieves a uniform distribution of energy within the chargers.
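
The two protocols can be contrasted as two charger-selection rules: the Charging Oriented rule picks the charger with the best transfer efficiency for a node, while the Energy Balancing rule picks the charger with the most residual energy. The inverse-square efficiency model and the dictionary fields below are illustrative assumptions, not the paper's exact formulation.

```python
def pick_charger_efficiency(chargers, node_pos):
    """Charging Oriented: maximize transfer efficiency, modeled here
    (illustratively) as decaying with squared distance to the node."""
    def efficiency(ch):
        dx = ch["pos"][0] - node_pos[0]
        dy = ch["pos"][1] - node_pos[1]
        return 1.0 / (1.0 + dx * dx + dy * dy)
    return max(chargers, key=efficiency)

def pick_charger_balance(chargers):
    """Energy Balancing: keep the chargers' residual energy uniform by
    always drawing from the charger with the most energy left."""
    return max(chargers, key=lambda ch: ch["energy"])
```

For a node near a nearly depleted charger, the two rules disagree: the efficiency rule picks the close charger, while the balancing rule picks the fuller but more distant one.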

  15. Quantum versus simulated annealing in wireless interference network optimization

    Science.gov (United States)

    Wang, Chi; Chen, Huo; Jonckheere, Edmond

    2016-05-01

    Quantum annealing (QA) serves as a specialized optimizer that is able to solve many NP-hard problems and that is believed to have a theoretical advantage over simulated annealing (SA) via quantum tunneling. With the introduction of the D-Wave programmable quantum annealer, a considerable amount of effort has been devoted to detect and quantify quantum speedup. While the debate over speedup remains inconclusive as of now, instead of attempting to show general quantum advantage, here, we focus on a novel real-world application of D-Wave in wireless networking—more specifically, the scheduling of the activation of the air-links for maximum throughput subject to interference avoidance near network nodes. In addition, D-Wave implementation is made error insensitive by a novel Hamiltonian extra penalty weight adjustment that enlarges the gap and substantially reduces the occurrence of interference violations resulting from inevitable spin bias and coupling errors. The major result of this paper is that quantum annealing benefits more than simulated annealing from this gap expansion process, both in terms of ST99 speedup and network queue occupancy. It is our hope that this could become a real-world application niche where the potential benefits of quantum annealing can be objectively assessed.
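
The classical baseline in this comparison, simulated annealing on the air-link activation problem (activate as many links as possible while no two interfering links are active together), can be sketched as follows. The toy conflict graph, penalty weight, and geometric cooling schedule are illustrative assumptions.

```python
import math
import random

def sa_schedule(n_links, conflicts, steps=5000, t0=2.0, alpha=0.999, seed=0):
    """Simulated annealing for interference-aware link activation.

    state: 0/1 activation vector; the energy rewards throughput (active
    links) and heavily penalizes violated interference constraints,
    mirroring the penalty-weight formulation used on the annealer.
    """
    def energy(state):
        active = sum(state)
        violations = sum(state[i] and state[j] for i, j in conflicts)
        return 10 * violations - active

    rng = random.Random(seed)
    state = [0] * n_links
    best, best_e = state[:], energy(state)
    t = t0
    for _ in range(steps):
        cand = state[:]
        cand[rng.randrange(n_links)] ^= 1          # flip one link's activation
        d_e = energy(cand) - energy(state)
        if d_e <= 0 or rng.random() < math.exp(-d_e / t):
            state = cand
            if energy(state) < best_e:
                best, best_e = state[:], energy(state)
        t *= alpha
    return best
```

On a 3-link path graph where links 0-1 and 1-2 interfere, the optimum activates the two outer links and leaves the middle one off.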

  16. Emulation of reionization simulations for Bayesian inference of astrophysics parameters using neural networks

    Science.gov (United States)

    Schmit, C. J.; Pritchard, J. R.

    2018-03-01

    Next generation radio experiments such as LOFAR, HERA, and SKA are expected to probe the Epoch of Reionization (EoR) and claim a first direct detection of the cosmic 21cm signal within the next decade. Data volumes will be enormous and can thus potentially revolutionize our understanding of the early Universe and galaxy formation. However, numerical modelling of the EoR can be prohibitively expensive for Bayesian parameter inference, and how to optimally extract information from incoming data is currently unclear. Emulation techniques for fast model evaluations have recently been proposed as a way to bypass costly simulations. We consider the use of artificial neural networks as a blind emulation technique. We study the impact of training duration and training set size on the quality of the network prediction and the resulting best-fitting values of a parameter search. A direct comparison is drawn between our emulation technique and an equivalent analysis using 21CMMC. We find good predictive capabilities of our network using training sets of as low as 100 model evaluations, which is within the capabilities of fully numerical radiative transfer codes.

  17. Validating module network learning algorithms using simulated data.

    Science.gov (United States)

    Michoel, Tom; Maere, Steven; Bonnet, Eric; Joshi, Anagha; Saeys, Yvan; Van den Bulcke, Tim; Van Leemput, Koenraad; van Remortel, Piet; Kuiper, Martin; Marchal, Kathleen; Van de Peer, Yves

    2007-05-03

    In recent years, several authors have used probabilistic graphical models to learn expression modules and their regulatory programs from gene expression data. Despite the demonstrated success of such algorithms in uncovering biologically relevant regulatory relations, further developments in the area are hampered by a lack of tools to compare the performance of alternative module network learning strategies. Here, we demonstrate the use of the synthetic data generator SynTReN for the purpose of testing and comparing module network learning algorithms. We introduce a software package for learning module networks, called LeMoNe, which incorporates a novel strategy for learning regulatory programs. Novelties include the use of a bottom-up Bayesian hierarchical clustering to construct the regulatory programs, and the use of a conditional entropy measure to assign regulators to the regulation program nodes. Using SynTReN data, we test the performance of LeMoNe in a completely controlled situation and assess the effect of the methodological changes we made with respect to an existing software package, namely Genomica. Additionally, we assess the effect of various parameters, such as the size of the data set and the amount of noise, on the inference performance. Overall, application of Genomica and LeMoNe to simulated data sets gave comparable results. However, LeMoNe offers some advantages, one of them being that the learning process is considerably faster for larger data sets. Additionally, we show that the location of the regulators in the LeMoNe regulation programs and their conditional entropy may be used to prioritize regulators for functional validation, and that the combination of the bottom-up clustering strategy with the conditional entropy-based assignment of regulators improves the handling of missing or hidden regulators. We show that data simulators such as SynTReN are very well suited for the purpose of developing, testing and improving module network

  18. Simulation and prediction for energy dissipaters and stilling basins design using artificial intelligence technique

    Directory of Open Access Journals (Sweden)

    Mostafa Ahmed Moawad Abdeen

    2015-12-01

    Full Text Available Water with large velocities can cause considerable damage to channels whose beds are composed of natural earth materials. Several stilling basins and energy dissipating devices have been designed in conjunction with spillways and outlet works to avoid damage to canal structures. In addition, many experimental and numerical studies have been performed to investigate the accurate design of these stilling basins and energy dissipaters. The current study aims to introduce the artificial intelligence technique as a new modeling tool for predicting the accurate design of stilling basins. Specifically, artificial neural networks (ANNs) are utilized in the current study, in conjunction with experimental data, to predict the length of the hydraulic jumps occurring in spillways, so that the stilling basin dimensions can be designed for adequate energy dissipation. The current study describes in detail the development of different ANN models to accurately predict the hydraulic jump lengths obtained from different experimental studies. The results obtained from implementing these models showed that the ANN technique was very successful in simulating the hydraulic jump characteristics occurring in stilling basins. Therefore, it can be safely utilized in the design of these basins, as the ANN approach involves minimal computational and financial effort compared with experimental work and traditional numerical techniques such as finite differences or finite elements.
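
The physics the ANN models approximate has a classical closed form worth keeping alongside: the Bélanger equation gives the sequent (post-jump) depth in a rectangular channel, and jump length is then estimated with an empirical coefficient. The coefficient 6.9 on (y2 − y1) is one common literature value (reported coefficients range roughly from 5 to 7), so treat the length as an estimate, not the study's formula.

```python
import math

def sequent_depth(y1, fr1):
    """Belanger equation: sequent depth y2 after a hydraulic jump in a
    rectangular channel, from upstream depth y1 and Froude number Fr1."""
    return 0.5 * y1 * (math.sqrt(1.0 + 8.0 * fr1 ** 2) - 1.0)

def jump_length(y1, fr1):
    """Empirical jump length L ~ 6.9 * (y2 - y1); the coefficient is an
    assumption taken from common literature values, not a fixed constant."""
    return 6.9 * (sequent_depth(y1, fr1) - y1)
```

At Fr1 = 1 the flow is critical and no jump forms (y2 = y1); for Fr1 = 4 and y1 = 0.5 m the sequent depth is about 2.59 m.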

  19. Network condition simulator for benchmarking sewer deterioration models.

    Science.gov (United States)

    Scheidegger, A; Hug, T; Rieckermann, J; Maurer, M

    2011-10-15

    An accurate description of aging and deterioration of urban drainage systems is necessary for optimal investment and rehabilitation planning. Due to a general lack of suitable datasets, network condition models are rarely validated, and if so with varying levels of success. We therefore propose a novel network condition simulator (NetCoS) that produces a synthetic population of sewer sections with a given condition-class distribution. NetCoS can be used to benchmark deterioration models and guide utilities in the selection of appropriate models and data management strategies. The underlying probabilistic model considers three main processes: a) deterioration, b) replacement policy, and c) expansion of the sewer network. The deterioration model features a semi-Markov chain that uses transition probabilities based on user-defined survival functions. The replacement policy is approximated with a condition-class-dependent probability of replacing a sewer pipe. The model then simulates the course of the sewer sections from the installation of the first line to the present, adding new pipes based on the defined replacement and expansion program. We demonstrate the usefulness of NetCoS in two examples where we quantify the influence of incomplete data and inspection frequency on the parameter estimation of a cohort survival model and a Markov deterioration model. Our results show that typical available sewer inventory data with discarded historical data overestimate the average life expectancy by up to 200 years. Although NetCoS cannot prove the validity of a particular deterioration model, it is useful to reveal possible limitations and shortcomings and to quantify the effects of missing or uncertain data. Future developments should include additional processes, for example to investigate the long-term effect of pipe rehabilitation measures, such as inliners. Copyright © 2011 Elsevier Ltd. All rights reserved.
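The deterioration core of such a simulator can be sketched with a semi-Markov loop: sojourn times in each condition class are drawn from a survival function (Weibull here, an assumption), and replacement resets a pipe to class 1 with a condition-class-dependent yearly probability. All parameter values are illustrative, not NetCoS defaults:

```python
import math
import random

def weibull_sojourn(shape, scale):
    """Sample a sojourn time (years) in the current condition class."""
    return scale * (-math.log(1.0 - random.random())) ** (1.0 / shape)

def simulate_pipe(install_year, end_year, shapes, scales, p_replace):
    """Condition class (1 = new .. 4 = worst) of one pipe at end_year."""
    year, cond = install_year, 1
    next_jump = year + weibull_sojourn(shapes[cond], scales[cond])
    while year < end_year:
        year += 1
        if cond < 4 and year >= next_jump:          # deterioration step
            cond += 1
            if cond < 4:
                next_jump = year + weibull_sojourn(shapes[cond], scales[cond])
        if random.random() < p_replace[cond]:       # replacement policy
            cond = 1
            next_jump = year + weibull_sojourn(shapes[cond], scales[cond])
    return cond
```

Sampling many pipes with staggered installation years then yields the synthetic condition-class population against which a deterioration model can be benchmarked.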

  20. Induction of a transient acidosis in the rumen simulation technique.

    Science.gov (United States)

    Eger, M; Riede, S; Breves, G

    2017-03-16

    Feeding high concentrate diets to cattle results in an enhanced production of short-chain fatty acids by the micro-organisms in the rumen. Excessive fermentation might result in subclinical or clinical rumen acidosis, characterized by low pH, alterations in the microbial community and lactate production. Here, we provide an in vitro model of a severe rumen acidosis. A transient acidosis was induced in the rumen simulation technique by lowering bicarbonate, dihydrogen phosphate and hydrogen phosphate concentrations in the artificial saliva while providing a concentrate-to-forage ratio of 70:30. The experiment consisted of an equilibration period of 7 days, a first control period of 5 days, the acidosis period of 5 days and a second control period of 5 days. During acidosis induction, pH decreased stepwise until it ranged below 5.0 on the last day of acidosis (day 17). This was accompanied by an increase in lactate production, reaching 11.3 mM at day 17. The daily production of acetate, propionate and butyrate was reduced at the end of the acidosis period. Gas production (methane and carbon dioxide) and NH3-N concentration reached a minimum 2 days after terminating the acidosis challenge. While the initial pH was already restored 1 day after acidosis, alterations in the mentioned fermentation parameters lasted longer. However, by the end of the experiment, all parameters had recovered. An acidosis-induced alteration in the microbial community of bacteria and archaea was revealed by single-strand conformation polymorphism. For bacteria, the pre-acidotic community could be re-established within 5 days, but not for archaea. This study provides an in vitro model for a transient rumen acidosis including biochemical and microbial changes, which might be used for testing feeding strategies or feed additives influencing rumen acidosis. Journal of Animal Physiology and Animal Nutrition © 2017 Blackwell Verlag GmbH.

  1. Teaching Behavioral Modeling and Simulation Techniques for Power Electronics Courses

    Science.gov (United States)

    Abramovitz, A.

    2011-01-01

    This paper suggests a pedagogical approach to teaching the subject of behavioral modeling of switch-mode power electronics systems through simulation by general-purpose electronic circuit simulators. The methodology is oriented toward electrical engineering (EE) students at the undergraduate level, enrolled in courses such as "Power…

  2. Determine the feasibility of techniques for simulating coal dust explosions

    CSIR Research Space (South Africa)

    Kirsten, JT

    1994-07-01

    Full Text Available The primary objective of this work is to assess the feasibility of reliably simulating the coal dust explosion process taking place in the Kloppersbos tunnel with a computer model. Secondary objectives are to investigate the viability of simulating...

  3. Harmonic Mitigation Techniques Applied to Power Distribution Networks

    Directory of Open Access Journals (Sweden)

    Hussein A. Kazem

    2013-01-01

    Full Text Available A growing number of harmonic mitigation techniques are now available, including active and passive methods, and the selection of the best-suited technique for a particular case can be a complicated decision-making process. The performance of some of these techniques is largely dependent on system conditions, while others require extensive system analysis to prevent resonance problems and capacitor failure. This paper presents a classification of the various available harmonic mitigation techniques, with the aim of providing a review of harmonic mitigation methods for researchers, designers, and engineers dealing with power distribution systems.
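As one concrete passive example from the classification above, a single-tuned shunt filter is sized by choosing the capacitor from the reactive-power requirement and then tuning the series inductor to the harmonic of interest. The 400 V / 50 Hz / 5th-harmonic / 50 kvar numbers below are assumed for illustration:

```python
import math

def tuned_filter(v_ll, f, harmonic, qvar):
    """Return (C, L) for a single-tuned passive shunt filter.
    v_ll: line-line voltage (V), f: fundamental (Hz), qvar: reactive power (var)."""
    xc = v_ll ** 2 / qvar                # capacitive reactance at fundamental
    c = 1.0 / (2.0 * math.pi * f * xc)
    l = 1.0 / ((2.0 * math.pi * harmonic * f) ** 2 * c)   # series tuning inductor
    return c, l

c, l = tuned_filter(400.0, 50.0, 5, 50e3)   # resonates at 250 Hz by construction
```

In practice a small detuning factor and damping resistance are added, which is exactly the kind of system-dependent adjustment the survey discusses.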

  4. Developing Visualization Techniques for Semantics-based Information Networks

    Science.gov (United States)

    Keller, Richard M.; Hall, David R.

    2003-01-01

    Information systems incorporating complex network structured information spaces with a semantic underpinning - such as hypermedia networks, semantic networks, topic maps, and concept maps - are being deployed to solve some of NASA's critical information management problems. This paper describes some of the human interaction and navigation problems associated with complex semantic information spaces and describes a set of new visual interface approaches to address these problems. A key strategy is to leverage semantic knowledge represented within these information spaces to construct abstractions and views that will be meaningful to the human user. Human-computer interaction methodologies will guide the development and evaluation of these approaches, which will benefit deployed NASA systems and also apply to information systems based on the emerging Semantic Web.

  5. Data mining techniques in sensor networks summarization, interpolation and surveillance

    CERN Document Server

    Appice, Annalisa; Fumarola, Fabio; Malerba, Donato

    2013-01-01

    Sensor networks comprise a number of sensors installed across a spatially distributed network, which gather information and periodically feed a central server with the measured data. The server monitors the data, issues possible alarms and computes fast aggregates. As data analysis requests may concern both present and past data, the server is forced to store the entire stream. But the limited storage capacity of a server may reduce the amount of data stored on the disk. One solution is to compute summaries of the data as it arrives, and to use these summaries to interpolate the real data.
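The summarize-then-interpolate idea can be sketched with a constant-memory stream summary: the server keeps only running statistics per sensor window (Welford's online algorithm here) and answers queries from them instead of from the raw stream. The class below is a generic illustration, not code from the book:

```python
class WindowSummary:
    """O(1)-memory summary of one sensor's measurement window."""
    def __init__(self):
        self.n, self.mean, self.m2 = 0, 0.0, 0.0

    def add(self, x):
        # Welford's online update: numerically stable running mean/variance
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (x - self.mean)

    def variance(self):
        return self.m2 / self.n if self.n else 0.0

s = WindowSummary()
for reading in [1.0, 2.0, 3.0, 4.0, 5.0]:
    s.add(reading)
# s.mean == 3.0, s.variance() == 2.0
```

Queries over stored windows can then be answered (or gaps interpolated) from these few numbers rather than from every raw sample.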

  6. A Multilevel Adaptive Reaction-splitting Simulation Method for Stochastic Reaction Networks

    KAUST Repository

    Moraes, Alvaro

    2016-07-07

    In this work, we present a novel multilevel Monte Carlo method for kinetic simulation of stochastic reaction networks characterized by having simultaneously fast and slow reaction channels. To produce efficient simulations, our method adaptively classifies the reaction channels into fast and slow channels. To this end, we first introduce a state-dependent quantity named the level of activity of a reaction channel. Then, we propose a low-cost heuristic that allows us to adaptively split the set of reaction channels into two subsets characterized by either a high or a low level of activity. Based on a time-splitting technique, the increments associated with high-activity channels are simulated using the tau-leap method, while those associated with low-activity channels are simulated using an exact method. This path simulation technique is amenable to coupled path generation and a corresponding multilevel Monte Carlo algorithm. To estimate expected values of observables of the system at a prescribed final time, our method bounds the global computational error to be below a prescribed tolerance, TOL, within a given confidence level. This goal is achieved with a computational complexity of order O(TOL^-2), the same as with a pathwise-exact method, but with a smaller constant. We also present a novel low-cost control variate technique based on the stochastic time change representation by Kurtz, showing its performance on a numerical example. We present two numerical examples extracted from the literature that show how the reaction-splitting method obtains substantial gains with respect to the standard stochastic simulation algorithm and the multilevel Monte Carlo approach by Anderson and Higham. © 2016 Society for Industrial and Applied Mathematics.
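For reference, the exact method underlying both the slow-channel updates and the standard stochastic simulation algorithm mentioned above is Gillespie's SSA. The sketch below applies it to a two-channel birth-death network with assumed rates; the paper's contribution is to tau-leap the high-activity channels instead of stepping every reaction exactly like this:

```python
import math
import random

def ssa_birth_death(x0, t_end, k_birth=10.0, k_death=0.1, seed=1):
    """Exact (Gillespie) simulation of 0 -> X at rate k_birth, X -> 0 at k_death*x."""
    rng = random.Random(seed)
    x, t = x0, 0.0
    while t < t_end:
        a1, a2 = k_birth, k_death * x             # channel propensities
        a0 = a1 + a2
        if a0 == 0.0:
            break
        t += -math.log(1.0 - rng.random()) / a0   # exponential waiting time
        if rng.random() * a0 < a1:                # choose the firing channel
            x += 1
        else:
            x -= 1
    return x

final = ssa_birth_death(0, 500.0)   # fluctuates around k_birth/k_death = 100
```

When `k_death * x` becomes large this channel fires constantly, which is precisely the "high level of activity" regime where exact stepping becomes expensive and tau-leaping pays off.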

  7. Large-scale computing techniques for complex system simulations

    CERN Document Server

    Dubitzky, Werner; Schott, Bernard

    2012-01-01

    Complex systems modeling and simulation approaches are being adopted in a growing number of sectors, including finance, economics, biology, astronomy, and many more. Technologies ranging from distributed computing to specialized hardware are explored and developed to address the computational requirements arising in complex systems simulations. The aim of this book is to present a representative overview of contemporary large-scale computing technologies in the context of complex systems simulations applications. The intention is to identify new research directions in this field and

  8. Neural network simulation of the industrial producer price index dynamical series

    OpenAIRE

    Soshnikov, L. E.

    2013-01-01

    This paper is devoted to the simulation and forecasting of dynamical series of economic indicators. Multilayer perceptron and radial basis function neural networks have been used. The neural network model results are compared with econometric models.

  9. Knapsack--TOPSIS Technique for Vertical Handover in Heterogeneous Wireless Network.

    Directory of Open Access Journals (Sweden)

    E M Malathy

    Full Text Available In a heterogeneous wireless network, handover techniques are designed to facilitate anywhere/anytime service continuity for mobile users. Consistent best-possible access to a network with widely varying network characteristics requires seamless mobility management techniques. Hence, the vertical handover process imposes important technical challenges. Handover decisions are triggered for continuous connectivity of mobile terminals. However, bad network selection and overload conditions in the chosen network can cause fallout in the form of handover failure. In order to maintain the required Quality of Service during the handover process, decision algorithms should incorporate intelligent techniques. In this paper, a new and efficient vertical handover mechanism is implemented using a dynamic programming method from the operations research discipline. This dynamic programming approach, which is integrated with the Technique for Order Preference by Similarity to Ideal Solution (TOPSIS) method, provides the mobile user with the best handover decisions. Moreover, in this proposed handover algorithm a deterministic approach which divides the network into zones is incorporated into the network server in order to derive an optimal solution. The study revealed that this method is found to achieve better performance and QoS support to users and to greatly reduce handover failures when compared to the traditional TOPSIS method. The decision arrived at the zone gateway using this operations research analytical method (known as the dynamic programming knapsack approach together with the Technique for Order Preference by Similarity to Ideal Solution) yields remarkably better results in terms of network performance measures such as throughput and delay.

  10. Knapsack--TOPSIS Technique for Vertical Handover in Heterogeneous Wireless Network.

    Science.gov (United States)

    Malathy, E M; Muthuswamy, Vijayalakshmi

    2015-01-01

    In a heterogeneous wireless network, handover techniques are designed to facilitate anywhere/anytime service continuity for mobile users. Consistent best-possible access to a network with widely varying network characteristics requires seamless mobility management techniques. Hence, the vertical handover process imposes important technical challenges. Handover decisions are triggered for continuous connectivity of mobile terminals. However, bad network selection and overload conditions in the chosen network can cause fallout in the form of handover failure. In order to maintain the required Quality of Service during the handover process, decision algorithms should incorporate intelligent techniques. In this paper, a new and efficient vertical handover mechanism is implemented using a dynamic programming method from the operations research discipline. This dynamic programming approach, which is integrated with the Technique for Order Preference by Similarity to Ideal Solution (TOPSIS) method, provides the mobile user with the best handover decisions. Moreover, in this proposed handover algorithm a deterministic approach which divides the network into zones is incorporated into the network server in order to derive an optimal solution. The study revealed that this method is found to achieve better performance and QoS support to users and to greatly reduce handover failures when compared to the traditional TOPSIS method. The decision arrived at the zone gateway using this operations research analytical method (known as the dynamic programming knapsack approach together with the Technique for Order Preference by Similarity to Ideal Solution) yields remarkably better results in terms of network performance measures such as throughput and delay.
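The TOPSIS step that both records describe can be sketched directly: normalize the attribute matrix, weight it, find the ideal and anti-ideal points, and rank alternatives by relative closeness. The candidate-network attributes and weights below are invented for illustration:

```python
import math

def topsis(matrix, weights, benefit):
    """Score alternatives (rows) against criteria (columns).
    benefit[j] is True for larger-is-better criteria, False for costs."""
    cols = list(zip(*matrix))
    norms = [math.sqrt(sum(v * v for v in c)) for c in cols]
    weighted = [[w * x / nm for x, w, nm in zip(row, weights, norms)]
                for row in matrix]
    wcols = list(zip(*weighted))
    ideal = [max(c) if b else min(c) for c, b in zip(wcols, benefit)]
    worst = [min(c) if b else max(c) for c, b in zip(wcols, benefit)]
    scores = []
    for row in weighted:
        d_pos = math.sqrt(sum((x - i) ** 2 for x, i in zip(row, ideal)))
        d_neg = math.sqrt(sum((x - w) ** 2 for x, w in zip(row, worst)))
        scores.append(d_neg / (d_pos + d_neg))      # closeness coefficient
    return scores

# candidate networks rated on (bandwidth up, delay down, cost down)
nets = [[54.0, 40.0, 5.0], [11.0, 30.0, 2.0], [100.0, 60.0, 8.0]]
ranking = topsis(nets, [0.5, 0.3, 0.2], [True, False, False])
```

The paper's knapsack stage would then constrain which of the top-ranked networks a zone can actually admit; that combination is not reproduced here.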

  11. Dynamic Beamforming for Three-Dimensional MIMO Technique in LTE-Advanced Networks

    Directory of Open Access Journals (Sweden)

    Yan Li

    2013-01-01

    Full Text Available MIMO systems with large numbers of antennas, referred to as large MIMO or massive MIMO, have drawn increased attention as they enable significant throughput and coverage improvements in LTE-Advanced networks. However, deploying a huge number of antennas in both transmitters and receivers was a great challenge in the past few years. Three-dimensional MIMO (3D MIMO) is introduced as a promising technique in massive MIMO networks to enhance cellular performance by deploying antenna elements in both the horizontal and vertical dimensions. Radio propagation to user equipments (UEs) is considered only in the horizontal domain when applying conventional 2D beamforming. In this paper, a dynamic beamforming algorithm is proposed in which the vertical domain of the antenna is fully considered and the beamforming vector is obtained according to UEs’ horizontal and vertical directions. Compared with the conventional 2D beamforming algorithm, the throughput of cell-edge UEs and cell-center UEs can be improved by the proposed algorithm. System-level simulation is performed to evaluate the proposed algorithm. In addition, the impacts of downtilt and intersite distance (ISD) on spectral efficiency and cell coverage are explored.

  12. TopoGen: A Network Topology Generation Architecture with application to automating simulations of Software Defined Networks

    CERN Document Server

    Laurito, Andres; The ATLAS collaboration

    2017-01-01

    Simulation is an important tool to validate the performance impact of control decisions in Software Defined Networks (SDN). Yet, the manual modeling of complex topologies that may change often during a design process can be a tedious error-prone task. We present TopoGen, a general purpose architecture and tool for systematic translation and generation of network topologies. TopoGen can be used to generate network simulation models automatically by querying information available at diverse sources, notably SDN controllers. The DEVS modeling and simulation framework facilitates a systematic translation of structured knowledge about a network topology into a formal modular and hierarchical coupling of preexisting or new models of network entities (physical or logical). TopoGen can be flexibly extended with new parsers and generators to grow its scope of applicability. This permits to design arbitrary workflows of topology transformations. We tested TopoGen in a network engineering project for the ATLAS detector ...

  13. TopoGen: A Network Topology Generation Architecture with application to automating simulations of Software Defined Networks

    CERN Document Server

    Laurito, Andres; The ATLAS collaboration

    2018-01-01

    Simulation is an important tool to validate the performance impact of control decisions in Software Defined Networks (SDN). Yet, the manual modeling of complex topologies that may change often during a design process can be a tedious error-prone task. We present TopoGen, a general purpose architecture and tool for systematic translation and generation of network topologies. TopoGen can be used to generate network simulation models automatically by querying information available at diverse sources, notably SDN controllers. The DEVS modeling and simulation framework facilitates a systematic translation of structured knowledge about a network topology into a formal modular and hierarchical coupling of preexisting or new models of network entities (physical or logical). TopoGen can be flexibly extended with new parsers and generators to grow its scope of applicability. This permits to design arbitrary workflows of topology transformations. We tested TopoGen in a network engineering project for the ATLAS detector ...

  14. ATLAS trigger simulation with legacy code using virtualization techniques

    CERN Document Server

    Galster, G; The ATLAS collaboration; Wiedenmann, W

    2014-01-01

    Several scenarios, both present and future, require re-simulation of the trigger response in ATLAS. While software for the detector response simulation and event reconstruction is allowed to change and improve, the trigger response simulation has to reflect the conditions at which data was taken. This poses a massive maintenance and data preservation problem. Several strategies have been considered and a proof-of-concept model using CernVM has been developed. While virtualization with CernVM elegantly solves several aspects of the data preservation problem, the low maturity of contextualization as well as incompatibilities in the currently used data format introduce new challenges. In these proceedings, these challenges, their current solutions and the proof-of-concept model for precise trigger simulation are discussed.

  15. 360-degree videos: a new visualization technique for astrophysical simulations

    Science.gov (United States)

    Russell, Christopher M. P.

    2017-11-01

    360-degree videos are a new type of movie that renders over all 4π steradians. Video sharing sites such as YouTube now allow this unique content to be shared via virtual reality (VR) goggles, hand-held smartphones/tablets, and computers. Creating 360° videos from astrophysical simulations is not only a new way to view these simulations as you are immersed in them, but is also a way to create engaging content for outreach to the public. We present what we believe is the first 360° video of an astrophysical simulation: a hydrodynamics calculation of the central parsec of the Galactic centre. We also describe how to create such movies, and briefly comment on what new science can be extracted from astrophysical simulations using 360° videos.
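Rendering such a frame amounts to mapping every output pixel through the equirectangular projection to a viewing direction on the unit sphere. This sketch of the inverse mapping is a generic illustration (frame size and axis conventions are assumptions), not the authors' pipeline:

```python
import math

def pixel_to_direction(px, py, width, height):
    """Unit view vector for pixel (px, py) of an equirectangular frame."""
    lon = (px / width) * 2.0 * math.pi - math.pi     # -pi..pi across the width
    lat = math.pi / 2.0 - (py / height) * math.pi    # +pi/2 at the top row
    return (math.cos(lat) * math.cos(lon),
            math.cos(lat) * math.sin(lon),
            math.sin(lat))

center = pixel_to_direction(960, 540, 1920, 1080)    # looks straight ahead
```

A 360° renderer evaluates the simulation's radiation field along each such direction from the camera position, instead of through a single planar viewport.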

  16. An Initialization Technique for the Waveform-Relaxation Circuit Simulation

    OpenAIRE

    Habib, S. E.-D.; Al-Karim, G. J.

    1999-01-01

    This paper reports the development of the Cairo University Waveform Relaxation (CUWORX) simulator. In order to accelerate the convergence of the waveform relaxation (WR) in the presence of logic feedback, CUWORX is initialized via a logic simulator. This logic initialization scheme is shown to be highly effective for digital synchronous circuits. Additionally, this logic initialization scheme fully preserves the multi-rate properties of the WR algorithm.

  17. Assessment of Software Modeling Techniques for Wireless Sensor Networks: A Survey

    Directory of Open Access Journals (Sweden)

    John Khalil Jacoub

    2012-03-01

    Full Text Available Wireless Sensor Networks (WSNs) monitor environmental phenomena and in some cases react in response to the observed phenomena. The distributed nature of WSNs and the interaction between software and hardware components make it difficult to correctly design and develop WSN systems. One solution to the WSN design challenges is system modeling. In this paper we present a survey of nine WSN modeling techniques and show how each technique models different parts of the system such as sensor behavior, sensor data and hardware. Furthermore, we consider how each modeling technique represents the network behavior and network topology. We also consider the available supporting tools for each of the modeling techniques. Based on the survey, we classify the modeling techniques and derive examples of the surveyed modeling techniques using the SensIV system.

  18. QoS Provisioning Techniques for Future Fiber-Wireless (FiWi) Access Networks

    Directory of Open Access Journals (Sweden)

    Martin Maier

    2010-04-01

    Full Text Available A plethora of enabling optical and wireless access-metro network technologies have been emerging that can be used to build future-proof bimodal fiber-wireless (FiWi) networks. Hybrid FiWi networks aim at providing wired and wireless quad-play services over the same infrastructure simultaneously and hold great promise to mitigate the digital divide and change the way we live and work by replacing commuting with teleworking. After overviewing enabling optical and wireless network technologies and their QoS provisioning techniques, we elaborate on enabling radio-over-fiber (RoF) and radio-and-fiber (R&F) technologies. We describe and investigate new QoS provisioning techniques for future FiWi networks, ranging from traffic class mapping, scheduling, and resource management to advanced aggregation techniques, congestion control, and layer-2 path selection algorithms.

  19. Hydrogen adsorption and desorption with 3D silicon nanotube-network and film-network structures: Monte Carlo simulations

    Science.gov (United States)

    Li, Ming; Huang, Xiaobo; Kang, Zhan

    2015-08-01

    Hydrogen is clean, sustainable, and renewable, and is thus viewed as a promising energy carrier. However, its industrial utilization is greatly hampered by the lack of effective hydrogen storage and release methods. Carbon nanotubes (CNTs) were viewed as one of the potential hydrogen containers, but it has been proved that pure CNTs cannot attain the desired target capacity of hydrogen storage. In this paper, we present a numerical study on the material-driven and structure-driven hydrogen adsorption of 3D silicon networks and propose a deformation-driven hydrogen desorption approach based on molecular simulations. Two types of 3D nanostructures, silicon nanotube-network (Si-NN) and silicon film-network (Si-FN), are first investigated in terms of hydrogen adsorption and desorption capacity with grand canonical Monte Carlo simulations. It is revealed that the hydrogen storage capacity is determined by the lithium doping ratio and geometrical parameters, and the maximum hydrogen uptake can be achieved by a 3D nanostructure with optimal configuration and doping ratio obtained through a design optimization technique. For hydrogen desorption, a mechanical-deformation-driven hydrogen-release approach is proposed. Compared with temperature/pressure-change-induced hydrogen desorption methods, the proposed approach is so effective that nearly complete hydrogen desorption can be achieved by Si-FN nanostructures under sufficient compression but without structural failure observed. The approach is also reversible, since the mechanical deformation in Si-FN nanostructures can be elastically recovered, which suggests good reusability. This study may shed light on the mechanism of hydrogen adsorption and desorption and thus provide useful guidance toward the engineering design of microstructural hydrogen (or other gas) adsorption materials.
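The grand canonical Monte Carlo machinery behind such uptake calculations can be reduced to a toy lattice-gas version: attempt insertions and deletions, accepting them with the Metropolis rule on the grand-potential change. Site energy, chemical potential and temperature below are assumed illustrative values, not the paper's silicon force field:

```python
import math
import random

def gcmc_coverage(n_sites=200, mu=-0.5, eps=-1.0, kT=0.5, steps=20000, seed=2):
    """Equilibrium fractional occupancy of a non-interacting adsorption lattice."""
    rng = random.Random(seed)
    occupied = [False] * n_sites
    for _ in range(steps):
        i = rng.randrange(n_sites)
        # grand-potential change for flipping site i (deletion or insertion)
        d_omega = -(eps - mu) if occupied[i] else (eps - mu)
        if d_omega <= 0.0 or rng.random() < math.exp(-d_omega / kT):
            occupied[i] = not occupied[i]
    return sum(occupied) / n_sites

coverage = gcmc_coverage()   # analytic Langmuir value here is e/(1+e), about 0.73
```

The deformation-driven desorption studied in the paper corresponds to making `eps` less attractive as the structure is compressed, which shifts this equilibrium toward release.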

  20. A novel neural network-based technique for smart gas sensors operating in a dynamic environment.

    Science.gov (United States)

    Baha, Hakim; Dibi, Zohir

    2009-01-01

    Thanks to their high sensitivity and low cost, metal oxide gas sensors (MOX) are widely used in gas detection, although they present well-known problems (lack of selectivity and environmental effects…). We present in this paper a novel neural network-based technique to remedy these problems. The idea is to create intelligent models; the first one, called the corrector, can automatically linearize a sensor's response characteristics and eliminate its dependency on the environmental parameters. The corrector's responses are processed by the second intelligent model, which has the role of discriminating exactly the detected gas (nature and concentration). The gas sensors used are of the industrial resistive kind (TGS8xx, by Figaro Engineering). The MATLAB environment is used during the design phase and optimization. The sensor models, the corrector, and the selective model were implemented and tested in the PSPICE simulator. The sensor model accurately expresses the nonlinear character of the response and the dependence on temperature and relative humidity in addition to the gas nature dependency. The corrector linearizes and compensates the sensors' responses. The method discriminates qualitatively and quantitatively between seven gases. The advantage of the method is that it uses a small representative database, so we can easily implement the model in an electrical simulator. This method can be extended to other sensors.

  1. Projecting impacts of climate change on water availability using artificial neural network techniques

    Science.gov (United States)

    Swain, Eric D.; Gomez-Fragoso, Julieta; Torres-Gonzalez, Sigfredo

    2017-01-01

    Lago Loíza reservoir in east-central Puerto Rico is one of the primary sources of public water supply for the San Juan metropolitan area. To evaluate and predict the Lago Loíza water budget, an artificial neural network (ANN) technique is trained to predict river inflows. A method is developed to combine ANN-predicted daily flows with ANN-predicted 30-day cumulative flows to improve flow estimates. The ANN application trains well for representing 2007–2012 and the drier 1994–1997 periods. Rainfall data downscaled from global circulation model (GCM) simulations are used to predict 2050–2055 conditions. Evapotranspiration is estimated with the Hargreaves equation using minimum and maximum air temperatures from the downscaled GCM data. These simulated 2050–2055 river flows are input to a water budget formulation for the Lago Loíza reservoir for comparison with 2007–2012. The ANN scenarios require far less computational effort than a numerical model application, yet produce results with sufficient accuracy to evaluate and compare hydrologic scenarios. This hydrologic tool will be useful for future evaluations of the Lago Loíza reservoir and water supply to the San Juan metropolitan area.

  2. A Novel Neural Network-Based Technique for Smart Gas Sensors Operating in a Dynamic Environment

    Directory of Open Access Journals (Sweden)

    Zohir Dibi

    2009-11-01

    Full Text Available Thanks to their high sensitivity and low cost, metal oxide gas sensors (MOX) are widely used in gas detection, although they present well-known problems (lack of selectivity and environmental effects…). We present in this paper a novel neural network-based technique to remedy these problems. The idea is to create intelligent models; the first one, called the corrector, can automatically linearize a sensor’s response characteristics and eliminate its dependency on the environmental parameters. The corrector’s responses are processed by the second intelligent model, which has the role of discriminating exactly the detected gas (nature and concentration). The gas sensors used are of the industrial resistive kind (TGS8xx, by Figaro Engineering). The MATLAB environment is used during the design phase and optimization. The sensor models, the corrector, and the selective model were implemented and tested in the PSPICE simulator. The sensor model accurately expresses the nonlinear character of the response and the dependence on temperature and relative humidity in addition to the gas nature dependency. The corrector linearizes and compensates the sensors’ responses. The method discriminates qualitatively and quantitatively between seven gases. The advantage of the method is that it uses a small representative database, so we can easily implement the model in an electrical simulator. This method can be extended to other sensors.

  3. Sensorless Speed/Torque Control of DC Machine Using Artificial Neural Network Technique

    Directory of Open Access Journals (Sweden)

    Rakan Kh. Antar

    2017-12-01

    Full Text Available In this paper, an Artificial Neural Network (ANN) technique is implemented to improve the speed and torque control of a separately excited DC machine drive. The sensorless speed and torque scheme based on the ANN is estimated adaptively. The proposed controller is designed to estimate rotor speed and mechanical load torque as a Model Reference Adaptive System (MRAS) method for the DC machine. The DC drive system consists of a four-quadrant DC/DC chopper with MOSFET transistors, the ANN, logic gates and routing circuits. The DC drive circuit is designed, evaluated and modeled in Matlab/Simulink in the forward and reverse operation modes as a motor and generator, respectively. The DC drive system is simulated at different speed values (±1200 rpm) and mechanical torque values (±7 N.m) in steady-state and dynamic conditions. The simulation results illustrate the effectiveness of the proposed controller without speed or torque sensors.
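The quantity a sensorless observer like this must recover can be illustrated with the DC-machine armature equation: the back-EMF, and hence the rotor speed, follows from terminal voltage and armature current. The machine constants below are invented for the sketch, not taken from the paper:

```python
# assumed machine constants (illustrative only)
R_A = 0.5     # armature resistance (ohm)
L_A = 0.01    # armature inductance (H)
K_E = 0.8     # back-EMF constant (V*s/rad)

def estimate_speed(v_t, i_a, di_dt):
    """Rotor speed (rad/s) from v_t = K_E*w + R_A*i_a + L_A*di/dt."""
    return (v_t - R_A * i_a - L_A * di_dt) / K_E

w = estimate_speed(100.0, 10.0, 0.0)   # -> 118.75 rad/s at steady state
```

An MRAS/ANN scheme effectively learns this mapping (and its torque counterpart) from drive signals, so that no shaft sensor is needed even when the constants drift.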

  4. USE OF NEURAL NETWORK SIMULATION TO MONITOR PATIENTS UNDERGOING RADICAL PROSTATECTOMY

    National Research Council Canada - National Science Library

    I. V. Lukyanov; N. A. Demchenko

    2014-01-01

    .... Based on neural network simulation, the Department of Urology, Russian Medical Academy of Postgraduate Education, has developed an accounting prognostic system to monitor the postoperative course...

  5. Simulation and Aerodynamic Analysis of the Flow Around the Sailplane Using CFD Techniques

    Directory of Open Access Journals (Sweden)

    Sebastian Marian ZAHARIA

    2015-12-01

    Full Text Available In this paper, we describe the analysis and simulation process using CFD techniques and the phenomena that arise in aerospace engineering practice, directing the simulation studies to the air flow around a sailplane. Analysis and aerodynamic simulation using Computational Fluid Dynamics (CFD) techniques are well established as instruments in the development process of an aeronautical product. Fluid flow simulation techniques help engineers understand the physical phenomena that take place in the product design from its prototype phase, and at the same time allow for the optimization of aeronautical products’ performance against certain design criteria.

  6. Green's-function reaction dynamics: a particle-based approach for simulating biochemical networks in time and space.

    Science.gov (United States)

    van Zon, Jeroen S; ten Wolde, Pieter Rein

    2005-12-15

We have developed a new numerical technique, called Green's-function reaction dynamics (GFRD), that makes it possible to simulate biochemical networks at the particle level and in both time and space. In this scheme, a maximum time step is chosen such that only single particles or pairs of particles have to be considered. For these particles, the Smoluchowski equation can be solved analytically using Green's functions. The main idea of GFRD is to exploit the exact solution of the Smoluchowski equation to set up an event-driven algorithm, which combines in one step the propagation of the particles in space with the reactions between them. The event-driven nature allows GFRD to make large jumps in time and space when the particles are far apart from each other. Here, we apply the technique to a simple model of gene expression. The simulations reveal that spatial fluctuations can be a major source of noise in biochemical networks. The calculations also show that GFRD is highly efficient. Under biologically relevant conditions, GFRD is up to five orders of magnitude faster than conventional particle-based techniques for simulating biochemical networks in time and space. GFRD is not limited to biochemical networks. It can also be applied to a large number of other reaction-diffusion problems.
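GFRD's full machinery (pair Green's functions, reaction events) is beyond a short sketch, but its core idea — picking the time step from inter-particle distances so that isolated particles take large, exact diffusion jumps — can be illustrated with a minimal toy (free diffusion only; parameters and positions are illustrative assumptions, not the authors' algorithm):

```python
import numpy as np

rng = np.random.default_rng(0)

def propagate_free(positions, D, dt, rng):
    """Exact free-diffusion propagator: displacements are Gaussian with
    variance 2*D*dt per coordinate (the Green's function of free diffusion)."""
    return positions + rng.normal(0.0, np.sqrt(2 * D * dt), positions.shape)

def safe_time_step(positions, D, k=0.1):
    """Choose dt so the RMS displacement stays well below the smallest
    inter-particle distance; when particles are far apart this permits
    large jumps in time, the key idea behind GFRD's event-driven scheme."""
    d = np.linalg.norm(positions[:, None] - positions[None, :], axis=-1)
    d_min = d[d > 0].min()
    return (k * d_min) ** 2 / (2 * D)

# Two well-separated particles plus one close pair: dt is set by the pair.
positions = np.array([[0.0, 0.0], [10.0, 0.0], [10.5, 0.0]])
dt = safe_time_step(positions, D=1.0)
positions = propagate_free(positions, D=1.0, dt=dt, rng=rng)
```

A real GFRD step would additionally solve the Smoluchowski equation for each close pair to decide whether the particles react before they separate.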

  7. Evaluation of Techniques to Detect Significant Network Performance Problems using End-to-End Active Network Measurements

    Energy Technology Data Exchange (ETDEWEB)

    Cottrell, R.Les; Logg, Connie; Chhaparia, Mahesh; /SLAC; Grigoriev, Maxim; /Fermilab; Haro, Felipe; /Chile U., Catolica; Nazir, Fawad; /NUST, Rawalpindi; Sandford, Mark

    2006-01-25

End-to-end fault and performance problem detection in wide-area production networks is becoming increasingly hard as the complexity of the paths, the diversity of the performance, and the dependency on the network increase. Several monitoring infrastructures have been built to monitor different network metrics and collect monitoring information from thousands of hosts around the globe. Typically there are hundreds to thousands of time-series plots of network metrics which need to be looked at to identify network performance problems or anomalous variations in the traffic. Furthermore, most commercial products rely on a comparison with user-configured static thresholds and often require access to SNMP-MIB information, to which a typical end-user does not usually have access. In this paper we propose new techniques to detect network performance problems proactively in close to real time, without relying on static thresholds or SNMP-MIB information. We describe and compare several different algorithms that we have implemented to detect persistent network problems using anomalous-variation analysis in real end-to-end Internet performance measurements. We also provide methods and/or guidance for how to set the user-settable parameters. The measurements are based on active probes running on 40 production network paths with bottlenecks varying from 0.5 Mbit/s to 1000 Mbit/s. For well-behaved data (no missed measurements and no very large outliers) with small seasonal changes, most algorithms identify similar events. We compare the algorithms' robustness with respect to false positives and missed events, especially when there are large seasonal effects in the data. Our proposed techniques cover a wide variety of network paths and traffic patterns. We also discuss the applicability of the algorithms in terms of their intuitiveness, their speed of execution as implemented, and areas of applicability. Our encouraging results compare and evaluate the accuracy of our
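The threshold-free detection idea can be illustrated with a toy detector (a hypothetical sketch, not the paper's algorithms): flag a measurement when it deviates from a rolling median by many robust standard deviations, estimated from the median absolute deviation (MAD) so the threshold adapts to each path's own history.

```python
import numpy as np

def detect_anomalies(series, window=20, n_sigma=5.0):
    """Flag points that deviate strongly from a rolling median, scaled by
    the rolling MAD -- an adaptive alternative to a fixed static threshold."""
    series = np.asarray(series, dtype=float)
    flags = np.zeros(len(series), dtype=bool)
    for i in range(window, len(series)):
        hist = series[i - window:i]
        med = np.median(hist)
        mad = np.median(np.abs(hist - med)) or 1e-9   # guard against zero MAD
        # 1.4826 * MAD estimates the standard deviation for Gaussian data
        flags[i] = abs(series[i] - med) > n_sigma * 1.4826 * mad
    return flags

# Synthetic RTT trace (ms) with one injected spike at index 100.
rtt = np.concatenate([np.random.default_rng(1).normal(50, 1, 100), [120],
                      np.random.default_rng(2).normal(50, 1, 20)])
idx = np.flatnonzero(detect_anomalies(rtt))
print(idx)   # the spike at index 100 is among the flagged points
```

The rolling window makes the detector insensitive to slow drifts, though handling strong seasonal effects, as the paper discusses, would require a longer-period baseline.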

  8. First Principles Neural Network Potentials for Reactive Simulations of Large Molecular and Condensed Systems.

    Science.gov (United States)

    Behler, Jörg

    2017-10-09

    Modern simulation techniques have reached a level of maturity which allows a wide range of problems in chemistry and materials science to be addressed. Unfortunately, the application of first principles methods with predictive power is still limited to rather small systems, and despite the rapid evolution of computer hardware no fundamental change in this situation can be expected. Consequently, the development of more efficient but equally reliable atomistic potentials to reach an atomic level understanding of complex systems has received considerable attention in recent years. A promising new development has been the introduction of machine learning (ML) methods to describe the atomic interactions. Once trained with electronic structure data, ML potentials can accelerate computer simulations by several orders of magnitude, while preserving quantum mechanical accuracy. This Review considers the methodology of an important class of ML potentials that employs artificial neural networks. © 2017 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.

  9. Review Of Prevention Techniques For Denial Of Service DOS Attacks In Wireless Sensor Network

    Directory of Open Access Journals (Sweden)

    Poonam Rolla

    2015-08-01

Full Text Available Wireless Sensor Networks are comprised of several tiny sensor nodes densely deployed over a region to monitor environmental conditions. These sensor nodes face several design issues, of which security is the predominant factor, as it affects the whole lifetime of the network. A DDoS (Distributed Denial of Service) attack floods unnecessary packets into the sensor network. This paper reviews DDoS attacks and their prevention techniques.

  10. Review Of Prevention Techniques For Denial Of Service DOS Attacks In Wireless Sensor Network

    OpenAIRE

    Poonam Rolla; Manpreet Kaur

    2015-01-01

Wireless Sensor Networks are comprised of several tiny sensor nodes densely deployed over a region to monitor environmental conditions. These sensor nodes face several design issues, of which security is the predominant factor, as it affects the whole lifetime of the network. A DDoS (Distributed Denial of Service) attack floods unnecessary packets into the sensor network. This paper reviews DDoS attacks and their prevention techniques.

  11. A new application of neural network technique to sensorless speed identification of induction motor

    OpenAIRE

    Mostefai, Mohamed; Miloud, Yahia; Abdullah MILOUDI

    2016-01-01

    A new application of neural network technique to sensorless speed identification of scalar-controlled induction motor is implemented in this paper. The neural network estimates the rotor speed through stator measurements and nominal settings of the motor. By changing the motor parameters, the neural network can estimate the speed of another motor. We evaluated our approach based on the speed response and load disturbance effects on two different motors. The test results demonstrate the feasib...

  12. A new application of neural network technique to sensorless speed identification of induction motor

    Directory of Open Access Journals (Sweden)

    Mohamed MOSTEFAI

    2016-12-01

    Full Text Available A new application of neural network technique to sensorless speed identification of scalar-controlled induction motor is implemented in this paper. The neural network estimates the rotor speed through stator measurements and nominal settings of the motor. By changing the motor parameters, the neural network can estimate the speed of another motor. We evaluated our approach based on the speed response and load disturbance effects on two different motors. The test results demonstrate the feasibility of the method.

  13. A signal combining technique based on channel shortening for cooperative sensor networks

    KAUST Repository

    Hussain, Syed Imtiaz

    2010-06-01

    The cooperative relaying process needs proper coordination among the communicating and the relaying nodes. This coordination and the required capabilities may not be available in some wireless systems, e.g. wireless sensor networks where the nodes are equipped with very basic communication hardware. In this paper, we consider a scenario where the source node transmits its signal to the destination through multiple relays in an uncoordinated fashion. The destination can capture the multiple copies of the transmitted signal through a Rake receiver. We analyze a situation where the number of Rake fingers N is less than that of the relaying nodes L. In this case, the receiver can combine N strongest signals out of L. The remaining signals will be lost and act as interference to the desired signal components. To tackle this problem, we develop a novel signal combining technique based on channel shortening. This technique proposes a processing block before the Rake reception which compresses the energy of L signal components over N branches while keeping the noise level at its minimum. The proposed scheme saves the system resources and makes the received signal compatible to the available hardware. Simulation results show that it outperforms the selection combining scheme. ©2010 IEEE.

  14. Fault Diagnosis and Detection in Industrial Motor Network Environment Using Knowledge-Level Modelling Technique

    Directory of Open Access Journals (Sweden)

    Saud Altaf

    2017-01-01

Full Text Available In this paper, the broken rotor bar (BRB) fault is investigated by utilizing the Motor Current Signature Analysis (MCSA) method. In an industrial environment, even a nominally symmetrical induction motor may exhibit electrical signal components at various fault frequencies due to manufacturing errors, inappropriate motor installation, and other influencing factors. The misalignment experiments revealed that improper motor installation can lead to an unexpected frequency peak, which will affect the motor fault diagnosis process. Furthermore, a noisy manufacturing and operating environment can also disturb the fault diagnosis process. This paper presents an efficient supervised Artificial Neural Network (ANN) learning technique that is able to identify the fault type when the diagnosis situation is uncertain. Significant features are extracted from the electric current, based on the frequency points and amplitude values associated with each fault type. The simulation results show that the proposed technique is able to diagnose the target fault type. The ANN architecture worked well with a suitably selected number of feature data sets. According to the results, accurate fault detection with the feature vector was achieved, and the classification confusion error percentage between the healthy and faulty motor conditions is acceptable.
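The feature-extraction step can be sketched as follows — a hypothetical example reading spectral amplitudes at the classical BRB sideband frequencies (1 ± 2ks)·f_supply from a simulated stator current; the supply frequency, slip, and test signal are illustrative assumptions, not the paper's data:

```python
import numpy as np

def sideband_features(current, fs, f_supply=50.0, slip=0.03, n_harm=3):
    """Extract spectral amplitudes at the broken-rotor-bar sideband
    frequencies (1 +/- 2*k*s)*f_supply from the stator current (MCSA).
    These amplitudes would form the ANN's input feature vector."""
    n = len(current)
    spec = np.abs(np.fft.rfft(current * np.hanning(n))) / n
    freqs = np.fft.rfftfreq(n, 1 / fs)
    feats = []
    for k in range(1, n_harm + 1):
        for sign in (-1, 1):
            f = (1 + sign * 2 * k * slip) * f_supply
            feats.append(spec[np.argmin(np.abs(freqs - f))])  # nearest bin
    return np.array(feats)

# Simulated stator current: 50 Hz supply plus a weak lower sideband at 47 Hz,
# i.e. (1 - 2*slip)*f_supply with slip = 0.03.
fs = 2000.0
t = np.arange(4000) / fs
i_stator = np.sin(2 * np.pi * 50 * t) + 0.05 * np.sin(2 * np.pi * 47 * t)
feats = sideband_features(i_stator, fs)
print(feats.round(4))
```

In the paper's setup, vectors like this (over many operating conditions) would be labeled healthy/faulty and used to train the supervised ANN classifier.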

  15. Auditing information structures in organizations: A review of data collection techniques for network analysis

    NARCIS (Netherlands)

    Koning, K.H.; de Jong, Menno D.T.

    2005-01-01

    Network analysis is one of the current techniques for investigating organizational communication. Despite the amount of how-to literature about using network analysis to assess information flows and relationships in organizations, little is known about the methodological strengths and weaknesses of

  16. Social Learning Network Analysis Model to Identify Learning Patterns Using Ontology Clustering Techniques and Meaningful Learning

    Science.gov (United States)

    Firdausiah Mansur, Andi Besse; Yusof, Norazah

    2013-01-01

Clustering on social learning networks is still not widely explored, especially when the network is part of an e-learning system. Conventional methods are not well suited to e-learning data. SNA requires content analysis, which involves human intervention and needs to be carried out manually. Some of the previous clustering techniques need…

  17. Model and simulation of Krause model in dynamic open network

    Science.gov (United States)

    Zhu, Meixia; Xie, Guangqiang

    2017-08-01

The construction of opinion evolution is an effective way to reveal the formation of group consensus. This study is based on the modeling paradigm of the HK (Hegselmann-Krause) model. This paper analyzes the evolution of multi-agent opinions in dynamic open networks with member mobility. The simulation results show that when the number of agents is constant, the interval of the initial opinion distribution affects the number of final opinions: the wider the distribution of opinions, the more opinion clusters are eventually formed. The trust threshold has a decisive effect on the number of opinions, with a negative correlation between the trust threshold and the number of opinion clusters. The higher the connectivity of the initial group, the more easily opinions converge rapidly during evolution. A more open network is more conducive to a unified opinion; increasing or decreasing the number of agents does not affect group consensus, but it is not conducive to stability.

  18. Novel anti-jamming technique for OCDMA network through FWM in SOA based wavelength converter

    Science.gov (United States)

    Jyoti, Vishav; Kaler, R. S.

    2013-06-01

    In this paper, we propose a novel anti-jamming technique for optical code division multiple access (OCDMA) network through four wave mixing (FWM) in semiconductor optical amplifier (SOA) based wavelength converter. OCDMA signal can be easily jammed with high power jamming signal. It is shown that wavelength conversion through four wave mixing in SOA has improved capability of jamming resistance. It is observed that jammer has no effect on OCDMA network even at high jamming powers by using the proposed technique.

  19. Hybrid artificial neural network genetic algorithm technique for modeling chemical oxygen demand removal in anoxic/oxic process.

    Science.gov (United States)

    Ma, Yongwen; Huang, Mingzhi; Wan, Jinquan; Hu, Kang; Wang, Yan; Zhang, Huiping

    2011-01-01

In this paper, a hybrid artificial neural network (ANN) - genetic algorithm (GA) numerical technique was successfully developed to deal with complicated problems that cannot be solved by conventional solutions. ANNs and GAs were used to model and simulate the process of removing chemical oxygen demand (COD) in an anoxic/oxic system. The minimization of the error function with respect to the network parameters (weights and biases) constitutes the training of the network. A real-coded genetic algorithm was used to train the network in an unsupervised manner. The important process parameters, such as the influent COD (COD(in)), reflux ratio (R(r)), carbon-nitrogen ratio (C/N) and the effluent COD (COD(out)), were considered. The results show that, compared with the ANN model, the performance of the GA-ANN (genetic algorithm - artificial neural network) network was more impressive. Using the ANN, the mean absolute percentage error (MAPE), mean squared error (MSE) and correlation coefficient (R) were 9.33×10⁻⁴, 2.82 and 0.98596, respectively; for the GA-ANN, they converged to 4.18×10⁻⁴, 1.12 and 0.99476, respectively.
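The GA-trained-network idea can be sketched as follows — a toy, mutation-only real-coded GA fitting a tiny feedforward net to a made-up nonlinear target, not the authors' model of the anoxic/oxic process (architecture, population size, and mutation scale are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

def ann(w, X):
    """Tiny 2-4-1 feedforward net; w is the flat 17-element parameter vector."""
    W1, b1 = w[:8].reshape(2, 4), w[8:12]
    W2, b2 = w[12:16], w[16]
    return np.tanh(X @ W1 + b1) @ W2 + b2

def ga_train(X, y, pop=60, gens=200, sigma=0.3):
    """Real-coded GA: truncation selection plus Gaussian mutation,
    minimising mean squared error instead of using backpropagation."""
    P = rng.normal(0, 1, (pop, 17))
    for _ in range(gens):
        mse = np.array([np.mean((ann(w, X) - y) ** 2) for w in P])
        elite = P[np.argsort(mse)[:pop // 4]]              # keep best quarter
        children = elite[rng.integers(0, len(elite), pop - len(elite))]
        P = np.vstack([elite, children + rng.normal(0, sigma, children.shape)])
    mse = np.array([np.mean((ann(w, X) - y) ** 2) for w in P])
    return P[np.argmin(mse)], mse.min()

X = rng.uniform(-1, 1, (64, 2))
y = X[:, 0] * X[:, 1]                  # simple nonlinear target
w_best, err = ga_train(X, y)
print(f"final MSE {err:.4f}")
```

Keeping the elite unmutated guarantees the best error never increases across generations; a full real-coded GA would usually add crossover between parents as well.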

  20. A Network Traffic Generator Model for Fast Network-on-Chip Simulation

    DEFF Research Database (Denmark)

    Mahadevan, Shankar; Angiolini, Frederico; Storgaard, Michael

    2005-01-01

For Systems-on-Chip (SoCs) development, a predominant part of the design time is the simulation time. Performance evaluation and design space exploration of such systems in bit- and cycle-true fashion is becoming prohibitive. We propose a traffic generation (TG) model that provides a fast and effective Network-on-Chip (NoC) development and debugging environment. By capturing the type and the timestamp of communication events at the boundary of an IP core in a reference environment, the TG can subsequently emulate the core's communication behavior in different environments. Access patterns…

  1. Network Traffic Generator Model for Fast Network-on-Chip Simulation

    DEFF Research Database (Denmark)

    Mahadevan, Shankar; Ang, Frederico; Olsen, Rasmus G.

    2008-01-01

For Systems-on-Chip (SoCs) development, a predominant part of the design time is the simulation time. Performance evaluation and design space exploration of such systems in bit- and cycle-true fashion is becoming prohibitive. We propose a traffic generation (TG) model that provides a fast and effective Network-on-Chip (NoC) development and debugging environment. By capturing the type and the timestamp of communication events at the boundary of an IP core in a reference environment, the TG can subsequently emulate the core's communication behavior in different environments. Access patterns…

  2. ChemChains: a platform for simulation and analysis of biochemical networks aimed to laboratory scientists

    Science.gov (United States)

    Helikar, Tomáš; Rogers, Jim A

    2009-01-01

Background: New mathematical models of complex biological structures and computer simulation software allow modelers to simulate and analyze biochemical systems in silico and form mathematical predictions. Due to this potential predictive ability, the use of these models and software can complement laboratory investigations and help refine, or even develop, new hypotheses. However, the existing mathematical modeling techniques and simulation tools are often difficult to use by laboratory biologists without training in high-level mathematics, limiting their use to trained modelers. Results: We have developed a Boolean network-based simulation and analysis software tool, ChemChains, which combines the advantages of the parameter-free nature of logical models while providing the ability for users to interact with their models in a continuous manner, similar to the way laboratory biologists interact with laboratory data. ChemChains allows users to simulate models in an automatic fashion under tens of thousands of different external environments, as well as perform various mutational studies. Conclusion: ChemChains combines the advantages of logical and continuous modeling and provides a way for laboratory biologists to perform in silico experiments on mathematical models easily, a necessary component of laboratory research in the systems biology era. PMID:19500393

  3. ChemChains: a platform for simulation and analysis of biochemical networks aimed to laboratory scientists

    Directory of Open Access Journals (Sweden)

    Rogers Jim A

    2009-06-01

Full Text Available Abstract Background: New mathematical models of complex biological structures and computer simulation software allow modelers to simulate and analyze biochemical systems in silico and form mathematical predictions. Due to this potential predictive ability, the use of these models and software can complement laboratory investigations and help refine, or even develop, new hypotheses. However, the existing mathematical modeling techniques and simulation tools are often difficult to use by laboratory biologists without training in high-level mathematics, limiting their use to trained modelers. Results: We have developed a Boolean network-based simulation and analysis software tool, ChemChains, which combines the advantages of the parameter-free nature of logical models while providing the ability for users to interact with their models in a continuous manner, similar to the way laboratory biologists interact with laboratory data. ChemChains allows users to simulate models in an automatic fashion under tens of thousands of different external environments, as well as perform various mutational studies. Conclusion: ChemChains combines the advantages of logical and continuous modeling and provides a way for laboratory biologists to perform in silico experiments on mathematical models easily, a necessary component of laboratory research in the systems biology era.
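The parameter-free logical modeling behind ChemChains can be illustrated with a minimal synchronous Boolean network simulator — a generic sketch with a hypothetical three-gene circuit, not ChemChains' actual engine:

```python
def simulate_boolean_network(rules, state, steps=20):
    """Synchronously update a Boolean network until it reaches a fixed
    point or the step budget runs out; rules maps each node name to a
    Boolean function of the full state dict, in the spirit of logical
    models -- no kinetic parameters are needed."""
    for _ in range(steps):
        new_state = {node: fn(state) for node, fn in rules.items()}
        if new_state == state:        # fixed point reached
            break
        state = new_state
    return state

# Hypothetical 3-gene toy circuit: A activates B, B activates C, C represses A.
rules = {
    "A": lambda s: not s["C"],
    "B": lambda s: s["A"],
    "C": lambda s: s["B"],
}
final = simulate_boolean_network(rules, {"A": True, "B": False, "C": False})
print(final)
```

This negative feedback loop never settles into a fixed point; it cycles with period 6, the kind of attractor analysis a tool like ChemChains automates across thousands of input environments.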

  4. Simulation technologies in networking and communications selecting the best tool for the test

    CERN Document Server

    Pathan, Al-Sakib Khan; Khan, Shafiullah

    2014-01-01

Simulation is a widely used mechanism for validating the theoretical models of networking and communication systems. Although the claims made based on simulations are considered to be reliable, how reliable they really are is best determined with real-world implementation trials. Simulation Technologies in Networking and Communications: Selecting the Best Tool for the Test addresses the spectrum of issues regarding the different mechanisms related to simulation technologies in networking and communications fields. Focusing on the practice of simulation testing instead of the theory, it presents

  5. Using elements of game engine architecture to simulate sensor networks for eldercare.

    Science.gov (United States)

    Godsey, Chad; Skubic, Marjorie

    2009-01-01

    When dealing with a real time sensor network, building test data with a known ground truth is a tedious and cumbersome task. In order to quickly build test data for such a network, a simulation solution is a viable option. Simulation environments have a close relationship with computer game environments, and therefore there is much to be learned from game engine design. In this paper, we present our vision for a simulated in-home sensor network and describe ongoing work on using elements of game engines for building the simulator. Validation results are included to show agreement on motion sensor simulation with the physical environment.

  6. statnet: Software Tools for the Representation, Visualization, Analysis and Simulation of Network Data

    Directory of Open Access Journals (Sweden)

    Mark S. Handcock

    2007-12-01

Full Text Available statnet is a suite of software packages for statistical network analysis. The packages implement recent advances in network modeling based on exponential-family random graph models (ERGM). The components of the package provide a comprehensive framework for ERGM-based network modeling, including tools for model estimation, model evaluation, model-based network simulation, and network visualization. This broad functionality is powered by a central Markov chain Monte Carlo (MCMC) algorithm. The coding is optimized for speed and robustness.

  7. Simulating tidal turbines with multi-scale mesh optimisation techniques

    NARCIS (Netherlands)

    Abolghasemi, M.A.; Piggott, M.D.; Spinneken, J; Viré, A.C.; Cotter, CJ; Crammond, S.

    2016-01-01

    Embedding tidal turbines within simulations of realistic large-scale tidal flows is a highly multi-scale problem that poses significant computational challenges. Here this problem is tackled using actuator disc momentum (ADM) theory and Reynolds-averaged Navier–Stokes (RANS) with, for the first

  8. Enhanced sampling techniques in molecular dynamics simulations of biological systems.

    Science.gov (United States)

    Bernardi, Rafael C; Melo, Marcelo C R; Schulten, Klaus

    2015-05-01

Molecular dynamics has emerged as an important research methodology covering systems up to the level of millions of atoms. However, insufficient sampling often limits its application. The limitation is due to rough energy landscapes, with many local minima separated by high-energy barriers, which govern the biomolecular motion. In the past few decades methods have been developed that address the sampling problem, such as replica-exchange molecular dynamics, metadynamics and simulated annealing. Here we present an overview of these sampling methods in an attempt to shed light on which should be selected depending on the type of system property studied. Enhanced sampling methods have been employed for a broad range of biological systems, and the choice of a suitable method is connected to the biological and physical characteristics of the system, in particular system size. While metadynamics and replica-exchange molecular dynamics are the most widely adopted sampling methods for studying biomolecular dynamics, simulated annealing is well suited to characterizing very flexible systems. For a long time the use of annealing methods was restricted to simulations of small proteins; however, a variant of the method, generalized simulated annealing, can be employed at a relatively low computational cost for large macromolecular complexes. Molecular dynamics trajectories frequently do not reach all relevant conformational substates, for example those connected with biological function, a problem that can be addressed by employing enhanced sampling algorithms. This article is part of a Special Issue entitled Recent developments of molecular dynamics. Copyright © 2014 Elsevier B.V. All rights reserved.
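Of the sampling methods reviewed, simulated annealing is the easiest to sketch. Below is a generic minimizer on a rugged one-dimensional toy landscape (illustrative only, not a biomolecular force field): uphill moves are accepted with probability exp(-ΔE/T) while the temperature is cooled geometrically, which lets the walk cross energy barriers early on.

```python
import math
import random

def simulated_annealing(f, x0, step=0.5, t0=1.0, cooling=0.995, iters=5000, seed=0):
    """Generic simulated annealing: accept uphill moves with probability
    exp(-dE/T) while the temperature T is geometrically cooled, allowing
    escape from local minima while T is still high."""
    rng = random.Random(seed)
    x, fx, t = x0, f(x0), t0
    best_x, best_f = x, fx
    for _ in range(iters):
        x_new = x + rng.uniform(-step, step)
        f_new = f(x_new)
        # Metropolis criterion: always accept downhill, sometimes uphill.
        if f_new < fx or rng.random() < math.exp(-(f_new - fx) / t):
            x, fx = x_new, f_new
            if fx < best_f:
                best_x, best_f = x, fx
        t *= cooling
    return best_x, best_f

# Rugged 1-D test landscape with many local minima; global minimum at x = 0.
rugged = lambda x: x * x + 3 * math.sin(5 * x) ** 2
x_best, f_best = simulated_annealing(rugged, x0=4.0)
print(round(x_best, 2), round(f_best, 3))
```

A plain gradient descent from x0 = 4.0 would be trapped by the first sin² well it meets; the annealed walk drifts into the low-energy central region instead.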

  9. Designing laboratory wind simulations using artificial neural networks

    Science.gov (United States)

    Križan, Josip; Gašparac, Goran; Kozmar, Hrvoje; Antonić, Oleg; Grisogono, Branko

    2015-05-01

While experiments in boundary layer wind tunnels remain a major research tool in wind engineering and environmental aerodynamics, designing the modeling hardware required for a proper atmospheric boundary layer (ABL) simulation can be costly and time consuming. Hence, possibilities are sought to speed up this process and make it more time-efficient. In this study, two artificial neural networks (ANNs) are developed to determine an optimal design of the Counihan hardware, i.e., castellated barrier wall, vortex generators, and surface roughness, in order to simulate the ABL flow developing above urban, suburban, and rural terrains, as previous ANN models were created for one terrain type only. A standard procedure is used in developing these two ANNs in order to further enhance best-practice possibilities rather than to improve existing ANN design methodology. In total, experimental results obtained using 23 different hardware setups are used when creating the ANNs. In those tests, basic barrier height, barrier castellation height, spacing density, and height of surface roughness elements are the parameters that were varied to create satisfactory ABL simulations. The first ANN was used for the estimation of mean wind velocity, turbulent Reynolds stress, turbulence intensity, and length scales, while the second one was used for the estimation of the power spectral density of velocity fluctuations. This extensive set of studied flow and turbulence parameters is unmatched in comparison to previous relevant studies, as it includes turbulence intensity and the power spectral density of velocity fluctuations in all three directions, as well as Reynolds stress profiles and turbulence length scales. Modeling results agree well with experiments for all terrain types, particularly in the lower ABL within the height range of most engineering structures, while exhibiting sensitivity to abrupt changes and data scattering in profiles of wind-tunnel results.

  10. A Bloom Filter-Powered Technique Supporting Scalable Semantic Discovery in Data Service Networks

    Science.gov (United States)

    Zhang, J.; Shi, R.; Bao, Q.; Lee, T. J.; Ramachandran, R.

    2016-12-01

More and more Earth data analytics software products are published on the Internet as a service, in the format of either a heavyweight WSDL service or a lightweight RESTful API. Such reusable data analytics services form a data service network, which allows Earth scientists to compose (mashup) services into value-added ones. Therefore, it is important to have a technique that is capable of helping Earth scientists quickly identify appropriate candidate datasets and services in the global data service network. Most existing service discovery techniques, however, rely mainly on syntax- or semantics-based matchmaking between service requests and available services. Since the scale of the data service network is increasing rapidly, the run-time computational cost will soon become a bottleneck. To address this issue, this project presents a way of applying a network routing mechanism to facilitate data service discovery in a service network, featuring scalability and performance. Earth data services are automatically annotated in Web Ontology Language for Services (OWL-S) based on their metadata, semantic information, and usage history. A Deterministic Annealing (DA) technique is applied to dynamically organize annotated data services into a hierarchical network, where virtual routers are created to represent semantic local networks featuring leading terms. Afterwards, Bloom filters are generated over the virtual routers. A data service search request is transformed into a network routing problem in order to quickly locate candidate services through the network hierarchy. A neural network-powered technique is applied to assure network address encoding and routing performance. A series of empirical studies has been conducted to evaluate the applicability and effectiveness of the proposed approach.
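The Bloom filter primitive at the heart of this routing scheme can be sketched as follows — a generic implementation with hypothetical search terms, not the project's actual virtual-router filters. Its useful property for routing is asymmetric error: membership tests may return false positives but never false negatives, so a router can safely prune subtrees whose filter says "definitely absent."

```python
import hashlib

class BloomFilter:
    """Minimal Bloom filter: k hash positions per item over an m-bit array.
    Lookups may yield false positives but never false negatives, which is
    why filters suit fast 'possibly behind this router?' checks."""
    def __init__(self, m=1024, k=4):
        self.m, self.k, self.bits = m, k, bytearray(m)

    def _positions(self, item):
        # Derive k independent positions by salting one cryptographic hash.
        for i in range(self.k):
            h = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(h[:8], "big") % self.m

    def add(self, item):
        for p in self._positions(item):
            self.bits[p] = 1

    def __contains__(self, item):
        return all(self.bits[p] for p in self._positions(item))

# Hypothetical leading terms indexed by one virtual router.
bf = BloomFilter()
for term in ["precipitation", "aerosol", "sea-surface-temperature"]:
    bf.add(term)
print("aerosol" in bf)   # added terms always test True; absent terms
print("volcanism" in bf) # test False except for rare false positives
```

Sizing m and k against the expected number of items controls the false-positive rate, trading memory for pruning precision.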

  11. The Virtual Brain: a simulator of primate brain network dynamics.

    Science.gov (United States)

    Sanz Leon, Paula; Knock, Stuart A; Woodman, M Marmaduke; Domide, Lia; Mersmann, Jochen; McIntosh, Anthony R; Jirsa, Viktor

    2013-01-01

We present The Virtual Brain (TVB), a neuroinformatics platform for full brain network simulations using biologically realistic connectivity. This simulation environment enables the model-based inference of neurophysiological mechanisms across different brain scales that underlie the generation of macroscopic neuroimaging signals, including functional MRI (fMRI), EEG and MEG. Researchers from different backgrounds can benefit from an integrative software platform including a supporting framework for data management (generation, organization, storage, integration and sharing) and a simulation core written in Python. TVB allows the reproduction and evaluation of personalized configurations of the brain by using individual subject data. This personalization facilitates an exploration of the consequences of pathological changes in the system, permitting investigation of potential ways to counteract such unfavorable processes. The architecture of TVB supports interaction with MATLAB packages, for example, the well-known Brain Connectivity Toolbox. TVB can be used in a client-server configuration, such that it can be remotely accessed through the Internet thanks to its web-based HTML5, JS, and WebGL graphical user interface. TVB is also accessible as a standalone cross-platform Python library and application, and users can interact with the scientific core through the scripting interface IDLE, enabling easy modeling, development and debugging of the scientific kernel. This second interface makes TVB extensible by combining it with other libraries and modules developed by the Python scientific community. In this article, we describe the theoretical background and foundations that led to the development of TVB, the architecture and features of its major software components, as well as potential neuroscience applications.

  12. Estimation of fracture aperture using simulation technique; Simulation wo mochiita fracture kaiko haba no suitei

    Energy Technology Data Exchange (ETDEWEB)

    Kikuchi, T. [Geological Survey of Japan, Tsukuba (Japan); Abe, M. [Tohoku University, Sendai (Japan). Faculty of Engineering

    1996-10-01

Characteristics of amplitude variation around fractures have been investigated using a simulation technique for cases in which the fracture aperture changes. Four models were used. Model 1 was a fracture model having a horizontal fracture at Z=0. In Model 2, the fracture was replaced by a group of small fractures. Model 3 had a borehole diameter extended at Z=0 in the shape of a wedge. Model 4 had a low-velocity layer at Z=0. The maximum amplitudes were compared for each depth and each model. For Model 1, the amplitude became larger at the depth of the fracture and smaller above the fracture. For Model 2, when the cross width D increased to 4 cm, the amplitude approached that of Model 1. For Model 3, with the extended borehole diameter, when the extension of the borehole diameter ranged between 1 cm and 2 cm, almost no change in amplitude was observed above and below the fracture. However, when the extension of the borehole diameter was 4 cm, the amplitude became smaller above the extended part of the borehole. 3 refs., 4 figs., 1 tab.

  13. Assembly line balancing using simulation technique in a garment ...

    African Journals Online (AJOL)

    The typical problems facing with garment manufacturing are: short product cycle for fashion articles, long production lead time, bottlenecking, and low productivity. To alleviate the problems, different types of line balancing techniques have been used for many years in the garment industry. However, garment industries ...

  14. Application of the numerical modelling techniques to the simulation ...

    African Journals Online (AJOL)

    The aquifer was modelled by the application of Finite Element Method (F.E.M), with appropriate initial and boundary conditions. The matrix solver technique adopted for the F.E.M. was that of the Conjugate Gradient Method. After the steady state calibration and transient verification, the model was used to predict the effect of ...

  15. Skeletal response to simulated weightlessness - A comparison of suspension techniques

    Science.gov (United States)

    Wronski, T. J.; Morey-Holton, E. R.

    1987-01-01

    Comparisons are made of the skeletal response of rats subjected to simulated weightlessness by back or tail suspension. In comparison to pair-fed control rats, back-suspended rats failed to gain weight whereas tail-suspended rats exhibited normal weight gain. Quantitative bone histomorphometry revealed marked skeletal abnormalities in the proximal tibial metaphysis of back-suspended rats. Loss of trabecular bone mass in these animals was due to a combination of depressed longitudinal bone growth, decreased bone formation, and increased bone resorption. In contrast, the proximal tibia of tail-suspended rats was relatively normal by these histologic criteria. However, a significant reduction in trabecular bone volume occurred during 2 weeks of tail suspension, possibly due to a transient inhibition of bone formation. The findings indicate that tail suspension may be a more appropriate model for evaluating the effects of simulated weightlessness on skeletal homeostasis.

  16. Measurement and Simulation Techniques For Piezoresistive Microcantilever Biosensor Applications

    Directory of Open Access Journals (Sweden)

    Aan Febriansyah

    2012-12-01

    Full Text Available Applications of microcantilevers as biosensors have been explored by many researchers for applications in medicine, biology, chemistry, and environmental monitoring. This research discusses the design of a measurement method and simulations for a piezoresistive microcantilever biosensor, consisting of the design of a Wheatstone bridge circuit as the object detector, simulation of the resonance frequency shift based on the Euler-Bernoulli beam equation, and microcantilever vibration simulation using COMSOL Multiphysics 3.5. The piezoresistive microcantilever used here is a Seiko Instrument Technology (Japan) product with a length of 110 µm, width of 50 µm, and thickness of 1 µm. The microcantilever mass is 12.815 ng, including the mass of the receptor. The sample object in this research is the bacterium E. coli, with the mass of one bacterium assumed to be 0.3 pg. Simulation results show that the mass of one bacterium causes a deflection of 0.03053 nm and a resonance frequency of 118.90 kHz, while four bacteria cause a deflection of 0.03054 nm and a resonance frequency of 118.68 kHz. These data indicate that increasing the bacterial mass increases the deflection and reduces the resonance frequency.
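
    As a rough sanity check on the direction of the effect, the mass-induced frequency shift can be sketched with a lumped-parameter cantilever model, f = (1/2π)√(k/m) with tip-load stiffness k = Ewt³/(4L³). The geometry and masses below are taken from the abstract, but the Young's modulus is an assumed value for silicon; this toy model is not expected to reproduce the COMSOL figures, only the sign of the shift.

```python
import math

def cantilever_stiffness(E, w, t, L):
    """Tip-load bending stiffness of a rectangular cantilever: k = E*w*t^3 / (4*L^3)."""
    return E * w * t**3 / (4 * L**3)

def resonance_frequency(k, m):
    """Lumped-mass resonance frequency f = sqrt(k/m) / (2*pi)."""
    return math.sqrt(k / m) / (2 * math.pi)

# Geometry and masses from the abstract; Young's modulus is an assumed silicon value.
E = 169e9                      # Pa (assumption)
w, t, L = 50e-6, 1e-6, 110e-6  # m
m0 = 12.815e-12                # kg: 12.815 ng, cantilever plus receptor
dm = 0.3e-15                   # kg: 0.3 pg, assumed mass of one bacterium

k = cantilever_stiffness(E, w, t, L)
f0 = resonance_frequency(k, m0)
f1 = resonance_frequency(k, m0 + dm)
print(f"frequency shift for one bacterium: {f0 - f1:.4f} Hz")
```

    Adding mass lowers the resonance frequency, matching the trend reported in the abstract even though the absolute frequencies of this lumped sketch differ from the finite-element values.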

  17. Assessing the Impacts of Future Climate Change on Protected Area Networks: A Method to Simulate Individual Species' Responses

    DEFF Research Database (Denmark)

    Willis, Stephen; Hole, Dave; Collingham, Yvonne

    2009-01-01

    The Important Bird Area (IBA) network is a series of sites designed to conserve avian diversity in the face of current threats from factors such as habitat loss and fragmentation. However, in common with other networks, the IBA network is based on the assumption that the climate will remain unchanged in the future... In this article, we provide a method to simulate the occurrence of species of conservation concern in protected areas, which could be used as a first-step approach to assess the potential impacts of climate change upon such species in protected areas. We use species-climate response surface models to relate... technique provides good simulations of current species' occurrence in protected areas. We then use basic habitat data for IBAs along with habitat preference data for the species to reduce over-prediction and further improve predictive ability. This approach can be used with future climate change scenarios...

  18. Cognition-Enabling Techniques in Heterogeneous and Flexgrid Optical Communication Networks

    DEFF Research Database (Denmark)

    Tafur Monroy, Idelfonso; Caballero Jambrina, Antonio; Saldaña Cercos, Silvia

    2012-01-01

    The high degree of heterogeneity of future optical networks, such as services with different quality-of-transmission requirements, modulation formats and switching techniques, will pose a challenge for the control and optimization of different parameters. Incorporation of cognitive techniques can help...

  19. Simulating New Drop Test Vehicles and Test Techniques for the Orion CEV Parachute Assembly System

    Science.gov (United States)

    Morris, Aaron L.; Fraire, Usbaldo, Jr.; Bledsoe, Kristin J.; Ray, Eric; Moore, Jim W.; Olson, Leah M.

    2011-01-01

    The Crew Exploration Vehicle Parachute Assembly System (CPAS) project is engaged in a multi-year design and test campaign to qualify a parachute recovery system for human use on the Orion Spacecraft. Test and simulation techniques have evolved concurrently to keep up with the demands of a challenging and complex system. The primary simulations used for preflight predictions and post-test data reconstructions are the Decelerator System Simulation (DSS), the Decelerator System Simulation Application (DSSA), and the Drop Test Vehicle Simulation (DTV-SIM). The goal of this paper is to provide a roadmap for future programs on the test-technique challenges and obstacles involved in executing a large-scale, multi-year parachute test program. A focus on flight simulation modeling and its correlation to the test techniques executed to obtain parachute performance parameters is presented.

  20. FEM Techniques for High Stress Detection in Accelerated Fatigue Simulation

    Science.gov (United States)

    Veltri, M.

    2016-09-01

    This work presents the theory and a numerical validation study in support of a novel method for a priori identification of fatigue-critical regions, with the aim to accelerate durability design in large FEM problems. The investigation is placed in the context of modern full-body structural durability analysis, where a computationally intensive dynamic solution could be required to identify areas with potential for fatigue damage initiation. The early detection of fatigue-critical areas can drive a simplification of the problem size, leading to appreciable improvement in solution time and model handling while allowing processing of the critical areas in higher detail. The proposed technique is applied to a real-life industrial case in a comparative assessment with established practices. Synthetic damage prediction quantification and visualization techniques allow for a quick and efficient comparison between methods, outlining potential application benefits and boundaries.

  1. Simulation Neurotechnologies for Advancing Brain Research: Parallelizing Large Networks in NEURON.

    Science.gov (United States)

    Lytton, William W; Seidenstein, Alexandra H; Dura-Bernal, Salvador; McDougal, Robert A; Schürmann, Felix; Hines, Michael L

    2016-10-01

    Large multiscale neuronal network simulations are of increasing value as more big data are gathered about brain wiring and organization under the auspices of a current major research initiative, such as Brain Research through Advancing Innovative Neurotechnologies. The development of these models requires new simulation technologies. We describe here the current use of the NEURON simulator with message passing interface (MPI) for simulation in the domain of moderately large networks on commonly available high-performance computers (HPCs). We discuss the basic layout of such simulations, including the methods of simulation setup, the run-time spike-passing paradigm, and postsimulation data storage and data management approaches. Using the Neuroscience Gateway, a portal for computational neuroscience that provides access to large HPCs, we benchmark simulations of neuronal networks of different sizes (500-100,000 cells), and using different numbers of nodes (1-256). We compare three types of networks, composed of either Izhikevich integrate-and-fire neurons (I&F), single-compartment Hodgkin-Huxley (HH) cells, or a hybrid network with half of each. Results show simulation run time increased approximately linearly with network size and decreased almost linearly with the number of nodes. Networks with I&F neurons were faster than HH networks, although differences were small since all tested cells were point neurons with a single compartment.
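
    The Izhikevich point neurons benchmarked above are cheap precisely because they reduce to two coupled first-order equations with a reset rule. As a minimal illustration independent of NEURON and MPI (single cell, standard published regular-spiking parameters, forward-Euler stepping), the model fits in a few lines:

```python
def izhikevich(I, T=200.0, dt=0.5, a=0.02, b=0.2, c=-65.0, d=8.0):
    """Forward-Euler integration of the Izhikevich model (regular-spiking
    defaults); returns the spike times in ms for a constant input current I."""
    v = -65.0          # membrane potential (mV)
    u = b * v          # recovery variable
    spikes = []
    for step in range(int(T / dt)):
        v += dt * (0.04 * v * v + 5.0 * v + 140.0 - u + I)
        u += dt * a * (b * v - u)
        if v >= 30.0:              # spike detected: record time and reset
            spikes.append(step * dt)
            v = c
            u += d
    return spikes

spikes = izhikevich(I=10.0)
print(len(spikes), "spikes in 200 ms")
```

    A Hodgkin-Huxley cell would replace the two-variable update with four state variables and exponential gating kinetics per compartment, which is why the abstract reports HH networks running slower at the same cell count.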

  2. Discrimination of Cylinders with Different Wall Thicknesses using Neural Networks and Simulated Dolphin Sonar Signals

    DEFF Research Database (Denmark)

    Andersen, Lars Nonboe; Au, Whitlow; Larsen, Jan

    1999-01-01

    This paper describes a method integrating neural networks into a system for recognizing underwater objects. The system is based on a combination of simulated dolphin sonar signals, simulated auditory filters and artificial neural networks. The system is tested on a cylinder wall thickness...

  3. Computing distance-based topological descriptors of complex chemical networks: New theoretical techniques

    Science.gov (United States)

    Hayat, Sakander

    2017-11-01

    Structure-based topological descriptors/indices of complex chemical networks enable prediction of physico-chemical properties and the bioactivities of these compounds through QSAR/QSPR methods. In this paper, we have developed a rigorous computational and theoretical technique to compute various distance-based topological indices of complex chemical networks. A fullerene is called an IPR (Isolated-Pentagon-Rule) fullerene if every pentagon in it is surrounded by hexagons only. To ensure the applicability of our technique, we compute certain distance-based indices of an infinite family of IPR fullerenes. Our results show that the proposed technique is more broadly applicable and carries less algorithmic and combinatorial complexity.
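
    The best-known distance-based descriptor of this kind is the Wiener index, the sum of shortest-path distances over all unordered vertex pairs. As an illustrative sketch (computed on a toy 4-cycle rather than the paper's IPR fullerene families), it can be obtained by breadth-first search from every vertex:

```python
from collections import deque

def wiener_index(adj):
    """Wiener index: sum of shortest-path distances over all unordered vertex
    pairs. adj maps each vertex to its neighbour list (unweighted, connected)."""
    total = 0
    for src in adj:
        # BFS from src yields shortest path lengths in an unweighted graph.
        dist = {src: 0}
        q = deque([src])
        while q:
            v = q.popleft()
            for w in adj[v]:
                if w not in dist:
                    dist[w] = dist[v] + 1
                    q.append(w)
        total += sum(dist.values())
    return total // 2   # each unordered pair was counted from both endpoints

# 4-cycle: four edges at distance 1 plus two diagonals at distance 2 -> W = 8
c4 = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
print(wiener_index(c4))   # prints 8
```

    Closed-form techniques like the paper's avoid this O(V·E) all-pairs computation for structured infinite families, but the BFS definition is the reference against which such formulas are checked.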

  4. Novel Machine Learning-Based Techniques for Efficient Resource Allocation in Next Generation Wireless Networks

    KAUST Repository

    AlQuerm, Ismail A.

    2018-02-21

    There is a large demand for applications of high data rates in wireless networks. These networks are becoming more complex and challenging to manage due to the heterogeneity of users and applications, specifically in sophisticated networks such as the upcoming 5G. Energy efficiency in the future 5G network is one of the essential problems that needs consideration due to the interference and heterogeneity of the network topology. Smart resource allocation, environmental adaptivity, user-awareness and energy efficiency are essential features in the future networks. It is important to support these features at different network topologies with various applications. Cognitive radio has been found to be the paradigm that is able to satisfy the above requirements. It is a very interdisciplinary topic that incorporates flexible system architectures, machine learning, context awareness and cooperative networking. Mitola’s vision of cognitive radio was to build context-sensitive smart radios that are able to adapt to the wireless environment conditions while maintaining quality of service support for different applications. Artificial intelligence techniques, including heuristic algorithms and machine learning, are the shining tools that are employed to serve the new vision of cognitive radio. In addition, these techniques show a potential to be utilized in efficient resource allocation for the upcoming 5G networks’ structures such as heterogeneous multi-tier 5G networks and heterogeneous cloud radio access networks due to their capability to allocate resources according to real-time data analytics. In this thesis, we study cognitive radio from a system point of view focusing closely on architectures and artificial intelligence techniques that can enable intelligent radio resource allocation and efficient radio parameters reconfiguration. We propose a modular cognitive resource management architecture, which facilitates the development of flexible control for...

  5. Broadcast Expenses Controlling Techniques in Mobile Ad-hoc Networks: A Survey

    Directory of Open Access Journals (Sweden)

    Naeem Ahmad

    2016-07-01

    Full Text Available The blind flooding of query packets in route discovery often leads to the broadcast storm problem, exponentially increasing the energy consumption of intermediate nodes and congesting the entire network. In such a congested network, the task of establishing a path between resources may become very complex and unwieldy. Extensive research has been done in this area to improve the route discovery phase of routing protocols by reducing broadcast expenses. The purpose of this study is to provide a comparative analysis of existing broadcasting techniques for the route discovery phase, in order to arrive at an efficient broadcasting technique for determining the route with the minimum number of conveying nodes in ad-hoc networks. The study is designed to highlight the collective merits and demerits of such broadcasting techniques along with certain conclusions that would contribute to the choice of broadcasting techniques.
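
    One family of techniques covered by such surveys is probabilistic (gossip-based) rebroadcast, in which each node forwards a route request with some probability p instead of always flooding. A toy simulation on a hypothetical ring-with-chords topology (not taken from the survey) shows the coverage/overhead trade-off:

```python
import random
from collections import deque

def flood(adj, source, p):
    """Broadcast where each node that receives the packet for the first time
    rebroadcasts it with probability p (p=1.0 is blind flooding).
    Returns (fraction of nodes reached, number of rebroadcasts)."""
    reached = {source}
    q = deque([source])
    rebroadcasts = 0
    while q:
        v = q.popleft()
        rebroadcasts += 1
        for w in adj[v]:
            if w not in reached:
                reached.add(w)
                if random.random() < p:   # probabilistic forwarding decision
                    q.append(w)
    return len(reached) / len(adj), rebroadcasts

random.seed(1)
# Hypothetical topology: ring of 100 nodes plus chords to the node 7 hops away.
n = 100
adj = {i: [(i - 1) % n, (i + 1) % n, (i - 7) % n, (i + 7) % n] for i in range(n)}

cov_full, tx_full = flood(adj, 0, p=1.0)      # blind flooding baseline
cov_gossip, tx_gossip = flood(adj, 0, p=0.7)  # gossip variant
print(cov_full, tx_full, cov_gossip, tx_gossip)
```

    Blind flooding makes every node rebroadcast once; lowering p trims rebroadcasts at some risk of incomplete coverage, which is the trade-off the surveyed techniques try to optimize.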

  6. Unified Approach to Modeling and Simulation of Space Communication Networks and Systems

    Science.gov (United States)

    Barritt, Brian; Bhasin, Kul; Eddy, Wesley; Matthews, Seth

    2010-01-01

    Network simulator software tools are often used to model the behaviors and interactions of applications, protocols, packets, and data links in terrestrial communication networks. Other software tools that model the physics, orbital dynamics, and RF characteristics of space systems have matured to allow for rapid, detailed analysis of space communication links. However, the absence of a unified toolset that integrates the two modeling approaches has encumbered the systems engineers tasked with the design, architecture, and analysis of complex space communication networks and systems. This paper presents the unified approach and describes the motivation, challenges, and our solution - the customization of the network simulator to integrate with astronautical analysis software tools for high-fidelity end-to-end simulation. Keywords: space; communication; systems; networking; simulation; modeling; QualNet; STK; integration; space networks

  7. Understanding the Dynamics of MOOC Discussion Forums with Simulation Investigation for Empirical Network Analysis (SIENA)

    Science.gov (United States)

    Zhang, Jingjing; Skryabin, Maxim; Song, Xiongwei

    2016-01-01

    This study attempts to make inferences about the mechanisms that drive network change over time. It adopts simulation investigation for empirical network analysis to examine the patterns and evolution of relationships formed in the context of a massive open online course (MOOC) discussion forum. Four network effects--"homophily,"…

  8. Simulation of California's Major Reservoirs Outflow Using Data Mining Technique

    Science.gov (United States)

    Yang, T.; Gao, X.; Sorooshian, S.

    2014-12-01

    The reservoir's outflow is controlled by reservoir operators, unlike the upstream inflow, and for downstream water users the outflow is more important than the inflow. In order to simulate the complicated reservoir operation and extract the outflow decision-making patterns for California's 12 major reservoirs, we build a data-driven, computer-based ("artificially intelligent") reservoir decision-making tool using a regression and classification tree approach. This is a well-developed statistical and graphical modeling methodology in the field of data mining. A shuffled cross-validation approach is also employed to extract the outflow decision-making patterns and rules based on the selected decision variables (inflow amount, precipitation, timing, water year type, etc.). To show the accuracy of the model, a verification study is carried out comparing the model-generated outflow decisions ("artificially intelligent" decisions) with those made by reservoir operators (human decisions). The simulation results show that the machine-generated outflow decisions are very similar to the real reservoir operators' decisions. This conclusion is based on statistical evaluations using the Nash-Sutcliffe test. The proposed model is able to detect the most influential variables and their weights when the reservoir operators make an outflow decision. While the proposed approach was first applied and tested on California's 12 major reservoirs, the method is universally adaptable to other reservoir systems.
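
    The core step of a regression tree is choosing a split on a decision variable that minimizes the squared error of per-branch mean predictions. A minimal single-split sketch on hypothetical inflow/outflow numbers (illustrative only, not the paper's reservoir records or its full CART implementation):

```python
def best_split(xs, ys):
    """Single CART-style split: choose the threshold on x that minimizes the
    summed squared error of per-branch mean predictions."""
    def sse(vals):
        if not vals:
            return 0.0
        mean = sum(vals) / len(vals)
        return sum((v - mean) ** 2 for v in vals)

    best = (None, float("inf"))
    for threshold in sorted(set(xs)):
        left = [y for x, y in zip(xs, ys) if x <= threshold]
        right = [y for x, y in zip(xs, ys) if x > threshold]
        err = sse(left) + sse(right)
        if err < best[1]:
            best = (threshold, err)
    return best

# Hypothetical operating rule: releases jump once inflow exceeds ~50 units.
inflow = [10, 20, 30, 40, 60, 70, 80, 90]
outflow = [5, 6, 5, 6, 40, 41, 39, 42]
threshold, err = best_split(inflow, outflow)
print(threshold)   # prints 40: the split separating low-release from high-release regimes
```

    A full tree recurses this split on each branch and over all decision variables; the recovered thresholds are the interpretable "operator rules" the abstract refers to.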

  9. Modelling Altitude Information in Two-Dimensional Traffic Networks for Electric Mobility Simulation

    Directory of Open Access Journals (Sweden)

    Diogo Santos

    2016-06-01

    Full Text Available Elevation data is important for electric vehicle simulation. However, traffic simulators are often two-dimensional and do not offer the capability of modelling urban networks taking elevation into account. Specifically, SUMO - Simulation of Urban Mobility, a popular microscopic traffic simulator, relies on networks previously modelled with elevation data to provide this information during simulations. This work tackles the problem of adding elevation data to urban network models - particularly for the case of the Porto urban network, in Portugal. With this goal in mind, a comparison between different altitude information retrieval approaches is made and a simple tool to annotate network models with altitude data is proposed. The work starts by describing the methodological approach followed during research and development, and then describes and analyses its main findings. This description includes an in-depth explanation of the proposed tool. Lastly, this work reviews some work related to the subject.

  10. Molecular Dynamics Simulations of Polymer Networks Undergoing Sequential Cross-Linking and Scission Reactions

    DEFF Research Database (Denmark)

    Rottach, Dana R.; Curro, John G.; Budzien, Joanne

    2007-01-01

    The effects of sequential cross-linking and scission of polymer networks formed in two states of strain are investigated using molecular dynamics simulations. Two-stage networks are studied in which a network formed in the unstrained state (stage 1) undergoes additional cross-linking in a uniaxially... a fraction (quantified by the stress transfer function) of the second-stage cross-links contribute to the effective first-stage cross-link density. The stress transfer functions extracted from the MD simulations of the reacting networks are found to be in very...

  11. Total alkalinity estimation using MLR and neural network techniques

    Science.gov (United States)

    Velo, A.; Pérez, F. F.; Tanhua, T.; Gilcoto, M.; Ríos, A. F.; Key, R. M.

    2013-02-01

    During the last decade, two important collections of carbon-relevant hydrochemical data have become available: GLODAP and CARINA. These collections comprise a synthesis of bottle data for all ocean depths from many cruises collected over several decades. For a majority of the cruises at least two carbon parameters were measured. However, for a large number of stations, samples or even cruises, the carbonate system is under-determined (i.e., only one or no carbonate parameter was measured), resulting in data gaps for the carbonate system in these collections. A method for filling these gaps would be very useful, as it would help with estimations of the anthropogenic carbon (Cant) content or quantification of oceanic acidification. The aim of this work is to apply and describe a 3D moving-window multilinear regression (MLR) algorithm to fill gaps in total alkalinity (AT) in the CARINA and GLODAP data collections for the Atlantic. In addition to filling data gaps, the estimated AT values derived from the MLR are useful in quality control of the measurements of the carbonate system, as they can aid in the identification of outliers. For comparison, a neural network algorithm able to perform non-linear predictions was also designed. The goal here was to design an alternative approach to accomplish the same task of filling AT gaps. Both methods return internally consistent results, thereby giving confidence in our approach.
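
    The fitting step inside each window is an ordinary multilinear regression. A minimal sketch of one such fit via the normal equations, on synthetic data with hypothetical salinity/temperature predictors (the 3D moving-window selection and the real AT predictor set are omitted):

```python
def solve(A, b):
    """Gaussian elimination with partial pivoting for a small dense system."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(n):
        pivot = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[pivot] = M[pivot], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def fit_mlr(rows, ys):
    """Least-squares fit via the normal equations X'X b = X'y (intercept first)."""
    X = [[1.0] + list(r) for r in rows]
    p = len(X[0])
    XtX = [[sum(x[i] * x[j] for x in X) for j in range(p)] for i in range(p)]
    Xty = [sum(x[i] * y for x, y in zip(X, ys)) for i in range(p)]
    return solve(XtX, Xty)

# Synthetic 'alkalinity' generated from AT = 300 + 55*S - 2*theta (hypothetical
# coefficients); the fit should recover them exactly on noise-free data.
data = [(s, t) for s in (33.0, 34.0, 35.0, 36.0) for t in (2.0, 10.0, 18.0)]
ys = [300 + 55 * s - 2 * t for s, t in data]
coef = fit_mlr(data, ys)
print([round(c, 6) for c in coef])
```

    In the moving-window variant, this fit is simply repeated for the subset of bottle samples falling inside each spatial window, so the regression coefficients vary smoothly with location.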

  12. Hybrid Multilevel Monte Carlo Simulation of Stochastic Reaction Networks

    KAUST Repository

    Moraes, Alvaro

    2015-01-07

    Stochastic reaction networks (SRNs) are a class of continuous-time Markov chains intended to describe, from the kinetic point of view, the time-evolution of chemical systems in which molecules of different chemical species undergo a finite set of reaction channels. This talk is based on articles [4, 5, 6], where we are interested in the following problem: given an SRN, X, defined through its set of reaction channels and its initial state, x0, estimate E(g(X(T))); that is, the expected value of a scalar observable, g, of the process, X, at a fixed time, T. This problem leads us to define a series of Monte Carlo estimators, M, that with high probability produce values close to the quantity of interest, E(g(X(T))). More specifically, given a user-selected tolerance, TOL, and a small confidence level, η, find an estimator, M, based on approximate sampled paths of X, such that P(|E(g(X(T))) − M| ≤ TOL) ≥ 1 − η; moreover, we want to achieve this objective with near-optimal computational work. We first introduce a hybrid path-simulation scheme based on the well-known stochastic simulation algorithm (SSA) [3] and the tau-leap method [2]. Then, we introduce a Multilevel Monte Carlo strategy that allows us to achieve a computational complexity of order O(TOL⁻²), the same computational complexity as an exact method but with a smaller constant. We provide numerical examples to show our results.
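
    The exact building block referenced here, Gillespie's SSA, is short to write down. A minimal sketch for the single decay channel A → ∅ with propensity a(x) = c·x, used to estimate E[X(T)] by plain Monte Carlo (the tau-leap hybridization and multilevel machinery from the talk are omitted):

```python
import random

def ssa_decay(x0, c, T, rng):
    """Gillespie SSA for the single reaction channel A -> 0 with propensity
    a(x) = c*x: exact sampling of the jump process up to time T."""
    x, t = x0, 0.0
    while x > 0:
        t += rng.expovariate(c * x)   # exponential waiting time to next firing
        if t > T:
            break
        x -= 1                        # fire the reaction
    return x

rng = random.Random(42)
samples = [ssa_decay(x0=100, c=0.1, T=5.0, rng=rng) for _ in range(2000)]
estimate = sum(samples) / len(samples)
print(estimate)   # theory: E[X(5)] = 100 * exp(-0.5), about 60.65
```

    Each SSA path is exact but costs one random number per reaction firing; tau-leaping and the multilevel estimator in the talk trade per-path exactness for cheaper paths while still controlling |E(g(X(T))) − M| to the prescribed tolerance.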

  13. CaloGAN: Simulating 3D high energy particle showers in multilayer electromagnetic calorimeters with generative adversarial networks

    Science.gov (United States)

    Paganini, Michela; de Oliveira, Luke; Nachman, Benjamin

    2018-01-01

    The precise modeling of subatomic particle interactions and propagation through matter is paramount for the advancement of nuclear and particle physics searches and precision measurements. The most computationally expensive step in the simulation pipeline of a typical experiment at the Large Hadron Collider (LHC) is the detailed modeling of the full complexity of physics processes that govern the motion and evolution of particle showers inside calorimeters. We introduce CaloGAN, a new fast simulation technique based on generative adversarial networks (GANs). We apply these neural networks to the modeling of electromagnetic showers in a longitudinally segmented calorimeter and achieve speedup factors comparable to or better than existing full simulation techniques on CPU (100×-1000×) and even faster on GPU (up to ~10⁵×). There are still challenges for achieving precision across the entire phase space, but our solution can reproduce a variety of geometric shower shape properties of photons, positrons, and charged pions. This represents a significant stepping stone toward a full neural network-based detector simulation that could save significant computing time and enable many analyses now and in the future.

  14. Cross-section adjustment techniques for BWR adaptive simulation

    Science.gov (United States)

    Jessee, Matthew Anderson

    Computational capability has been developed to adjust multi-group neutron cross-sections to improve the fidelity of boiling water reactor (BWR) modeling and simulation. The method involves propagating multi-group neutron cross-section uncertainties through BWR computational models to evaluate uncertainties in key core attributes such as core k-effective, nodal power distributions, thermal margins, and in-core detector readings. Uncertainty-based inverse theory methods are then employed to adjust multi-group cross-sections to minimize the disagreement between BWR modeling predictions and measured plant data. For this work, measured plant data were virtually simulated in the form of perturbed 3-D nodal power distributions with discrepancies with predictions of the same order of magnitude as expected from plant data. Using the simulated plant data, multi-group cross-section adjustment reduces the error in core k-effective to less than 0.2% and the RMS error in nodal power to 4% (i.e. the noise level of the in-core instrumentation). To ensure that the adapted BWR model predictions are robust, Tikhonov regularization is utilized to control the magnitude of the cross-section adjustment. In contrast to few-group cross-section adjustment, which was the focus of previous research on BWR adaptive simulation, multigroup cross-section adjustment allows for future fuel cycle design optimization to include the determination of optimal fresh fuel assembly designs using the adjusted multi-group cross-sections. The major focus of this work is to efficiently propagate multi-group neutron cross-section uncertainty through BWR lattice physics calculations. Basic neutron cross-section uncertainties are provided in the form of multi-group cross-section covariance matrices. For energy groups in the resolved resonance energy range, the cross-section uncertainties are computed using an infinitely-dilute approximation of the neutron flux. In order to accurately account for spatial and

  15. Drift simulation of MH370 debris using superensemble techniques

    Science.gov (United States)

    Jansen, Eric; Coppini, Giovanni; Pinardi, Nadia

    2016-07-01

    On 7 March 2014 (UTC), Malaysia Airlines flight 370 vanished without a trace. The aircraft is believed to have crashed in the southern Indian Ocean, but despite extensive search operations the location of the wreckage is still unknown. The first tangible evidence of the accident was discovered almost 17 months after the disappearance. On 29 July 2015, a small piece of the right wing of the aircraft was found washed up on the island of Réunion, approximately 4000 km from the assumed crash site. Since then a number of other parts have been found in Mozambique, South Africa and on Rodrigues Island. This paper presents a numerical simulation using high-resolution oceanographic and meteorological data to predict the movement of floating debris from the accident. Multiple model realisations are used with different starting locations and wind drag parameters. The model realisations are combined into a superensemble, adjusting the model weights to best represent the discovered debris. The superensemble is then used to predict the distribution of marine debris at various moments in time. This approach can be easily generalised to other drift simulations where observations are available to constrain unknown input parameters. The distribution at the time of the accident shows that the discovered debris most likely originated from the wide search area between 28 and 35° S. This partially overlaps with the current underwater search area, but extends further towards the north. Results at later times show that the most probable locations to discover washed-up debris are along the African east coast, especially in the area around Madagascar. The debris remaining at sea in 2016 is spread out over a wide area and its distribution changes only slowly.
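
    The key superensemble step is reweighting model realisations by how well they reproduce the available observations. A minimal sketch using inverse-squared-error weights (hypothetical numbers; the paper's actual weighting scheme over starting locations and wind-drag parameters may differ):

```python
def superensemble_weights(predictions, observation):
    """Weight ensemble members by inverse squared error against an observation,
    normalized so the weights sum to one."""
    inv = [1.0 / ((p - observation) ** 2 + 1e-12) for p in predictions]
    total = sum(inv)
    return [w / total for w in inv]

def combine(members, weights):
    """Weighted superensemble forecast from per-member forecasts."""
    return sum(m * w for m, w in zip(members, weights))

# Hypothetical example: three drift realisations predict the longitude of a
# debris find; the second one best matches the observation (55.5 degrees E).
at_observation = [54.0, 55.4, 58.0]
weights = superensemble_weights(at_observation, 55.5)
forecast = combine([60.0, 61.0, 66.0], weights)  # same members at a later time
print([round(w, 4) for w in weights], round(forecast, 2))
```

    Members that disagree with the discovered debris are down-weighted, so the combined prediction at any other time is dominated by the realisations consistent with the observations, which is what constrains the unknown crash-site parameters.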

  16. Wind Turbine Rotor Simulation via CFD Based Actuator Disc Technique Compared to Detailed Measurement

    National Research Council Canada - National Science Library

    Esmail Mahmoodi; Ali Jafari; Alireza Keyhani

    2015-01-01

    .... The AD model, a combination of the CFD technique and User Defined Function (UDF) codes, the so-called UDF/AD model, is used to simulate loads and performance of the rotor in three different wind speed tests...

  17. Queueing Models and Stability of Message Flows in Distributed Simulators of Open Queueing Networks

    OpenAIRE

    Gupta, Manish; Kumar, Anurag; Shorey, Rajeev

    1996-01-01

    In this paper we study message flow processes in distributed simulators of open queueing networks. We develop and study queueing models for distributed simulators with maximum lookahead sequencing. We characterize the external arrival process, and the message feedback process in the simulator of a simple queueing network with feedback. We show that a certain natural modelling construct for the arrival process is exactly correct, whereas an obvious model for the feedback process is wrong; we t...

  18. Social networks and smoking: exploring the effects of peer influence and smoker popularity through simulations.

    Science.gov (United States)

    Schaefer, David R; Adams, Jimi; Haas, Steven A

    2013-10-01

    Adolescent smoking and friendship networks are related in many ways that can amplify smoking prevalence. Understanding and developing interventions within such a complex system requires new analytic approaches. We draw on recent advances in dynamic network modeling to develop a technique that explores the implications of various intervention strategies targeted toward micro-level processes. Our approach begins by estimating a stochastic actor-based model using data from one school in the National Longitudinal Study of Adolescent Health. The model provides estimates of several factors predicting friendship ties and smoking behavior. We then use estimated model parameters to simulate the coevolution of friendship and smoking behavior under potential intervention scenarios. Namely, we manipulate the strength of peer influence on smoking and the popularity of smokers relative to nonsmokers. We measure how these manipulations affect smoking prevalence, smoking initiation, and smoking cessation. Results indicate that both peer influence and smoking-based popularity affect smoking behavior and that their joint effects are nonlinear. This study demonstrates how a simulation-based approach can be used to explore alternative scenarios that may be achievable through intervention efforts and offers new hypotheses about the association between friendship and smoking.

  19. Compression and Combining Based on Channel Shortening and Rank Reduction Technique for Cooperative Wireless Sensor Networks

    KAUST Repository

    Ahmed, Qasim Zeeshan

    2013-12-18

    This paper investigates and compares the performance of wireless sensor networks where sensors operate on the principles of cooperative communications. We consider a scenario where the source transmits signals to the destination with the help of L sensors. As the destination has the capacity of processing only U out of these L signals, the strongest U signals are selected while the remaining (L−U) signals are suppressed. A preprocessing block similar to channel-shortening is proposed in this contribution. However, this preprocessing block employs a rank-reduction technique instead of channel-shortening. By employing this preprocessing, we are able to decrease the computational complexity of the system without affecting the bit error rate (BER) performance. From our simulations, it can be shown that these schemes outperform the channel-shortening schemes in terms of computational complexity. In addition, the proposed schemes have a superior BER performance as compared to channel-shortening schemes when sensors employ fixed gain amplification. However, for sensors which employ variable gain amplification, a tradeoff exists in terms of BER performance between the channel-shortening and these schemes. These schemes outperform the channel-shortening scheme at lower signal-to-noise ratios.

  20. Vibration control of a class of semiactive suspension system using neural network and backstepping techniques

    Science.gov (United States)

    Zapateiro, M.; Luo, N.; Karimi, H. R.; Vehí, J.

    2009-08-01

    In this paper, we address the problem of designing a semiactive controller for a class of vehicle suspension system that employs a magnetorheological (MR) damper as the actuator. As the first step, an adequate model of the MR damper must be developed. Most of the models found in the literature are based on the mechanical behavior of the device, with the Bingham and Bouc-Wen models being the most popular ones. These models can estimate the damping force of the device taking the control voltage and velocity as input variables. However, the inverse model, i.e., the model that computes the control variable (generally the voltage), is even more difficult to find due to the numerical complexity of inverting the nonlinear forward model. In our case, we develop a neural network able to estimate the control voltage input to the MR damper that is necessary for producing the optimal force predicted by the controller so as to reduce the vibrations. The controller is designed following the standard backstepping technique. The performance of the control system is evaluated by means of simulations in MATLAB/Simulink.

  1. Using simulation models to evaluate ape nest survey techniques.

    Directory of Open Access Journals (Sweden)

    Ryan H Boyko

    Full Text Available BACKGROUND: Conservationists frequently use nest count surveys to estimate great ape population densities, yet the accuracy and precision of the resulting estimates are difficult to assess. METHODOLOGY/PRINCIPAL FINDINGS: We used mathematical simulations to model nest building behavior in an orangutan population to compare the quality of the population size estimates produced by two of the commonly used nest count methods, the 'marked recount method' and the 'matrix method.' We found that when observers missed even small proportions of nests in the first survey, the marked recount method produced large overestimates of the population size. Regardless of observer reliability, the matrix method produced substantial overestimates of the population size when surveying effort was low. With high observer reliability, both methods required surveying approximately 0.26% of the study area (0.26 km² out of 100 km² in this simulation) to achieve an accurate estimate of population size; at or above this sampling effort both methods produced estimates within 33% of the true population size 50% of the time. Both methods showed diminishing returns at survey efforts above 0.26% of the study area. The use of published nest decay estimates derived from other sites resulted in widely varying population size estimates that spanned nearly an entire order of magnitude. The marked recount method proved much better at detecting population declines, detecting 5% declines nearly 80% of the time even in the first year of decline. CONCLUSIONS/SIGNIFICANCE: These results highlight the fact that neither nest surveying method produces highly reliable population size estimates with any reasonable surveying effort, though either method could be used to obtain a gross population size estimate in an area. Conservation managers should determine if the quality of these estimates is worth the money and effort required to produce them, and should generally limit surveying effort to
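The overestimation mechanism described for the marked recount method (nests missed in the first survey are indistinguishable from new nests on the revisit) can be illustrated with a small Monte Carlo sketch. All parameter values here are fabricated for illustration and are not from the paper.

```python
import random

def marked_recount_bias(true_nests, detect_prob, new_nests, trials=2000):
    # Monte-Carlo sketch of the first-survey-miss bias: nests missed (and so
    # never marked) in survey 1 look like "new" nests on the revisit, which
    # inflates the apparent nest production rate. Illustrative model only.
    random.seed(7)
    total_excess = 0
    for _ in range(trials):
        marked = sum(random.random() < detect_prob for _ in range(true_nests))
        missed = true_nests - marked
        apparent_new = missed + new_nests     # revisit cannot tell them apart
        total_excess += apparent_new - new_nests
    return total_excess / trials              # expected spurious "new" nests

bias = marked_recount_bias(true_nests=100, detect_prob=0.9, new_nests=20)
```

With 90% detection of 100 nests, roughly 10 missed nests per revisit are counted as new, which then propagates into an inflated population estimate.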

  2. Concept for a Common Performance Measurement System for Unit Training at the National Training Center (NTC) and with Simulation Networking (SIMNET) platoon-Movement to Contact

    Science.gov (United States)

    1990-09-01

    Simulation Networking (SIMNET) Platoon-Movement to Contact. James W. Kerins and Nancy K. Atwood, BDM International, Inc. Field Unit at ... terrain search techniques specified in the unit's SOP are reviewed and followed. Engagement techniques are reviewed and rehearsed: the "Two Football Field" technique for jet aircraft; the "Half Football Field" technique for slow aircraft or helicopters; the "Reference Point or Series of Reference Points" technique.

  3. Mitigating Handoff Call Dropping in Wireless Cellular Networks: A Call Admission Control Technique

    Science.gov (United States)

    Ekpenyong, Moses Effiong; Udoh, Victoria Idia; Bassey, Udoma James

    2016-06-01

    Handoff management has been an important but challenging issue in the field of wireless communication. It seeks to maintain seamless connectivity for mobile users changing their points of attachment from one base station to another. This paper derives a call admission control model and establishes an optimal step-size coefficient (k) that regulates the admission probability of handoff calls. An operational CDMA network carrier was investigated through the analysis of empirical data collected over a period of one month, to verify the performance of the network. Our findings revealed that approximately 23% of calls in the existing system were lost, while 40% of the calls (on average) were successfully admitted. A simulation of the proposed model was then carried out under ideal network conditions to study the relationship between the various network parameters and validate our claim. Simulation results showed that increasing the step-size coefficient degrades the network performance. Even at the optimum step-size (k), the network could still be compromised by severe network crises, but our model was able to recover from these problems and continue functioning normally.
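The role of the step-size coefficient k can be sketched with a toy admission-probability function. This functional form is an assumption for illustration only; the paper's exact model may differ.

```python
def handoff_admission_prob(occupied, threshold, k):
    # Hypothetical step-wise control: admit all handoff calls while channel
    # occupancy is at or below `threshold`, then reduce the admission
    # probability by k for each additional busy channel.
    if occupied <= threshold:
        return 1.0
    return max(0.0, 1.0 - k * (occupied - threshold))
```

A larger k throttles handoff admissions more aggressively, mirroring the reported finding that increasing the step-size coefficient degrades network performance.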

  4. Imagining the future: The core episodic simulation network dissociates as a function of timecourse and the amount of simulated information.

    Science.gov (United States)

    Thakral, Preston P; Benoit, Roland G; Schacter, Daniel L

    2017-05-01

    Neuroimaging data indicate that episodic memory (i.e., remembering specific past experiences) and episodic simulation (i.e., imagining specific future experiences) are associated with enhanced activity in a common set of neural regions, often referred to as the core network. This network comprises the hippocampus, parahippocampal cortex, lateral and medial parietal cortex, lateral temporal cortex, and medial prefrontal cortex. Evidence for a core network has been taken as support for the idea that episodic memory and episodic simulation are supported by common processes. Much remains to be learned about how specific core network regions contribute to specific aspects of episodic simulation. Prior neuroimaging studies of episodic memory indicate that certain regions within the core network are differentially sensitive to the amount of information recollected (e.g., the left lateral parietal cortex). In addition, certain core network regions dissociate as a function of their timecourse of engagement during episodic memory (e.g., transient activity in the posterior hippocampus and sustained activity in the left lateral parietal cortex). In the current study, we assessed whether similar dissociations could be observed during episodic simulation. We found that the left lateral parietal cortex modulates as a function of the amount of simulated details. Of particular interest, while the hippocampus was insensitive to the amount of simulated details, we observed a temporal dissociation within the hippocampus: transient activity occurred in relatively posterior portions of the hippocampus and sustained activity occurred in anterior portions. Because the posterior hippocampal and lateral parietal findings parallel those observed during episodic memory, the present results add to the evidence that episodic memory and episodic simulation are supported by common processes. Critically, the present study also provides evidence that regions within the core network support

  5. Modeling and Simulation of Handover Scheme in Integrated EPON-WiMAX Networks

    DEFF Research Database (Denmark)

    Yan, Ying; Dittmann, Lars

    2011-01-01

    In this paper, we tackle the seamless handover problem in integrated optical wireless networks. Our model applies for the convergence network of EPON and WiMAX and a mobilityaware signaling protocol is proposed. The proposed handover scheme, Integrated Mobility Management Scheme (IMMS), is assisted...... by enhancing the traditional MPCP signaling protocol, which cooperatively collects mobility information from the front-end wireless network and makes centralized bandwidth allocation decisions in the backhaul optical network. The integrated network architecture and the joint handover scheme are simulated using...... OPNET modeler. Results show validation of the protocol, i.e., integrated handover scheme gains better network performances....

  6. Stochastic simulation of HIV population dynamics through complex network modelling

    NARCIS (Netherlands)

    Sloot, P. M. A.; Ivanov, S. V.; Boukhanovsky, A. V.; van de Vijver, D. A. M. C.; Boucher, C. A. B.

    We propose a new way to model HIV infection spreading through the use of dynamic complex networks. The heterogeneous population of HIV exposure groups is described through a unique network degree probability distribution. The time evolution of the network nodes is modelled by a Markov process and

  7. Stochastic simulation of HIV population dynamics through complex network modelling

    NARCIS (Netherlands)

    Sloot, P.M.A.; Ivanov, S.V.; Boukhanovsky, A.V.; van de Vijver, D.A.M.C.; Boucher, C.A.B.

    2008-01-01

    We propose a new way to model HIV infection spreading through the use of dynamic complex networks. The heterogeneous population of HIV exposure groups is described through a unique network degree probability distribution. The time evolution of the network nodes is modelled by a Markov process and

  8. Validation of Mobility Simulations via Measurement Drive Tests in an Operational Network

    DEFF Research Database (Denmark)

    Gimenez, Lucas Chavarria; Barbera, Simone; Polignano, Michele

    2015-01-01

    Simulations play a key role in validating new concepts in cellular networks, since most of the features proposed and introduced into the standards are typically first studied by means of simulations. In order to increase the trustworthiness of the simulation results, proper models and settings must...... to reality. The presented study is based on drive tests measurements and explicit simulations of an operator network in the city of Aalborg (Denmark) – modelling a real 3D environment and using a commonly accepted dynamic system level simulation methodology. In short, the presented results show...

  9. Advanced Techniques for Reservoir Simulation and Modeling of Non-Conventional Wells

    Energy Technology Data Exchange (ETDEWEB)

    Durlofsky, Louis J.; Aziz, Khalid

    2001-08-23

    Research results for the second year of this project on the development of improved modeling techniques for non-conventional (e.g., horizontal, deviated or multilateral) wells were presented. The overall program entails the development of enhanced well modeling and general simulation capabilities. A general formulation for black-oil and compositional reservoir simulation was presented.

  10. Energy saving techniques applied over a nation-wide mobile network

    DEFF Research Database (Denmark)

    Perez, Eva; Frank, Philipp; Micallef, Gilbert

    2014-01-01

    Traffic carried over wireless networks has grown significantly in recent years and actual forecasts show that this trend is expected to continue. However, the rapid mobile data explosion and the need for higher data rates comes at a cost of increased complexity and energy consumption of the mobile...... on the energy consumption based on a nation-wide network of a leading European operator. By means of an extensive analysis, we show that with the proposed techniques significant energy savings can be realized....

  11. NEW BURST ASSEMBLY AND SCHEDULING TECHNIQUE FOR OPTICAL BURST SWITCHING NETWORKS

    OpenAIRE

    Kavitha, V.; V.Palanisamy

    2013-01-01

    The Optical Burst Switching is a new switching technology that efficiently utilizes the bandwidth in the optical layer. The key areas to be concentrated in Optical Burst Switching (OBS) networks are the burst assembly and burst scheduling i.e., assignment of wavelengths to the incoming bursts. This study presents a New Burst Assembly and Scheduling (NBAS) technique in a simultaneous multipath transmission for burst loss recovery in OBS networks. A Redundant Burst Segmentation (RBS) is used fo...

  12. Configuring Simulation Models Using CAD Techniques: A New Approach to Warehouse Design

    OpenAIRE

    Brito, António Ernesto da Silva Carvalho

    1992-01-01

    The research reported in this thesis is related to the development and use of software tools for supporting warehouse design and management. Computer Aided Design and Simulation techniques are used to develop a software system that forms the basis of a Decision Support System for warehouse design. The current position of simulation software is reviewed. It is investigated how appropriate current simulation software is for warehouse modelling. Special attention is given to Vi...

  13. How Crime Spreads Through Imitation in Social Networks: A Simulation Model

    Science.gov (United States)

    Punzo, Valentina

    In this chapter an agent-based model for investigating how crime spreads through social networks is presented. Some theoretical issues related to the sociological explanation of crime are tested through simulation. The agent-based simulation allows us to investigate the relative impact of some mechanisms of social influence on crime, within a set of controlled simulated experiments.

  14. Experimental Evaluation of Simulation Abstractions for Wireless Sensor Network MAC Protocols

    NARCIS (Netherlands)

    Halkes, G.P.; Langendoen, K.G.

    2010-01-01

    The evaluation of MAC protocols for Wireless Sensor Networks (WSNs) is often performed through simulation. These simulations necessarily abstract away from reality in many ways. However, the impact of these abstractions on the results of the simulations has received only limited attention. Moreover,

  15. NeuReal: an interactive simulation system for implementing artificial dendrites and large hybrid networks.

    Science.gov (United States)

    Hughes, Stuart W; Lorincz, Magor; Cope, David W; Crunelli, Vincenzo

    2008-04-30

    The dynamic clamp is a technique which allows the introduction of artificial conductances into living cells. Up to now, this technique has been mainly used to add small numbers of 'virtual' ion channels to real cells or to construct small hybrid neuronal circuits. In this paper we describe a prototype computer system, NeuReal, that extends the dynamic clamp technique to include (i) the attachment of artificial dendritic structures consisting of multiple compartments and (ii) the construction of large hybrid networks comprising several hundred biophysically realistic modelled neurons. NeuReal is a fully interactive system that runs on Windows XP, is written in a combination of C++ and assembler, and uses the Microsoft DirectX application programming interface (API) to achieve high-performance graphics. By using the sampling hardware-based representation of membrane potential at all stages of computation and by employing simple look-up tables, NeuReal can simulate over 1000 independent Hodgkin and Huxley type conductances in real-time on a modern personal computer (PC). In addition, whilst not being a hard real-time system, NeuReal still offers reliable performance and tolerable jitter levels up to an update rate of 50 kHz. A key feature of NeuReal is that rather than being a simple dedicated dynamic clamp, it operates as a fast simulation system within which neurons can be specified as either real or simulated. We demonstrate the power of NeuReal with several example experiments and argue that it provides an effective tool for examining various aspects of neuronal function.
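The look-up-table trick the abstract mentions, precomputing voltage-dependent rate functions so the real-time loop avoids expensive exponentials, can be sketched for a single Hodgkin-Huxley gating variable. The rate expressions are the classic HH potassium-activation forms; the grid and update scheme are illustrative choices, not NeuReal's implementation.

```python
import math

def alpha_n(v):
    # Classic HH potassium activation opening rate (v in mV, rate in 1/ms).
    if abs(v + 55.0) < 1e-9:
        return 0.1  # limit value at the removable singularity v = -55 mV
    return 0.01 * (v + 55.0) / (1.0 - math.exp(-(v + 55.0) / 10.0))

def beta_n(v):
    # Classic HH potassium activation closing rate.
    return 0.125 * math.exp(-(v + 65.0) / 80.0)

# Precompute both rates once on a 0.1 mV grid from -100 to +50 mV.
V_MIN, V_STEP, N_ENTRIES = -100.0, 0.1, 1501
ALPHA = [alpha_n(V_MIN + i * V_STEP) for i in range(N_ENTRIES)]
BETA = [beta_n(V_MIN + i * V_STEP) for i in range(N_ENTRIES)]

def step_gate(n, v, dt):
    # One forward-Euler update of the gating variable using table lookup
    # instead of evaluating exp() on every simulation step.
    i = min(N_ENTRIES - 1, max(0, round((v - V_MIN) / V_STEP)))
    a, b = ALPHA[i], BETA[i]
    return n + dt * (a * (1.0 - n) - b * n)
```

At a 50 kHz update rate, replacing two `exp` evaluations per gate per step with two array reads is precisely where such a system recovers its real-time budget.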

  16. IMPLEMENTATION OF IMPROVED NETWORK LIFETIME TECHNIQUE FOR WSN USING CLUSTER HEAD ROTATION AND SIMULTANEOUS RECEPTION

    Directory of Open Access Journals (Sweden)

    Arun Vasanaperumal

    2015-11-01

    Full Text Available There are a number of potential applications of Wireless Sensor Networks (WSNs), such as wild habitat monitoring, forest fire detection and military surveillance. All these applications are constrained to draw power from a stand-alone battery source, so it is of paramount importance to conserve the energy drawn from this source. A lot of effort has gone into this area recently and it remains one of the hot research areas. In order to improve network lifetime and reduce average power consumption, this study proposes a novel cluster head selection algorithm. Clustering is the preferred architecture when the number of nodes is large, because it results in considerable power savings for large networks compared to alternatives such as tree or star topologies. Since the majority of applications generally involve more than 30 nodes, clustering has gained widespread importance and is the most widely used network architecture. The optimum number of clusters is first selected based on the number of nodes in the network. When the network is in operation, the cluster heads are rotated periodically based on the proposed cluster head selection algorithm to increase the network lifetime. Throughout the network, single-hop communication is assumed. This work will serve as an encouragement for further advances in low power techniques for implementing Wireless Sensor Networks (WSNs).
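Periodic cluster-head rotation can be sketched with a minimal residual-energy rule. This is an illustrative toy, not the paper's algorithm, which may weigh additional factors; the energy costs below are made-up numbers.

```python
def pick_cluster_head(energy):
    # Hand the cluster-head role to the node with the most residual energy.
    return max(energy, key=energy.get)

def run_rounds(energy, rounds, ch_cost=3.0, member_cost=1.0):
    # Cluster heads spend extra energy (aggregation plus long-range transmit),
    # so rotating the role each round spreads the drain across the cluster.
    heads = []
    for _ in range(rounds):
        ch = pick_cluster_head(energy)
        heads.append(ch)
        for node in energy:
            energy[node] -= ch_cost if node == ch else member_cost
    return heads

residual = {"A": 20.0, "B": 19.0, "C": 18.0, "D": 17.0}
heads = run_rounds(residual, 8)
```

After eight rounds every node has served as cluster head and the residual energies stay tightly balanced, which is exactly the lifetime-extending effect rotation is meant to achieve.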

  17. Hybrid Network Simulation for the ATLAS Trigger and Data Acquisition (TDAQ) System

    CERN Document Server

    Bonaventura, Matias Alejandro; The ATLAS collaboration; Castro, Rodrigo Daniel; Foguelman, Daniel Jacob

    2015-01-01

    The poster shows the ongoing research in the ATLAS TDAQ group in collaboration with the University of Buenos Aires in the area of hybrid data network simulations. The Data Network and Processing Cluster filters data in real-time, achieving a rejection factor in the order of 40000x under real-time latency constraints. The dataflow between the processing units (TPUs) and the Readout System (ROS) presents a "TCP Incast"-type network pathology which TCP cannot handle efficiently. A credits system is in place which limits the rate of queries and reduces latency. This large computer network and its complex dataflow have been modelled and simulated using PowerDEVS, a DEVS-based simulator. The simulation has been validated and used to produce what-if scenarios in the real network. Network Simulation with Hybrid Flows: speedups and accuracy, combined. • For intensive network traffic, discrete event simulation models (packet-level granularity) soon become prohibitive: too high computing demands. • Fluid Flow simul...

  18. ENERGY EFFICIENCY ANALYSIS OF ERROR CORRECTION TECHNIQUES IN UNDERWATER WIRELESS SENSOR NETWORKS

    Directory of Open Access Journals (Sweden)

    M. NORDIN B. ZAKARIA

    2011-02-01

    Full Text Available Research in underwater acoustic networks has developed rapidly to support a large variety of applications such as mining equipment and environmental monitoring. As in terrestrial sensor networks, reliable data transport is demanded in underwater sensor networks. The energy efficiency of error correction techniques should be considered because of the severe energy constraints of underwater wireless sensor networks. Forward error correction (FEC) and automatic repeat request (ARQ) are the two main error correction techniques in underwater networks. In this paper, a mathematical energy efficiency analysis for FEC and ARQ techniques in the underwater environment has been carried out based on communication distance and packet size. The effects of wind speed and shipping factor are studied. A comparison between FEC and ARQ in terms of energy efficiency is performed; it is found that the energy efficiency of both techniques increases with increasing packet size at short distances, but decreases at longer distances. There is also a cut-off distance below which ARQ is more energy efficient than FEC, and above which FEC is more energy efficient than ARQ. This cut-off distance decreases with increasing wind speed. Wind speed has a great effect on energy efficiency, whereas the shipping factor has a negligible effect on energy efficiency for both techniques.
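The cut-off-distance behaviour can be reproduced with a toy energy model: ARQ pays for expected retransmissions, FEC pays a fixed parity overhead but sees an effectively less error-prone link. Every formula and constant below is an illustrative assumption, not the paper's underwater channel model.

```python
BITS = 1024                              # toy packet size in bits

def packet_error_rate(d):
    # Toy bit-error rate that worsens with distance (illustrative only; the
    # paper uses an underwater acoustic channel with wind/shipping noise).
    ber = min(0.5, 1e-6 * d ** 1.5)
    return 1.0 - (1.0 - ber) ** BITS

def arq_energy(d):
    # ARQ: expected transmissions per delivered packet = 1 / (1 - PER),
    # with d^2 spreading loss per attempt.
    return d ** 2 / max(1e-12, 1.0 - packet_error_rate(d))

def fec_energy(d):
    # FEC: fixed 25% parity overhead, but coding makes the link behave like
    # a shorter (less error-prone) one.
    effective_d = d / 100.0 ** 0.25
    return d ** 2 * 1.25 / max(1e-12, 1.0 - packet_error_rate(effective_d))

# Cut-off distance: below it ARQ costs less energy, above it FEC wins.
cutoff = next(d for d in range(1, 5000) if fec_energy(d) < arq_energy(d))
```

At short range the FEC overhead is wasted; at long range ARQ's retransmission count explodes, so the two cost curves cross at a finite cut-off distance, qualitatively matching the abstract's finding.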

  19. A novel wavelet neural network based pathological stage detection technique for an oral precancerous condition

    Science.gov (United States)

    Paul, R R; Mukherjee, A; Dutta, P K; Banerjee, S; Pal, M; Chatterjee, J; Chaudhuri, K; Mukkerjee, K

    2005-01-01

    Aim: To describe a novel neural network based oral precancer (oral submucous fibrosis; OSF) stage detection method. Method: The wavelet coefficients of transmission electron microscopy images of collagen fibres from normal oral submucosa and OSF tissues were used to choose the feature vector which, in turn, was used to train the artificial neural network. Results: The trained network was able to classify normal and oral precancer stages (less advanced and advanced) after obtaining the image as an input. Conclusions: The results obtained from this proposed technique were promising and suggest that with further optimisation this method could be used to detect and stage OSF, and could be adapted for other conditions. PMID:16126873

  20. Application of artificial neural networks with backpropagation technique in the financial data

    Science.gov (United States)

    Jaiswal, Jitendra Kumar; Das, Raja

    2017-11-01

    The application of neural networks has proliferated across multiple disciplines over recent decades because of their powerful, parameter-controlled capabilities for pattern recognition and classification. They are also widely applied for forecasting in numerous domains. Since financial data have become readily available due to the involvement of computers and computing systems in stock market premises throughout the world, researchers have developed numerous techniques and algorithms to analyze data from this sector. In this paper we apply a neural network with the backpropagation technique to find patterns in financial data and to predict stock values.
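A minimal backpropagation network for next-value prediction can be sketched as follows. The "price" series is a fabricated noisy sinusoid, the architecture and learning rate are arbitrary illustrative choices, and nothing here reproduces the paper's actual setup.

```python
import math
import random

random.seed(1)

# Fabricated stand-in for a de-trended price series (illustration only).
series = [math.sin(0.3 * t) + 0.1 * random.random() for t in range(60)]
samples = [(series[t:t + 3], series[t + 3]) for t in range(len(series) - 3)]

H, lr = 5, 0.01                          # hidden units, learning rate
w1 = [[random.uniform(-0.5, 0.5) for _ in range(3)] for _ in range(H)]
w2 = [random.uniform(-0.5, 0.5) for _ in range(H)]

def forward(x):
    h = [math.tanh(sum(w * xi for w, xi in zip(row, x))) for row in w1]
    return h, sum(v * hi for v, hi in zip(w2, h))

for _ in range(500):                     # stochastic gradient descent epochs
    for x, y in samples:
        h, out = forward(x)
        err = out - y                    # derivative of squared error (up to 2x)
        for j in range(H):
            grad_hidden = err * w2[j] * (1.0 - h[j] ** 2)
            w2[j] -= lr * err * h[j]     # output-layer update
            for i in range(3):
                w1[j][i] -= lr * grad_hidden * x[i]  # backpropagated update

mse = sum((forward(x)[1] - y) ** 2 for x, y in samples) / len(samples)
```

The inner loop is the whole backpropagation idea in miniature: the output error is pushed back through the tanh nonlinearity to compute each hidden weight's gradient.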

  1. Parallel Reservoir Simulations with Sparse Grid Techniques and Applications to Wormhole Propagation

    KAUST Repository

    Wu, Yuanqing

    2015-09-08

    In this work, two topics of reservoir simulations are discussed. The first topic is the two-phase compositional flow simulation in hydrocarbon reservoir. The major obstacle that impedes the applicability of the simulation code is the long run time of the simulation procedure, and thus speeding up the simulation code is necessary. Two means are demonstrated to address the problem: parallelism in physical space and the application of sparse grids in parameter space. The parallel code can gain satisfactory scalability, and the sparse grids can remove the bottleneck of flash calculations. Instead of carrying out the flash calculation in each time step of the simulation, a sparse grid approximation of all possible results of the flash calculation is generated before the simulation. Then the constructed surrogate model is evaluated to approximate the flash calculation results during the simulation. The second topic is the wormhole propagation simulation in carbonate reservoir. In this work, different from the traditional simulation technique relying on the Darcy framework, we propose a new framework called Darcy-Brinkman-Forchheimer framework to simulate wormhole propagation. Furthermore, to process the large quantity of cells in the simulation grid and shorten the long simulation time of the traditional serial code, standard domain-based parallelism is employed, using the Hypre multigrid library. In addition to that, a new technique called “experimenting field approach” to set coefficients in the model equations is introduced. In the 2D dissolution experiments, different configurations of wormholes and a series of properties simulated by both frameworks are compared. We conclude that the numerical results of the DBF framework are more like wormholes and more stable than the Darcy framework, which is a demonstration of the advantages of the DBF framework. The scalability of the parallel code is also evaluated, and good scalability can be achieved. Finally, a mixed
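The surrogate idea described above (precompute all possible flash-calculation results before the simulation, then evaluate the surrogate instead of the expensive routine inside the time loop) can be illustrated in one dimension with simple linear interpolation. The sparse-grid machinery of the thesis is replaced here by a dense 1D grid purely for illustration.

```python
import math
from bisect import bisect_left

def build_surrogate(f, lo, hi, n):
    # Precompute f on a grid once, before the time loop, and answer later
    # queries by linear interpolation between the two bracketing grid points.
    xs = [lo + (hi - lo) * i / (n - 1) for i in range(n)]
    ys = [f(x) for x in xs]
    def approx(x):
        i = min(max(bisect_left(xs, x), 1), n - 1)
        x0, x1 = xs[i - 1], xs[i]
        y0, y1 = ys[i - 1], ys[i]
        return y0 + (y1 - y0) * (x - x0) / (x1 - x0)
    return approx

# Stand-in for an expensive "flash calculation": evaluated only at grid points.
flash_surrogate = build_surrogate(math.exp, 0.0, 1.0, 101)
```

Each simulation time step then calls the cheap surrogate rather than the expensive routine, which is exactly how the bottleneck of repeated flash calculations is removed.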

  2. Spatial Estimation, Data Assimilation and Stochastic Conditional Simulation using the Counterpropagation Artificial Neural Network

    Science.gov (United States)

    Besaw, L. E.; Rizzo, D. M.; Boitnoitt, G. N.

    2006-12-01

    Accurate yet cost-effective site characterization and analysis of uncertainty are the first steps in remediation efforts at sites with subsurface contamination. From the time of source identification to the monitoring and assessment of a remediation design, the management objectives change, resulting in increased costs and the need for additional data acquisition. Parameter estimation is a key component in reliable site characterization, contaminant flow and transport predictions, plume delineation and many other data management goals. We implement a data-driven parameter estimation technique using a counterpropagation Artificial Neural Network (ANN) that is able to incorporate multiple types of data. This method is applied to estimates of geophysical properties measured on a slab of Berea sandstone and to delineation of the leachate plume migrating from a landfill in upstate N.Y. The estimates generated by the ANN have been found to be statistically similar to estimates generated using conventional geostatistical kriging methods. The associated parameter uncertainty in site characterization, due to sparsely distributed samples (spatial or temporal) and incomplete site knowledge, is of major concern in resource mining and environmental engineering. We also illustrate the ability of the ANN method to perform conditional simulation using the spatial structure of parameters identified with semi-variogram analysis. This method allows for the generation of simulations that respect the observed measurement data as well as the data's underlying spatial structure. The method of conditional simulation is used in a 3-dimensional application to estimate the uncertainty of soil lithology.

  3. On the Simulation-Based Reliability of Complex Emergency Logistics Networks in Post-Accident Rescues.

    Science.gov (United States)

    Wang, Wei; Huang, Li; Liang, Xuedong

    2018-01-06

    This paper investigates the reliability of complex emergency logistics networks, as reliability is crucial to reducing environmental and public health losses in post-accident emergency rescues. Such networks' statistical characteristics are analyzed first. After the connected reliability and evaluation indices for complex emergency logistics networks are effectively defined, simulation analyses of network reliability are conducted under two different attack modes using a particular emergency logistics network as an example. The simulation analyses obtain the varying trends in emergency supply times and the ratio of effective nodes and validate the effects of network characteristics and different types of attacks on network reliability. The results demonstrate that this emergency logistics network is both a small-world and a scale-free network. When facing random attacks, the emergency logistics network changes steadily, whereas it is very fragile when facing selective attacks. Therefore, special attention should be paid to the protection of supply nodes and nodes with high connectivity. The simulation method provides a new tool for studying emergency logistics networks and a reference for similar studies.
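The random-versus-selective attack contrast can be demonstrated on a toy hub-dominated graph: removing random nodes barely dents connectivity, while removing the highest-degree nodes fragments it. The topology and attack sizes below are fabricated for illustration; the "ratio of effective nodes" is computed as the largest surviving component over the original network size.

```python
import random
from collections import deque

def build_hub_network(n_nodes=60, n_hubs=4):
    # Hub-dominated topology standing in for a scale-free logistics network.
    adj = {v: set() for v in range(n_nodes)}
    for v in range(n_hubs, n_nodes):
        hub = v % n_hubs
        adj[v].add(hub)
        adj[hub].add(v)
    for h in range(n_hubs - 1):          # chain the hubs together
        adj[h].add(h + 1)
        adj[h + 1].add(h)
    return adj

def effective_node_ratio(adj, removed):
    # Largest surviving connected component via BFS, over the original size.
    alive = set(adj) - set(removed)
    seen, best = set(), 0
    for start in alive:
        if start in seen:
            continue
        seen.add(start)
        queue, size = deque([start]), 0
        while queue:
            v = queue.popleft()
            size += 1
            for w in adj[v]:
                if w in alive and w not in seen:
                    seen.add(w)
                    queue.append(w)
        best = max(best, size)
    return best / len(adj)

random.seed(3)
net = build_hub_network()
leaves = sorted(set(net) - set(range(4)))
random_attack = random.sample(leaves, 4)   # random attack (hubs spared here
                                           # to make the contrast explicit)
selective_attack = sorted(net, key=lambda v: len(net[v]), reverse=True)[:4]
```

Removing four low-degree nodes leaves over 90% of the network effective, while removing the four hubs collapses it, which mirrors the paper's conclusion about protecting high-connectivity nodes.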

  4. The role of simulation in the design of a neural network chip

    Science.gov (United States)

    Desai, Utpal; Roppel, Thaddeus A.; Padgett, Mary L.

    1993-01-01

    An iterative, simulation-based design procedure for a neural network chip is introduced. For this design procedure, the goal is to produce a chip layout for a neural network in which the weights are determined by transistor gate width-to-length ratios. In a given iteration, the current layout is simulated using the circuit simulator SPICE, and layout adjustments are made based on conventional gradient-descent methods. After the iteration converges, the chip is fabricated. Monte Carlo analysis is used to predict the effect of statistical fabrication process variations on the overall performance of the neural network chip.

  5. Simulation of Foam Divot Weight on External Tank Utilizing Least Squares and Neural Network Methods

    Science.gov (United States)

    Chamis, Christos C.; Coroneos, Rula M.

    2007-01-01

    Simulation of divot weight in the insulating foam, associated with the external tank of the U.S. space shuttle, has been evaluated using least squares and neural network concepts. The simulation required models based on fundamental considerations that can be used to predict under what conditions voids form, the size of the voids, and subsequent divot ejection mechanisms. The quadratic neural networks were found to be satisfactory for the simulation of foam divot weight in various tests associated with the external tank. Both linear least squares method and the nonlinear neural network predicted identical results.
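The least-squares side of the comparison can be sketched with an ordinary least-squares line fit via the normal equations. The data points are fabricated stand-ins (divot weight versus a single void-size factor), not the external-tank test data.

```python
def least_squares_line(xs, ys):
    # Ordinary least squares for y = a*x + b via the normal equations.
    n = len(xs)
    sx, sy = sum(xs), sum(ys)
    sxx = sum(x * x for x in xs)
    sxy = sum(x * y for x, y in zip(xs, ys))
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - a * sx) / n
    return a, b

# Fabricated illustration data: roughly y = 2x + 1 with small noise.
xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [1.0, 3.1, 4.9, 7.2, 8.8]
a, b = least_squares_line(xs, ys)
```

The abstract's neural-network variant would replace this closed-form fit with a trained quadratic network; the claim is that both produced identical predictions on the tank data.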

  6. Comparative analysis of numerical simulation techniques for incoherent imaging of extended objects through atmospheric turbulence

    Science.gov (United States)

    Lachinova, Svetlana L.; Vorontsov, Mikhail A.; Filimonov, Grigory A.; LeMaster, Daniel A.; Trippel, Matthew E.

    2017-07-01

    Computational efficiency and accuracy of wave-optics-based Monte-Carlo and brightness function numerical simulation techniques for incoherent imaging of extended objects through atmospheric turbulence are evaluated. Simulation results are compared with theoretical estimates based on known analytical solutions for the modulation transfer function of an imaging system and the long-exposure image of a Gaussian-shaped incoherent light source. It is shown that the accuracy of both techniques is comparable over the wide range of path lengths and atmospheric turbulence conditions, whereas the brightness function technique is advantageous in terms of the computational speed.

  7. CoSimulating Communication Networks and Electrical System for Performance Evaluation in Smart Grid

    Directory of Open Access Journals (Sweden)

    Hwantae Kim

    2018-01-01

    Full Text Available In the smart grid research domain, simulation study is the first choice, since the analytic complexity is too high and constructing a testbed is very expensive. However, since the communication infrastructure and the power grid are tightly coupled with each other in the smart grid, a well-defined combination of simulation tools for the two systems is required. Therefore, in this paper, we propose a cosimulation framework called OOCoSim, which consists of OPNET (a network simulation tool) and OpenDSS (a power system simulation tool). With these tools, an organic and dynamic cosimulation can be realized, since both simulators operate on the same computing platform and provide external interfaces through which the simulation can be managed dynamically. In this paper, we provide OOCoSim design principles, including a synchronization scheme, and detailed descriptions of its implementation. To demonstrate the effectiveness of OOCoSim, we define a smart grid application model and conduct a simulation study of the impact of the defined application and the underlying network system on the distribution system. The simulation results show that the proposed OOCoSim can successfully simulate the integrated scenario of the power and network systems and accurately produce the effects of the networked control in the smart grid.
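The general shape of such a cosimulation synchronization scheme is a lock-step loop in which both simulators advance to the same time barrier and exchange interface variables each step. The classes below are toy stand-ins for the two simulators; OOCoSim's actual scheme is not detailed here, and the dynamics are fabricated.

```python
class NetworkSim:
    # Toy stand-in for the communication-network simulator (OPNET's role).
    def advance(self, t):
        # Pretend congestion grows over time, delaying control messages.
        return 0.01 * t

class PowerSim:
    # Toy stand-in for the power-system simulator (OpenDSS's role).
    def advance(self, t, control_delay):
        # Delayed control lets the bus voltage drift before correction.
        return 1.0 - 0.05 * control_delay

def cosimulate(steps, dt=1.0):
    # Lock-step synchronization: both simulators advance to the same time
    # barrier each step and exchange interface variables in between.
    net, power = NetworkSim(), PowerSim()
    trace = []
    for k in range(steps):
        t = k * dt
        delay = net.advance(t)               # network side runs first ...
        voltage = power.advance(t, delay)    # ... then feeds the power side
        trace.append((t, delay, voltage))
    return trace

trace = cosimulate(5)
```

Even this toy loop exhibits the coupling the paper studies: growing network latency degrades the power-side quantity, which only a synchronized exchange of variables can capture.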

  8. Simulation-driven design by knowledge-based response correction techniques

    CERN Document Server

    Koziel, Slawomir

    2016-01-01

    Focused on efficient simulation-driven multi-fidelity optimization techniques, this monograph on simulation-driven optimization covers simulations utilizing physics-based low-fidelity models, often based on coarse-discretization simulations or other types of simplified physics representations, such as analytical models. The methods presented in the book exploit as much as possible any knowledge about the system or device of interest embedded in the low-fidelity model with the purpose of reducing the computational overhead of the design process. Most of the techniques described in the book are of response correction type and can be split into parametric (usually based on analytical formulas) and non-parametric, i.e., not based on analytical formulas. The latter, while more complex in implementation, tend to be more efficient. The book presents a general formulation of response correction techniques as well as a number of specific methods, including those based on correcting the low-fidelity model response (out...

  9. A comparison of six metamodeling techniques applied to building performance simulations

    DEFF Research Database (Denmark)

    Østergård, Torben; Jensen, Rasmus Lund; Maagaard, Steffen Enersen

    2018-01-01

    Highlights:
    • Linear regression (OLS), support vector regression (SVR), regression splines (MARS).
    • Random forest (RF), Gaussian processes (GPR), neural network (NN).
    • Accuracy, time, interpretability, ease-of-use, model selection, and robustness.
    • 13 problems modelled for 9 training set sizes spanning from 32 to 8192 simulations.
    • Methodology for comparison using exhaustive grid searches and sensitivity analysis.

  10. Operational reliability evaluation of restructured power systems with wind power penetration utilizing reliability network equivalent and time-sequential simulation approaches

    DEFF Research Database (Denmark)

    Ding, Yi; Cheng, Lin; Zhang, Yonghong

    2014-01-01

    The conventional long-term reliability evaluation techniques have been well developed, and have been focused more on planning and expansion than on operation of power systems. This paper proposes a new technique for evaluating operational reliabilities of restructured power systems with high wind power penetration. The proposed technique is based on the combination of the reliability network equivalent and time-sequential simulation approaches. The operational reliability network equivalents are developed to represent reliability models of wind farms, conventional generation and reserve providers, fast reserve providers and the transmission network in restructured power systems. A contingency management schema for real-time operation considering its coupling with the day-ahead market is proposed. The time-sequential Monte Carlo simulation is used to model the chronological...

  11. Reproducing and Extending Real Testbed Evaluation of GeoNetworking Implementation in Simulated Networks

    OpenAIRE

    Tao, Ye; Tsukada, Manabu; LI, Xin; Kakiuchi, Masatoshi; Esaki, Hiroshi

    2016-01-01

    International audience; Vehicular Ad-hoc Network (VANET) is a type of Mobile Ad-hoc Network (MANET) which is specialized for vehicle communication. GeoNetworking is a new standardized network layer protocol for VANET which employs geolocation based routing. However, conducting large scale experiments in GeoNetworking softwares is extremely difficult, since it requires many extra factors such as vehicles, stuff, place, terrain, etc. In this paper, we propose a method to reproduce realistic res...

  12. A STATISTICAL CORRELATION TECHNIQUE AND A NEURAL-NETWORK FOR THE MOTION CORRESPONDENCE PROBLEM

    NARCIS (Netherlands)

    VANDEEMTER, JH; MASTEBROEK, HAK

    A statistical correlation technique (SCT) and two variants of a neural network are presented to solve the motion correspondence problem. Solutions of the motion correspondence problem aim to maintain the identities of individuated elements as they move. In a preprocessing stage, two snapshots of a

  13. A control technique for integration of DG units to the electrical networks

    DEFF Research Database (Denmark)

    Pouresmaeil, Edris; Miguel-Espinar, Carlos; Massot-Campos, Miquel

    2013-01-01

    This paper deals with a multiobjective control technique for integration of distributed generation (DG) resources to the electrical power network. The proposed strategy provides compensation for active, reactive, and harmonic load current components during connection of DG link to the grid. The d...

  14. Overview of the neural network based technique for monitoring of road condition via reconstructed road profiles

    CSIR Research Space (South Africa)

    Ngwangwa, HM

    2008-07-01

    Full Text Available on the road and driver to assess the integrity of road and vehicle infrastructure. In this paper, vehicle vibration data are applied to an artificial neural network to reconstruct the corresponding road surface profiles. The results show that the technique...

  15. Performance Evaluation and Parameter Optimization of Wavelength Division Multiplexing Networks with Importance Sampling Techniques

    NARCIS (Netherlands)

    Remondo Bueno, D.; Srinivasan, R.; Nicola, V.F.; van Etten, Wim; Tattje, H.E.P.

    1998-01-01

    In this paper new adaptive importance sampling techniques are applied to the performance evaluation and parameter optimization of wavelength division multiplexing (WDM) network impaired by crosstalk in an optical cross-connect. Worst-case analysis is carried out including all the beat noise terms

  16. Credibility and validation of simulation models for tactical IP networks

    NARCIS (Netherlands)

    Boltjes, B.; Thiele, F.; Diaz, I.F.

    2007-01-01

    The task of TNO is to provide predictions of the scalability and performance of the new all-IP tactical networks of the Royal Netherlands Army (RNLA) that are likely to be fielded. The inherent properties of fielded tactical networks, such as low bandwidth and Quality of Service (QoS) policies

  17. Evaluation and Simulation of Common Video Conference Traffics in Communication Networks

    Directory of Open Access Journals (Sweden)

    Farhad faghani

    2014-01-01

    Full Text Available Multimedia traffics are the basic traffics in data communication networks; video conferences in particular are among the most demanded traffic types in large networks (wired, wireless, …). Traffic modeling can help us evaluate real networks, so QoS is very important for data communication networks that provide multimedia services. In this research we develop an exact traffic model design and simulation to overcome QoS challenges. We also predict bandwidth with a Kalman filter in Ethernet networks.
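
    The bandwidth-prediction step can be illustrated with a minimal scalar Kalman filter; the random-walk state model and the noise variances below are illustrative assumptions, not the paper's actual settings.

```python
# One-step-ahead bandwidth forecasting with a scalar Kalman filter.
# State model (assumed): x_k = x_{k-1} + w (process noise var q),
# measurement model: z_k = x_k + v (measurement noise var r).

def kalman_predict(measurements, q=1.0, r=4.0):
    x, p = measurements[0], 1.0   # initial state estimate and variance
    predictions = []
    for z in measurements[1:]:
        # Predict step: random-walk state, so the estimate carries over.
        p += q
        predictions.append(x)     # one-step-ahead forecast
        # Update step with the new measurement.
        k = p / (p + r)           # Kalman gain
        x += k * (z - x)
        p *= (1 - k)
    return predictions

bw = [10.0, 10.4, 9.8, 10.1, 12.0, 11.6]  # Mbps samples (made up)
print([round(v, 2) for v in kalman_predict(bw)])
```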

  18. Permanent Set of Cross-Linking Networks: Comparison of Theory with Molecular Dynamics Simulations

    DEFF Research Database (Denmark)

    Rottach, Dana R.; Curro, John G.; Budzien, Joanne

    2006-01-01

    The permanent set of cross-linking networks is studied by molecular dynamics. The uniaxial stress for a bead-spring polymer network is investigated as a function of strain and cross-link density history, where cross-links are introduced in unstrained and strained networks. The permanent set is found from the strain of the network after it returns to the state-of-ease where the stress is zero. The permanent set simulations are compared with theory using the independent network hypothesis, together with the various theoretical rubber elasticity theories: affine, phantom, constrained junction...

  19. Simulation of Supply-Chain Networks: A Source of Innovation and Competitive Advantage for Small and Medium-Sized Enterprises

    Directory of Open Access Journals (Sweden)

    Giacomo Liotta

    2012-11-01

    Full Text Available On a daily basis, enterprises of all sizes cope with the turbulence and volatility of market demands, cost variability, and severe pressure from globally distributed competitors. Managing uncertainty about future demand requirements and volumes in supply-chain networks has become a priority. One of the ways to deal with uncertainty is the utilization of simulation techniques and tools, which provide greater predictability of decision-making outcomes. For example, simulation has been widely applied in decision-making processes related to global logistics and production networks at the strategic, tactical, and operational levels, where it is used to predict the impact of decisions before their implementation in complex and uncertain environments. Large enterprises are inclined to use simulation tools whereas small and medium-sized enterprises seem to underestimate its advantages. The objective of this article is to emphasize the relevance of simulation for the design and management of supply-chain networks from the perspective of small and medium-sized firms.

  20. Impact of Loss Synchronization on Reliable High Speed Networks: A Model Based Simulation

    Directory of Open Access Journals (Sweden)

    Suman Kumar

    2014-01-01

    Full Text Available The contemporary nature of network evolution demands simulation models which are flexible, scalable, and easily implementable. In this paper, we propose a fluid-based model for performance analysis of reliable high speed networks. In particular, this paper aims to study the dynamic relationship between congestion control algorithms and queue management schemes, in order to develop a better understanding of the causal linkages between the two. We propose a loss synchronization module which is user configurable. We validate our model through simulations under controlled settings. Also, we present a performance analysis to provide insights into two important issues concerning 10 Gbps high speed networks: (i) the impact of bottleneck buffer size on the performance of a 10 Gbps high speed network and (ii) the impact of the level of loss synchronization on link utilization-fairness tradeoffs. The practical impact of the proposed work is to provide design guidelines along with a powerful simulation tool to protocol designers and network developers.
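
    The coupling between congestion control and queue management that such fluid models capture can be sketched with the classic TCP/AQM fluid equations (in the spirit of Misra, Gong, and Towsley), Euler-integrated. The parameters and the toy linear drop-probability law are illustrative assumptions; the paper's own model, with its configurable loss-synchronization module, differs.

```python
# Toy TCP/AQM fluid model: average congestion window w (packets) and
# bottleneck queue q (packets) for n_flows identical flows sharing a link
# of capacity cap (packets/s), integrated with forward Euler.

def fluid_tcp(n_flows=50, cap=12500.0, rtt=0.1, q_max=800.0, t_end=30.0, dt=1e-3):
    w, q = 1.0, 0.0
    for _ in range(int(t_end / dt)):
        p = min(1.0, q / q_max)                  # toy AQM: drop prob. grows with queue
        dw = 1.0 / rtt - p * w * w / (2.0 * rtt)  # additive increase, multiplicative decrease
        dq = n_flows * w / rtt - cap              # arrival rate minus service rate
        w = max(1.0, w + dw * dt)
        q = min(q_max, max(0.0, q + dq * dt))
    return w, q

w, q = fluid_tcp()
print(round(w, 2), round(q, 1))
```

    At equilibrium the window settles near cap·rtt/n_flows (here 25 packets), which is the fixed point of the two coupled rate equations.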

  1. Simulation of mixed switched-capacitor/digital networks with signal-driven switches

    Science.gov (United States)

    Suyama, Ken; Tsividis, Yannis P.; Fang, San-Chin

    1990-12-01

    The simulation of mixed switched-capacitor/digital (SC/D) networks containing capacitors, independent and linear-dependent voltage sources, switches controlled either by periodic or nonperiodic Boolean signals, latched comparators, and logic gates is considered. A unified linear switched-capacitor network (SCN) and mixed SC/D network simulator, SWITCAP2, and its applications to several widely used and novel nonlinear SCNs are discussed. The switches may be controlled by periodic waveforms and by nonperiodic waveforms from the outputs of comparators and logic gates. The signal-dependent modification of network topology through the comparators, logic gates, and signal-driven switches makes the modeling of various nonlinear switched-capacitor circuits possible. Simulation results for a pulse-code modulation (PCM) voice encoder, a sigma-delta modulator, a neural network, and a phase-locked loop (PLL) are presented to demonstrate the flexibility of the approach.

  2. Enterprise Networks for Competences Exchange: A Simulation Model

    Science.gov (United States)

    Remondino, Marco; Pironti, Marco; Pisano, Paola

    A business process is a set of logically related tasks performed to achieve a defined business outcome, and process innovation is concerned with improving such organizational processes. Process innovation can happen at various levels: incrementally, through redesign of existing processes, or through entirely new processes. The knowledge behind process innovation can be shared, acquired, changed and increased by the enterprises inside a network. An enterprise can decide to exploit innovative processes it owns, thus potentially gaining competitive advantage, but risking, in turn, that other players could reach the same technological levels. Or it could decide to share them, in exchange for other competencies or money. These activities could be the basis for a network formation and/or impact the topology of an existing network. In this work an agent-based model, E3, is introduced, aiming to explore how a process innovation can facilitate network formation, affect its topology, induce new players to enter the market, and spread onto the network by being shared or developed by new players.

  3. Temporal Gillespie Algorithm: Fast Simulation of Contagion Processes on Time-Varying Networks

    Science.gov (United States)

    Vestergaard, Christian L.; Génois, Mathieu

    2015-01-01

    Stochastic simulations are one of the cornerstones of the analysis of dynamical processes on complex networks, and are often the only accessible way to explore their behavior. The development of fast algorithms is paramount to allow large-scale simulations. The Gillespie algorithm can be used for fast simulation of stochastic processes, and variants of it have been applied to simulate dynamical processes on static networks. However, its adaptation to temporal networks remains non-trivial. We here present a temporal Gillespie algorithm that solves this problem. Our method is applicable to general Poisson (constant-rate) processes on temporal networks, stochastically exact, and up to multiple orders of magnitude faster than traditional simulation schemes based on rejection sampling. We also show how it can be extended to simulate non-Markovian processes. The algorithm is easily applicable in practice, and as an illustration we detail how to simulate both Poissonian and non-Markovian models of epidemic spreading. Namely, we provide pseudocode and its implementation in C++ for simulating the paradigmatic Susceptible-Infected-Susceptible and Susceptible-Infected-Recovered models and a Susceptible-Infected-Recovered model with non-constant recovery rates. For empirical networks, the temporal Gillespie algorithm is here typically from 10 to 100 times faster than rejection sampling. PMID:26517860
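
    The core idea of the temporal Gillespie algorithm can be sketched for an SIS process on a sequence of network snapshots: draw a unit-rate exponential waiting "time" and consume it against the time-varying total event rate, so that no rejection sampling is needed. The snapshot data, rates, and function signature below are illustrative assumptions for the sketch, not the authors' C++ implementation.

```python
import random

# SIS epidemic on a temporal network given as a list of edge-list snapshots,
# each active for an interval of length dt.

def sis_temporal_gillespie(snapshots, dt, beta, mu, seeds, rng_seed=1):
    rng = random.Random(rng_seed)
    infected = set(seeds)
    tau = rng.expovariate(1.0)               # unit-rate exponential waiting "time"
    for edges in snapshots:
        t_used = 0.0
        while t_used < dt and infected:
            # Enumerate transitions enabled right now: recoveries and
            # infections along currently active S-I edges.
            events = [("rec", i, mu) for i in infected]
            for u, v in edges:
                if u in infected and v not in infected:
                    events.append(("inf", v, beta))
                elif v in infected and u not in infected:
                    events.append(("inf", u, beta))
            total = sum(rate for _, _, rate in events)
            if total * (dt - t_used) < tau:
                tau -= total * (dt - t_used)  # no event before this snapshot ends
                break
            t_used += tau / total             # an event fires now
            kind, node, _ = rng.choices(events,
                                        weights=[r for _, _, r in events])[0]
            (infected.discard if kind == "rec" else infected.add)(node)
            tau = rng.expovariate(1.0)
    return infected

snaps = [[(0, 1), (1, 2)], [(2, 3)], [(0, 3), (1, 3)]] * 10
print(sorted(sis_temporal_gillespie(snaps, dt=1.0, beta=0.6, mu=0.1, seeds=[0])))
```

    Because the waiting time is drawn once and carried across snapshots, every random draw produces an event, which is where the speed-up over rejection sampling comes from.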

  4. NeuroFlow: A General Purpose Spiking Neural Network Simulation Platform using Customizable Processors

    National Research Council Canada - National Science Library

    Cheung, Kit; Schultz, Simon R; Luk, Wayne

    2015-01-01

    NeuroFlow is a scalable spiking neural network simulation platform for off-the-shelf high performance computing systems using customizable hardware processors such as Field-Programmable Gate Arrays (FPGAs...

  5. Accelerated Gillespie Algorithm for Gas–Grain Reaction Network Simulations Using Quasi-steady-state Assumption

    Science.gov (United States)

    Chang, Qiang; Lu, Yang; Quan, Donghui

    2017-12-01

    Although the Gillespie algorithm is accurate in simulating gas–grain reaction networks, so far its computational cost is so expensive that it cannot be used to simulate chemical reaction networks that include molecular hydrogen accretion or the chemical evolution of protoplanetary disks. We present an accelerated Gillespie algorithm that is based on a quasi-steady-state assumption with the further approximation that the population distribution of transient species depends only on the accretion and desorption processes. The new algorithm is tested against a few reaction networks that are simulated by the regular Gillespie algorithm. We found that the less likely it is that transient species are formed and destroyed on grain surfaces, the more accurate the new method is. We also apply the new method to simulate reaction networks that include molecular hydrogen accretion. The results show that surface chemical reactions involving molecular hydrogen are not important for the production of surface species under standard physical conditions of dense molecular clouds.

  6. ABCDecision: A Simulation Platform for Access Selection Algorithms in Heterogeneous Wireless Networks

    Directory of Open Access Journals (Sweden)

    Guy Pujolle

    2010-01-01

    Full Text Available We present a simulation platform for access selection algorithms in heterogeneous wireless networks, called “ABCDecision”. The simulator implements the different parts of an Always Best Connected (ABC) system, including the Access Technology Selector (ATS), Radio Access Networks (RANs), and users. After describing the architecture of the simulator, we give an overview of the existing decision algorithms for access selection. Then we propose a new selection algorithm for heterogeneous networks and run a set of simulations to evaluate the performance of the proposed algorithm in comparison with the existing ones. The performance results, in terms of occupancy rate, show that our algorithm achieves a load-balancing distribution between networks by taking into consideration the capacities of the available cells.
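
    A capacity-aware selection rule of the kind described above can be sketched in a few lines: each arriving user is routed to the RAN with the lowest occupancy ratio, which balances load across cells of unequal capacity. The capacities and arrival pattern are made-up values, not taken from the paper.

```python
# Greedy access selection: pick the RAN whose load/capacity ratio is lowest.

def select_ran(loads, capacities):
    return min(range(len(loads)), key=lambda i: loads[i] / capacities[i])

caps = [100, 50, 20]          # e.g. three RANs of decreasing cell capacity
loads = [0, 0, 0]
for _ in range(170):          # 170 arriving users, admitted one at a time
    loads[select_ran(loads, caps)] += 1
print(loads, [round(l / c, 2) for l, c in zip(loads, caps)])
```

    Because the rule always fills the least-occupied cell first, the final occupancy ratios end up nearly equal across the three RANs.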

  7. An Extended N-Player Network Game and Simulation of Four Investment Strategies on a Complex Innovation Network.

    Science.gov (United States)

    Zhou, Wen; Koptyug, Nikita; Ye, Shutao; Jia, Yifan; Lu, Xiaolong

    2016-01-01

    As computer science and complex network theory develop, non-cooperative games and their formation and application on complex networks have been important research topics. In the inter-firm innovation network, it is a typical game behavior for firms to invest in their alliance partners. Accounting for the possibility that firms can be resource constrained, this paper analyzes a coordination game using the Nash bargaining solution as allocation rules between firms in an inter-firm innovation network. We build an extended inter-firm n-player game based on nonidealized conditions, describe four investment strategies and simulate the strategies on an inter-firm innovation network in order to compare their performance. By analyzing the results of our experiments, we find that our proposed greedy strategy is the best-performing in most situations. We hope this study provides a theoretical insight into how firms make investment decisions.

  8. An Extended N-Player Network Game and Simulation of Four Investment Strategies on a Complex Innovation Network.

    Directory of Open Access Journals (Sweden)

    Wen Zhou

    Full Text Available As computer science and complex network theory develop, non-cooperative games and their formation and application on complex networks have been important research topics. In the inter-firm innovation network, it is a typical game behavior for firms to invest in their alliance partners. Accounting for the possibility that firms can be resource constrained, this paper analyzes a coordination game using the Nash bargaining solution as allocation rules between firms in an inter-firm innovation network. We build an extended inter-firm n-player game based on nonidealized conditions, describe four investment strategies and simulate the strategies on an inter-firm innovation network in order to compare their performance. By analyzing the results of our experiments, we find that our proposed greedy strategy is the best-performing in most situations. We hope this study provides a theoretical insight into how firms make investment decisions.

  9. Under-Actuated Robot Manipulator Positioning Control Using Artificial Neural Network Inversion Technique

    Directory of Open Access Journals (Sweden)

    Ali T. Hasan

    2012-01-01

    Full Text Available This paper is devoted to solving the positioning control problem of an underactuated robot manipulator. An Artificial Neural Network inversion technique was used, where a network representing the forward dynamics of the system was trained to learn the position of the passive joint over the working space of a 2R underactuated robot. The weights obtained from the learning process were fixed, and the network was inverted to represent the inverse dynamics of the system; it was then used in the estimation phase to estimate the position of the passive joint for a new set of data the network was not previously trained on. The data used in this research were recorded experimentally from sensors fixed on the robot's joints in order to account for whatever uncertainties are present in the real world, such as ill-defined linkage parameters, link flexibility, and backlash in gear trains. The results were verified experimentally to show the success of the proposed control strategy.

  10. An introduction to network modeling and simulation for the practicing engineer

    CERN Document Server

    Burbank, Jack; Ward, Jon

    2011-01-01

    This book provides the practicing engineer with a concise listing of commercial and open-source modeling and simulation tools currently available, including examples of applying those tools to specific modeling and simulation problems. Instead of focusing on the underlying theory of modeling and simulation and the fundamental building blocks for custom simulations, this book compares platforms used in practice and gives rules that enable the practicing engineer to utilize available modeling and simulation tools. The book also contains insights regarding common pitfalls in network modeling and simulation and practical methods for working engineers.

  11. A Grid-Free Approach for Plasma Simulations (Grid-Free Plasma Simulation Techniques)

    Science.gov (United States)

    2007-07-10

    titles are listed below. The papers will be sent to the program manager, Major David Byers, upon completion. Christlieb, A.J.; Olson, S.E.; Gridless... [Reference residue, partially recoverable: R. W. Hockney and J. W. Eastwood, Computer Simulation Using Particles, Bristol, U.K.: IOP Publishing, 1988; Fluid Mech., vol. 184, pp. 123-155, 1987.]

  12. A system identification technique based on the random decrement signatures. Part 1: Theory and simulation

    Science.gov (United States)

    Bedewi, Nabih E.; Yang, Jackson C. S.

    1987-01-01

    Identification of the system parameters of a randomly excited structure may be treated using a variety of statistical techniques. Of all these techniques, the Random Decrement is unique in that it provides the homogeneous component of the system response. Using this quality, a system identification technique was developed based on a least-squares fit of the signatures to estimate the mass, damping, and stiffness matrices of a linear randomly excited system. The mathematics of the technique is presented in addition to the results of computer simulations conducted to demonstrate the prediction of the response of the system and the random forcing function initially introduced to excite the system.
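
    The Random Decrement signature itself is straightforward to extract: average all segments of the response record that start where the signal up-crosses a trigger level, so that the random forced part averages out and the homogeneous (free-decay) component remains. The synthetic record, trigger level, and segment length below are illustrative assumptions.

```python
import math
import random

# Extract a Random Decrement signature from a sampled response record.

def random_decrement(x, trigger, seg_len):
    segments = []
    for i in range(len(x) - seg_len):
        if x[i] <= trigger < x[i + 1]:          # up-crossing of the trigger level
            segments.append(x[i + 1:i + 1 + seg_len])
    if not segments:
        raise ValueError("no trigger crossings found")
    # Ensemble average across all triggered segments, sample by sample.
    return [sum(col) / len(segments) for col in zip(*segments)]

# Synthetic record: a sinusoidal response buried in random excitation.
rng = random.Random(0)
x = [math.sin(0.3 * n) + 0.3 * (rng.random() - 0.5) for n in range(5000)]
sig = random_decrement(x, trigger=0.5, seg_len=100)
print(len(sig), round(sig[0], 3))
```

    The signature can then be fed to the least-squares fit described above to estimate mass, damping, and stiffness matrices.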

  13. Comparison of Neural Network Error Measures for Simulation of Slender Marine Structures

    DEFF Research Database (Denmark)

    Christiansen, Niels H.; Voie, Per Erlend Torbergsen; Winther, Ole

    2014-01-01

    Training of an artificial neural network (ANN) adjusts the internal weights of the network in order to minimize a predefined error measure. This error measure is given by an error function. Several different error functions are suggested in the literature; however, by far the most common measure for regression is the mean square error. This paper looks into the possibility of improving the performance of neural networks by selecting or defining error functions that are tailor-made for a specific objective. A neural network trained to simulate tension forces in an anchor chain on a floating offshore platform is designed and tested. The purpose of setting up the network is to reduce calculation time in a fatigue life analysis. Therefore, the networks trained on different error functions are compared with respect to accuracy of rain flow counts of stress cycles over a number of time series simulations...

  14. Application of Neural Network and Simulation Modeling to Evaluate Russian Banks’ Performance

    OpenAIRE

    Sharma, Satish; Shebalkov, Mikhail

    2013-01-01

    This paper presents an application of neural network and simulation modeling to analyze and predict the performance of 883 Russian banks over the period 2000-2010. Correlation analysis was performed to obtain key financial indicators which reflect the leverage, liquidity, profitability and size of the banks. A neural network was trained over the entire dataset, and then simulation modeling was performed, generating values which are distributed with Largest Extreme Value and Loglogistic distributions...

  15. High capacity fiber optic sensor networks using hybrid multiplexing techniques and their applications

    Science.gov (United States)

    Sun, Qizhen; Li, Xiaolei; Zhang, Manliang; Liu, Qi; Liu, Hai; Liu, Deming

    2013-12-01

    Fiber optic sensor networks are the development trend of fiber sensor technologies and industries. In this paper, I discuss recent research progress on high capacity fiber sensor networks with hybrid multiplexing techniques and their applications in the fields of security monitoring, environment monitoring, Smart eHome, etc. Firstly, I present the architecture of the hybrid multiplexing sensor passive optical network (HSPON), and the key technologies for integrated access and intelligent management of massive fiber sensor units. Two typical hybrid WDM/TDM fiber sensor networks for perimeter intrusion monitoring and cultural relics security are introduced. Secondly, we propose the concept of the "Microstructure-Optical X Domain Reflector (M-OXDR)" for fiber sensor network expansion. By fabricating smart micro-structures with multidimensional encoding and low insertion loss along the fiber, a fiber sensor network of simple structure and huge capacity, with more than one thousand sensing units, can be achieved. Assisted by WDM/TDM and WDM/FDM decoding methods respectively, we built verification systems for long-haul and real-time temperature sensing. Finally, I show a high-capacity, flexible fiber sensor network with IPv6-protocol-based hybrid fiber/wireless access. By developing fiber optic sensors with an embedded IPv6 protocol conversion module and an IPv6 router, huge numbers of fiber optic sensor nodes can be uniquely addressed. Meanwhile, various kinds of sensing information can be integrated and connected to the Next Generation Internet.

  16. FNCS: A Framework for Power System and Communication Networks Co-Simulation

    Energy Technology Data Exchange (ETDEWEB)

    Ciraci, Selim; Daily, Jeffrey A.; Fuller, Jason C.; Fisher, Andrew R.; Marinovici, Laurentiu D.; Agarwal, Khushbu

    2014-04-13

    This paper describes the Fenix framework, which uses a federated approach for integrating power grid and communication network simulators. Compared to existing approaches, Fenix allows co-simulation of both transmission- and distribution-level power grid simulators with the communication network simulator. To reduce the performance overhead of time synchronization, Fenix utilizes optimistic synchronization strategies that make speculative decisions about when the simulators are going to exchange messages. GridLAB-D (a distribution simulator), PowerFlow (a transmission simulator), and ns-3 (a telecommunication simulator) are integrated with the framework and are used to illustrate the enhanced performance provided by speculative multi-threading on a smart grid application. Our speculative multi-threading approach achieved on average a 20% improvement over the existing synchronization methods.

  17. Smart techniques in the dynamic spectrum allocation for cognitive wireless networks

    Directory of Open Access Journals (Sweden)

    Camila Salgado

    2016-09-01

    Full Text Available Objective: The objective of this work is to study the applications of different artificial intelligence and autonomous learning techniques in dynamic spectrum allocation for cognitive wireless networks, especially distributed ones. Method: This work was carried out through the study and analysis of some of the most relevant publications in the current literature, retrieved by searching international journals indexed in ISI and Scopus. Results: The most relevant artificial intelligence and autonomous learning techniques were identified, as well as those most applicable to spectrum allocation in cognitive wireless networks. Conclusions: The implementation of a technique, or a set of them, depends on the needs in signal processing, trade-offs in response times, sample availability, storage capacity, learning ability and robustness, among others.

  18. Evaluation Technique of Chloride Penetration Using Apparent Diffusion Coefficient and Neural Network Algorithm

    Directory of Open Access Journals (Sweden)

    Yun-Yong Kim

    2014-01-01

    Full Text Available The diffusion coefficient from the chloride migration test is currently used; however, it cannot provide a conventional solution such as total chloride content, since it depicts only ion migration velocity in an electrical field. This paper proposes a simple analysis technique for chloride behavior using an apparent diffusion coefficient obtained from a neural network algorithm with time-dependent diffusion phenomena. For this work, thirty mix proportions of high-performance concrete were prepared and their diffusion coefficients obtained after long-term NaCl submersion tests. Considering a time-dependent diffusion coefficient based on Fick's 2nd law and a neural network algorithm (NNA), an analysis technique for chloride penetration is proposed. The applicability of the proposed technique is verified against results from accelerated tests, long-term submersion tests, and field investigations.
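
    The time-dependent Fick's-2nd-law solution that apparent-diffusion-coefficient analysis builds on can be sketched with the standard error-function profile; the surface concentration, reference coefficient, and age factor below are made-up illustrative values, not the paper's calibrated ones.

```python
import math

# Chloride profile C(x,t) = C_s * erfc(x / (2*sqrt(D_app(t) * t))), with an
# apparent diffusion coefficient that decays with concrete age:
# D_app(t) = D_ref * (t_ref / t)^m.

def chloride_content(x_m, t_s, c_s=0.5, d_ref=5e-12, t_ref=28 * 86400.0, m=0.3):
    d_app = d_ref * (t_ref / t_s) ** m        # time-dependent diffusion coeff. (m^2/s)
    return c_s * math.erfc(x_m / (2.0 * math.sqrt(d_app * t_s)))

# Chloride content (e.g. % by binder mass) at 30 mm depth after 10 years:
print(round(chloride_content(0.030, 10 * 365 * 86400.0), 4))
```

    In the paper's setting, the NNA supplies the apparent coefficient that this closed-form profile then propagates in time.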

  19. Less Developed Countries Energy System Network Simulator, LDC-ESNS: a brief description

    Energy Technology Data Exchange (ETDEWEB)

    Reisman, A; Malone, R

    1978-04-01

    Prepared for the Brookhaven National Laboratory Developing Countries Energy Program, this report describes the Less Developed Countries Energy System Network Simulator (LDC-ESNS), a tool which provides a quantitative representation of the energy system of an LDC. The network structure of the energy supply and demand system, the model inputs and outputs, and the possible uses of the model for analysis are described.

  20. An overview of uncertainty quantification techniques with application to oceanic and oil-spill simulations

    KAUST Repository

    Iskandarani, Mohamed

    2016-04-22

    We give an overview of four different ensemble-based techniques for uncertainty quantification and illustrate their application in the context of oil plume simulations. These techniques share the common paradigm of constructing a model proxy that efficiently captures the functional dependence of the model output on uncertain model inputs. This proxy is then used to explore the space of uncertain inputs using a large number of samples, so that reliable estimates of the model's output statistics can be calculated. Three of these techniques use polynomial chaos (PC) expansions to construct the model proxy, but they differ in their approach to determining the expansions' coefficients; the fourth technique uses Gaussian Process Regression (GPR). An integral plume model for simulating the Deepwater Horizon oil-gas blowout provides examples for illustrating the different techniques. A Monte Carlo ensemble of 50,000 model simulations is used for gauging the performance of the different proxies. The examples illustrate how regression-based techniques can outperform projection-based techniques when the model output is noisy. They also demonstrate that robust uncertainty analysis can be performed at a fraction of the cost of the Monte Carlo calculation.
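
    The shared proxy paradigm can be illustrated in miniature: fit a cheap quadratic surrogate from three runs of an "expensive" model, then Monte Carlo sample the surrogate to estimate output statistics. The model function is synthetic; real applications would use PC expansions or GPR rather than this hand-rolled quadratic.

```python
import math
import random

def expensive_model(a):                     # stand-in for a plume simulation
    return math.sin(a) + 0.5 * a * a

# Three collocation-style runs at a = -1, 0, 1 define a quadratic proxy.
y_m, y_0, y_p = (expensive_model(a) for a in (-1.0, 0.0, 1.0))
c2 = (y_p + y_m) / 2 - y_0                  # curvature coefficient
c1 = (y_p - y_m) / 2                        # slope coefficient

def proxy(a):
    return c2 * a * a + c1 * a + y_0

# Monte Carlo on the proxy (cheap) instead of on the model (expensive),
# for an uncertain input a ~ U(-1, 1).
rng = random.Random(0)
samples = [proxy(rng.uniform(-1, 1)) for _ in range(100_000)]
mean = sum(samples) / len(samples)
print(round(mean, 3))
```

    The 100,000 proxy evaluations cost almost nothing compared with 100,000 model runs, which is exactly the trade the four techniques in the overview exploit.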

  1. COMPUTER DYNAMICS SIMULATION OF DRUG DEPENDENCE THROUGH ARTIFICIAL NEURONAL NETWORK: PEDAGOGICAL AND CLINICAL IMPLICATIONS

    Directory of Open Access Journals (Sweden)

    G. SANTOS

    2008-05-01

    Full Text Available The main goal and challenge of this work was to develop and evaluate the efficiency of software able to simulate a virtual patient at different stages of addiction. We developed the software in the Borland™ Delphi 5® programming language. Techniques of artificial intelligence, neuronal networks and expert systems were responsible for modeling the neurobiological structures and the mechanisms of interaction with the drugs used. Dynamic simulation and hypermedia were designed to increase the software's interactivity, which was able to show graphical information from virtual instrumentation and from a realistic functional magnetic resonance imaging display. Initially, the program was designed to be used by undergraduate students to improve their neurophysiological learning, based not only on the interaction of membrane receptors with drugs but on a broader behavioral simulation. The experimental manipulation of the software was accomplished by: (i) creating a virtual patient progressing from a normal mood to behavioral addiction by gradually increasing doses of alcohol, opiates or cocaine; (ii) designing an approach to treat the patient and obtain total or partial remission of the behavioral disorder by combining psychopharmacology and psychotherapy. Integration of dynamic simulation with hypermedia and artificial intelligence was able to reveal behavioral details such as tolerance, sensitization and level of addiction to drugs of abuse, turning the software into a potentially useful tool for teaching activities as well as for clinical skills, with which it could assist patients, families and health care providers in improving and testing their knowledge and skills about different aspects of drug dependency. Those features are currently under investigation.

  2. Human metabolic network: reconstruction, simulation, and applications in systems biology.

    Science.gov (United States)

    Wu, Ming; Chan, Christina

    2012-03-02

    Metabolism is crucial to cell growth and proliferation. Deficiency or alterations in metabolic functions are known to be involved in many human diseases. Therefore, understanding the human metabolic system is important for the study and treatment of complex diseases. Current reconstructions of the global human metabolic network provide a computational platform to integrate genome-scale information on metabolism. The platform enables a systematic study of the regulation and is applicable to a wide variety of cases, wherein one could rely on in silico perturbations to predict novel targets, interpret systemic effects, and identify alterations in the metabolic states to better understand the genotype-phenotype relationships. In this review, we describe the reconstruction of the human metabolic network, introduce the constraint based modeling approach to analyze metabolic networks, and discuss systems biology applications to study human physiology and pathology. We highlight the challenges and opportunities in network reconstruction and systems modeling of the human metabolic system.

  3. Quantitative evaluation of simulated functional brain networks in graph theoretical analysis.

    Science.gov (United States)

    Lee, Won Hee; Bullmore, Ed; Frangou, Sophia

    2017-02-01

    There is increasing interest in the potential of whole-brain computational models to provide mechanistic insights into resting-state brain networks. It is therefore important to determine the degree to which computational models reproduce the topological features of empirical functional brain networks. We used empirical connectivity data derived from diffusion spectrum and resting-state functional magnetic resonance imaging data from healthy individuals. Empirical and simulated functional networks, constrained by structural connectivity, were defined based on 66 brain anatomical regions (nodes). Simulated functional data were generated using the Kuramoto model in which each anatomical region acts as a phase oscillator. Network topology was studied using graph theory in the empirical and simulated data. The difference (relative error) between graph theory measures derived from empirical and simulated data was then estimated. We found that simulated data can be used with confidence to model graph measures of global network organization at different dynamic states and highlight the sensitive dependence of the solutions obtained in simulated data on the specified connection densities. This study provides a method for the quantitative evaluation and external validation of graph theory metrics derived from simulated data that can be used to inform future study designs. Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.
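A minimal sketch of the simulation step described above, in which each anatomical region acts as a Kuramoto phase oscillator coupled through the structural connectivity. Here a random symmetric matrix stands in for the empirical diffusion-imaging matrix, and the coupling strength, natural frequencies and Euler time step are assumed values for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

N = 66                      # number of anatomical regions (nodes)
K = 0.5                     # global coupling strength (assumed value)
dt, steps = 0.01, 2000

# Stand-in structural connectivity: symmetric random weights (a real study
# would use the empirical diffusion-imaging matrix).
C = rng.random((N, N))
C = (C + C.T) / 2
np.fill_diagonal(C, 0.0)

omega = rng.normal(10.0, 1.0, N)     # natural frequencies (rad/s, assumed)
theta = rng.uniform(0, 2 * np.pi, N)

order = []
for _ in range(steps):
    # Kuramoto update: dtheta_i/dt = omega_i + K * sum_j C_ij sin(theta_j - theta_i)
    diff = theta[None, :] - theta[:, None]
    theta = theta + dt * (omega + K * (C * np.sin(diff)).sum(axis=1))
    # Order parameter r in [0, 1] measures global phase synchrony.
    order.append(abs(np.exp(1j * theta).mean()))

print(f"mean synchrony r = {np.mean(order[-500:]):.2f}")
```

Simulated BOLD-like functional connectivity would then be derived from the co-fluctuation of these phases; here only the global synchrony is reported.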

  4. Aerodynamic design of a space vehicle using the numerical simulation technique

    OpenAIRE

    Yamamoto, Yukimitsu; Wada, Yasuhiro; Takanashi, Susumu; Ishiguro, Mitsuo; 山本 行光; 和田 安弘; 高梨 進; 石黒 満津夫

    1994-01-01

    Optimization of the aerodynamic configuration of a space vehicle 'HOPE' (H-2 Orbiting Plane) is conducted by using several numerical simulation codes in the transonic and hypersonic speed ranges. Design requirements are set on the longitudinal aerodynamic characteristics in the transonic speed and the aerodynamic heat characteristics in the hypersonic speed. This paper describes the procedure of the optimization of aerodynamic configurations by using the numerical simulation technique as an e...

  5. Comparison of phase noise simulation techniques on a BJT LC oscillator.

    Science.gov (United States)

    Forbes, Leonard; Zhang, Chengwei; Zhang, Binglei; Chandra, Yudi

    2003-06-01

    The phase noise resulting from white and flicker noise in a bipolar junction transistor (BJT) LC oscillator is investigated. Large signal transient time domain SPICE simulations of phase noise resulting from the random-phase flicker and white noise in a 2 GHz BJT LC oscillator have been performed and demonstrated. The simulation results of this new technique are compared with Eldo RF and Spectre RF based on linear circuit concepts and experimental result reported in the literature.

  6. Dynamic Simulations & Animations of the Classical Control Techniques with Linear Transformations

    OpenAIRE

    Ahmet ALTINTAŞ; Güven, Mehmet

    2010-01-01

    Teaching and learning techniques using computer-based resources greatly improve the effectiveness and efficiency of the learning process. Currently, there are many simulation and animation packages in use, and some of them are developed for educational purposes. Dynamic simulations-animations (DSA) allow us to see the physical movement of the different pieces according to the modeled system. Education-purposed packages cannot be sufficiently flexible in different branches of sci...

  7. Analytical vs. Simulation Solution Techniques for Pulse Problems in Non-linear Stochastic Dynamics

    DEFF Research Database (Denmark)

    Iwankiewicz, R.; Nielsen, Søren R. K.

    Advantages and disadvantages of available analytical and simulation techniques for pulse problems in non-linear stochastic dynamics are discussed. First, random pulse problems, both those which do and do not lead to Markov theory, are presented. Next, the analytical and analytically-numerical tec...

  8. Simulation and Statistical Inference of Stochastic Reaction Networks with Applications to Epidemic Models

    KAUST Repository

    Moraes, Alvaro

    2015-01-01

    Epidemics have shaped, sometimes more than wars and natural disasters, demographic aspects of human populations around the world, their health habits and their economies. Ebola and the Middle East Respiratory Syndrome (MERS) are clear and current examples of potential hazards at planetary scale. During the spread of an epidemic disease, there are phenomena, like the sudden extinction of the epidemic, that cannot be captured by deterministic models. As a consequence, stochastic models have been proposed during the last decades. A typical forward problem in the stochastic setting could be the approximation of the expected number of infected individuals found in one month from now. On the other hand, a typical inverse problem could be, given a discretely observed set of epidemiological data, to infer the transmission rate of the epidemic or its basic reproduction number. Markovian epidemic models are stochastic models belonging to a wide class of pure jump processes known as Stochastic Reaction Networks (SRNs), which are intended to describe the time evolution of interacting particle systems where one particle interacts with the others through a finite set of reaction channels. SRNs have been mainly developed to model biochemical reactions but they also have applications in neural networks, virus kinetics, and dynamics of social networks, among others. This PhD thesis is focused on novel fast simulation algorithms and statistical inference methods for SRNs. Our novel Multi-level Monte Carlo (MLMC) hybrid simulation algorithms provide accurate estimates of expected values of a given observable of SRNs at a prescribed final time. They are designed to control the global approximation error up to a user-selected accuracy and up to a certain confidence level, and with near optimal computational work. We also present novel dual-weighted residual expansions for fast estimation of weak and strong errors arising from the MLMC methodology.
Regarding the statistical inference
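SRN sample paths of this kind are commonly generated with Gillespie's stochastic simulation algorithm (SSA), on top of which multi-level estimators such as the thesis's MLMC method are built. Below is a minimal single-level sketch for a stochastic SIR epidemic; the parameter values and population size are illustrative, not taken from the thesis.

```python
import random

def ssa_sir(s, i, r, beta, gamma, n, t_end, rng):
    """Gillespie SSA for the SIR epidemic, a small stochastic reaction network:
    infection  S + I -> 2I  with propensity beta*S*I/N
    recovery   I -> R       with propensity gamma*I
    """
    t = 0.0
    while t < t_end and i > 0:
        a1 = beta * s * i / n          # infection propensity
        a2 = gamma * i                 # recovery propensity
        a0 = a1 + a2
        t += rng.expovariate(a0)       # exponential time to next reaction
        if rng.random() < a1 / a0:     # choose which reaction fires
            s, i = s - 1, i + 1
        else:
            i, r = i - 1, r + 1
    return s, i, r

rng = random.Random(42)
# Forward problem: expected number of infected at t_end, estimated from a
# plain Monte Carlo ensemble of SSA paths.
runs = [ssa_sir(990, 10, 0, beta=0.3, gamma=0.1, n=1000, t_end=30.0, rng=rng)
        for _ in range(200)]
mean_infected = sum(i for _, i, _ in runs) / len(runs)
print(f"mean infected at t=30: {mean_infected:.1f}")
```

Stochastic extinction (the path-dependent die-out mentioned above) shows up naturally here: some SSA paths terminate early with `i == 0`.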

  9. Fuzzy-Based Adaptive Hybrid Burst Assembly Technique for Optical Burst Switched Networks

    Directory of Open Access Journals (Sweden)

    Abubakar Muhammad Umaru

    2014-01-01

    Full Text Available The optical burst switching (OBS) paradigm is perceived as an intermediate switching technology for future all-optical networks. Burst assembly, the first process in OBS, is the focus of this paper. An intelligent hybrid burst assembly algorithm based on fuzzy logic is proposed. The new algorithm is evaluated against the traditional hybrid burst assembly algorithm and the fuzzy adaptive threshold (FAT) burst assembly algorithm via simulation. Simulation results show that the proposed algorithm outperforms the hybrid and FAT algorithms in terms of burst end-to-end delay, packet end-to-end delay, and packet loss ratio.
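The "traditional hybrid" baseline mentioned above releases a burst as soon as either a size threshold is reached or a timer expires, whichever comes first. A minimal sketch of that baseline follows; the threshold values and traffic model are assumptions for illustration, not the paper's settings.

```python
import random

def hybrid_assemble(packets, size_limit, time_limit):
    """Hybrid burst assembly: release the current burst when EITHER the
    accumulated size reaches size_limit OR time_limit elapses since the first
    packet of the burst arrived. `packets` is a time-ordered list of
    (arrival_time, size_bytes) tuples."""
    bursts, current, size, start = [], [], 0, None
    for t, sz in packets:
        if current and t - start >= time_limit:   # timer expiry
            bursts.append(current)
            current, size = [], 0
        if not current:
            start = t                             # first packet of new burst
        current.append((t, sz))
        size += sz
        if size >= size_limit:                    # size threshold reached
            bursts.append(current)
            current, size = [], 0
    if current:                                   # flush the last partial burst
        bursts.append(current)
    return bursts

rng = random.Random(7)
t, packets = 0.0, []
for _ in range(500):
    t += rng.expovariate(200.0)                   # Poisson packet arrivals
    packets.append((t, rng.randint(400, 1500)))   # packet sizes in bytes

bursts = hybrid_assemble(packets, size_limit=20_000, time_limit=0.02)
print(len(bursts), "bursts assembled")
```

A fuzzy variant such as the one proposed would adapt `size_limit` and `time_limit` from observed traffic instead of fixing them.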

  10. How Network Properties Affect One's Ability to Obtain Benefits: A Network Simulation

    Science.gov (United States)

    Trefalt, Špela

    2014-01-01

    Networks and the social capital that they carry enable people to get things done, to prosper in their careers, and to feel supported. To develop an effective network, one needs to know more than how to make connections with strangers at a reception; understanding the consequences of network properties on one's ability to obtain benefits is…

  11. Development of a Car Racing Simulator Game Using Artificial Intelligence Techniques

    Directory of Open Access Journals (Sweden)

    Marvin T. Chan

    2015-01-01

    Full Text Available This paper presents a car racing simulator game called Racer, in which the human player races a car against three game-controlled cars in a three-dimensional environment. The objective of the game is not to defeat the human player, but to provide the player with a challenging and enjoyable experience. To ensure that this objective can be accomplished, the game incorporates artificial intelligence (AI techniques, which enable the cars to be controlled in a manner that mimics natural driving. The paper provides a brief history of AI techniques in games, presents the use of AI techniques in contemporary video games, and discusses the AI techniques that were implemented in the development of Racer. A comparison of the AI techniques implemented in the Unity platform with traditional AI search techniques is also included in the discussion.

  12. Modeling a secular trend by Monte Carlo simulation of height biased migration in a spatial network.

    Science.gov (United States)

    Groth, Detlef

    2017-04-01

    Background: In a recent Monte Carlo simulation, the clustering of body height of Swiss military conscripts within a spatial network with characteristic features of the natural Swiss geography was investigated. In this study I examined the effect of migration of tall individuals into network hubs on the dynamics of body height within the whole spatial network. The aim of this study was to simulate height trends. Material and methods: Three networks were used for modeling: a regular rectangular fishing-net-like network, a real-world example based on the geographic map of Switzerland, and a random network. All networks contained between 144 and 148 districts and between 265 and 307 road connections. Around 100,000 agents were initially released with an average height of 170 cm and a height standard deviation of 6.5 cm. The simulation was started with the a priori assumption that height variation within a district is limited and also depends on the height of neighboring districts (community effect on height). In addition to a neighborhood influence factor, which simulates a community effect, body-height-dependent migration of conscripts between adjacent districts in each Monte Carlo simulation was used to re-calculate next-generation body heights. In order to determine the direction of migration for taller individuals, various centrality measures for the evaluation of district importance within the spatial network were applied. Taller individuals were favored to migrate more into network hubs; backward migration using the same number of individuals was random, not biased towards body height. Network hubs were defined by the importance of a district within the spatial network, evaluated by various centrality measures. In the null model there were no road connections, so height information could not be exchanged between the districts.
Results: Due to the favored migration of tall individuals into network hubs, average body height of the hubs, and later
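The mechanism described (tall individuals preferentially migrating toward network hubs, with random backward migration keeping populations constant) can be sketched on a toy geometry. The ring-plus-hub network, population size, migration counts and number of generations below are invented for illustration; the study uses much larger networks and degree is used here as a simple stand-in for its centrality measures.

```python
import random
import statistics

rng = random.Random(3)
n = 12          # districts: a ring plus one central hub (toy geometry)
# Adjacency: ring edges, plus the hub connected to every other district,
# so the hub has the highest degree (our stand-in centrality measure).
adj = {i: [(i - 1) % n, (i + 1) % n] for i in range(n)}
hub = 0
for i in range(1, n):
    adj[i].append(hub)
    adj[hub].append(i)
degree = {i: len(adj[i]) for i in range(n)}

heights = {i: [rng.gauss(170, 6.5) for _ in range(500)] for i in range(n)}

for generation in range(20):
    for i in range(n):
        if i == hub:
            continue
        # Biased migration: the 10 tallest individuals move toward the
        # highest-degree neighbour (the hub); the same number migrate back
        # at random, so district populations stay constant.
        heights[i].sort()
        movers = [heights[i].pop() for _ in range(10)]
        target = max(adj[i], key=lambda j: degree[j])
        heights[target].extend(movers)
        back = [heights[target].pop(rng.randrange(len(heights[target])))
                for _ in range(10)]
        heights[i].extend(back)

hub_mean = statistics.mean(heights[hub])
rest_mean = statistics.mean(h for i in range(n) if i != hub for h in heights[i])
print(f"hub mean {hub_mean:.1f} cm vs rest {rest_mean:.1f} cm")
```

Even this toy version reproduces the qualitative result: the hub's average height drifts above that of the peripheral districts.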

  13. D-LiTE: A platform for evaluating DASH performance over a simulated LTE network

    OpenAIRE

    Quinlan, Jason J.; Raca, Darijo; Zahran, Ahmed H.; Khalid, Ahmed; Ramakrishnan, K. K.; Sreenan, Cormac J.

    2015-01-01

    In this demonstration we present a platform that encompasses all of the components required to realistically evaluate the performance of Dynamic Adaptive Streaming over HTTP (DASH) over a real-time NS-3 simulated network. Our platform consists of a network-attached storage server with DASH video clips and a simulated LTE network which utilises the NS-3 LTE module provided by the LENA project. We stream to clients running an open-source player with a choice of adaptation algorithms. By providi...

  14. ergm: A Package to Fit, Simulate and Diagnose Exponential-Family Models for Networks

    Directory of Open Access Journals (Sweden)

    David R. Hunter

    2008-12-01

    Full Text Available We describe some of the capabilities of the ergm package and the statistical theory underlying it. This package contains tools for accomplishing three important, and inter-related, tasks involving exponential-family random graph models (ERGMs: estimation, simulation, and goodness of fit. More precisely, ergm has the capability of approximating a maximum likelihood estimator for an ERGM given a network data set; simulating new network data sets from a fitted ERGM using Markov chain Monte Carlo; and assessing how well a fitted ERGM does at capturing characteristics of a particular network data set.

  15. Double and multiple knockout simulations for genome-scale metabolic network reconstructions.

    Science.gov (United States)

    Goldstein, Yaron Ab; Bockmayr, Alexander

    2015-01-01

    Constraint-based modeling of genome-scale metabolic network reconstructions has become a widely used approach in computational biology. Flux coupling analysis is a constraint-based method that analyses the impact of single reaction knockouts on other reactions in the network. We present an extension of flux coupling analysis for double and multiple gene or reaction knockouts, and develop corresponding algorithms for an in silico simulation. To evaluate our method, we perform a full single and double knockout analysis on a selection of genome-scale metabolic network reconstructions and compare the results. A prototype implementation of double knockout simulation is available at http://hoverboard.io/L4FC.
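Knockout simulation on a constraint-based metabolic model is commonly posed as a flux balance problem: maximize an objective flux subject to steady-state stoichiometry, with knocked-out reactions pinned to zero flux. The toy network below is not the paper's flux coupling algorithm; it is a minimal sketch (assuming SciPy's `linprog` is available) showing why a double knockout can abolish a function that every single knockout leaves intact.

```python
import itertools
import numpy as np
from scipy.optimize import linprog

# Toy stoichiometric matrix (rows: metabolites A, B; columns: reactions).
#   R1: -> A   R2: A -> B   R3: A -> B (parallel route)   R4: B -> biomass
S = np.array([[1, -1, -1,  0],     # metabolite A balance
              [0,  1,  1, -1]])    # metabolite B balance

def max_biomass(knockouts=()):
    """Maximise flux through R4 subject to steady state S v = 0,
    with knocked-out reactions forced to zero flux."""
    bounds = [(0, 10)] * 4
    for k in knockouts:
        bounds[k] = (0, 0)
    # linprog minimises, so maximise v4 by minimising -v4.
    res = linprog(c=[0, 0, 0, -1], A_eq=S, b_eq=[0, 0], bounds=bounds)
    return -res.fun

print("wild type:", max_biomass())             # both A -> B routes open
# Each single knockout leaves the alternative route; only the double
# knockout of the two parallel A -> B reactions abolishes biomass.
for combo in itertools.combinations(range(1, 3), 2):
    print("double knockout", combo, "->", max_biomass(combo))
```

This is exactly the redundancy that single-knockout analysis misses and double/multiple knockout simulation is designed to expose.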

  16. An Interference-Aware Distributed Transmission Technique for Dense Small Cell Networks

    DEFF Research Database (Denmark)

    Mahmood, Nurul Huda; Berardinelli, Gilberto; Pedersen, Klaus I.

    2015-01-01

    A transmission technique that can efficiently manage the interference in an uncoordinated dense small cell network is investigated in this work. The proposed interference-aware scheme only requires instantaneous channel state information at the transmitter end towards the desired receiver. Motivated by penalty methods in optimization studies, an interference-dependent weighting factor is introduced to control the number of parallel transmission streams. The proposed scheme can outperform a more complex benchmark transmission scheme in terms of the sum network throughput in certain scenarios and with realistic...

  17. Optimizing targeted vaccination across cyber-physical networks: an empirically based mathematical simulation study

    DEFF Research Database (Denmark)

    Mones, Enys; Stopczynski, Arkadiusz; Pentland, Alex 'Sandy'

    2018-01-01

    If interruption of disease transmission is the goal, targeting requires knowledge of underlying person-to-person contact networks. Digital communication networks may reflect not only virtual but also physical interactions that could result in disease transmission, but the precise overlap between these cyber and physical networks has never been empirically explored in real-life settings. Here, we study the digital communication activity of more than 500 individuals along with their person-to-person contacts at a 5-min temporal resolution. We then simulate different disease transmission scenarios on the person-to-person physical contact network to determine whether cyber communication networks can be harnessed to advance the goal of targeted vaccination for a disease spreading on the network of physical proximity. We show that individuals selected on the basis of their closeness centrality within cyber networks (what we...
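Closeness-centrality targeting of the kind described can be sketched on a toy contact graph. The Erdős-Rényi network, the deterministic worst-case spread model (final outbreak = the set reachable from a random seed once vaccinated nodes are removed) and all parameter values below are assumptions for illustration, not the study's empirical data.

```python
import random
from collections import deque

rng = random.Random(5)

# Toy contact network: Erdos-Renyi stand-in for the empirical proximity graph.
n, p = 200, 0.03
adj = {v: set() for v in range(n)}
for u in range(n):
    for v in range(u + 1, n):
        if rng.random() < p:
            adj[u].add(v)
            adj[v].add(u)

def bfs_dists(src, blocked=frozenset()):
    """Breadth-first distances from src, ignoring blocked (vaccinated) nodes."""
    dist = {src: 0}
    q = deque([src])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist and v not in blocked:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def closeness(v):
    d = bfs_dists(v)
    return (len(d) - 1) / sum(d.values()) if len(d) > 1 else 0.0

def outbreak_size(vaccinated):
    """Worst-case outbreak: everything reachable from a random unvaccinated seed."""
    seed = rng.choice([v for v in range(n) if v not in vaccinated])
    return len(bfs_dists(seed, blocked=frozenset(vaccinated)))

k = 20
targeted = sorted(range(n), key=closeness, reverse=True)[:k]
rand_vax = rng.sample(range(n), k)
print("targeted:", outbreak_size(targeted), " random:", outbreak_size(rand_vax))
```

The study's question is whether ranking by closeness on the *cyber* graph still shrinks outbreaks on the *physical* graph; here a single graph plays both roles.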

  18. Towards Interactive Medical Content Delivery Between Simulated Body Sensor Networks and Practical Data Center.

    Science.gov (United States)

    Shi, Xiaobo; Li, Wei; Song, Jeungeun; Hossain, M Shamim; Mizanur Rahman, Sk Md; Alelaiwi, Abdulhameed

    2016-10-01

    With the development of the IoT (Internet of Things), big data analysis and cloud computing, traditional medical information systems are integrating with these new technologies, and the establishment of cloud-based smart healthcare applications is receiving more and more attention. In this paper, semi-physical simulation technology is applied to a cloud-based smart healthcare system. The body sensor network (BSN) of the system has two ways of data collection and transmission. One is using a practical BSN to collect data and transmit it to the data center. The other is transmitting real medical data to a practical data center through a simulated BSN. In order to transmit real medical data to a practical data center through a simulated BSN under a semi-physical simulation environment, this paper designs an OPNET packet structure, defines a gateway node model between the simulated BSN and the practical data center, and builds a custom protocol stack. Moreover, this paper conducts a large amount of simulation of real data transmission through a simulated network connected to a practical network. The simulation results can provide a reference for parameter settings of a fully practical network and reduce the cost of the devices and personnel involved.

  19. Increasing Learner Retention in a Simulated Learning Network using Indirect Social Interaction

    NARCIS (Netherlands)

    Koper, Rob

    2004-01-01

    Please refer to original publication: Koper, E.J.R. (2005). Increasing Learner Retention in a Simulated Learning Network Using Indirect Social Interaction. Journal of Artificial Societies and Social Simulation vol. 8, no. 2. http://jasss.soc.surrey.ac.uk/8/2/5.html Software is only stored to ensure

  20. Simulation and experimental study of 802.11 based networking for vehicular management and safety.

    Science.gov (United States)

    2009-03-01

    This work focuses on the use of wireless networking techniques for their potential impact in providing : information for traffic management, control and public safety goals. The premise of this work is based on the : reasonable expectation that vehic...

  1. Pithy Review on Routing Protocols in Wireless Sensor Networks and Least Routing Time Opportunistic Technique in WSN

    Science.gov (United States)

    Salman Arafath, Mohammed; Rahman Khan, Khaleel Ur; Sunitha, K. V. N.

    2018-01-01

    Nowadays, most telecommunication standards development organizations are focusing on device-to-device communication so that they can provide proximity-based services and add-on services on top of the available cellular infrastructure. Oppnets (opportunistic networks) and wireless sensor networks play a prominent role here. Routing in these networks plays a significant role in fields such as traffic management, packet delivery, etc., and is a prodigious research area with diverse unresolved issues. This paper first focuses on the importance of opportunistic routing and its concept; the focus then shifts to the prime aspect, packet reception ratio, which is one of the most important QoS-awareness parameters. The paper discusses two important functions of routing in wireless sensor networks (WSNs), namely route selection using the least routing time algorithm (LRTA) and data forwarding using a clustering technique. Finally, the simulation results reveal that LRTA performs relatively better than the existing system in terms of average packet reception ratio and connectivity.
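The abstract does not detail LRTA itself, but minimum-routing-time route selection over known per-link delays is classically solved with Dijkstra's algorithm. The sketch below uses an invented four-node sensor field with hypothetical per-link delays; it is a baseline illustration, not the paper's algorithm.

```python
import heapq

def shortest_route(graph, src, dst):
    """Dijkstra's algorithm for route selection. `graph` maps each node to
    {neighbour: link_delay}. Returns (total_delay, path); the link delays
    stand in for the per-hop routing times an LRTA-style scheme would use."""
    pq = [(0.0, src, [src])]
    seen = set()
    while pq:
        cost, node, path = heapq.heappop(pq)
        if node == dst:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nxt, delay in graph[node].items():
            if nxt not in seen:
                heapq.heappush(pq, (cost + delay, nxt, path + [nxt]))
    return float("inf"), []

# Hypothetical sensor field with per-link delays in milliseconds.
graph = {
    "A": {"B": 2.0, "C": 5.0},
    "B": {"A": 2.0, "C": 1.0, "D": 4.0},
    "C": {"A": 5.0, "B": 1.0, "D": 1.5},
    "D": {"B": 4.0, "C": 1.5},
}
cost, path = shortest_route(graph, "A", "D")
print(path, cost)   # ['A', 'B', 'C', 'D'] with total delay 4.5 ms
```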

  2. Modeling and numerical techniques for high-speed digital simulation of nuclear power plants

    Energy Technology Data Exchange (ETDEWEB)

    Wulff, W.; Cheng, H.S.; Mallen, A.N.

    1987-01-01

    Conventional computing methods are contrasted with newly developed high-speed and low-cost computing techniques for simulating normal and accidental transients in nuclear power plants. Six principles are formulated for cost-effective high-fidelity simulation with emphasis on modeling of transient two-phase flow coolant dynamics in nuclear reactors. Available computing architectures are characterized. It is shown that the combination of the newly developed modeling and computing principles with the use of existing special-purpose peripheral processors is capable of achieving low-cost and high-speed simulation with high-fidelity and outstanding user convenience, suitable for detailed reactor plant response analyses.

  3. Simulation and analysis of natural rain in a wind tunnel via digital image processing techniques

    Science.gov (United States)

    Aaron, K. M.; Hernan, M.; Parikh, P.; Sarohia, V.; Gharib, M.

    1986-01-01

    It is desired to simulate natural rain in a wind tunnel in order to investigate its influence on the aerodynamic characteristics of aircraft. Rain simulation nozzles have been developed and tested at JPL. Pulsed laser sheet illumination is used to photograph the droplets in the moving airstream. Digital image processing techniques are applied to these photographs for calculation of rain statistics to evaluate the performance of the nozzles. It is found that fixed hypodermic type nozzles inject too much water to simulate natural rain conditions. A modification uses two aerodynamic spinners to flex a tube in a pseudo-random fashion to distribute the water over a larger area.
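Extracting droplet statistics from a thresholded photograph reduces to labeling connected pixel regions and measuring their areas. The flood-fill sketch below runs on a toy binary image; the actual JPL processing pipeline is not described in the record, so this is only an illustration of the general technique.

```python
from collections import deque

# Toy binary "photograph": 1s are illuminated droplet pixels.
img = [
    [0, 1, 1, 0, 0, 0, 0, 0],
    [0, 1, 1, 0, 0, 1, 0, 0],
    [0, 0, 0, 0, 0, 1, 1, 0],
    [0, 0, 0, 0, 0, 0, 0, 0],
    [0, 0, 1, 0, 0, 0, 0, 1],
]

def droplet_sizes(img):
    """Label 4-connected components of a binary image by flood fill and
    return the pixel area of each droplet."""
    h, w = len(img), len(img[0])
    seen = [[False] * w for _ in range(h)]
    sizes = []
    for y in range(h):
        for x in range(w):
            if img[y][x] and not seen[y][x]:
                area, q = 0, deque([(y, x)])
                seen[y][x] = True
                while q:
                    cy, cx = q.popleft()
                    area += 1
                    for ny, nx in ((cy-1, cx), (cy+1, cx), (cy, cx-1), (cy, cx+1)):
                        if 0 <= ny < h and 0 <= nx < w \
                                and img[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            q.append((ny, nx))
                sizes.append(area)
    return sizes

sizes = droplet_sizes(img)
print(len(sizes), "droplets, areas:", sorted(sizes))
```

With a pixel-to-millimetre calibration, the area distribution converts directly into the drop-size statistics used to evaluate the nozzles.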

  4. Development of a pore network simulation model to study nonaqueous phase liquid dissolution

    Science.gov (United States)

    Dillard, Leslie A.; Blunt, Martin J.

    2000-01-01

    A pore network simulation model was developed to investigate the fundamental physics of nonequilibrium nonaqueous phase liquid (NAPL) dissolution. The network model is a lattice of cubic chambers and rectangular tubes that represent pore bodies and pore throats, respectively. Experimental data obtained by Powers [1992] were used to develop and validate the model. To ensure the network model was representative of a real porous medium, the pore size distribution of the network was calibrated by matching simulated and experimental drainage and imbibition capillary pressure-saturation curves. The predicted network residual styrene blob-size distribution was nearly identical to the observed distribution. The network model reproduced the observed hydraulic conductivity and produced relative permeability curves that were representative of a poorly consolidated sand. Aqueous-phase transport was represented by applying the equation for solute flux to the network tubes and solving for solute concentrations in the network chambers. Complete mixing was found to be an appropriate approximation for calculation of chamber concentrations. Mass transfer from NAPL blobs was represented using a corner diffusion model. Predicted results of solute concentration versus Peclet number and of modified Sherwood number versus Peclet number for the network model compare favorably with experimental data for the case in which NAPL blob dissolution was negligible. Predicted results of normalized effluent concentration versus pore volume for the network were similar to the experimental data for the case in which NAPL blob dissolution occurred with time.

  5. Efficient Pricing Technique for Resource Allocation Problem in Downlink OFDM Cognitive Radio Networks

    Science.gov (United States)

    Abdulghafoor, O. B.; Shaat, M. M. R.; Ismail, M.; Nordin, R.; Yuwono, T.; Alwahedy, O. N. A.

    2017-05-01

    In this paper, the problem of resource allocation in OFDM-based downlink cognitive radio (CR) networks is addressed. The purpose of this research is to decrease the computational complexity of the resource allocation algorithm for the downlink CR network while respecting the interference constraint of the primary network. This objective is secured by adopting a pricing scheme to develop a power allocation algorithm with the following concerns: (i) reducing the complexity of the proposed algorithm and (ii) providing firm control over the interference introduced to primary users (PUs). The performance of the proposed algorithm is tested for OFDM-based CR networks. The simulation results show that the performance of the proposed algorithm approaches that of the optimal algorithm at a lower computational complexity, i.e., O(NlogN), which makes the proposed algorithm suitable for more practical applications.

  6. Virtual X-ray imaging techniques in an immersive casting simulation environment

    Science.gov (United States)

    Li, Ning; Kim, Sung-Hee; Suh, Ji-Hyun; Cho, Sang-Hyun; Choi, Jung-Gil; Kim, Myoung-Hee

    2007-08-01

    A computer code was developed to simulate radiographs of complex casting products in a CAVE™-like environment. The simulation is based on deterministic algorithms and ray tracing techniques. The aim of this study is to examine CAD/CAE/CAM models at the design stage, to optimize the design and inspect predicted defective regions with fast speed, good accuracy and small numerical expense. The present work discusses the algorithms for the radiography simulation of the CAD/CAM model and proposes algorithmic solutions adapted from the ray-box intersection algorithm and the octree data structure specifically for radiographic simulation of the CAE model. The stereoscopic visualization of the full-size product in the immersive casting simulation environment, as well as the virtual X-ray images of castings, provides an effective tool for the design and evaluation of foundry processes by engineers and metallurgists.
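The ray-box intersection test mentioned above is typically implemented with the "slab" method, which is also the primitive that octree traversal repeats at each level. The record does not give the code's actual implementation, so the following is a generic sketch: the entry/exit parameters give the ray's path length through an axis-aligned cell, which is what attenuation along an X-ray depends on.

```python
def ray_box_intersect(origin, direction, box_min, box_max):
    """Slab-method ray / axis-aligned-box intersection. Returns the entry and
    exit ray parameters (t_near, t_far), or None if the ray misses the box."""
    t_near, t_far = float("-inf"), float("inf")
    for o, d, lo, hi in zip(origin, direction, box_min, box_max):
        if d == 0.0:
            if o < lo or o > hi:        # ray parallel to and outside this slab
                return None
            continue
        t1, t2 = (lo - o) / d, (hi - o) / d
        t_near = max(t_near, min(t1, t2))
        t_far = min(t_far, max(t1, t2))
        if t_near > t_far:              # slab intervals do not overlap: miss
            return None
    return t_near, t_far

# Path length through the cell (t_far - t_near for a unit direction) is
# proportional to the X-ray attenuation accumulated in that cell.
hit = ray_box_intersect((0, 0, -5), (0, 0, 1), (-1, -1, -1), (1, 1, 1))
print(hit)   # (4.0, 6.0): the ray enters the box at z=-1 and exits at z=1
```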

  7. Generating Inviscid and Viscous Fluid Flow Simulations over a Surface Using a Quasi-simultaneous Technique

    Science.gov (United States)

    Sturdza, Peter (Inventor); Martins-Rivas, Herve (Inventor); Suzuki, Yoshifumi (Inventor)

    2014-01-01

    A fluid-flow simulation over a computer-generated surface is generated using a quasi-simultaneous technique. The simulation includes a fluid-flow mesh of inviscid and boundary-layer fluid cells. An initial fluid property for an inviscid fluid cell is determined using an inviscid fluid simulation that does not simulate fluid viscous effects. An initial boundary-layer fluid property for a boundary-layer fluid cell is determined using the initial fluid property and a viscous fluid simulation that simulates fluid viscous effects. An updated boundary-layer fluid property is determined for the boundary-layer fluid cell using the initial fluid property, the initial boundary-layer fluid property, and an interaction law. The interaction law approximates the inviscid fluid simulation using a matrix of aerodynamic influence coefficients computed using a two-dimensional surface panel technique and a fluid-property vector. An updated fluid property is determined for the inviscid fluid cell using the updated boundary-layer fluid property.

  8. ANALYSIS OF MONTE CARLO SIMULATION SAMPLING TECHNIQUES ON SMALL SIGNAL STABILITY OF WIND GENERATOR- CONNECTED POWER SYSTEM

    Directory of Open Access Journals (Sweden)

    TEMITOPE RAPHAEL AYODELE

    2016-04-01

    Full Text Available Monte Carlo simulation using the Simple Random Sampling (SRS) technique is popularly known for its ability to handle complex uncertainty problems. However, to produce a reasonable result, it requires a huge sample size. This makes it computationally expensive, time consuming and unfit for online power system applications. In this article, the performance of the Latin Hypercube Sampling (LHS) technique is explored and compared with SRS in terms of accuracy, robustness and speed for small signal stability application in a wind generator-connected power system. The analysis is performed using probabilistic techniques via eigenvalue analysis on two standard networks (Single Machine Infinite Bus and the IEEE 16-machine 68-bus test system). The accuracy of the two sampling techniques is determined by comparing their different sample sizes with the IDEAL (conventional) result. The robustness is determined based on a significant variance reduction when the experiment is repeated 100 times with different sample sizes using the two sampling techniques in turn. The results show that sample sizes generated from LHS for small signal stability application produce the same result as the IDEAL values starting from a sample size of 100. This shows that about 100 samples of a random variable generated using the LHS method are good enough to produce reasonable results for practical purposes in small signal stability application. It is also revealed that LHS has the least variance when the experiment is repeated 100 times, compared to the SRS technique. This signifies the robustness of LHS over SRS. A sample size of 100 for LHS produces the same result as the conventional method with a sample size of 50,000. The reduced sample size required by LHS gives it a computational speed advantage (about six times) over the conventional method.
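The core contrast between the two sampling techniques can be reproduced in a few lines: LHS stratifies each input dimension into equal-probability bins with exactly one draw per bin, so repeated estimates of an output statistic have far lower variance than SRS at the same sample size. The smooth two-input response surface below is invented for illustration and stands in for the eigenvalue-analysis output.

```python
import numpy as np

rng = np.random.default_rng(0)

def lhs(n, dim, rng):
    """Latin Hypercube Sampling on [0,1]^dim: stratify each dimension into n
    equal bins, draw one point per bin, then permute the bins independently
    per dimension so every margin is evenly covered."""
    u = (np.arange(n)[:, None] + rng.random((n, dim))) / n
    for d in range(dim):
        u[:, d] = u[rng.permutation(n), d]
    return u

def estimate(samples):
    # Invented response surface standing in for the stability-index output.
    x, y = samples[:, 0], samples[:, 1]
    return np.mean(np.sin(2 * np.pi * x) + y**2)

n, reps = 100, 100
srs_est = [estimate(rng.random((n, 2))) for _ in range(reps)]   # SRS
lhs_est = [estimate(lhs(n, 2, rng)) for _ in range(reps)]       # LHS

# Both estimators are unbiased, but LHS repeats scatter far less.
print(f"SRS var {np.var(srs_est):.2e}  LHS var {np.var(lhs_est):.2e}")
```

This is the same repeat-the-experiment-100-times variance comparison the article uses to argue the robustness of LHS.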

  9. Numerical simulation of fibrous biomaterials with randomly distributed fiber network structure.

    Science.gov (United States)

    Jin, Tao; Stanciulescu, Ilinca

    2016-08-01

    This paper presents a computational framework to simulate the mechanical behavior of fibrous biomaterials with randomly distributed fiber networks. A random walk algorithm is implemented to generate the synthetic 2D fiber networks used in the simulations. The embedded fiber approach is then adopted to model the fibers as embedded truss elements in the ground matrix, which is essentially equivalent to affine fiber kinematics. The fiber-matrix interaction is partially considered in the sense that the two material components deform together, with no relative movement between them. A variational approach is carried out to derive the element residual and stiffness matrices for the finite element method (FEM), in which material and geometric nonlinearities are both included. Using a data structure proposed to record the network geometric information, the fiber network is directly incorporated into the FEM simulation without significantly increasing the computational cost. A mesh sensitivity analysis is conducted to show the influence of mesh size on various simulation results. The proposed method can be easily combined with Monte Carlo (MC) simulations to include the influence of the stochastic nature of the network and capture the material behavior in an average sense. The computational framework proposed in this work goes midway between homogenizing the fiber network into the surrounding matrix and accounting for the fully coupled fiber-matrix interaction at the segment length scale, and can be used to study the connection between the microscopic structure and the macro-mechanical behavior of fibrous biomaterials at a reasonable computational cost.
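
    The fiber-generation step can be sketched as a 2D correlated random walk (a minimal illustration; the step length, turning-angle spread, and unit-square domain are assumptions, not parameters from the paper):

```python
import math
import random

def generate_fiber(rng, domain=1.0, step=0.05, kappa=0.5):
    # Grow one fiber as a 2D correlated random walk: each new segment
    # keeps the previous heading plus a Gaussian turn of std `kappa`,
    # and the walk stops when it leaves the square domain.
    x, y = rng.random() * domain, rng.random() * domain
    theta = rng.uniform(0.0, 2.0 * math.pi)
    points = [(x, y)]
    while True:
        theta += rng.gauss(0.0, kappa)
        x += step * math.cos(theta)
        y += step * math.sin(theta)
        if not (0.0 <= x <= domain and 0.0 <= y <= domain):
            break
        points.append((x, y))
    return points

def generate_network(n_fibers, seed=0):
    rng = random.Random(seed)
    return [generate_fiber(rng) for _ in range(n_fibers)]

network = generate_network(50)
segments = sum(len(fiber) - 1 for fiber in network)
print(f"{len(network)} fibers, {segments} truss segments")
```

    Each polyline's consecutive point pairs would then become embedded truss elements inside the ground-matrix mesh.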

  10. Social Networks and Smoking: Exploring the Effects of Peer Influence and Smoker Popularity through Simulations

    Science.gov (United States)

    Schaefer, David R.; adams, jimi; Haas, Steven A.

    2013-01-01

    Adolescent smoking and friendship networks are related in many ways that can amplify smoking prevalence. Understanding and developing interventions within such a complex system requires new analytic approaches. We draw on recent advances in dynamic network modeling to develop a technique that explores the implications of various intervention…

  11. A simulated annealing heuristic for maximum correlation core/periphery partitioning of binary networks.

    Science.gov (United States)

    Brusco, Michael; Stolze, Hannah J; Hoffman, Michaela; Steinley, Douglas

    2017-01-01

    A popular objective criterion for partitioning a set of actors into core and periphery subsets is the maximization of the correlation between an ideal and observed structure associated with intra-core and intra-periphery ties. The resulting optimization problem has commonly been tackled using heuristic procedures such as relocation algorithms, genetic algorithms, and simulated annealing. In this paper, we present a computationally efficient simulated annealing algorithm for maximum correlation core/periphery partitioning of binary networks. The algorithm is evaluated using simulated networks consisting of up to 2000 actors and spanning a variety of densities for the intra-core, intra-periphery, and inter-core-periphery components of the network. Core/periphery analyses of problem solving, trust, and information sharing networks for the frontline employees and managers of a consumer packaged goods manufacturer are provided to illustrate the use of the model.
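
    A minimal simulated annealing sketch for the correlation criterion (using the simplest ideal image, a 1 for every dyad touching the core and a 0 for periphery-periphery dyads; the network size, cooling schedule, and single-flip move set are illustrative assumptions, not the authors' algorithm):

```python
import numpy as np

def cp_correlation(adj, core_mask):
    # Pearson correlation between the observed ties and the ideal
    # core/periphery image: 1 if at least one endpoint is in the core,
    # 0 for periphery-periphery dyads (diagonal excluded).
    n = adj.shape[0]
    ideal = (core_mask[:, None] | core_mask[None, :]).astype(float)
    off = ~np.eye(n, dtype=bool)
    a, b = adj[off], ideal[off]
    if a.std() == 0.0 or b.std() == 0.0:
        return -1.0
    return float(np.corrcoef(a, b)[0, 1])

def anneal(adj, steps=2000, t0=1.0, cooling=0.995, seed=0):
    rng = np.random.default_rng(seed)
    n = adj.shape[0]
    core = rng.random(n) < 0.5          # random initial partition
    fit = cp_correlation(adj, core)
    best, best_fit = core.copy(), fit
    t = t0
    for _ in range(steps):
        i = rng.integers(n)
        core[i] = ~core[i]              # move: flip one actor's membership
        new_fit = cp_correlation(adj, core)
        if new_fit >= fit or rng.random() < np.exp((new_fit - fit) / t):
            fit = new_fit               # accept (always if improving)
            if fit > best_fit:
                best, best_fit = core.copy(), fit
        else:
            core[i] = ~core[i]          # reject: undo the flip
        t *= cooling
    return best, best_fit

# Planted structure: actors 0-4 form a dense core, 5-14 a sparse periphery.
gen = np.random.default_rng(1)
n = 15
adj = np.zeros((n, n))
for i in range(n):
    for j in range(i + 1, n):
        p = 0.9 if (i < 5 or j < 5) else 0.05
        adj[i, j] = adj[j, i] = float(gen.random() < p)
core, fit = anneal(adj)
print(int(core.sum()), round(fit, 2))
```

    On this planted example the annealer should recover a high-correlation partition; a production version would cache the correlation update instead of recomputing it per flip.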

  12. Complex Network Simulation of Forest Network Spatial Pattern in Pearl River Delta

    Science.gov (United States)

    Zeng, Y.

    2017-09-01

    Forest network construction here uses a method and model with the scale-free features of complex network theory, based on random graph theory and dynamic network nodes that show a power-law distribution. The model is suited to the consistent recovery of the Pearl River Delta, a large ecological landscape subject to ecological disturbance. The latest forest patches are available as remote sensing and GIS spatial data. A standard scale-free network node distribution model calculates the power-law distribution parameter for forest patch area; the recent existing forest polygons, defined as nodes, are used to compute the decay index of the network's degree distribution. The parameters of the forest network are then extracted and spatially transferred to real-world GIS models, and connections between nearby nodes are generated automatically by minimizing ecological-corridor cost under a least-cost rule. Based on the scale-free node distribution requirements, a small number of large aggregation points are selected as the main nodes of the future forest planning network and compared with the existing node sequence. With this approach, forest ecological projects can avoid the fragmented and scattered patterns of the past, and the planting costs required by previously regular forest networks can be reduced. For ecological restoration in tropical and subtropical south China, it provides an effective method for guiding and demonstrating forest-into-city projects, and a standard and base datum for networking with other ecological networks (water, climate networks, etc.).
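
    The power-law parameter estimation mentioned above can be sketched with a preferential-attachment toy network and the continuous maximum-likelihood estimator (the growth model and the estimator choice are illustrative assumptions, not the paper's method):

```python
import math
import random

def preferential_attachment(n, m=2, seed=0):
    # Barabasi-Albert-style growth: each new node attaches to m existing
    # nodes chosen with probability proportional to their current degree.
    rng = random.Random(seed)
    targets = list(range(m))     # nodes the next newcomer will attach to
    repeated = []                # node list weighted by degree
    degree = {i: 0 for i in range(m)}
    for v in range(m, n):
        degree[v] = 0
        for u in set(targets):
            degree[u] += 1
            degree[v] += 1
            repeated += [u, v]
        targets = [rng.choice(repeated) for _ in range(m)]
    return degree

def powerlaw_exponent_mle(degrees, kmin=2):
    # Continuous maximum-likelihood estimator (Clauset et al. style):
    # alpha = 1 + n / sum(ln(k / kmin)) over degrees k >= kmin.
    ks = [k for k in degrees if k >= kmin]
    return 1.0 + len(ks) / sum(math.log(k / kmin) for k in ks)

degree = preferential_attachment(5000)
alpha = powerlaw_exponent_mle(degree.values())
print(f"estimated power-law exponent: {alpha:.2f}")
```

    In the forest application the "degrees" would instead be patch areas or corridor counts, but the exponent-fitting step has the same shape.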

  13. A versatile framework for simulating the dynamic mechanical structure of cytoskeletal networks

    CERN Document Server

    Freedman, Simon L; Hocky, Glen M; Dinner, Aaron R

    2016-01-01

    Computer simulations can aid in our understanding of how collective materials properties emerge from interactions between simple constituents. Here, we introduce a coarse-grained model of networks of actin filaments, myosin motors, and crosslinking proteins that enables simulation at biologically relevant time and length scales. We demonstrate that the model, with a consistent parameterization, qualitatively and quantitatively captures a suite of trends observed experimentally, including the statistics of filament fluctuations, mechanical responses to shear, motor motilities, and network rearrangements. The model can thus serve as a platform for interpretation and design of cytoskeletal materials experiments, as well as for further development of simulations incorporating active elements.

  14. STOMP: A Software Architecture for the Design and Simulation of UAV-Based Sensor Networks

    Energy Technology Data Exchange (ETDEWEB)

    Jones, E D; Roberts, R S; Hsia, T C S

    2002-10-28

    This paper presents the Simulation, Tactical Operations and Mission Planning (STOMP) software architecture and framework for simulating, controlling and communicating with unmanned air vehicles (UAVs) servicing large distributed sensor networks. STOMP provides hardware-in-the-loop capability, enabling real UAVs and sensors to feed back state information, route data, and receive command and control requests while interacting with other real or virtual objects, thereby enhancing support for simulation of dynamic and complex events.

  15. Intelligent Electric Power Systems with Active-Adaptive Electric Networks: Challenges for Simulation Tools

    Directory of Open Access Journals (Sweden)

    Ufa Ruslan A.

    2015-01-01

    Full Text Available The motivation for the presented research is the need to develop new methods and tools for adequate simulation of intelligent electric power systems with active-adaptive electric networks (IES), including Flexible Alternating Current Transmission System (FACTS) devices. The key requirements for such simulation were formulated. The presented analysis of IES simulation results confirms the need for a hybrid modelling approach.

  16. Simulation, State Estimation and Control of Nonlinear Superheater Attemporator using Neural Networks

    DEFF Research Database (Denmark)

    Bendtsen, Jan Dimon; Sørensen, O.

    2000-01-01

    This paper considers the use of neural networks for nonlinear state estimation, system identification and control. As a case study we use data taken from a nonlinear injection valve for a superheater attemporator at a power plant. One neural network is trained as a nonlinear simulation model of the process, then another network is trained to act as a combined state and parameter estimator for the process. The observer network incorporates smoothing of the parameter estimates in the form of regularization. A pole placement controller is designed which takes advantage of the sample-by-sample linearizations and state estimates provided by the observer network. Simulation studies show that the nonlinear observer-based control loop performs better than a similar control loop based on a linear observer.

  17. Simulation, State Estimation and Control of Nonlinear Superheater Attemporator using Neural Networks

    DEFF Research Database (Denmark)

    Bendtsen, Jan Dimon; Sørensen, O.

    1999-01-01

    This paper considers the use of neural networks for nonlinear state estimation, system identification and control. As a case study we use data taken from a nonlinear injection valve for a superheater attemporator at a power plant. One neural network is trained as a nonlinear simulation model of the process, then another network is trained to act as a combined state and parameter estimator for the process. The observer network incorporates smoothing of the parameter estimates in the form of regularization. A pole placement controller is designed which takes advantage of the sample-by-sample linearizations and state estimates provided by the observer network. Simulation studies show that the nonlinear observer-based control loop performs better than a similar control loop based on a linear observer.

  18. A Model to Simulate Multimodality in a Mesoscopic Dynamic Network Loading Framework

    Directory of Open Access Journals (Sweden)

    Massimo Di Gangi

    2017-01-01

    Full Text Available A dynamic network loading (DNL) model using a mesoscopic approach is proposed to simulate a multimodal transport network, considering en-route changes of transport mode. The classic mesoscopic approach, where packets of users belonging to the same mode move along a path, is modified to take into account multiple modes interacting with each other, simultaneously and on the same multimodal network. In particular, to simulate modal change, functional aspects of multimodal arcs have been developed; those arcs are located on the network where modal change occurs, and users are packed (or unpacked) into a new modal resource that moves on to the destination or to another multimodal arc. A test on a simple network reproducing a real situation is performed in order to show the model's peculiarities; some indicators describing the performance of the considered transport system are shown.

  19. Efficiency of Software Testing Techniques: A Controlled Experiment Replication and Network Meta-analysis

    Directory of Open Access Journals (Sweden)

    Omar S. Gómez

    2017-07-01

    Full Text Available Background: Common approaches to software verification include static testing techniques, such as code reading, and dynamic testing techniques, such as black-box and white-box testing. Objective: With the aim of gaining a better understanding of software testing techniques, a controlled experiment replication and a synthesis of previous experiments examining the efficiency of code reading, black-box and white-box testing techniques were conducted. Method: The replication reported here is composed of four experiments in which instrumented programs were used. Participants randomly applied one of the techniques to one of the instrumented programs. The outcomes were synthesized with seven experiments using the method of network meta-analysis (NMA). Results: No significant differences in the efficiency of the techniques were observed. However, it was discovered that the instrumented programs had a significant effect on efficiency. The NMA results suggest that the black-box and white-box techniques behave alike, and that the efficiency of code reading seems to be sensitive to other factors. Conclusion: Taking these findings into account, the authors suggest that prior to carrying out software verification activities, software engineers should have a clear understanding of the software product to be verified; they can apply either black-box or white-box testing techniques, as they yield similar defect detection rates.

  20. Accelerating all-atom MD simulations of lipids using a modified virtual-sites technique

    DEFF Research Database (Denmark)

    Loubet, Bastien; Kopec, Wojciech; Khandelia, Himanshu

    2014-01-01

    We present two new implementations of the virtual sites technique which completely suppress the degrees of freedom of the hydrogen atoms in a lipid bilayer, allowing for an increased time step of 5 fs in all-atom simulations with the CHARMM36 force field. One of our approaches uses the derivation ...

  1. A Visual Analytics Technique for Identifying Heat Spots in Transportation Networks

    Directory of Open Access Journals (Sweden)

    Marian Sorin Nistor

    2016-12-01

    Full Text Available The decision makers of a public transportation system, as part of urban critical infrastructure, need to increase the system's resilience. To do so, we identified analysis tools for biological networks as an adequate basis for visual analytics in this domain. In the paper at hand we therefore translate such methods to transportation systems and show the benefits by applying them to the Munich subway network. Here, visual analytics is used to identify vulnerable stations from different perspectives. The applied technique is presented step by step. Furthermore, the key challenges in applying this technique to transportation systems are identified. Finally, we propose the implementation of the presented features in a management cockpit to integrate the visual analytics mantra into adequate decision support for transportation systems.

  2. High Fidelity Simulations of Large-Scale Wireless Networks (Plus-Up)

    Energy Technology Data Exchange (ETDEWEB)

    Onunkwo, Uzoma [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2015-11-01

    Sandia has built a strong reputation in scalable network simulation and emulation for cyber security studies to protect our nation’s critical information infrastructures. Georgia Tech has a preeminent reputation in academia for excellence in scalable discrete event simulations, with a strong emphasis on simulating cyber networks. Many of the experts in this field, such as Dr. Richard Fujimoto, Dr. George Riley, and Dr. Chris Carothers, have strong affiliations with Georgia Tech. The collaborative relationship that we intend to pursue immediately is in high fidelity simulations of practical large-scale wireless networks using the ns-3 simulator, via Dr. George Riley. This project will have mutual benefits in bolstering both institutions’ expertise and reputation in the field of scalable simulation for cyber-security studies, and it promises to address high fidelity simulations of large-scale wireless networks. The proposed collaboration is directly in line with Georgia Tech’s goals for developing and expanding the Communications Systems Center, the Georgia Tech Broadband Institute, and the Georgia Tech Information Security Center, along with its yearly Emerging Cyber Threats Report. At Sandia, this work benefits the defense systems and assessment area, with promise for large-scale assessment of the cyber security needs and vulnerabilities of our nation’s critical cyber infrastructures exposed to wireless communications.

  3. A Novel Architecture for Adaptive Traffic Control in Network on Chip using Code Division Multiple Access Technique

    OpenAIRE

    Fatemeh. Dehghani; Shahram. Darooei

    2016-01-01

    Network on chip has emerged as a long-term and effective method for Multiprocessor System-on-Chip communications in order to overcome the bottleneck in bus-based communication architectures. The efficiency and performance of a network on chip depend strongly on the architecture and structure of the network. In this paper a new structure and architecture for adaptive traffic control in network on chip using the Code Division Multiple Access technique is presented. To solve the problem of synchronous acce...

  4. Efficient Heuristics for Simulating Population Overflow in Parallel Networks

    NARCIS (Netherlands)

    Zaburnenko, T.S.; Nicola, V.F.

    2006-01-01

    In this paper we propose a state-dependent importance sampling heuristic to estimate the probability of population overflow in networks of parallel queues. This heuristic approximates the "optimal" state-dependent change of measure without the need for costly optimization involved in other

  5. Simulation of traffic capacity of inland waterway network

    NARCIS (Netherlands)

    Chen, L.; Mou, J.; Ligteringen, H.

    2013-01-01

    Inland waterborne transportation is viewed as an economic, safe and environmentally friendly alternative to the congested road network. Traffic capacity is a critical indicator of inland shipping performance. In practice, under the interaction of complicated factors, it is challenging to

  6. Numerical simulation with finite element and artificial neural network ...

    Indian Academy of Sciences (India)

    Further, this database, after the neural network training, is used to analyse measured material properties of different test pieces. The ANN predictions are reconfirmed with contact-type finite element analysis for an arbitrarily selected test sample. The methodology evolved in this work can be extended to predict material ...

  7. Probabilistic low-rank factorization accelerates tensor network simulations of critical quantum many-body ground states

    Science.gov (United States)

    Kohn, Lucas; Tschirsich, Ferdinand; Keck, Maximilian; Plenio, Martin B.; Tamascelli, Dario; Montangero, Simone

    2018-01-01

    We provide evidence that randomized low-rank factorization is a powerful tool for the determination of the ground-state properties of low-dimensional lattice Hamiltonians through tensor network techniques. In particular, we show that randomized matrix factorization outperforms truncated singular value decomposition based on state-of-the-art deterministic routines in time-evolving block decimation (TEBD)- and density matrix renormalization group (DMRG)-style simulations, even when the system under study gets close to a phase transition: We report linear speedups in the bond or local dimension of up to 24 times in quasi-two-dimensional cylindrical systems.
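
    The randomized factorization idea can be sketched outside any tensor network code with a plain randomized range-finder SVD (in the style of Halko et al.; the matrix sizes and singular-value decay are illustrative assumptions, not the benchmarks of the paper):

```python
import numpy as np

def randomized_svd(a, rank, oversample=10, seed=0):
    # Randomized range finder: sketch the dominant column space of `a`
    # with a Gaussian test matrix, then do a small exact SVD there.
    rng = np.random.default_rng(seed)
    omega = rng.standard_normal((a.shape[1], rank + oversample))
    q, _ = np.linalg.qr(a @ omega)               # basis for the sketched range
    u_small, s, vt = np.linalg.svd(q.T @ a, full_matrices=False)
    return (q @ u_small)[:, :rank], s[:rank], vt[:rank]

# A test matrix with rapidly decaying singular values, mimicking the
# well-behaved spectra that bond-dimension truncation relies on.
rng = np.random.default_rng(42)
m, n, rank = 200, 150, 10
u0, _ = np.linalg.qr(rng.standard_normal((m, n)))
v0, _ = np.linalg.qr(rng.standard_normal((n, n)))
a = (u0 * np.exp(-0.5 * np.arange(n))) @ v0.T
u, s, vt = randomized_svd(a, rank)
err = np.linalg.norm(a - (u * s) @ vt) / np.linalg.norm(a)
print(f"rank-{rank} relative error: {err:.1e}")
```

    The speedup comes from replacing a full SVD of the m-by-n matrix with a QR sketch plus an SVD of a much smaller (rank + oversample)-by-n matrix, which is exactly where TEBD/DMRG truncations spend their time.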

  8. Simulation of Strong Ground Motion of the 2009 Bhutan Earthquake Using Modified Semi-Empirical Technique

    Science.gov (United States)

    Sandeep; Joshi, A.; Lal, Sohan; Kumar, Parveen; Sah, S. K.; Vandana; Kamal

    2017-09-01

    On 21 September 2009, an earthquake of magnitude Mw 6.1 occurred in East Bhutan. This earthquake caused serious damage to residential areas and was widely felt in the Bhutan Himalaya and its adjoining region. We estimated the source model of this earthquake using a modified semi-empirical technique. Several locations of the nucleation point in the rupture plane were considered, and the final location was selected based on the minimum root mean square error of the waveform comparison. In the present work, observed and simulated waveforms are compared at all eight stations. Comparison of the horizontal components of the actual and simulated records at these stations confirms the estimated parameters of the final rupture model and the efficacy of the modified semi-empirical technique (Joshi et al., Nat Hazards 64:1029-1054, 2012b) for strong ground motion simulation.

  9. Optimizing Availability of a Framework in Series Configuration Utilizing Markov Model and Monte Carlo Simulation Techniques

    Directory of Open Access Journals (Sweden)

    Mansoor Ahmed Siddiqui

    2017-06-01

    Full Text Available This research work is aimed at optimizing the availability of a framework comprising two units linked together in series configuration, utilizing Markov Model and Monte Carlo (MC) simulation techniques. In this article, effort has been made to develop a maintenance model that incorporates three distinct states for each unit, while taking into account their different levels of deterioration. Calculations are carried out using the proposed model for two distinct cases of corrective repair, namely perfect and imperfect repairs, with as well as without opportunistic maintenance. Initially, results are obtained using an analytical technique, i.e., the Markov Model. The results are later validated with the help of MC simulation. In addition, MC simulation based codes also work well for frameworks that follow non-exponential failure and repair rates, and thus overcome the limitations of the Markov Model.
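
    The analytic-versus-Monte-Carlo cross-check for a two-unit series framework can be sketched for the simplest case of independent units with exponential failure and repair (the rates, the horizon, and the absence of opportunistic maintenance are illustrative assumptions, not the paper's three-state model):

```python
import random

def analytic_availability(lam, mu):
    # Two independent units in series: the system is up only when both
    # are up, each with steady-state availability mu / (lam + mu).
    unit = mu / (lam + mu)
    return unit * unit

def simulated_availability(lam, mu, horizon=500000.0, seed=0):
    # Monte Carlo: simulate each unit's alternating up/down renewal
    # process and accumulate the time during which both units are up.
    rng = random.Random(seed)

    def up_intervals():
        t, intervals = 0.0, []
        while t < horizon:
            up = rng.expovariate(lam)              # time to failure
            intervals.append((t, min(t + up, horizon)))
            t += up + rng.expovariate(mu)          # plus repair time
        return intervals

    a, b = up_intervals(), up_intervals()
    # Two-pointer sweep summing the overlap of the two up-time lists.
    total, i, j = 0.0, 0, 0
    while i < len(a) and j < len(b):
        lo, hi = max(a[i][0], b[j][0]), min(a[i][1], b[j][1])
        if hi > lo:
            total += hi - lo
        if a[i][1] < b[j][1]:
            i += 1
        else:
            j += 1
    return total / horizon

lam, mu = 0.01, 0.1        # illustrative failure and repair rates
exact = analytic_availability(lam, mu)
mc = simulated_availability(lam, mu)
print(f"analytic {exact:.4f} vs Monte Carlo {mc:.4f}")
```

    For exponential rates the two numbers agree, which is the validation step the abstract describes; only the simulation path generalizes to non-exponential failure and repair distributions.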

  10. Retrieval of Similar Objects in Simulation Data Using Machine Learning Techniques

    Energy Technology Data Exchange (ETDEWEB)

    Cantu-Paz, E; Cheung, S-C; Kamath, C

    2003-06-19

    Comparing the output of a physics simulation with an experiment is often done by visually comparing the two outputs. In order to determine which simulation is a closer match to the experiment, more quantitative measures are needed. This paper describes our early experiences with this problem by considering the slightly simpler problem of finding objects in an image that are similar to a given query object. Focusing on a dataset from a fluid mixing problem, we report on our experiments using classification techniques from machine learning to retrieve the objects of interest in the simulation data. The early results reported in this paper suggest that machine learning techniques can retrieve more objects that are similar to the query than distance-based similarity methods.

  11. Advanced particle-in-cell simulation techniques for modeling the Lockheed Martin Compact Fusion Reactor

    Science.gov (United States)

    Welch, Dale; Font, Gabriel; Mitchell, Robert; Rose, David

    2017-10-01

    We report on particle-in-cell developments for the study of the Compact Fusion Reactor. Millisecond, two- and three-dimensional simulations (cubic meter volume) of confinement and neutral beam heating of the magnetic confinement device require accurate representation of the complex orbits, near-perfect energy conservation, and significant computational power. In order to determine the initial plasma fill and neutral beam heating, these simulations include ionization, elastic, and charge exchange hydrogen reactions. To this end, we are pursuing fast electromagnetic kinetic modeling algorithms, including two implicit techniques and a hybrid quasi-neutral algorithm with kinetic ions. The kinetic modeling includes use of the Poisson-corrected direct implicit, magnetic implicit, and second-order cloud-in-cell techniques. The hybrid algorithm, which ignores electron inertial effects, is two orders of magnitude faster than the kinetic approaches but not as accurate with respect to confinement. The advantages and disadvantages of these techniques will be presented. Funded by Lockheed Martin.

  12. Wind Turbine Rotor Simulation via CFD Based Actuator Disc Technique Compared to Detailed Measurement

    Directory of Open Access Journals (Sweden)

    Esmail Mahmoodi

    2015-10-01

    Full Text Available In this paper, a generalized Actuator Disc (AD) model is used to model the wind turbine rotor of the MEXICO experiment, a collaborative European wind turbine project. The AD model, a combination of a CFD technique and User Defined Function (UDF) codes, the so-called UDF/AD model, is used to simulate the loads and performance of the rotor in three different wind speed tests. The distributed force on the blade and the thrust and power production of the rotor, important design parameters of wind turbine rotors, are the focus of the modelling. A code based on a developed Blade Element Momentum (BEM) theory, as well as a full rotor simulation, both from the literature, are included in the results for comparison and discussion. The output of all techniques is compared to detailed measurements for validation, which leads us to the final conclusions.

  13. Validation of a novel technique for creating simulated radiographs using computed tomography datasets.

    Science.gov (United States)

    Mendoza, Patricia; d'Anjou, Marc-André; Carmel, Eric N; Fournier, Eric; Mai, Wilfried; Alexander, Kate; Winter, Matthew D; Zwingenberger, Allison L; Thrall, Donald E; Theoret, Christine

    2014-01-01

    Understanding radiographic anatomy and the effects of varying patient and radiographic tube positioning on image quality can be a challenge for students. The purposes of this study were to develop and validate a novel technique for creating simulated radiographs using computed tomography (CT) datasets. A DICOM viewer (ORS Visual) plug-in was developed with the ability to move and deform cuboidal volumetric CT datasets, and to produce images simulating the effects of tube-patient-detector distance and angulation. Computed tomographic datasets were acquired from two dogs, one cat, and one horse. Simulated radiographs of different body parts (n = 9) were produced using different angles to mimic conventional projections, before actual digital radiographs were obtained using the same projections. These studies (n = 18) were then submitted to 10 board-certified radiologists who were asked to score visualization of anatomical landmarks, depiction of patient positioning, realism of distortion/magnification, and image quality. No significant differences between simulated and actual radiographs were found for anatomic structure visualization and patient positioning in the majority of body parts. For the assessment of radiographic realism, no significant differences were found between simulated and digital radiographs for canine pelvis, equine tarsus, and feline abdomen body parts. Overall, image quality and contrast resolution of simulated radiographs were considered satisfactory. Findings from the current study indicated that radiographs simulated using this new technique are comparable to actual digital radiographs. Further studies are needed to apply this technique in developing interactive tools for teaching radiographic anatomy and the effects of varying patient and tube positioning. © 2013 American College of Veterinary Radiology.

  14. Modeling a Million-Node Slim Fly Network Using Parallel Discrete-Event Simulation

    Energy Technology Data Exchange (ETDEWEB)

    Wolfe, Noah; Carothers, Christopher; Mubarak, Misbah; Ross, Robert; Carns, Philip

    2016-05-15

    As supercomputers close in on exascale performance, the increased number of processors and processing power translates to an increased demand on the underlying network interconnect. The Slim Fly network topology, a new low-diameter and low-latency interconnection network, is gaining interest as one possible solution for next-generation supercomputing interconnect systems. In this paper, we present a high-fidelity Slim Fly flit-level model leveraging the Rensselaer Optimistic Simulation System (ROSS) and Co-Design of Exascale Storage (CODES) frameworks. We validate our Slim Fly model against the Kathareios et al. Slim Fly model results provided at moderately sized network scales. We further scale the model up to an unprecedented 1 million compute nodes; and through visualization of network simulation metrics such as link bandwidth, packet latency, and port occupancy, we gain insight into the network behavior at the million-node scale. We also show linear strong scaling of the Slim Fly model on an Intel cluster, achieving a peak event rate of 36 million events per second using 128 MPI tasks to process 7 billion events. Detailed analysis of the underlying discrete-event simulation performance shows that a million-node Slim Fly model simulation can execute in 198 seconds on the Intel cluster.

  15. COMPLEX NETWORK SIMULATION OF FOREST NETWORK SPATIAL PATTERN IN PEARL RIVER DELTA

    Directory of Open Access Journals (Sweden)

    Y. Zeng

    2017-09-01

    Full Text Available Forest network construction here uses a method and model with the scale-free features of complex network theory, based on random graph theory and dynamic network nodes that show a power-law distribution. The model is suited to the consistent recovery of the Pearl River Delta, a large ecological landscape subject to ecological disturbance. The latest forest patches are available as remote sensing and GIS spatial data. A standard scale-free network node distribution model calculates the power-law distribution parameter for forest patch area; the recent existing forest polygons, defined as nodes, are used to compute the decay index of the network’s degree distribution. The parameters of the forest network are then extracted and spatially transferred to real-world GIS models, and connections between nearby nodes are generated automatically by minimizing ecological-corridor cost under a least-cost rule. Based on the scale-free node distribution requirements, a small number of large aggregation points are selected as the main nodes of the future forest planning network and compared with the existing node sequence. With this approach, forest ecological projects can avoid the fragmented and scattered patterns of the past, and the planting costs required by previously regular forest networks can be reduced. For ecological restoration in tropical and subtropical south China, it provides an effective method for guiding and demonstrating forest-into-city projects, and a standard and base datum for networking with other ecological networks (water, climate networks, etc.).

  16. Multiple Linear Regression Model Based on Neural Network and Its Application in the MBR Simulation

    Directory of Open Access Journals (Sweden)

    Chunqing Li

    2012-01-01

    Full Text Available Computer simulation of the membrane bioreactor (MBR) has become a research focus of MBR studies. To compensate for drawbacks such as long test periods, high cost, and sealed, non-observable equipment, and building on an in-depth study of the mathematical model of the MBR combined with neural network theory, this paper proposes a three-dimensional simulation system for MBR wastewater treatment that is fast, efficient, and well visualized. The system is developed with hybrid programming in the VC++ programming language and OpenGL, using a neural-network-based multifactor linear regression model of the factors affecting MBR membrane flux, an integer-instead-of-float modeling method, and quad-tree recursion. The experiments show that the three-dimensional simulation system using these models and methods offers inspiration and reference for future research on, and application of, MBR simulation technology.

  17. A case for spiking neural network simulation based on configurable multiple-FPGA systems.

    Science.gov (United States)

    Yang, Shufan; Wu, Qiang; Li, Renfa

    2011-09-01

    Recent neuropsychological research has begun to reveal that neurons encode information in the timing of spikes. Spiking neural network simulations are a flexible and powerful method for investigating the behaviour of neuronal systems. Software simulation of spiking neural networks cannot rapidly generate output spikes for large-scale networks. An alternative approach, hardware implementation of such systems, provides the possibility of generating independent spikes precisely and simultaneously outputting spike waves in real time, provided that the spiking neural network can take full advantage of the inherent parallelism of hardware. We introduce a configurable FPGA-oriented hardware platform for spiking neural network simulation in this work. We aim to use this platform to combine the speed of dedicated hardware with the programmability of software, so that it might allow neuroscientists to put together sophisticated computational experiments using their own models. A feed-forward hierarchical network is developed as a case study to describe the operation of biological neural systems (such as orientation selectivity in the visual cortex) and computational models of such systems. This model demonstrates how a feed-forward neural network constructs the circuitry required for orientation selectivity, and provides a platform for reaching a deeper understanding of the primate visual system. In the future, larger scale models based on this framework can be used to replicate the actual architecture of the visual cortex, leading to more detailed predictions and insights into visual perception phenomena.

  18. Spectral element filtering techniques for large eddy simulation with dynamic estimation

    CERN Document Server

    Blackburn, H M

    2003-01-01

    Spectral element methods have previously been applied successfully to direct numerical simulation of turbulent flows with moderate geometrical complexity and low to moderate Reynolds numbers. A natural extension is to large eddy simulation of turbulent flows, although there has been little published work in this area. One of the obstacles to such application is the difficulty of dealing successfully with turbulence modelling in the presence of solid walls in arbitrary locations. An appropriate tool with which to tackle the problem is dynamic estimation of turbulence model parameters; while this has been applied successfully to simulation of turbulent wall-bounded flows, typically in the context of spectral and finite volume methods, there have been no published applications with spectral element methods. Here, we describe approaches based on element-level spectral filtering, couple these with the dynamic procedure, and apply the techniques to large eddy simulation of a prototype wall-bounded turb...
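
    The core operation, modal low-pass filtering, can be illustrated in a few lines. This is a hedged sketch, not the paper's method: a plain DFT stands in for the element-level polynomial basis, and the exponential filter shape (its strength and order) is an assumption chosen for demonstration.

    ```python
    import math

    def dft(x):
        """Naive discrete Fourier transform (stands in for the modal basis)."""
        n = len(x)
        return [sum(x[k] * complex(math.cos(-2 * math.pi * j * k / n),
                                   math.sin(-2 * math.pi * j * k / n))
                    for k in range(n)) for j in range(n)]

    def idft(X):
        n = len(X)
        return [sum(X[j] * complex(math.cos(2 * math.pi * j * k / n),
                                   math.sin(2 * math.pi * j * k / n))
                    for j in range(n)).real / n for k in range(n)]

    def spectral_filter(x, strength=36.0, order=4):
        """Damp mode j by sigma_j = exp(-strength * (j / j_max)**order)."""
        X = dft(x)
        n = len(X)
        for j in range(n):
            freq = min(j, n - j)  # treat positive/negative frequencies alike
            sigma = math.exp(-strength * (freq / (n // 2)) ** order)
            X[j] *= sigma
        return idft(X)

    # A smooth low mode plus high-frequency content; the filter keeps the former.
    n = 32
    signal = [math.sin(2 * math.pi * k / n) + 0.3 * math.sin(2 * math.pi * 12 * k / n)
              for k in range(n)]
    filtered = spectral_filter(signal)
    ```

    The filter leaves the lowest modes essentially untouched while strongly attenuating the highest resolved modes, which is the behaviour the dynamic procedure then exploits to estimate model parameters from the resolved scales.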

  19. Simulating dynamic plastic continuous neural networks by finite elements.

    Science.gov (United States)

    Joghataie, Abdolreza; Torghabehi, Omid Oliyan

    2014-08-01

    We introduce the dynamic plastic continuous neural network (DPCNN), which comprises neurons distributed in a nonlinear plastic medium, where the wire-like connections of conventional neural networks are replaced with the continuous medium. We use the finite element method to model the dynamic phenomenon of information processing within the DPCNNs. During training, instead of weights, the properties of the continuous material at its different locations, as well as some properties of the neurons, are modified. Input and output can be vectors and/or continuous functions over lines and/or areas. Delay and feedback, from neurons to themselves and from outputs, occur in the DPCNNs. We model a simple form of the DPCNN where the medium is a rectangular plate of bilinear material and the neurons continuously fire a signal that is a function of the horizontal displacement.
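
    A very schematic sketch of the idea, and an assumption rather than the paper's formulation: a 1D elastic medium discretised with two-node finite elements carries a transient disturbance from an input load to an "output" neuron whose firing is a saturating function of the local displacement. The geometry, loading, and firing function below are all hypothetical.

    ```python
    def simulate_bar(n_nodes=20, steps=400, dt=0.05, k=1.0, m=1.0):
        """Explicit (symplectic Euler) time stepping of a fixed-free elastic bar."""
        u = [0.0] * n_nodes           # nodal displacements
        v = [0.0] * n_nodes           # nodal velocities
        output = []
        for step in range(steps):
            f = [0.0] * n_nodes
            # Assemble internal forces from linear two-node elements
            for e in range(n_nodes - 1):
                strain_force = k * (u[e + 1] - u[e])
                f[e] += strain_force
                f[e + 1] -= strain_force
            # Transient "input": a load applied at the free end for 50 steps
            f[-1] += 0.1 if step < 50 else 0.0
            for i in range(1, n_nodes):   # node 0 is clamped
                v[i] += dt * f[i] / m
                u[i] += dt * v[i]
            # "Neuron" at mid-span fires a signal that saturates with displacement
            output.append(max(0.0, min(1.0, 10.0 * u[n_nodes // 2])))
        return output

    out = simulate_bar()
    print(max(out))
    ```

    Training a DPCNN would then mean adjusting the material properties (here, the stiffness `k` per element) rather than connection weights, so that the displacement-driven outputs match the desired responses.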

  20. Runtime Performance and Virtual Network Control Alternatives in VM-Based High-Fidelity Network Simulations

    Science.gov (United States)

    2012-12-01

    network emulation systems have been proposed, such as V-eM (Apostolopoulos and Hasapis 2006), DieCast (Gupta et al. 2008), VENICE (Liu, Raju, and... Proceedings of the 2006 3rd Symposium on Networked Systems Design and Implementation (NSDI'06), San Jose, CA, USA. Gupta, D., et al. 2008. "DieCast